My son Jacob will be 5 years old this summer, and I have had to face every parent's nightmare of discovering where and how to register him for kindergarten. As the director of an education policy program at a national think tank, I imagined that I had an advantage over the average parent. After all, my job is to evaluate charter schools and tax credits, public school choice and private school voucher programs, homeschooling efforts and privatized school management. Additionally, I am a member of a Los Angeles urban school improvement committee. Finding an acceptable school for my child, I assumed, shouldn't be difficult.
But my confidence began to wane when I started to explore the Web site for our neighborhood school, El Cerrito Elementary, in Corona, California. My heart sank as I reviewed test results and realized that the school to which my son is assigned does not have a Stanford 9 score above the 50th percentile. The Stanford 9 is a standardized test widely used by schools across the country to track educational achievement. El Cerrito Elementary's score is not that low by California's standards, but who wants to send her child to a below-average school?
I still held out hope that I would be able to exercise some limited degree of school choice by enrolling my son in a quality public school. The school district had allowed the developer of a middle-class housing tract down the hill and across the freeway from my home to erect a brand-new school. By doing so, the developer bypassed the normal per-dwelling school tax and finished building this state-of-the-art facility in less than one year. The new school, Woodrow Wilson Elementary, is even a little closer to my house (I live in a semi-rural area of Riverside County) than the "neighborhood" school that requires my son to ride the bus for 45 minutes each way.
Armed with this knowledge, I naively contacted the Corona-Norco Unified School District's "central registration" office to request an intra-district school transfer. A procedure I expected to be simple and direct turned out to be almost impossible. Presumably to discourage requests like mine, the district only accepts transfer applications one week per year, from the 1st to the 7th of December. The polite voice at the other end of the phone informed me that I would be welcome to apply in December for the following year.
Like millions of other parents stuck with low-performing public schools, I am left with only three options: relocation, homeschooling, or, given the decisive failure of the California school choice initiative last fall, investment of a small fortune in private school tuition.
What's more depressing is that there is no relief in sight. President Bush's widely touted education reform plan -- which has earned him kudos on the right and attacks on the left -- will have minimal impact on American schools and will do little or nothing to improve education for the average kid. That's because the Bush plan deals almost exclusively with Title I, the major federal education program, which is designed to improve schooling for at-risk students. On paper, zeroing in on Title I, widely considered a colossal failure, makes sense. Ninety percent of America's school districts -- some 23,000 schools nationwide -- receive grants under the program. But the Bush plan's focus is misguided and diverts attention from more sweeping education reform that would help more parents provide their children with a quality education.
What Is Title I?
In 1965, President Lyndon Johnson established Title I of the Elementary and Secondary Education Act as part of his Great Society program. The lofty goal of Title I has been to improve the basic and advanced skills of students who are at risk of failing in school. In particular, the program is designed to assist low-achieving children living in low-income areas where school funding is deemed to be inadequate. At $9 billion a year, Title I is the largest program of federal aid for elementary and secondary education. The money is used mostly to provide intensive math and reading instruction.
In Title I's 36-year history, the U.S. Department of Education has released two major longitudinal studies on the program's effectiveness: Sustaining Effects in 1984 and Prospects in 1997. The Sustaining Effects study demonstrated that the $40 billion spent on the program to that point had done little to improve the achievement of the children it was designed to help. Although the elementary school students showed slight gains over their peers, "By the time students reached junior high school, there was no evidence of sustained or delayed effects of Title I," wrote Launor F. Carter, director of the study, in Educational Researcher.
Thirteen years later, the most recent longitudinal study of the program found that even after the federal government spent another $78 billion (from 1984 to 1997), bringing the total spent on Title I to $118 billion, little had changed. "After controlling for student, family, and school differences between participants and non-participants, we still find that participants score lower than non-participants and that this gap in achievement is not closed over time," the authors of the Prospects study wrote.
Researchers could not discern any long-term achievement gains directly linked to the Title I program. The program tries to identify and serve the children who need the most help, but according to the study, "The services appear to be insufficient to allow them to overcome the relatively large differences between them and their more-advantaged classmates." Similarly, Wayne Riddle, an education analyst at the federal government's Congressional Research Service, analyzed the two federal longitudinal studies and five other Title I studies. His conclusion: "Title I participants tend to increase their achievement levels at the same rate as non-disadvantaged pupils, so gaps in achievement do not significantly change."
In 1999, the U.S. Department of Education released a congressionally mandated evaluation of Title I that seemed to show, based on results from the 1998 National Assessment of Educational Progress (NAEP), that the 1994 reauthorization of Title I had led to some increases in student achievement due to program reforms. The NAEP tests a sample of fourth-, eighth-, and 12th-graders from 40 states in writing, science, math, and reading. The test is considered the "nation's school report card" and is widely viewed as an independent measurement of public school achievement. The 1998 NAEP results initially appeared to show significant improvements in fourth-grade reading scores in nine states since 1994.
The progress reported in this study was largely fictitious, however. A skeptical parent in Kentucky, Richard Innes, discovered a problem with the 1998 NAEP reading scores. According to the official results, Kentucky was one of the most improved states in fourth-grade reading. But using data gleaned from the Internet, Innes discovered that the gains in some states, including Kentucky, resulted from the exclusion of students considered to be slow learners and those with learning disabilities. Innes asked this critical question: Can a state's scores be accurate when they don't include large numbers of low-scoring students? An analysis by the U.S. Department of Education confirmed that several states had inflated average reading scores by excluding greater numbers of special-education students from testing in 1998 than in 1994. The federal analysis established that more than half of the 36 states where the NAEP is administered had excluded significantly larger numbers of special-education students in 1998. Five states excluded substantially more non-English-speaking students than they had in 1994.
For example, Kentucky dumped test results for 10 percent of the students who were selected for its 1998 sample, compared with 4 percent in 1994. Louisiana ignored 13 percent in 1998, up from 6 percent in 1994. And Connecticut, the nation's highest-scoring state, removed 10 percent of the students selected to participate, compared with 6 percent in 1994. Not surprisingly, states with larger increases in total exclusions also tended to have larger score increases. When the test scores were compared on a realistic basis, Kentucky gained nothing.
Despite these flaws, the Department of Education's 1999 report showing student improvement from 1994 was widely cited in the education press and the general media. Incredibly, the Department of Education's own investigation into special education exclusions was never made public. I only discovered it accidentally when researching congressional testimony regarding the effectiveness of Title I.