In public-policy battles, statistics fly fast and furious. Whether you score a hit depends a great deal on presentation and credibility. During last year's presidential campaign, for example, President George Bush's assurances that the national economy was rebounding from a relatively mild 1990-91 recession were met with catcalls and snide "he just doesn't get it" remarks.
But in August, the Clinton Commerce Department released data showing that Bush was, indeed, correct: The economy grew by about 4 percent during 1992, while the depth of the previous recession had been exaggerated. "There must be people in the White House this morning [saying], 'Thank God these numbers weren't released during the election,'" commented Bank of America senior economist Michael Penzer.
In debates on education policy--where state and local government agencies play the most important role--statistics are even more poorly gathered and interpreted. Each state has its own idiosyncrasies: how public education is organized, how government employees are classified, etc. These difficulties are rarely understood by the general public or even by education "experts," so the quality of the debate suffers. Countless governors and legislatures have created entire educational programs, from school-district equalization to teacher salary schedules, based on inaccurate data.
Unfortunately, the use and abuse of education statistics exists on all sides of the reform debate. Advocates of school choice and privatization sometimes overstate school spending. Defenders of the status quo frequently understate it. And through it all, self-interested players such as the National Education Association continue to enjoy reputations as non-partisan, reliable sources of information.
You might be surprised to learn that the NEA supplies the Department of Education, academic researchers, and education activists on both sides with virtually all information on the nation's teachers. Not surprisingly, this information indicates that teachers are underpaid compared to other professionals, that wide disparities exist among states in teacher pay, and that teacher pay has stagnated over the last 20 years, growing just 4 percent in real terms over that span (and actually declining in real terms during the 1970s). "Unfortunately," writes school privatization advocate Myron Lieberman in his new book, Public Education: An Autopsy, "the media and the U.S. Department of Education ignore the fact that the figures are prepared by an organization with a huge stake in how they are interpreted."
The "good guys," however, also parrot the NEA's line. The recent Report Card on American Education by the American Legislative Exchange Council--released to great fanfare in Washington by former Secretary of Education Bill Bennett--used the NEA's numbers, too. In its state-by-state analysis, ALEC found that some states' teachers had actually lost ground in the last 20 years. Conservatives spin this result by saying that public schools have spent an increasing share of their budgets on non-teaching personnel and non-instructional expenses, and a declining share on teachers, proving the need for vouchers and privatization to move the focus back to the classroom.
This argument is clever but unnecessary (and probably futile, if it's intended to attract a great many public-school teachers to the choice banner). Teachers are doing quite well, thank you. The NEA's numbers purposely exclude the value of non-wage benefits such as health insurance. Since the early 1970s, teachers, like other American employees, have sought compensation increases in the form of benefits rather than wages, since the former are provided tax-free.
While the NEA has not chosen to report benefits as part of teacher compensation--though local unions can call up such data in seconds during salary negotiations--surveys have shown that benefits today can make up as much as 35 percent of a teacher's total compensation package. If teachers' benefits increased as a share of total compensation at the same rate as university professors' did (and there is good evidence to suggest they rose even faster), then the average teacher saw a sizable 20-percent real rise in total compensation during the last 20 years.
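The arithmetic behind that estimate is easy to sketch. The figures below are illustrative stand-ins (the article gives only the 4-percent wage figure and the 35-percent benefits share, not the earlier benefits share), but they show how flat wages plus a rising benefits share can add up to substantial real compensation growth:

```python
# Illustrative only: the 1973 benefits share (20%) is an assumption,
# not a figure from the article.

def total_compensation(real_wage, benefits_share):
    """Gross wages up to total compensation, given benefits
    expressed as a fraction of the *total* package."""
    return real_wage / (1.0 - benefits_share)

wage_then = 100.0   # index: real wages 20 years ago
wage_now = 104.0    # NEA-style figure: +4% real wage growth since then
share_then = 0.20   # assumed benefits share two decades ago
share_now = 0.35    # "as much as 35 percent" today (survey figure)

growth = total_compensation(wage_now, share_now) / \
         total_compensation(wage_then, share_then) - 1.0

print(f"Real total-compensation growth: {growth:.0%}")  # 28% under these assumptions
```

With these hypothetical shares, total compensation rises 28 percent in real terms even though wages rose only 4 percent; a more modest shift in the benefits share still yields growth in the neighborhood of the 20-percent figure cited above.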
State-by-state comparisons run into other problems. For one thing, the data are not adjusted for cost of living--even though earning $30,000 in New York City is markedly different from earning $30,000 in Jackson, Mississippi. Earlier this year, my think tank generated a national ranking of both average teacher compensation and per-pupil expenditure adjusted for in-state costs. More notable than the states that fell or rose significantly in the rankings was the fact that, in both categories, the gap between highest and lowest shrank. In an effort to prove that some children are "shortchanged" in public education, policy makers, authors, and journalists have trumpeted the extent to which states differ in their investments in schools. But cost-of-living differences account for a good portion of school-spending variance, weakening the case for federal "equalization" of resources.
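The adjustment itself is simple division by a cost-of-living index. The index values below are hypothetical (the article does not report the indices its ranking used), but they show how a nominal pay gap can shrink or even reverse once local costs are factored in:

```python
# Hypothetical cost-of-living indices (national average = 100);
# the actual indices behind the think tank's ranking are not given here.
col_index = {"New York City": 150, "Jackson, MS": 90}
nominal_salary = 30_000

for city, idx in col_index.items():
    adjusted = nominal_salary * 100 / idx
    print(f"{city}: ${adjusted:,.0f} in national-average dollars")
```

Under these assumed indices, the same $30,000 is worth $20,000 in national-average dollars in New York City but about $33,333 in Jackson, which is exactly why unadjusted rankings overstate interstate disparities.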
Apart from state-by-state differences, total school spending in the United States is routinely underestimated because of other measurement problems. As Lieberman and other analysts have pointed out, official school spending statistics leave out an awful lot. A partial list of expenditures excluded from federal data includes business and foundation donations, donated time, pension contributions, the cost of negotiating contracts, the cost of training teachers, remedial education in colleges, judicial costs, out-of-pocket parental expenses, and federal educational programs in departments other than Education (such as Head Start). Since real per-pupil spending even as currently measured shot up 62 percent from 1973 to 1993 (according to the ALEC study), an accurate analysis of total spending would no doubt find an even bigger jump.
The quality of data about higher education is hardly better than that for K-12 education. The American Association of University Professors (higher education's NEA) provides professor-pay statistics that include benefits, but the AAUP does not adjust for each college's local cost of living. When I did so, I found that compensation at some schools--such as New York University, Columbia University, and U.C.-Berkeley--plunged dramatically in the national rankings.
Why does the AAUP leave out cost-of-living adjustments? Because, says AAUP General Secretary Ernst Benjamin, they would be "controversial," with each community having "a stake in saying their cost of living is the lowest." Since these data are already collected by chambers of commerce and used by corporate executives in relocation decisions, however, Benjamin's argument doesn't wash. The real reason seems to be that the adjustments would weaken the AAUP's political argument for more state aid to public universities to compete with elite private institutions for faculty. Many of those private schools are in the highest-cost areas, so the salary differential between them and their public counterparts isn't as large as the AAUP would like to portray.
With local, state, and national education statistics compromised by measurement problems and institutional bias, it should come as no surprise that international comparisons are even more seriously flawed. For years, the Education Department compared total education spending in countries only by reporting public expenditures as a percentage of the gross national product, which generates strange results; poor Third World countries may well spend more of their meager income on education than the United States, but one would hardly consider teachers and students in those countries better off. I spend a higher percentage of my income on housing than does Microsoft's billionaire founder Bill Gates. Does that mean I'm living more luxuriously?
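The fallacy in share-of-income comparisons is worth making concrete. With invented numbers (neither my budget nor Gates's is public in this detail), a larger percentage can coexist with a vastly smaller absolute outlay:

```python
# Toy figures, invented purely for illustration of the share-of-income fallacy.
my_income, my_housing = 50_000, 15_000                 # 30% of income
gates_income, gates_housing = 100_000_000, 5_000_000   # 5% of income

my_share = my_housing / my_income
gates_share = gates_housing / gates_income

print(f"My share: {my_share:.0%}; Gates's share: {gates_share:.0%}")
print(f"But in absolute terms he spends {gates_housing / my_housing:.0f}x more")
```

The same logic applies to national education budgets: a poor country devoting a large fraction of a small GNP can still spend far less per pupil than a rich country devoting a small fraction of a large one.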
This flaw didn't stop candidate Bill Clinton when he used the numbers to "prove" America's need for more "education investment" during the presidential campaign. Obviously, the appropriate measure is per-pupil spending, adjusted for purchasing power in the various countries. But the Education Department's attempts at this, beginning only a couple of years ago, leave a lot to be desired; for some developed countries, the most recent data we have date back to 1987. And we really have no idea how our peers in Europe and Asia--not to mention the rest of the world--measure "public" education spending. As it is, Japan's per-pupil spending number includes both public and private expenditures on education. So it's truly shocking that this inflated number is still less than half that of the United States' public spending number (see table).
Other data are equally flawed. Some education "reformers" are touting the benefits of a longer school year, arguing that students in high-achieving countries like Korea succeed because they stay in school longer (222 days) each year. But while the United States ranks low in total school days (178), it ranks very high in the length of those days: Only four countries out of 20 participating in the 1991 International Assessment of Educational Progress provide more instructional minutes each year to their students than the United States. Korea, which tops most achievement lists, is not one of those four.
These examples barely scratch the surface in identifying misuse of education statistics. Some are quite embarrassing. The ALEC Wall Chart, for example, reports only current-dollar spending amounts, thus wildly exaggerating spending growth. For advocates of free enterprise and competition in American education, the challenge is to use numbers and information to question the establishment's claims without losing our own credibility. As Bush discovered, it's not enough to be right--you also have to be believed.