Cause and Effect
Federal agencies fail their own report cards.
Correlation is not causation. It's a simple concept, really, one understood by anyone who has squeaked through Logic 101 with a C. Even people not armed with an education but in possession of a little common sense can figure it out on their own. So why, then, does the concept so regularly trip up official Washington?
Part of the answer was revealed yesterday to 60 people assembled in the Senate Governmental Affairs Committee Hearing Room (of all places), at a briefing put on by the folks at the Mercatus Center, a market-oriented research organization affiliated with George Mason University.
A little background: The Government Performance and Results Act, passed in 1993, mandates that federal agencies evaluate whether or not they actually accomplish anything. The agencies' reports are supposed to be a tool to help legislators decide which programs work and which ones don't, with an eye toward cutting or reforming the latter.
Enter the Mercatus crew, which for the past two years has been studying just how well the various agencies are doing at completing their mandated annual reports. The Mercatus study did not measure what the agencies actually do, stressed Maurice McTigue, director of the center's Government Accountability Project. It simply measured how well each agency measured its own performance and reported that performance back to Congress and the general public. The results, as Steve Martin once said of comedy, are not pretty.
Jerry Ellig, a senior research fellow at Mercatus and co-author of the study, outlined the questions asked in the report on reports: "Does the agency lay out goals and measure what it is going to accomplish for the public and give us some idea whether it's making progress?" Ellig explained. "Does the agency demonstrate that its actions actually had an effect on and were responsible for the measured results? And does the agency tell us anything about cost so we can get some idea how much we are paying per unit of success?"
The answer, by and large, is no. Ellig said that although the average score for FY2000 rose 5 percent over the previous year, even as some of the standards grew more stringent, the agencies still had serious trouble finding and reporting what they actually accomplished. He also noted that many basic agency reports are not even available to the public on the Internet. "We were a little surprised, because that seems to be an easy fix," he said.
According to the study, the best of the bunch were the Department of Veterans Affairs, followed closely by the Department of Transportation. The U.S. Agency for International Development came in third. NASA came in dead last, but still managed to beat the Department of Agriculture, which did not get its report done in time to be included in Mercatus' tally.
But even the top performers, "the best of the worst," as Ellig dubbed them, were sorely lacking. The Department of Veterans Affairs, for instance, had to admit that it achieved no more of its goals in 2000 than it had the year before.
But there's still another reason why even the winners are losers: "Generally, agencies were good at articulating goals," explained Ellig. "On the other hand, though, agencies were not always good at coming up with measures that really focused on results. [The Department of Transportation] actually did it very well. Some other agencies, NASA, for example, basically made a list of things to do, and that was about it. The other issue–the other problem area–is establishing cause and effect: demonstrating that what the agency did is really responsible for observed results." In other words, many agencies simply assume that correlation implies causation (at least when the trend is worth taking credit for).
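To see the trap in miniature, consider a toy sketch, with numbers invented for illustration rather than drawn from the Mercatus study. An agency's favorite metric improves while its program is running, but the same metric improves almost as much where the program never operated; a simple difference-in-differences comparison shows how little of the change the program can honestly claim.

```python
# Toy illustration (invented numbers, not from the Mercatus study) of why a
# trend that coincides with an agency program does not prove the program
# caused it. "Treated" areas got the program; "comparison" areas did not.

treated_before, treated_after = 120.0, 100.0        # e.g., accidents per 100,000
comparison_before, comparison_after = 118.0, 99.0   # same metric, no program

treated_change = treated_after - treated_before             # -20.0
comparison_change = comparison_after - comparison_before    # -19.0

# The naive report credits the program with the entire 20-point drop.
naive_claim = treated_change

# Netting out the trend everyone enjoyed leaves the part of the change
# the program can plausibly claim.
program_effect = treated_change - comparison_change         # -1.0

print(f"Naive claim of improvement: {naive_claim:+.1f}")
print(f"Improvement left after netting out the common trend: {program_effect:+.1f}")
```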
The Mercatus show leads one to wonder: Is it ever possible to demonstrate a link between cause and effect in huge government bureaucracies? That's the question I posed to Ellig. "In many cases, the agencies are going to have to engage in fairly detailed program evaluations," he said. "It does take some pretty heavy-duty analysis, and it takes a bit of time, but what the heck, it beats flying blind."
Unfortunately, as Mercatus documented, it looks as if we'll be flying blind–in a blizzard, at midnight–for a long time to come.