The Media: Margins of Error


The next time you see numbers from a public-opinion survey in a newspaper or on television, keep in mind the infamous toilet paper poll of 1976.

As pollsters tell the tale, a paper manufacturer that year printed up special rolls of toilet tissue—half with Jimmy Carter's face on them, half with Gerald Ford's. The manufacturer wanted to see which man's face would sell more rolls, and when the sales figures came in, the company put out a goofy little press release noting its highly unscientific findings.

Unfortunately, in the rush of coverage during the final days of the election season, a wire service mistook this for a serious survey and reported the results as an indication of who would win the White House. Not only had the wire service mistaken a public-relations gimmick for a serious study; the story also neglected a central question: Did people buy toilet paper with Jimmy Carter's face on it as a show of support or out of scatological spite?

As polling nightmares go, that's about the worst of the bunch. Newspapers and television stations, conducting their own surveys with the help of reputable firms, can usually get the story right—if they are willing to pay the proper amount of attention.

But anyone who imagines that modern polling methods have laid to rest the embarrassing days of DEWEY DEFEATS TRUMAN need only read the confident predictions in February 1988 that Bob Dole would win the New Hampshire primary. Even Dole's own pollster, Richard Wirthlin, had begun to call his candidate "Mr. President," and the surprise of his defeat, more than the margin of George Bush's victory, did the Dole campaign in.

The curious thing about polling as the 1992 election approaches is that the media will probably be doing less of it. Newspapers are tightening their belts, and the broadcast networks have already cut the number of exit polls by two-thirds, pooling their resources to produce a single exit poll among the three of them. News officials "report that they have no money," notes Burns Roper, head of the nationally syndicated Roper polling organization.

Yet poll haters have little reason to cheer. Political reporters will still feel the urge to lace their stories with poll results. With no source for them inside the media tent, the only poll numbers in town will come from candidates, public-relations firms, and special-interest groups—in other words, people who make a business out of trying to trick the media and, through them, the public.

The classic trick came after the Carter-Reagan debate in October 1980. When ABC ran special 900 lines to let viewers pick (at 50 cents a call) who had won the debate, Reagan was the clear favorite. Later in the evening, when the ABC polling unit had a chance to finish its scientific survey, it found a wildly different result.

Why the difference? Reagan campaign workers had stacked the deck on the 900 lines, calling them over and over to create the illusion that Reagan had mopped the floor with Carter. (Of course, Carter forces had tried the same trick, but they made too many calls from the Atlanta area and swamped the long-distance lines in the process.) Without the more reliable internal ABC numbers, the network would have been left misleading its viewers and itself. As it was, one New York newspaper the next day reported the call-in numbers that the Reagan forces had cooked.

"Bad information drives out good," notes Evans Witt, who heads up the Associated Press's poll coverage. "It's sexier, it's easier to understand, it doesn't come with all these caveats. It's like covering what's on Johnny Carson as a news conference. It's entertainment."

But entertainment is what draws readers and viewers, so expect to see more of it in 1992. And until figures from polls disappear from news stories altogether—roughly around the time the First Amendment is repealed—it will increasingly be up to viewers and readers to filter out the bad from the good.

Here are a few questions to ask while reading any poll story:

What is the margin of error? For 40 years pollsters have been drilling into the heads of editors that any poll numbers should be accompanied by the little plus-or-minus figure that represents the smallest possible error associated with that poll. So most stories have that "margin of error" figure, even if reporters seemingly feel free to ignore it.

Pollsters, of course, interview tiny slices of the country's population, and the margin of error provides a little "wiggle room" to compensate for that. The margin of error says that if the whole population had been questioned, the resulting number would, in all likelihood, differ from the poll's figure by no more than x percentage points in either direction.
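Where does that plus-or-minus figure come from? Here is a minimal sketch of the textbook calculation, assuming a simple random sample and the usual 95 percent confidence level; real polling firms adjust the arithmetic for their own sampling designs, and the function name below is purely illustrative.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Textbook sampling margin of error, in percentage points.

    Uses the normal approximation at 95 percent confidence (z = 1.96).
    proportion=0.5 yields the widest, most conservative margin, which is
    the figure most news stories report.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size) * 100

# Roughly 1,000 interviews buys a margin of about plus-or-minus 3 points;
# roughly 400 interviews, about plus-or-minus 5 points.
print(round(margin_of_error(1000), 1))  # 3.1
print(round(margin_of_error(400), 1))   # 4.9
```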

Nothing illustrates better how reporters willfully ignore the margin of error than their quadrennial quest to appoint a front-runner from among the presidential candidates. When Gary Hart briefly jumped back into the Democratic race in late 1987 after his affair with Donna Rice had forced him to withdraw earlier in the year, he pulled the support of 21 percent of Democratic voters in a CBS News/New York Times poll. Jesse Jackson finished second, with 17 percent. With a margin of error of plus-or-minus five points, however, it was entirely possible that Hart's support was as low as 16 percent (21 minus 5) and Jackson's as high as 22 percent (17 plus 5).

Statistically, all you could say about the two men was that they were roughly tied for the lead; the poll simply could not tell you whether Hart was really ahead of Jackson or Jackson ahead of Hart. But reporters (none of them from CBS or the Times) felt compelled to characterize Hart as the front-runner. The quickness of his second exit from the presidential race proved just how wrong it had been to use the words Hart and front-runner in the same sentence.

The rule for any reader to follow at home is simple: Subtract the margin of error from the top answer, and add the margin of error to the number-two answer. If the top answer doesn't still come out on top, it's wrong for the story to even hint that there's a difference between the two. If you spot this, feel free to cry foul.
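For readers who like to see the rule spelled out, here is a minimal sketch of that check. The function name and the second set of numbers are purely illustrative; the first set comes from the Hart-Jackson poll described above.

```python
def clear_leader(top, second, margin):
    """The reader's rule: shrink the leader by the margin of error, grow the
    runner-up by it, and see whether the leader still comes out on top."""
    return (top - margin) > (second + margin)

# The late-1987 CBS News/New York Times numbers: Hart 21, Jackson 17, margin 5.
print(clear_leader(21, 17, 5))   # False: statistically, no front-runner
print(clear_leader(30, 17, 5))   # True: a lead large enough to report
```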

Is there any other error? The "margin of error" is just the lowest possible error, based on the formulas found in statistics textbooks. It says nothing about whether a question was slanted toward one answer or whether the questions that came before it had the same effect. Even the best polls may unintentionally suffer from these effects.

A prime example is one that most pollsters learn to avoid. Ask people whether they approve of President Bush at the start of a poll, and one "approval rating" will emerge. But if the approval question is preceded by queries about the recession, discontent with the economy will crowd out any positive thoughts of the president. Lower ratings are sure to follow. It works the other way, too: Asking a series of questions on Russia and the Persian Gulf at the top of the questionnaire will artificially boost Bush's rating.

Common sense is often all that is necessary to pick these errors out. Just ask yourself, "How would I have felt if a stranger had called me up and asked me those questions in precisely that order?"

Who paid for the survey? Possibly the most important question to ask. Polling is an expensive, labor-intensive business. If the survey was not paid for by a newspaper or television station, then someone else footed the bill. And that someone, whether a candidate or a special-interest group, may have a hidden agenda. Don't be surprised when the Chocolate Ice Cream Council announces that chocolate is America's favorite ice cream flavor—and don't pay much attention to that "finding" either.

When was the survey taken? Pollsters like to tell you that a poll represents only a "snapshot" of public opinion at a moment in time. That makes sense. Polls taken before Anita Hill's charges were made public showed Clarence Thomas with respectable ratings; polls taken afterward showed his negative ratings growing to three or four times their original size. Pollsters are wise to wait until some of the dust has settled before asking their questions.

That doesn't mean they always get the chance to. The problem with the 1988 New Hampshire polls noted above was simply that the pollsters stopped polling—and reporters started writing their stories—four days before the primary. Lots of New Hampshire Republicans changed their minds in that time. How do we know? Those polling organizations that kept asking questions until Monday evening did, at the very last minute, detect the switch to George Bush. (The same problem haunted the DEWEY DEFEATS TRUMAN pollsters in 1948. They stopped polling with three or four weeks to go in the campaign.)

Once again, common sense is the only necessary antidote. See when the poll was taken and figure out for yourself whether events have outstripped its conclusions. Often they have.

Is this really a poll? In other words, is it really a survey done by a professional organization in which everyone in the targeted population had an equal chance of being picked? Or is it a 900-number call-in, which is heavily biased toward people who have the money to blow on such things? Is it a magazine mail-in survey, which clearly excluded anyone who doesn't read the magazine? Is it the work of a reporter who stopped people on the street? Is it the preference of people who buy toilet paper stamped with politicians' likenesses? Is it the tally of a radio talk-show host with a very idiosyncratic following?

The media are still trying to learn how to cope with these pseudo-surveys. Until they take the pledge to avoid all of them, it will be up to the reader to discount and ignore most of them.

The campaign coverage of 1992 will present a bewildering welter of polls, good and bad. The stories that quote them carry the same caveat that is built into any piece of news—not "let the reader beware," but certainly "let the reader be smart."

T. Keating Holland is a Washington, D.C.-based free-lance writer and former pollster.