Polls

Not All Polls Are Created Equal (Some Are Badly Written)

The first in a series of dispatches from PollsterCon

[Photo: A call center employee. Credit: Pixabay]

You're probably aware by now that modern pre-election polling is struggling against some massive methodological challenges, from plummeting response rates to the difficulty of differentiating between people who will actually show up to vote on Election Day and those who merely say they will. But pollsters sometimes err in ways far more basic than that. A panel yesterday at the annual American Association for Public Opinion Research (AAPOR) conference pulled back the curtain on some of the challenges survey researchers have to contend with long before the interviewers ever start dialing.

The session, titled "Writing and Formatting Questions to Improve Data Quality," was a reminder that, unsexy as the topic may sound, things can go badly awry if a survey questionnaire (the "instrument," in pollster jargon) isn't designed with enough care.

One major data-quality problem arises when a lot of telephone poll respondents give "uncodable" answers: replying "a few," say, rather than giving a number when asked how many times they have done something in the last year. It turns out those inadequate responses are more often the fault of a question's phrasing than purely the fault of the respondent, according to Amanda Ganshert, Kristen Olson, and Jolene D. Smyth of the University of Nebraska–Lincoln (UNL).
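
By way of illustration, here's a minimal sketch in Python of how a researcher might tally an item's uncodable-answer rate; the code_response helper and the sample answers are hypothetical, not something presented at the panel:

```python
import re

def code_response(raw: str):
    """Return an integer count if the answer is codable, otherwise None."""
    match = re.search(r"\d+", raw)
    if match:
        return int(match.group())  # "3" or "about 12 times" both code to a number
    return None  # vague replies like "a few" can't be coded

# Made-up batch of transcribed answers to one "how many times...?" item.
answers = ["3", "about 12 times", "a few", "several"]
coded = [code_response(a) for a in answers]
print(coded)  # [3, 12, None, None]
print(coded.count(None) / len(answers))  # 0.5: this item's uncodable-answer rate
```

Comparing that rate across alternative wordings of the same question is one way to surface the phrasing effects the UNL team described.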

Earlier this week I explored how question wording can influence the results of a poll, not necessarily intentionally, for instance by using a frame that nudges people to be more or less supportive of a given policy. As the UNL researchers pointed out, though, another easy-to-make mistake is writing a question with a poor fit between the so-called "question stem" and the answer options.

An example would be an item that implicitly calls for a yes or no response ("Do you or does someone in your household own the home in which you live?") but in fact expects the respondent to select from a series of non-binary choices, such as "we rent our home," "we have a mortgage on our home," and "we own our home outright." Mismatches like that can confuse or discourage people, leading to less accurate responses or even causing large numbers of them to give up on the survey altogether.
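
Mismatches like the housing example are mechanical enough that you could imagine linting for them before a questionnaire is ever fielded. Here's a minimal sketch in Python; the Item class and the list of yes/no openers are my own illustration, not an actual AAPOR or UNL tool:

```python
from dataclasses import dataclass

@dataclass
class Item:
    stem: str
    options: list[str]

# Stems opening with these verbs implicitly invite a yes or no answer.
YES_NO_OPENERS = ("do ", "does ", "is ", "are ", "did ", "have ", "has ", "was ", "were ")

def stem_option_mismatch(item: Item) -> bool:
    """Flag items whose yes/no-phrased stem is paired with non-binary options."""
    looks_yes_no = item.stem.lower().startswith(YES_NO_OPENERS)
    options_are_binary = {o.strip().lower() for o in item.options} <= {"yes", "no"}
    return looks_yes_no and not options_are_binary

housing = Item(
    stem="Do you or does someone in your household own the home in which you live?",
    options=["We rent our home", "We have a mortgage on our home",
             "We own our home outright"],
)
print(stem_option_mismatch(housing))  # True: the stem promises yes/no, the options don't deliver
```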

Another panelist, Stephanie Wilson of the National Center for Health Statistics, found that failing to give respondents an obvious way to register that they don't know the answer to a factual query can also lead to bad results. Her research uncovered that most people recall very little about things like the names of the medical procedures they've had done recently or the reasons for them. But if you as the pollster don't make clear from the wording of a question that respondents are welcome to admit their ignorance (and sometimes, distressingly, even if you do explicitly give them that option), they'll very often reason their way to a plausible answer ("my doctor didn't actually tell me the purpose, but why would he have ordered a chest X-ray unless he was screening for lung cancer?") rather than reply that they aren't sure.
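
The questionnaire-design fix Wilson's finding points toward is building an explicit escape hatch into the response coding. A minimal sketch, assuming a hypothetical coding scheme (the DK code and the phrase list are mine, not NCHS's):

```python
# Reserving an explicit "don't know" code keeps plausible-sounding guesses
# from quietly passing as data.
DONT_KNOW = "DK"
DK_PHRASES = {"don't know", "dont know", "not sure", "no idea"}

def code_recall(raw: str) -> str:
    text = raw.strip().lower()
    return DONT_KNOW if text in DK_PHRASES else text

responses = ["screening for lung cancer", "not sure",
             "follow-up on pneumonia", "no idea"]
coded = [code_recall(r) for r in responses]
print(coded.count(DONT_KNOW) / len(responses))  # 0.5: honest ignorance, surfaced
```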

It probably goes without saying that if wild guessing is prevalent—and there's reason to suspect it is—it can really throw off the accuracy of a study.

The good news is that yesterday's panel shows smart people are working hard to understand these problems and develop best practices for avoiding them. The bad news is that, when you're waist-deep in an election year, people tend to spend more time hyperventilating over the latest SHOCK POLL result than scrupulously evaluating the polling outfits' question-wording choices.