Free Minds & Free Markets

Why Polls Don't Work

After decades of gradual improvement, the science of predicting election outcomes has hit an accuracy crisis.

(Illustration: Jason Keisling)

On October 7, 2015, the most famous brand in public opinion polling announced it was getting out of the horse-race survey business. Henceforth, Gallup would no longer ask Americans whom they would vote for if the next election were held today.

"We believe to put our time and money and brain-power into understanding the issues and priorities is where we can most have an impact," Gallup Editor in Chief Frank Newport told Politico. Let other operations focus on predicting voter behavior, the implication went; we're going to dig deeper into what the public thinks about current events.

Still, Gallup's move, which followed an embarrassingly inaccurate performance by the company in the 2012 elections, reinforces the perception that something has gone badly wrong in polling and that even the most experienced players are at a loss about how to fix it. Heading into the 2016 primary season, news consumers are facing an onslaught of polls paired with a nagging suspicion that their findings can't be trusted. Over the last four years, pollsters' ability to make good predictions about Election Day has seemingly deteriorated before our eyes.

The day before the 2014 midterms, all the major forecasts declared Republicans likely to take back the Senate. The Princeton Election Consortium put the odds at 64 percent; The Washington Post, most bullish of all, put them at 98 percent. But the Cook Political Report considered all nine "competitive" seats to be tossups—too close to call. And very few thought it likely that Republicans would win in a landslide.

Conventional wisdom had it that the party would end up with 53 seats at most, and some commentators floated the possibility that even those numbers were biased in favor of the GOP. The week before the election, for example, HuffPollster noted that "polling in the 2006 and 2010 midterm elections and the 2012 presidential election all understated Democratic candidates. A similar systematic misfire in 2014 could reverse Republican leads in a small handful of states."

We soon learned that the polls were actually overstating Democratic support. The GOP ended up with 54 Senate seats. States that were expected to be extremely close calls, such as Kansas and Iowa, turned into runaways for the GOP. A couple of states that many were sure would stay blue—North Carolina, Louisiana—flipped to red. The pre-election surveys consistently underestimated how Republicans in competitive races would perform.

The following March, something similar happened in Israel. Both pre-election and exit polls called for a tight race, with the Likud Party, headed by Prime Minister Benjamin Netanyahu, and the Zionist Union Party, led by Isaac Herzog, in a virtual tie. Instead, Likud easily captured a plurality of the vote and picked up 12 seats in the Knesset.

The pattern repeated itself over the summer, this time in the United Kingdom, where the 2015 parliamentary election was roundly expected to produce a stalemate. A few polls gave the Conservative Party a slight lead, but not nearly enough of one to guarantee it would be part of the eventual governing coalition. You can imagine the surprise, then, when the Tories managed to grab 330 of the 650 seats—not just a plurality but an outright majority. The Labour and Liberal Democrat parties meanwhile lost constituencies the polls had predicted they would hold on to or take over.

And then there was Kentucky. This past November, the Republican gubernatorial candidate was Matt Bevin, a venture capitalist whom Mitch McConnell had trounced a year earlier in a Senate primary contest. As of mid-October, Bevin trailed his Democratic opponent, Jack Conway, by 7 points. By Halloween he'd narrowed the gap somewhat but was still expected to lose. At no point was Bevin ahead in The Huffington Post's polling average, and the site said the probability that Conway would beat him was 88 percent. Yet Bevin not only won, he won by a shocking 9-point margin. Pollsters once again had flubbed the call.

Why does this suddenly keep happening? The morning after the U.K. miss, the president of the British online polling outfit YouGov was asked just that. "What seems to have gone wrong," he answered less than satisfactorily, "is that people have said one thing and they did something else in the ballot box."

'To Lose in a Gallup and Win in a Walk'

Until recently, the story of polling seemed to be a tale of continual improvement over time. As technology advanced and our grasp of probability theory matured, the ability to predict the outcome of an election seemed destined to become ever more reliable. In 2012, the poll analyst Nate Silver correctly called the eventual presidential winner in all 50 states and the District of Columbia, besting his performance from four years earlier, when he got 49 states and D.C. right but mispredicted Indiana.

There have been major polling blunders, including the one that led to the ignominious "DEWEY DEFEATS TRUMAN" headline in 1948. But whenever the survey research community has gotten an election wrong, it has responded with redoubled efforts to figure out why and how to do better in the future. Historically, those efforts have been successful.

Until Gallup burst onto the scene, The Literary Digest was America's surveyor of record. The weekly newsmagazine had managed to correctly predict the outcome of the previous four presidential races using a wholly unscientific method: mailing out postcards asking people whom they planned to vote for. The exercise served double duty as a subscription drive and an opinion poll.

In 1936, some 10 million such postcards were distributed. More than 2 million were completed and returned, an astounding number by the standards of modern survey research, which routinely draws conclusions from fewer than 1,000 interviews. From those responses, the editors estimated that Alfred Landon, Kansas' Republican governor, would receive 57 percent of the popular vote and beat the sitting president, Franklin Delano Roosevelt.
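
The arithmetic behind drawing conclusions from fewer than 1,000 interviews is standard survey math: for a simple random sample, the margin of error shrinks with the square root of the sample size, so about 1,000 respondents already pin a 50/50 race down to roughly plus or minus 3 points. A quick sketch of the textbook formula (the sample sizes here are illustrative):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 race polled with 1,000 respondents:
print(round(margin_of_error(0.5, 1000) * 100, 1))       # about 3.1 points

# Two million responses would imply a vanishingly small margin of error...
print(round(margin_of_error(0.5, 2_000_000) * 100, 2))  # about 0.07 points
```

The catch is that the formula assumes a random sample; no volume of self-selected postcards buys you that guarantee, as the Digest was about to learn.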

In fact, Roosevelt won the election handily—an outcome predicted, to everyone's great surprise, by a young journalism professor named George Gallup. Recognizing that the magazine's survey methodology was vulnerable to self-selection bias, Gallup set out to correct for it. Among The Literary Digest's respondents in California, for example, 92 percent claimed to be supporting Landon. On Election Day, just 32 percent of ballots cast in the state actually went for the Republican. By employing a quota system to ensure his sample looked demographically similar to the voting population, Gallup got a better read despite hearing from far fewer people.
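
Gallup's correction can be illustrated with a toy simulation. The population shares and response rates below are invented for illustration (not the 1936 figures), and the reweighting step merely stands in for his demographic quota matching: when one side is far more likely to answer, an enormous self-selected sample misses badly, while a small adjustment that restores the population's true proportions recovers the real split.

```python
import random

random.seed(36)

# Hypothetical electorate: 60% support candidate A, 40% candidate B.
# B's supporters are five times more likely to mail back the poll --
# the self-selection problem that sank The Literary Digest.
population = [("A", 0.1)] * 60 + [("B", 0.5)] * 40  # (vote, response rate)

responses = []
for _ in range(200_000):
    vote, rate = random.choice(population)
    if random.random() < rate:
        responses.append(vote)

raw = responses.count("A") / len(responses)
print(f"self-selected estimate for A: {raw:.0%}")   # badly understates A

# Quota-style correction: reweight each response by the inverse of its
# group's response rate, so the sample mirrors the population again.
weights = {"A": 1 / 0.1, "B": 1 / 0.5}
weighted_a = sum(weights[v] for v in responses if v == "A")
weighted = weighted_a / sum(weights[v] for v in responses)
print(f"reweighted estimate for A: {weighted:.0%}")  # near the true 60%
```

Run as-is, the self-selected estimate lands in the low 20s for A despite A's true 60 percent support, echoing the Digest's California misfire, while the reweighted estimate lands near 60.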

Though far more scientific than what had come before, early quota-based surveys retained a lot of room for error. It took another embarrassing polling miss for the nascent industry to get behind random sampling, which is the basis for the public opinion research we know today. That failure came in 1948, as another Democratic incumbent battled to hold on to the White House.

In the run-up to the election that year, all the major polls found New York Gov. Thomas Dewey ahead of President Harry Truman. Gallup himself had Dewey at 49.5 percent to Truman's 44.5 percent two weeks out. On the strength of that prediction, though the results were still too close to call, the Chicago Tribune went to press on election night with a headline announcing the Republican challenger's victory. When the dust had cleared, it was Truman who took 49.5 percent of the vote to Dewey's 45.1 percent, becoming, in the immortal words of the radio personality Fred Allen, the first president ever "to lose in a Gallup and win in a walk."

  • Mr Lizard||

    I blame the mass screenings of chronologically favored mammals hatched after 1985

  • UnCivilServant||

    oh, you mean non-voters?

  • Florida Man||

    You can screen my massive poll.

  • MichaelL||

    Those who have a problem with species identity. YES!

  • Slammer||

    You said "hard polling" uh huh huh huh

  • Swiss Servator||

    DO YOU WORK FOR A POLLING FIRM?!?!?!?!?!?!?!

  • Florida Man||

    Are you employed sir!

  • Swiss Servator||

    Servitude =, yeah.

  • Rich||

    Another concern is that most social media analysis relies on sentiment analysis, meaning it uses language processing software to figure out which tweets are relevant, and then—this is key—to accurately code them as positive or negative.


  • UnCivilServant||

    How do they catch people like me? I vote reliably (having missed only one election since I became eligible to vote), but I don't get surveyed and I don't use social media. The closest I come is snarking on comments sections - in which I do not actually express who I plan on voting for.

  • Florida Man||

    in which I do not actually express who I plan on voting for.

    I know you're feelin'...Da Bern!

  • UnCivilServant||

    Just for that, I'm sending you an extra flock of snowbirds to vote for more taxes.

  • Florida Man||

    That's crossing a line.
    /preps dueling pistols

  • MOFO.||

    Yea, your sarcasm is really helpful right now.

  • Rich||

    Another concern is that most social media analysis relies on sentiment analysis, meaning it uses language processing software to figure out which tweets are relevant, and then—this is key—to accurately code them as positive or negative.

    I'm sure this software does *very* well.

  • Scarecrow & WoodChipper Repair||

    Language professor pontificating on universal grammar talks about the evils of the double negative, how it is proper in some languages, etc, says there are no languages with a double positive. Smart ass in the back says "Yeah, right".

    I wonder how any language processor can cope with sarcastic "great" and similar usage. Does "great, just fuckin' great" count as twice as good?

    It's hopeless until they can pass the Turing test compared to a wino or beach bum.

  • Rich||

    Another concern is that most social media analysis relies on sentiment analysis, meaning it uses language processing software to figure out which tweets are relevant, and then—this is key—to accurately code them as positive or negative.

    Words cannot express the faith I have in the power of such processing.

  • ||

    The only poll that counts is the one taken at the voting booth.

  • UnCivilServant||

    Don't be silly. Do you think they actually count those votes?

  • Rich||

    Well, they *might* count the "I Voted" stickers they give out.

  • UnCivilServant||

    If you turn in six of them at the local *party redacted* office, you get a small coffee.

  • Citizen X||

    Turn in ten, and you get to put a packet of sugar and a teaspoon of powdered creamer in that coffee.

  • sarcasmic||

    The hardest thing about polling is phrasing the questions in such a way as to get people to respond in a manner that gives you the results that your boss wants.

  • Hamster of Doom||

    Predictive analysis is old school, man. Cognitive control is the new thing.

  • Grand Moff Serious Man||

    Goddammit, Alan Rickman died

    Alan Rickman, one of the best-loved and most warmly admired British actors of the past 30 years, has died in London aged 69. His death was confirmed on Thursday by his family who said that he died “surrounded by family and friends”. Rickman had been suffering from cancer.

    A star whose arch features and languid diction were recognisable across the generations, Rickman found a fresh legion of fans with his role as Professor Snape in the Harry Potter films. But the actor had been a big-screen staple since first shooting to global acclaim in 1988, when he starred as Hans Gruber, Bruce Willis’s sardonic, dastardly adversary in Die Hard – a part he was offered two days after arriving in Los Angeles, aged 41.

    Gruber was the first of three memorable baddies played by Rickman: he was an outrageous sheriff of Nottingham in 1991’s Robin Hood: Prince of Thieves, as well as a terrifying Rasputin in an acclaimed 1995 HBO film.

    Happy trails, Hans.

  • Hamster of Doom||


    Alan Rickman is hot.

    I'll be in my bunk. Old time's sake, you understand.

  • sarcasmic||

    I just watched him on Galaxy Quest the other day.

  • Florida Man||

    Dude was amazing. I'm going to watch quigley down under today.

  • Citizen X||

    By Grabthar's hammer, what a shame.

  • Cdr Lytton||

    "No no no! No bloody holly!"

    "But sir!"

  • Robert||

    Allen L. Rickman's still around AFAIK.

  • The Iconoclast||

    Strangely, election prediction accuracy broke down right around the same time that rigging became fully implemented.

  • Jackand Ace||

    I do note that a couple of weeks ago you said that people who place bets are a better gauge, and you took to task polls showing Trump in the lead. At that time bettors said Rubio would win.

    That site you used now says people are putting their money on Trump winning the nomination, so in essence agreeing with the polls.

    Uh oh.

  • ||

    Since it is pretty obvious that the Democrats have no intention of nominating Bernie, he should just go independent. Like a 4th party after Trump does a 3rd.

  • JFree||

    The biggest problem is polling itself. Polling is a requirement for the electorate itself to be pigeonholed into different sub-collectives and then marketed/persuaded via emotions/fear/etc (which we cannot control because they are basic animal instincts). All the 'problems' listed in this article are purely technical and non-fatal to polling itself.

    I will become hopeful for the future when the 'problem' becomes mass-lying to the pollsters rather than simple non-participation or sampling errors or such. When people stop telling the truth about themselves to some pollster who has no claim to getting the truth from you, then individuals will once again have a chance to be free. And pols/companies/etc will have to (possibly) include rational/logical arguments as well as positive visions of a future - and not just the negative and fear-mongered appeals.

  • wagnert in atlanta||

    I think the mass-lying problem is already showing itself. Given the amount of worthless tripe calls I already receive on my landline, the temptation to respond to yet another poll caller with blatant lies would be almost impossible to resist.

  • EscherEnigma||

    Meh. Nate Silver is still a witch.

  • Chrxtoph3r||

    I believe that it is the incessant barrage of 'polling' suggestions that is actually swaying elections. The polls, at the time they are taken, may be accurate, or close to accurate, but with the polls' input factored in, the populace reconsiders its stance. Hence, the polls are not 'predicting' elections...they are 'manipulating' them.

  • 80sman||

    "A couple of states that many were sure would stay blue—North Carolina, Louisiana..."

    Many were "sure?" Hagan was a slight favorite and Landrieu was dead Senator walking

  • Galane||

    This article could also be on why the way TV ratings are calculated is deeply flawed and has been for more than a decade.

    There has been such an explosion in the amount and variety of television entertainment, in cable and satellite services, and in broadcast TV since 2009, that traditional methods that survey a tiny fraction of the 300+ million people in the USA simply *cannot be accurate*.

    Methods developed for a country with a much smaller population and with most areas having at most 5 broadcast channels, can't work when there are far more people, most of whom have many more channels available.

    Where I am went from 5 broadcast channels (2, 4, 6, 7, 12) to 30. Add that to a cable or satellite service with 100 to 500 channels.

    After the obviously erroneous ratings from Nielsen Media Research on the TV series "Firefly", NMR did make some changes but did not say what they changed, except for including reports from DVRs like TiVo. "Firefly" had been the #1 TiVo'ed show when it was on. Its official fan forum on collected more posts in a few months than the X-Files forum got in *eight years*. The DVD set sales broke records on - yet NMR stuck by their very obviously incorrect numbers. The world wide web and other ways of measuring viewership of a TV show showed how wrong the traditional methods were, but Fox stuck by NMR's reports, canceled it and refused to revive it despite all the evidence the show was really extremely popular.

  • retiredfire||

    If you make a call to your cable carrier, they can see what is happening on your TV, as you speak.
    Don't you think, if they can see it, when they make the right connections, they don't include a "background" connection that tells them what all the sets are tuned into?
    Just think of the value that would be to the broadcast companies and how the cable provider could sell it.
    Ratings aren't done by only chosen households, anymore.
    They know what you are watching, even if you aren't aware of it.

  • patches44||

    Well yes, looking at television ratings is instructive as to the flaws and accuracy in modern political polling. Just not in the way most think. As was said, modern technology can give highly accurate television ratings. So why use the old sampling methods? Because those who pay for the ratings benefit more from the older sampling. In this case the absolute last thing any television network or cable carrier wants is to tell advertisers the accurate truth about how few people are really watching their shows. In much the same way, polling firms are ultimately beholden to political dollars. No candidate or party will throw more business their way if they tell them that their total base of support is exclusively dead people. (Unless Chicago. Gotta allow for local traditions)

  • ||

    I got an email before the 2004 election from one of the Soros groups telling me to answer all calls and tell pollsters who I was voting for and to make eye contact with people doing polling outside of the voting booths. (My sister is a leftist.) On election day I happened to be making conversation with a statistician who assured me that no matter what I believed the polls were accurate. I tried to explain to him all of the problems with young people and cell phones and biased reporting. He was so confident that all of the variables had been accounted for. Yes, both sides encourage people to answer their phones. Both sides encourage people to talk to exit polls. Both sides use cell phones only. Both sides answer phone calls from strangers.
    His staunch unwillingness to consider the possibility that pollsters could be wrong convinced me that they were working in a yes-chamber.

  • rxc||

    Polling, especially political polling, is not scientific. So, the first step in dealing with this issue is to stop calling it "the science of polling". Political polling involves sampling of sentient beings who actually care about the outcome of the event you are trying to understand, and those sentient beings change their minds amazingly often, and for the seemingly most trivial of reasons. They are not red balls and blue balls that you pull out of a bag. They are balls that may be blue in the bag, but which change to red when you look at them, and then to green when you show them to someone else. Or they may be a bright purple, so that they don't know what color they want to be until the moment that they MUST choose, when they might even decide to go back into the bag and refuse to be looked at.

    It is not scientific. Surveys may be useful for product marketing and even for political strategists to try to design their campaigns, but they are not reproducible and have demonstrated that they frequently make bad predictions in political contexts. Anyone who relies on them does so at his peril.

    At least we are starting to recognize this, and maybe it will finally put to rest the progressive plan to start to do census "counts" on the basis of samples.

  • patches44||

    A great discussion about modern polling. But it leaves out the other large elephant in the room. It assumes that honest accurate polling is itself the goal. But these days that is not always the case. More and more real world elements of campaigns, from momentum, to fundraising, to participation in debates, is driven not by election results but by pre election polling. This means there is a benefit to skew polling in your or your candidates favor wherever possible. Polling firms do not exist in a vacuum. Polls are paid for by politicians and news organizations at the end of the day. These have a stake in the game. This can result in results intentionally weighted to clear political purpose.

    Case in point that tends to raise the most eyebrows among the electorate: the mysterious circumstance of Jeb Bush. While doing abysmally, he keeps mysteriously polling just well enough to stay on stage and in the media. Yet nobody has ever actually met one of these unicorns known as a Jeb voter in the wild. But there are big moneyed interests behind keeping him on stage. Does anyone honestly think Jeb has more current niche support than Rand Paul and the Paul family's semi-cultish libertarians? (Not saying it is a huge thing; both are in low single digits.)

    The point is when the polling stops simply being a tool of measurement, and instead becomes the driving force of the story itself.

  • poorgrandchildren||

    Pollsters have a lot of difficulty predicting which way the dead (and the living) will vote more than once.

  • Scottie Rock||

    Watching the results come in on the night Bevin got elected was one of the best nights of my life, politics-wise. All I kept hearing was how Bevin was a bad candidate and the GOP had no shot at winning. Then, a beautiful thing happened: the race was called within only a couple hours in favor of Bevin, whom I had met the year before when he was primary-ing McConnell. Was Bevin the perfect candidate? No, not at all. I disagree with him on a number of issues, but he was the best choice Kentucky had and I was so proud of my state for recognizing that.

  • Westmiller||

    One aspect overlooked: People Lie!!
    With more knowledge of polling methods and their effects on actual political discourse, people can *intentionally* skew the results to their benefit. For example, it wouldn't surprise me if a large number of survey participants in Iowa were Hillary supporters who asserted that they were regular Republican voters and *loved* Donald Trump.
    True, it's hard to implement such a tactic on a broad level, but a Twitter or Facebook campaign suggesting that response would likely encourage a large enough group to participate in a "bold face" lie to pollsters.
