Why Polls Don't Work

After decades of gradual improvement, the science of predicting election outcomes has hit an accuracy crisis.


On October 7, 2015, the most famous brand in public opinion polling announced it was getting out of the horse-race survey business. Henceforth, Gallup would no longer ask Americans whom they would vote for if the next election were held today.

"We believe to put our time and money and brain-power into understanding the issues and priorities is where we can most have an impact," Gallup Editor in Chief Frank Newport told Politico. Let other operations focus on predicting voter behavior, the implication went; Gallup would dig deeper into what the public thinks about current events.

Still, Gallup's move, which followed an embarrassingly inaccurate performance by the company in the 2012 elections, reinforces the perception that something has gone badly wrong in polling and that even the most experienced players are at a loss about how to fix it. Heading into the 2016 primary season, news consumers are facing an onslaught of polls paired with a nagging suspicion that their findings can't be trusted. Over the last four years, pollsters' ability to make good predictions about Election Day has seemingly deteriorated before our eyes.

The day before the 2014 midterms, all the major forecasts declared Republicans likely to take back the Senate. The Princeton Election Consortium put the odds at 64 percent; The Washington Post, most bullish of all, put them at 98 percent. But the Cook Political Report considered all nine "competitive" seats to be tossups—too close to call. And very few thought it likely that Republicans would win in a landslide.

Conventional wisdom had it that the party would end up with 53 seats at most, and some commentators floated the possibility that even those numbers were biased in favor of the GOP. The week before the election, for example, HuffPollster noted that "polling in the 2006 and 2010 midterm elections and the 2012 presidential election all understated Democratic candidates. A similar systematic misfire in 2014 could reverse Republican leads in a small handful of states."

We soon learned that the polls were actually overstating Democratic support. The GOP ended up with 54 Senate seats. States that were expected to be extremely close calls, such as Kansas and Iowa, turned into runaways for the GOP. A couple of states that many were sure would stay blue—North Carolina, Louisiana—flipped to red. The pre-election surveys consistently underestimated how Republicans in competitive races would perform.

The following March, something similar happened in Israel. Both pre-election and exit polls called for a tight race, with the Likud Party, headed by Prime Minister Benjamin Netanyahu, and the Zionist Union Party, led by Isaac Herzog, in a virtual tie. Instead, Likud easily captured a plurality of the vote and picked up 12 seats in the Knesset.

The pattern repeated itself over the summer, this time in the United Kingdom, where the 2015 parliamentary election was roundly expected to produce a stalemate. A few polls gave the Conservative Party a slight lead, but not nearly enough of one to guarantee it would be part of the eventual governing coalition. You can imagine the surprise, then, when the Tories managed to grab 330 of the 650 seats—not just a plurality but an outright majority. The Labour and Liberal Democrat parties meanwhile lost constituencies the polls had predicted they would hold on to or take over.

And then there was Kentucky. This past November, the Republican gubernatorial candidate was Matt Bevin, a venture capitalist whom Mitch McConnell had trounced a year earlier in a Senate primary contest. As of mid-October, Bevin trailed his Democratic opponent, Jack Conway, by 7 points. By Halloween he'd narrowed the gap somewhat but was still expected to lose. At no point was Bevin ahead in The Huffington Post's polling average, and the site said the probability that Conway would beat him was 88 percent. Yet Bevin not only won, he won by a shocking 9-point margin. Pollsters once again had flubbed the call.

Why does this suddenly keep happening? The morning after the U.K. miss, the president of the British online polling outfit YouGov was asked just that. "What seems to have gone wrong," he answered less than satisfactorily, "is that people have said one thing and they did something else in the ballot box."

'To Lose in a Gallup and Win in a Walk'

Until recently, the story of polling seemed to be a tale of continual improvement over time. As technology advanced and our grasp of probability theory matured, the ability to predict the outcome of an election seemed destined to become ever more reliable. In 2012, the poll analyst Nate Silver correctly called the eventual presidential winner in all 50 states and the District of Columbia, besting his performance from four years earlier, when he got 49 states and D.C. right but mispredicted Indiana.

There have been major polling blunders, including the one that led to the ignominious "DEWEY DEFEATS TRUMAN" headline in 1948. But whenever the survey research community has gotten an election wrong, it has responded with redoubled efforts to figure out why and how to do better in the future. Historically, those efforts have been successful.

Until Gallup burst onto the scene, The Literary Digest was America's surveyor of record. The weekly newsmagazine had correctly predicted the outcome of the previous four presidential races using a wholly unscientific method: mailing out postcards asking people whom they planned to vote for. The exercise served double duty as a subscription drive and an opinion poll.

In 1936, some 10 million such postcards were distributed. More than 2 million were completed and returned, an astounding number by the standards of modern survey research, which routinely draws conclusions from fewer than 1,000 interviews. From those responses, the editors estimated that Alfred Landon, Kansas' Republican governor, would receive 57 percent of the popular vote and beat the sitting president, Franklin Delano Roosevelt.

In fact, Roosevelt won the election handily—an outcome predicted, to everyone's great surprise, by a young journalism professor named George Gallup. Recognizing that the magazine's survey methodology was vulnerable to self-selection bias, Gallup set out to correct for it. Among The Literary Digest's respondents in California, for example, 92 percent claimed to be supporting Landon. On Election Day, just 32 percent of ballots cast in the state actually went for the Republican. By employing a quota system to ensure his sample looked demographically similar to the voting population, Gallup got a better read despite hearing from far fewer people.

Though far more scientific than what had come before, quota-based surveys in their early years retained a lot of room for error. It took another embarrassing polling miss for the nascent industry to get behind random sampling, the basis for the public opinion research we know today. That failure came in 1948, as another Democratic incumbent battled to hold on to the White House.

In the run-up to the election that year, all the major polls found New York Gov. Thomas Dewey ahead of President Harry Truman. Gallup himself had Dewey at 49.5 percent to Truman's 44.5 percent two weeks out. On the strength of that prediction, though the results were still too close to call, the Chicago Tribune went to press on election night with a headline announcing the Republican challenger's victory. When the dust had cleared, it was Truman who took 49.5 percent of the vote to Dewey's 45.1 percent, becoming, in the immortal words of the radio personality Fred Allen, the first president ever "to lose in a Gallup and win in a walk."

That embarrassment pushed the polling world to adopt more rigorous methodological standards, including a move to "probability sampling," in which every voter—in theory, anyway—has exactly the same chance of being randomly selected to take a survey. The result: more accurate research, at least at the presidential level.

Another boon to the polling industry around this time was the spread of the telephone. As call volumes shot up during and immediately following World War II, thousands of new lines had to be installed to keep up with the rising demand. Although Gallup continued to survey people the old-fashioned way—in person, by sending interviewers door-to-door—new players entered the marketplace and began to take advantage of telephonic communications to reach people more cheaply.

A series of analyses from the National Council on Public Polls (NCPP) provides quantitative evidence that polling got better during this period. Analysts looked at the final survey from each major public-facing outfit before a given presidential election and determined its "candidate error"—loosely, the percentage-point difference between the poll's estimate and the actual vote share garnered by a candidate. From 1936 to 1952, the overall average candidate error per cycle ranged from 2.5 to 6 percentage points. Between 1956 and 2008, it ranged from 0.9 to 3.1 percentage points—significantly better.
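The NCPP's exact formula isn't spelled out here, but a minimal sketch of the idea—averaging each candidate's absolute gap between final poll and actual result—might look like this, using the 1948 Gallup figures from the Dewey/Truman story:

```python
def candidate_error(poll_estimates, actual_results):
    """Average absolute difference, in percentage points, between each
    candidate's final poll estimate and their actual vote share."""
    return sum(abs(poll_estimates[c] - actual_results[c])
               for c in actual_results) / len(actual_results)

# 1948: Gallup's final pre-election poll vs. the actual vote.
poll = {"Truman": 44.5, "Dewey": 49.5}
actual = {"Truman": 49.5, "Dewey": 45.1}
print(round(candidate_error(poll, actual), 2))  # → 4.7
```

By that rough measure, the Dewey/Truman poll missed by almost five points per candidate—comfortably outside even the worst cycle of the modern era.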

Polls were never perfect. Gallup's final survey before the 1976 election had Gerald Ford ahead of the eventual winner, Jimmy Carter. Still, in the post-Dewey/Truman era, pollsters amassed a respectable record that paved the way for Nate Silver to make a name for himself aggregating their data into an extraordinarily powerful predictive model.

Silver's success comes not just from averaging poll numbers—something RealClearPolitics was already doing when Silver launched his blog, FiveThirtyEight, in early 2008—but from modifying the raw data in various ways. Results are weighted based on the pollster's track record and how recent the survey is, then fed into a statistical model that takes into account other factors, such as demographic information and trend-line changes in similar states. The payoff is a set of constantly updating, highly accurate state-by-state predictions that took the political world by storm during the 2008 and 2012 elections.
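A toy version of that weighting step might look like the following. The exponential recency decay and the pollster ratings here are invented for illustration; FiveThirtyEight's actual model is far more elaborate.

```python
import math

def aggregate(polls, decay_days=14):
    """Combine polls, trusting recent surveys and well-rated pollsters more.
    Each poll is (days_old, pollster_rating in [0, 1], candidate_pct)."""
    num = den = 0.0
    for days_old, rating, pct in polls:
        # Older polls count exponentially less; better pollsters count more.
        weight = rating * math.exp(-days_old / decay_days)
        num += weight * pct
        den += weight
    return num / den

polls = [(1, 0.9, 48.0),    # fresh survey from a well-rated pollster
         (10, 0.5, 44.0),   # older survey, middling pollster
         (30, 0.8, 41.0)]   # stale survey, mostly discounted
print(round(aggregate(polls), 1))
```

The aggregate lands near the freshest, best-rated poll—which is the point: a naive average of the three numbers would be dragged down by the stale one.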

The Seeds of Disaster Ahead

Despite Silver's high-profile triumph in 2012—the morning of the election, his model gave Barack Obama a 90.9 percent chance of winning even as the Mitt Romney campaign was insisting it had the race locked up—that year brought with it some early signs of trouble for the polling industry.

The NCPP's average candidate error was still just 1.5 percent for that cycle, meaning most pollsters called the race about right. But Gallup's final pre-election survey showed the former Massachusetts governor squeaking out a win by one percentage point. Polls commissioned by two prominent media entities, NPR and the Associated Press, also put Romney on top (though in all three cases his margin of victory was within the poll's margin of error). In reality, Obama won by four percentage points nationally, capturing 332 electoral votes to his opponent's 206. Like Truman before him, the president had lost "in a Gallup" while easily holding on to the Oval Office in real life. The world's most famous polling company—having performed dead last among public pollsters according to a post-election analysis by FiveThirtyEight—responded by scrapping plans to survey voters ahead of the 2014 midterms and embarking on an extensive post mortem of what had gone wrong.

Things have only gotten worse for pollsters since 2012. With the misses in the 2014 U.S. midterms, in Israel, in the U.K., and in the 2015 Kentucky governor's race, it's getting hard to escape the conclusion that traditional ways of measuring public opinion are no longer capable of predicting outcomes the way they more or less once did.

One reason for this is that prospective voters are much harder to reach than they used to be. For decades, the principal way to poll people has been by calling them at home. Even Gallup, long a holdout, switched to phone interviews in 1988. But landlines, as you may have noticed, have become far less prevalent. In the United States, fixed telephone subscriptions peaked in 2000 at 68 per 100 people, according to the World Bank. Today, we're down to 40 per 100 people, as more and more Americans—especially young ones—choose to live in cell-only households. Meanwhile, Federal Communications Commission regulations, which require that cellphones be dialed by hand rather than by auto-dialer, make it far more expensive to survey people via mobile phones. On top of that, more people are screening their calls, refusing to answer attempts from unknown numbers.

In addition to being pickier about whom they'll speak to, cellphone users are less likely to qualify for a given survey. In August, I got a call from an interviewer looking for opinions from voters in Florida. That's understandable: I grew up in Tampa and still have an 813 area code. But since I no longer live in the state, I'm not eligible to vote there, yet some poor polling company had to waste time and money dialing me—probably repeatedly, until I finally picked up—before finding that out.

All these developments have caused response rates to plummet. When I got my first job at a polling firm after grad school, I heard tales of a halcyon era when researchers could expect to complete interviews with 50 percent or more of the people they set out to reach. By 1997, that number had fallen to 36 percent, according to Pew Research Center. Today it's in the single digits.

This overturns the philosophy behind probability polling, which holds that a relatively small number of people can stand in for the full population as long as they are chosen randomly from the larger group. If, on the other hand, the people who actually take the survey are systematically different from those who don't—if they're older, or have a different education level, or for some other reason are more likely to vote a certain way—the methodology breaks down.
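A quick simulation illustrates both halves of that claim. In a hypothetical electorate where 52 percent back one candidate, a truly random sample of 1,000 lands close to the mark—but a sample drawn from a response pool that differs systematically from the electorate does not. (All figures here are invented.)

```python
import random

random.seed(0)

# A hypothetical electorate: 52 percent support candidate A (1 = supporter).
population = [1] * 520_000 + [0] * 480_000

# A truly random sample of 1,000 lands close to the real 52 percent...
sample = random.sample(population, 1000)
print(sum(sample) / len(sample))

# ...but if A's supporters answer the phone only half as often as opponents,
# the pool of people who respond no longer resembles the electorate, and
# even a perfectly random draw from that pool is badly skewed.
respondents = [v for v in population if random.random() < (0.05 if v else 0.10)]
biased_sample = random.sample(respondents, 1000)
print(sum(biased_sample) / len(biased_sample))
```

The second estimate comes out around 35 percent—wrong by some 17 points, no matter how many respondents you pile on, because the error is systematic rather than random.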

A series of Pew Research studies suggests, counterintuitively, that falling response rates may not actually matter. "Telephone surveys that include landlines and cell phones and are weighted to match the demographic composition of the population continue to provide accurate data on most political, social and economic measures," concludes the most recent study, conducted in the first quarter of 2012. "This comports with the consistent record of accuracy achieved by major polls when it comes to estimating election outcomes, among other things." But that was before the streak of missed calls in the four years since. A savvy observer would be right to wonder if people's growing unwillingness to participate in surveys has finally rendered the form inadequate to the task of predicting election outcomes.

Methodological Obstacles Mount

Because it's no longer possible to get a truly random (and therefore representative) sample, pollsters compensate by making complicated statistical adjustments to their data. If you know your poll reached too few young people, you might "weight up" the millennial results you did get. But this involves making some crucial assumptions about how many voters on Election Day will hail from different demographic groups.

We can't know ahead of time what the true electorate will look like, so pollsters have to guess. In 2012, some right-of-center polling outfits posited that young voters and African Americans would be less motivated to turn out than they had been four years earlier. That assumption turned out to be wrong, and their predictions about who would win were wrong along with it. But in the elections since then, many pollsters have made the opposite mistake—underestimating the number of conservatives who would vote and thus concluding that races were closer than they really were.
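Here is a bare-bones sketch of that weighting step, with invented figures, showing how the final estimate moves with the turnout assumption:

```python
# Invented figures: millennials are 15% of our respondents but assumed
# to be 25% of the electorate; older voters make up the rest.
def weighted_support(responses, assumed_turnout):
    """responses: {group: (n_respondents, pct_supporting_candidate)}
    assumed_turnout: {group: assumed share of the electorate}"""
    return sum(assumed_turnout[g] * pct for g, (_, pct) in responses.items())

responses = {"millennial": (150, 60.0), "older": (850, 45.0)}

# Unweighted, the estimate simply reflects who answered the phone:
raw = sum(n * pct for n, pct in responses.values()) / 1000
print(raw)  # → 47.25

# Weighted, it reflects who we THINK will vote -- and moves with that guess:
print(weighted_support(responses, {"millennial": 0.25, "older": 0.75}))  # → 48.75
```

Shift the assumed millennial share down a few points and the weighted estimate slides back toward the raw number—which is exactly how a wrong turnout model turns into a wrong prediction.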

One way pollsters try to figure out who will show up on Election Day is by asking respondents how likely they are to vote. But this ends up being harder than it seems. People, it turns out, are highly unreliable at reporting their future voting behavior. On a 1–9 scale, with 9 being "definitely will," almost everyone will tell you they're an 8 or 9. Many of them are wrong. Maybe they're just overly optimistic about the level of motivation they'll feel on Election Day; maybe they're genuinely planning to vote, but something will come up to prevent them; or maybe they're succumbing to a phenomenon called "social desirability bias," in which people are embarrassed to admit aloud that they won't be doing their civic duty.

Some polling outfits—Gallup foremost among them—have tried to find more sophisticated methods of weeding out nonvoters. For example, they might begin a survey with a battery of questions, like whether the respondent knows where his or her local polling place is. But these efforts have met with mixed results. At the end of the day, correctly predicting an electoral outcome is largely dependent on getting the makeup of the electorate right. That task is much easier stated than achieved.

There's another open question regarding the 2016 race: How will Silver and other election modelers fare without the Gallup numbers to incorporate into their forecasts? There's no shortage of raw data to work with, but not all of it comes from pollsters that abide by gold-star standards and practices. "There's been this explosion of data, because everyone wants to feed the beast and it's very clickable," says Kristen Soltis Anderson, a pollster at Echelon Insights (and my former boss at The Winston Group). "But quantity has come at the expense of quality. And as good research has become more expensive, that's happening at the same time media organizations have less money to spend on polls and want to get more polls out of the limited polling budget that they have."

In service of that bottom line, some outfits experiment with nontraditional methodologies. Public Policy Polling on the left and Rasmussen Reports on the right both use Interactive Voice Response technology to do robo-polling—questionnaires are pre-recorded and respondents' answers are either typed in on their keypads ("Press 99 for 'I don't know'") or registered via speech recognition software. Other companies, such as YouGov, conduct surveys entirely online using non-probabilistic opt-in panels.

Meanwhile, "Gallup uses rigorous polling methodologies. It employs live interviewers; it calls a lot of cell phones; it calls back people who are harder to reach," FiveThirtyEight blogger Harry Enten wrote after the announcement. "Polling consumers are far better off in a world of [Gallups] than in a world of Zogby Internet polls and fly-by-night surveys from pollsters we've never heard of."

As methodological obstacles to good, sound research mount, Enten and Silver have also found evidence of a phenomenon they call "herding." Poll results in recent cycles have tended to converge on a consensus point toward the end, which suggests that some pollsters are tweaking their numbers to look more like everyone else's. When a race includes a "gold standard" survey like Gallup's, the other polls in that race have lower errors. Without Gallup in the mix, less rigorous polling outfits won't have its data to cheat off of.

All of which spells trouble for the ability of anyone to predict what's going to happen next November 8. The industry's track record since 2012 should make us think twice about putting too much stock in what the horse-race numbers are saying. At best, all a poll can know is what would happen if the election were held today. And there's no guarantee anymore that the polls are even getting that right.

The Future of Polling Is All of the Above

As good polls became harder and costlier to conduct, academics and politicos alike began to wonder whether there might be a better approach—one that eschews the interview altogether. Following the 2008 election, a flurry of papers suggested that the opinions people voluntarily put onto social media platforms—Twitter in particular—could be used to predict real-world outcomes.

"Can we analyze publicly available data to infer population attitudes in the same manner that public opinion pollsters query a population?" asked four researchers from Carnegie Mellon University in 2010. Their answer, in many cases, was yes. A "relatively simple sentiment detector based on Twitter data replicates consumer confidence and presidential job approval polls," their paper, from the 2010 Proceedings of the International Association for the Advancement of Artificial Intelligence Conference on Weblogs and Social Media, concluded. That same year Andranik Tumasjan and his colleagues from the Technische Universität München made the even bolder claim that "the mere number of [Twitter] messages mentioning a party reflects the election result."

A belief that traditional survey research was no longer going to be able to stand alone prompted Anderson to begin searching for ways to bring new sources of data into the picture. In 2014, she partnered with the Republican digital strategist Patrick Ruffini to launch Echelon Insights, a "political intelligence" firm.

"We realized that, for regulatory and legal reasons, for cultural and technological reasons, the normal, traditional way we've done research for the last few decades is becoming obsolete," Anderson explains. "The good news is, at the same time people are less likely to pick up the phone and tell you what they think, we are more able to capture the opinions and behaviors that people give off passively. Whether through analyzing the things people tweet, analyzing the things we tell the Google search bar—the modern-day confessional—all this information is out there…and can be analyzed to give us additional clues about public opinion."

But as quickly as some people began hypothesizing that social media could replace polling, others began throwing cold water on the idea. In a paper presented at the 2011 Institute of Electrical and Electronics Engineers Conference on Social Computing, some scholars compared social media analytics with the outcomes of several recent House and Senate races. They found that "electoral predictions using the published research methods on Twitter data are not better than chance."

One of that study's co-authors, Daniel Gayo-Avello of the University of Oviedo in Spain, has gone on to publish extensive critiques of the social-media soothsayers, including a 2012 article bluntly titled "No, You Cannot Predict Elections with Twitter." Among other issues, he noted that studies failing to find the desired result often never see the light of day—the infamous "file-drawer effect"—which skews the published literature toward positive findings.

The biggest problem with trying to use social media analysis in place of survey research is that the people who are active on those platforms aren't necessarily representative of the ones who aren't. Moreover, there's no limit on the amount one user can participate in the conversation, meaning a small number of highly vocal individuals can (and do) dominate the flow. A study in the December 2015 Social Science Computer Review found that the people who discuss politics on Twitter overwhelmingly are male, live in urban areas, and have extreme ideological preferences. In other words, social media analyses suffer from the same self-selection bias that derailed The Literary Digest's polling project almost 80 years ago: Anyone can engage in online discourse, but not everyone chooses to.

Another concern is that most social media analysis relies on sentiment analysis, meaning it uses language processing software to figure out which tweets are relevant, and then—this is key—to accurately code them as positive or negative. Yet in 2014, when PRWeek magazine tried to test five companies that claimed they could measure social media sentiment, the results were disappointing. "Three of the five agencies approached by PRWeek that initially agreed to take part later backed down," the article reads. "They pointed to difficulties such as weeding out sarcasm and dealing with ambiguity. One agency said that while it was easy to search for simple descriptions like 'great' or 'poor', a phrase like 'it was not great' would count as a positive if there was no advanced search facility to scan words around key adjectives. Such a system is understood to be in the development stage, the agency said."
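The "it was not great" failure the agency describes is easy to reproduce with a naive keyword counter. A crude negation check—purely illustrative, not what any vendor actually ships—shows one way to patch it:

```python
POSITIVE = {"great", "good", "love"}
NEGATIVE = {"poor", "bad", "hate"}

def naive_sentiment(text):
    """Count keyword hits -- the 'simple search' approach from the article."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def negation_aware(text, window=2):
    """Flip a keyword's polarity if a negation appears just before it."""
    words = text.lower().split()
    score = 0
    for i, w in enumerate(words):
        if w not in POSITIVE and w not in NEGATIVE:
            continue
        sign = 1 if w in POSITIVE else -1
        # Scan the preceding couple of words for "not", "never", or "-n't".
        if any(p in ("not", "never") or p.endswith("n't")
               for p in words[max(0, i - window):i]):
            sign = -sign
        score += sign
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(naive_sentiment("it was not great"))   # → positive (the failure mode)
print(negation_aware("it was not great"))    # → negative
```

Even this patch handles only the simplest cases—sarcasm like "great, just great" would still be scored as doubly positive, which is why the agencies in question backed out of the test.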

Even if we're able to crack the code of sentiment analysis, it may not be enough to render the fire hose of social media content valuable for election-predicting purposes. After more than 12 months of testing Crimson Hexagon's trainable coding software, which "classifies online content by identifying statistical patterns in words," Pew Research Center concluded that it achieves 85 percent accuracy at differentiating between positive, negative, neutral, and irrelevant sentiments. Yet Pew also found that the reaction on Twitter to current events often differs greatly from public opinion as measured by statistically rigorous surveys.

"In some instances, the Twitter reaction was more pro-Democratic or liberal than the balance of public opinion," Pew reported. "For instance, when a federal court ruled last February that a California law banning same-sex marriage was unconstitutional…Twitter conversations about the ruling were much more positive than negative (46% vs. 8%). But public opinion, as measured in a national poll, ran the other direction." Yet in other cases, the sentiment on Twitter skewed to the political right, making it difficult if not impossible to correct for the error with weighting or other statistical tricks.

At Echelon Insights, Ruffini and Anderson have come to roughly the same conclusion. When I asked if it's possible to figure out who the Republican nominee will be solely from the "ambient noise" of social media, they both said no. But that doesn't mean you can't look at what's happening online, and at what volume, to get a sense of where the public is on various political issues.

"It's easier for us to say 'Lots more people are talking about trade' than it is for us to make a more subjective judgment about whether people are talking more positively about trade," Anderson tells me. "But because of the way a lot of these political debates break down, by knowing who's talking about a topic, we can infer how that debate is playing out."

So what is the future of public opinion research? For the Echelon team, it's a mixture of traditional and nontraditional methods—an "all-of-the-above" approach. Twitter data isn't the Holy Grail of election predictions, but it can still be extraordinarily useful for painting a richer and more textured picture of where the public stands.

"It's just another data point," says Ruffini. "You have the polls and you have the social data, and you have to learn to triangulate between the two."

NEXT: Ross Ulbricht, Sentenced to Life for Running Website Silk Road, Files Appeal

Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Report abuses.

  1. I blame the mass screenings of chronilogically favored mammals hatched after 1985

    1. You can screen my massive poll.

    2. Monotremes?

      1. Those who have a problem with species identity. YES!

  2. You said “hard polling” uh huh huh huh

  3. My last pay check was $9500 working 12 hours a week online. My sisters friend has been averaging 15k for months now and she works about 20 hours a week. I can’t believe how easy it was once I tried it out. This is what I do..

    Clik This Link inYour Browser…….

    ? ? ? ? http://www.WorkPost30.com

    1. DO YOU WORK FOR A POLLING FIRM?!?!?!?!?!?!?!

        1. Servitude = employment…so, yeah.

  4. Another concern is that most social media analysis relies on sentiment analysis, meaning it uses language processing software to figure out which tweets are relevant, and then?this is key?to accurately code them as positive or negative.


    1. How do they catch people like me? I vote reliably (having missed only one election since I became eligable to vote), but I don’t get surveyed and I don’t use social media. The closest I come is snarking on comments sections – in which I do not actually express who I plan on voting for.

      1. in which I do not actually express who I plan on voting for.

        I know you’re feelin’…Da Bern!

        1. Just for that, I’m sending you an extra flock of snowbirds to vote for more taxes.

          1. That’s crossing a line.
            /preps dueling pistols

    2. Yea, your sarcasm is really helpful right now.

  5. Another concern is that most social media analysis relies on sentiment analysis, meaning it uses language processing software to figure out which tweets are relevant, and then?this is key?to accurately code them as positive or negative.

    I’m sure this software does *very* well.

    1. Language professor pontificating on universal grammar talks about the evils of the double negative, how it is proper in some languages, etc, says there are no languages with a double positive. Smart ass in the back says “Yeah, right”.

      I wonder how any language processor can cope with sarcastic “great” and similar usage. Does “great, just fuckin’ great” count as twice as good?

      It’s hopeless until they can pass the Turing test compared to a wino or beach bum.

  6. Another concern is that most social media analysis relies on sentiment analysis, meaning it uses language processing software to figure out which tweets are relevant, and then?this is key?to accurately code them as positive or negative.

    Words cannot express the faith I have in the power of such processing.

  7. The only poll that counts is the one taken at the voting booth.

    1. Don’t be silly. Do you think they actually count those votes?

      1. Well, they *might* count the “I Voted” stickers they give out.

        1. If you turn in six of them at the local *party redacted* office, you get a small coffee.

          1. Turn in ten, and you get to put a packet of sugar and a teaspoon of powdered creamer in that coffee.

  8. The hardest thing about polling is phrasing the questions in such a way as to get people to respond in a manner that gives you the results that your boss wants.

    1. Predictive analysis is old school, man. Cognitive control is the new thing.

  9. Goddammit, Alan Rickman died

    Alan Rickman, one of the best-loved and most warmly admired British actors of the past 30 years, has died in London aged 69. His death was confirmed on Thursday by his family who said that he died “surrounded by family and friends”. Rickman had been suffering from cancer.

    A star whose arch features and languid diction were recognisable across the generations, Rickman found a fresh legion of fans with his role as Professor Snape in the Harry Potter films. But the actor had been a big-screen staple since first shooting to global acclaim in 1988, when he starred as Hans Gruber, Bruce Willis’s sardonic, dastardly adversary in Die Hard ? a part he was offered two days after arriving in Los Angeles, aged 41.

    Gruber was the first of three memorable baddies played by Rickman: he was an outrageous sheriff of Nottingham in 1991’s Robin Hood: Prince of Thieves, as well as a terrifying Rasputin in an acclaimed 1995 HBO film.

    Happy trails, Hans.

    1. *gasp”

      Alan Rickman is hot.

      I’ll be in my bunk. Old time’s sake, you understand.

    2. I just watched him on Galaxy Quest the other day.

      1. Dude was amazing. I’m going to watch quigley down under today.

      2. By Grabthar’s hammer, what a shame.

      3. http://youtu.be/cfNzZre-sIU

        “No no no! No bloody holly!”

        “But sir!”

    3. Allen L. Rickman’s still around AFAIK.

  10. Strangely, election prediction accuracy broke down right around the same time that rigging became fully implemented.

  11. I do note that a couple of weeks ago you said that people who place bets are a better gauge, and you took to task polls showing Trump in the lead. At that time bettors said Rubio would win.

    That site you used now says people are putting their money on Trump winning the nomination, so in essence agreeing with the polls.


    Uh oh.

  12. Since it is pretty obvious that the Democrats have no intention of nominating Bernie, he should just go independent, like a 4th party after Trump does a 3rd.


  15. The biggest problem is polling itself. Polling is what allows the electorate to be pigeonholed into different sub-collectives and then marketed to/persuaded via emotions/fear/etc. (which we cannot control because they are basic animal instincts). All the ‘problems’ listed in this article are purely technical and non-fatal to polling itself.

    I will become hopeful for the future when the ‘problem’ becomes mass-lying to the pollsters rather than simple non-participation or sampling errors or such. When people stop telling the truth about themselves to some pollster who has no claim to getting the truth from you, then individuals will once again have a chance to be free. And pols/companies/etc will have to (possibly) include rational/logical arguments as well as positive visions of a future – and not just the negative and fear-mongered appeals.

    1. I think the mass-lying problem is already showing itself. Given the amount of worthless tripe calls I already receive on my landline, the temptation to respond to yet another poll caller with blatant lies would be almost impossible to resist.


  17. Meh. Nate Silver is still a witch.


  21. I believe that it is the incessant barrage of ‘polling’ suggestions that is actually swaying elections. The polls, at the time they are taken, may be accurate, or close to accurate, but with the polls’ input factored in, the populace reconsiders its stance. Hence, the polls are not ‘predicting’ elections…they are ‘manipulating’ them.

  22. “A couple of states that many were sure would stay blue—North Carolina, Louisiana…”

    Many were “sure”? Hagan was a slight favorite, and Landrieu was a dead senator walking.

  23. This article could also be on why the way TV ratings are calculated is deeply flawed and has been for more than a decade.

    There has been such an explosion in the amount and variety of television entertainment, in cable and satellite services, and in broadcast TV since 2009, that traditional methods that survey a tiny fraction of the 300+ million people in the USA simply *cannot be accurate*.

    Methods developed for a country with a much smaller population and with most areas having at most 5 broadcast channels, can’t work when there are far more people, most of whom have many more channels available.

    Where I am went from 5 broadcast channels (2, 4, 6, 7, 12) to 30. Add that to a cable or satellite service with 100 to 500 channels.

    After the obviously erroneous ratings from Nielsen Media Research on the TV series “Firefly”, NMR did make some changes but did not say what they changed, except for including reports from DVRs like TiVo. “Firefly” had been the #1 TiVo’ed show when it was on. Its official fan forum on Fox.com collected more posts in a few months than the X-Files forum got in *eight years*. The DVD set sales broke records on Amazon.com – yet NMR stuck by its very obviously incorrect numbers. The world wide web and other ways of measuring viewership of a TV show showed how wrong the traditional methods were, but Fox stuck by NMR’s reports, canceled the show, and refused to revive it despite all the evidence that it was extremely popular.

    1. If you make a call to your cable carrier, they can see what is happening on your TV as you speak.
      Don’t you think that, if they can see it when they make the right connections, they also include a “background” connection that tells them what all the sets are tuned to?
      Just think of the value that would be to the broadcast companies, and how the cable provider could sell it.
      Ratings aren’t done by only chosen households anymore.
      They know what you are watching, even if you aren’t aware of it.

    2. Well, yes, looking at television ratings is instructive as to the flaws and accuracy of modern political polling, just not in the way most think. As was said, modern technology can give highly accurate television ratings. So why use the old sampling methods? Because those who pay for the ratings benefit more from the older sampling. In this case, the absolute last thing any television network or cable carrier wants is to tell advertisers the accurate truth about how few people are really watching their shows. In much the same way, polling firms are ultimately beholden to political dollars. No candidate or party will throw more business their way if they tell them that their total base of support is exclusively dead people. (Unless Chicago. Gotta allow for local traditions.)


  26. I got an email before the 2004 election from one of the Soros groups telling me to answer all calls and tell pollsters who I was voting for, and to make eye contact with people doing polling outside of the voting booths. (My sister is a leftist.) On election day I happened to be making conversation with a statistician who assured me that, no matter what I believed, the polls were accurate. I tried to explain to him all of the problems with young people and cell phones and biased reporting. He was so confident that all of the variables had been accounted for. Yes, both sides encourage people to answer their phones. Both sides encourage people to talk to exit pollsters. Both sides use cell phones only. Both sides answer phone calls from strangers.
    His staunch unwillingness to consider the possibility that pollsters could be wrong convinced me that they were working in a yes-chamber.


  28. Polling, especially political polling, is not scientific. So, the first step in dealing with this issue is to stop calling it “the science of polling”. Political polling involves sampling of sentient beings who actually care about the outcome of the event you are trying to understand, and those sentient beings change their minds amazingly often, and for the seemingly most trivial of reasons. They are not red balls and blue balls that you pull out of a bag. They are balls that may be blue in the bag, but which change to red when you look at them, and then to green when you show them to someone else. Or they may be a bright purple, so that they don’t know what color they want to be until the moment that they MUST choose, when they might even decide to go back into the bag and refuse to be looked at.

    It is not scientific. Surveys may be useful for product marketing and even for political strategists trying to design their campaigns, but they are not reproducible and have demonstrated that they frequently make bad predictions in political contexts. Anyone who relies on them does so at his peril.

    At least we are starting to recognize this, and maybe it will finally put to rest the progressive plan to start to do census “counts” on the basis of samples.

  29. A great discussion about modern polling. But it leaves out the other large elephant in the room: it assumes that honest, accurate polling is itself the goal. These days that is not always the case. More and more, real-world elements of campaigns, from momentum to fundraising to participation in debates, are driven not by election results but by pre-election polling. This means there is a benefit to skewing polling in your or your candidate’s favor wherever possible. Polling firms do not exist in a vacuum. Polls are paid for by politicians and news organizations at the end of the day, and these have a stake in the game. This can result in numbers intentionally weighted to a clear political purpose.

    Case in point that tends to raise the most eyebrows among the electorate: the mysterious circumstance of Jeb Bush. While doing abysmally, he keeps mysteriously polling just well enough to stay on stage and in the media. Yet nobody has ever actually met one of these unicorns known as a Jeb voter in the wild. But there are big moneyed interests behind keeping him on stage. Does anyone honestly think Jeb has more current niche support than Rand Paul and the Paul family’s semi-cultish libertarians? (Not saying it is a huge thing; both are in low single digits.)

    The point is that when polling stops simply being a tool of measurement, it becomes the driving force of the story itself.


  32. Pollsters have a lot of difficulty predicting which way the dead (and the living) will vote more than once.

  33. Watching the results come in on the night Bevin got elected was one of the best nights of my life, politics-wise. All I kept hearing was how Bevin was a bad candidate and the GOP had no shot at winning. Then, a beautiful thing happened: the race was called within only a couple hours in favor of Bevin, whom I had met the year before when he was primary-ing McConnell. Was Bevin the perfect candidate? No, not at all. I disagree with him on a number of issues, but he was the best choice Kentucky had and I was so proud of my state for recognizing that.


  36. One aspect overlooked: People Lie!!
    With more knowledge of polling methods and their effects on actual political discourse, people can *intentionally* skew the results to their benefit. For example, it wouldn’t surprise me if a large number of survey participants in Iowa were Hillary supporters who asserted that they were regular Republican voters and *loved* Donald Trump.
    True, it’s hard to implement such a tactic on a broad level, but a Twitter or Facebook campaign suggesting that response would likely encourage a large enough group to participate in a bald-faced lie to pollsters.
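The skew such a tactic would produce can be sketched numerically. In the toy simulation below (all rates invented for illustration), a candidate's true support is 45%, but 10% of opposing respondents claim to back that candidate when polled.

```python
# Toy simulation of strategic lying to pollsters. The support level and
# liar rate are invented; the point is only the direction and size of skew.
import random

random.seed(0)

def simulate_poll(n=1000, true_a=0.45, liar_rate=0.10):
    """Fraction of n respondents who *report* supporting Candidate A."""
    reported_a = 0
    for _ in range(n):
        supports_a = random.random() < true_a
        if not supports_a and random.random() < liar_rate:
            supports_a = True  # an opposing supporter lies to the pollster
        reported_a += supports_a
    return reported_a / n

print(simulate_poll())  # reported support runs several points above the true 45%
```

With these assumed rates the expected reported support is 0.45 + 0.55 × 0.10 = 50.5%, a 5.5-point error that no amount of sample-size increase or demographic weighting can detect, since the bias is in the responses themselves.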

