A new study suggests that, contrary to what you might think, pointing out when politicians are lying (or mistaken) may encourage them to be more careful with the facts. The experiment, described in a working paper by political scientists Brendan Nyhan of Dartmouth and Jason Reifler of Georgia State University and summarized in a report published by the New America Foundation, involved randomly assigning 1,169 state legislators to one of three groups: those who received a letter "reminding them of the risks to their reputation and electoral security if they are caught making questionable statements," those who received no letter, and those who received a letter "stating that we were monitoring campaign accuracy." The third, "placebo" condition was included to control for the possibility that merely knowing they were being studied would cause legislators to behave differently.
After sending the letters, Nyhan and Reifler looked at how many legislators in each group "received a negative rating from the PolitiFact affiliate in their state" or were criticized for inaccurate statements in articles indexed by Nexis during a three-month period in 2012. (The articles were coded by a research assistant who did not know to which groups the subjects had been assigned.) Nyhan and Reifler found that the legislators in the "treatment" group were substantially less likely to be called out than the legislators in either of the other two groups. "In all," they report, "21 legislators in the placebo and control conditions had the accuracy of their claims questioned by PolitiFact or in Nexis (2.7%), compared with 4 in the treatment condition (1.0%)." In other words, the legislators reminded of the potential fallout from independent fact checking were about one-third as likely to have their claims questioned as the legislators who were not. Although the numbers criticized for misstatements were small in all the groups, this overall difference was "highly significant (p < .02)."
Nyhan and Reifler sum up their results this way:
In the first field experiment of its kind, we find that the randomized provision of a series of letters highlighting the electoral and reputational risks of having questionable statements exposed by fact-checkers significantly reduced the likelihood that legislators in nine U.S. states would receive a negative fact-checking rating or have the accuracy of their claims questioned publicly. We found no evidence that these results were driven by legislators speaking less frequently or receiving less coverage, suggesting instead that they were less likely to make inaccurate statements rather than being silenced more generally.
They conclude with a plea for more fact checking:
These results indicate that fact-checking should not be discredited by the continued prevalence of misinformation and misperceptions. While fact-checking may be ineffective at changing public opinion, its role as a monitor of elite behavior may justify the continued investment of philanthropic and journalistic resources. Indeed, given the very small numbers of legislators whose accuracy is currently being questioned by fact-checkers or other sources, one could argue that fact-checking should be expanded in the U.S. so that it can provide more extensive and consistent monitoring to politicians at all levels of government.
As much as I'd like to believe that calling attention to error makes it less common (a hypothesis for which my own career provides only limited support), one could also argue that what we need is more generic letters of the sort sent by Nyhan and Reifler, which apparently had an impact even though the actual risk of being criticized for inaccuracy was very low. Elegant as their experiment is, it does not actually show that the experience of being publicly corrected makes politicians more accurate. It might instead cause them to dig in their heels, especially if they (rightly or wrongly) perceive the criticism as unfair or ideologically motivated. In fact, the measures Nyhan and Reifler use for accuracy—PolitiFact ratings and criticism in the mainstream press—may be so skewed that what they are actually measuring is adherence to a certain set of beliefs deemed acceptable by the folks doing the fact checking.