After President Donald Trump's loss in 2020, a majority of his supporters believed the election had been rigged. Some adopted wild conspiracy theories involving Chinese supercomputers, Hugo Chavez, and state-level Republican officials. These beliefs culminated in an attack on the U.S. Capitol that left five people dead. To make sense of these events, many officials have argued that platforms such as Facebook and Twitter allowed conspiracy theories to spread unimpeded, leading to erroneous beliefs and deadly behaviors. In other words, they blame misinformation for the violence.
But it strains credulity to believe random tweets can lead otherwise normal people to drive across the country and stage an insurrection. That places an undue focus on misinformation itself, rather than on the people and institutions sharing it and on the people who choose to access and believe it. It also seems odd to call for more government intervention into our information ecosystem when government officials—the president, members of Congress—were, in this instance, the biggest purveyors of misinformation.
Since Trump became a candidate in 2015, he and his high-profile supporters in Congress and the media have repeatedly claimed that elections are rigged. Since his loss last year, he has become only more vociferous about this. It should come as no surprise that the person with the biggest bully pulpit in the world was able to convince some voters he was cheated. This is what politicians do: They build and mobilize coalitions. On the other side of the ledger, electoral losers are naturally prone to believing they were cheated, and Trump's claims only exacerbated this tendency among his core supporters.
Once we account for the influence that politicians have, as well as the dispositions of core audiences, the role of misinformation and of communication media in fomenting events like the Capitol riot becomes highly conditional and much smaller than many are arguing.
Nonetheless, pundits have called for interventions ranging from the benign (more journalistic fact checking) to the heavy-handed (internet censorship, the nationalization of social media). While these calls have intensified recently, they are not new. After the 2016 election, many journalists declared that we had entered a "post-truth" world in which lies, misinformation, and groundless conspiracy theories carried as much weight as statements of fact, if not more. Such sentiments grew more widespread during the COVID-19 pandemic.
The desire that others believe the "right" things and act the "right" way is often well-intentioned. I too would prefer that people not inject themselves with bleach because they heard that it can prevent COVID-19. But designs on others' beliefs are sometimes little more than expressions of crass self-interest or, worse, authoritarian tendencies.
It isn't clear that the public is more prone to believing misinformation than in the past. If it is, this may mostly be a top-down phenomenon driven by the conspiratorial rhetoric of high-profile elites such as Trump. Nonetheless, it has become scripture that our current maladies have been wrought by the mass public spreading misinformation through social media—and, of course, that something must be done to stop them.
Enter Harvard legal scholar Cass R. Sunstein. His new book, Liars: Falsehoods and Free Speech in an Age of Deception, expresses many questionable but popular claims about false information: that lies travel faster and farther than truth, that social media are responsible for a new age of misinformation, that government intervention is needed lest we lose our democracy.
Sunstein does recognize the dangers of asking the government to act as arbiter of truth. Early in Liars, he invokes Justice Robert Jackson to clarify the stakes: Governments that police speech will inevitably move on to policing dissent, and when they can't fully eliminate it, they may turn to exterminating dissenters.
Yet Sunstein ultimately concludes that, under some circumstances, the government has not just a reason but the authority to censor, punish, or use other tools against those who spread harmful falsehoods. His argument in brief: "False statements are not constitutionally protected if the government can show that they threaten to cause serious harm that cannot be avoided through a more speech-protective way."
Sunstein concedes that most falsehoods don't require punishment or censorship. But for some falsehoods, he insists, legal intervention is required. He begins with already proscribed forms of falsehoods involving defamation or fraud but expands from there to defamation of politicians and harmful misinformation about "people, places, and things." Sunstein would specifically like to see regulation of knowingly false speech that "creates a clear and present danger of harm."
So on one hand, Sunstein acknowledges that officials cannot be trusted to police truth because they have their own biases, which can ultimately lead to punishing dissent rather than mere falsehood. But he simultaneously argues that more categories of speech should lack constitutional protection and that government should play a role in determining both what is false and under what circumstances people should be allowed to freely express falsehoods. He wants an "independent tribunal" to make some of those decisions, but it isn't clear how independent it could actually be.
Meanwhile, the punishment of speech can foster exactly the kinds of beliefs that Sunstein says he wishes to prevent. If the believers think they're being persecuted, that can reinforce the idea that they possess important secret knowledge. Many of the people prosecuted would become martyrs. And the trials of accused liars could draw more attention to their falsehoods.
That's just one way that Sunstein overemphasizes misinformation and underemphasizes beliefs and behaviors. For example: If he thinks getting a vaccine is so important that we need to punish people who lie about the dangers of it, why not just call for the government to make those vaccinations mandatory? If the motive for policing speech is the actions that stem from the speech, then a punishment for not engaging in the prescribed behaviors would be even more justified. Besides, many people engage in the "wrong" behaviors even though they weren't exposed to false information. Does Sunstein want to improve outcomes or just to punish speech?
Sunstein also credits misinformation with more power than it has. A century of research into belief formation has shown that while information, true or false, can convince people to change their beliefs, it often doesn't. For people to adopt new information, they usually need worldviews already in alignment with it. This puts the responsibility for false beliefs at least partially on the back of the believer.
Sunstein ignores the role political and media elites play in driving people's beliefs as well. Prominent partisan leaders wield enormous influence over the beliefs of co-partisans in the mass public; this influence exists whether those leaders speak truthfully or not. (As we have seen, many of the lies told on social media are told by the president and members of Congress.) Because he omits this facet of opinion formation, Sunstein is blind to a paradox: Government leaders may be the ones spreading false information, yet Sunstein wants to empower the government to find out exactly who is spreading falsehoods and punish them. Government leaders are unlikely to punish themselves, and enforcement will likely target the less powerful, not the more powerful. So the probable result isn't to punish the most influential spreaders of falsehoods; it's just another cudgel for violating citizens' rights. Case in point: The members of Congress who encouraged the violence at the Capitol remain in power as of this writing.
If Sunstein's recommendations were policy, it would be difficult to know what is illegal and what isn't. Sunstein puts forward four considerations to determine what should be done about false speech, each with four gradations: the speaker's state of mind (from purposely lying to mistaken), the magnitude of the potential harm (from grave to nonexistent), the likelihood of harm (from certain to highly improbable), and the timing of the harm (from imminent to the distant future). Sunstein asks readers to weigh all of these factors when deciding what to do about an individual sharing a falsehood; this would leave the authorities to figure out where a particular falsehood falls within 256 possible categories. Such incomprehensibility would most certainly chill speech.
Nor would Sunstein's recommendations work on their own terms. We should have little confidence that government agents or tech companies could—in real time—tell fact from fiction without much error. Even professional fact checkers don't always agree on what is false and why. Further, the fear of being prosecuted may drive falsehoods underground, where they can't be challenged.
Or people may choose to speak in generalities that don't assert facts. Saying "the MMR vaccine has been shown to cause autism" is factually wrong; saying "I don't trust vaccine companies" could be entirely true, since it refers to the speaker's state of mind. Both statements could convey the same meaning, but only one would be punishable. Unless, of course, the next step is to ban not just unwelcome speech but unwelcome meanings.
Liars: Falsehoods and Free Speech in an Age of Deception, by Cass R. Sunstein, Oxford University Press, 192 pages, $22.95