The Volokh Conspiracy


First Amendment Limits on State Laws Targeting Election Misinformation, Part V

The need for a comprehensive strategy addressing election misinformation.


This is part V in a series of posts discussing First Amendment Limits on State Laws Targeting Election Misinformation, 20 First Amend. L. Rev. 291 (2022). What follows is an excerpt from the article (minus the footnotes, which you will find in the full PDF).

Even if most of the state statutes we reviewed are ultimately found constitutional, their enforcement will not eradicate lies and threats in elections, let alone eliminate the flow of misinformation that is polluting public discourse. The problem is simply too big. Any legislative approach to combating election misinformation must be part of a broader strategy that seeks to reduce the prevalence of misinformation generally and to mitigate the harms that such speech creates.

Part of the challenge stems from the fact that we may be moving to what Richard Hasen calls a "post-truth era" for election law, where rapid technological change and hyperpolarization are "call[ing] into question the ability of people to separate truth from falsity." According to Hasen, political campaigns "increasingly take place under conditions of voter mistrust and groupthink, with the potential for foreign interference and domestic political manipulation via new and increasingly sophisticated technological tools." In response to these profound changes, election law must adapt to account for the ways our sociotechnical systems amplify misinformation. Furthermore, we must recognize that legislating truth in political campaigns can take us only so far; there are things that law simply cannot do on its own.

[A.] The Internet Blind Spot

One of the biggest challenges election-speech statutes face is the rise of social media platforms, which have become the modern-day public forums in which voters access, engage with, and challenge their elected representatives and fellow citizens. Although political misinformation has been with us since the founding of the nation, it spreads especially rapidly on social media.

[* * *]

Although the Internet plays an increasingly important role in political communication and in public discourse generally, there currently is no national strategy for dealing with online election misinformation. The federal government does not regulate the content of election-related speech anywhere other than in the broadcast context, and even as to the broadcast medium federal regulation is limited. Transparency in political advertising gets a little more federal attention, but here again the law is directed at advertising disseminated by broadcast, cable, and satellite providers. Even though more money is now spent on online advertising than on print and television advertising combined, federal laws mandating disclosure and recordkeeping requirements do not currently apply to online political ads.

[* * *]

Complicating matters further, state efforts to reduce election misinformation on social media are limited by Section 230 of the Communications Decency Act, which prohibits the enforcement of state laws that would hold Internet platforms liable for publishing speech provided by a third party (including advertising content). As a result, although the states can enforce their election-speech laws against the persons and entities who made the prohibited statements in the first place, they cannot impose either civil or criminal liability on social media companies or other Internet services where such speech is shared. Given the outsized role social media platforms play in distributing and amplifying election misinformation, this leaves a large portion of the battlefield over election speech off limits to state legislatures.

Both Republicans and Democrats have called for changes to Section 230, but it seems unlikely that Congress will coalesce around legislation that carves out election-related harms from the statute's protections. Indeed, their complaints about the statute suggest that the two parties will remain at loggerheads for the foreseeable future, with one side arguing that Section 230 is to blame for social media platforms doing too little to moderate harmful content, and the other claiming that Section 230 permits the platforms to engage in too much moderation, driven by anti-conservative bias. And even if the parties could agree on the problem they wish to solve, there is the danger that Congress's efforts to force social media companies to police election misinformation will only make the situation worse.

[B.] The Limits of Law

Regardless of whether Congress takes the lead in regulating election speech, government efforts to combat election misinformation must be part of a multipronged strategy. . . . While the government can target narrow categories of false, fraudulent, or intimidating speech, the First Amendment sharply curtails the government's ability to broadly regulate false and misleading speech associated with elections. This is not to say that state legislatures should throw up their hands at the problem of election misinformation. Both the federal and state governments retain a range of policy levers that can reduce the prevalence and harmful effects of election misinformation. Two areas are frequently offered as holding particular promise, and as less likely than direct regulation to raise First Amendment concerns: (1) increasing transparency about the types and extent of election misinformation that reaches voters, and (2) supporting self-regulation by entities that serve as conduits for the dissemination of the speech of others, especially social media platforms.

[* * *]

However, transparency is not a panacea, and there is reason to think that, as the government imposes more intrusive recordkeeping and disclosure requirements on media and technology companies, these efforts will face constitutional challenges. Eric Goldman points out that laws requiring online platforms to disclose their content moderation policies and practices are "problematic because they require publishers to detail their editorial thought process [creating] unhealthy entanglements between the government and publishers, which in turn distort and chill speech." According to Goldman, transparency mandates can "affect the substance of the published content, similar to the effects of outright speech restrictions," and therefore these mandates "should be categorized as content-based restrictions and trigger strict scrutiny." He also suggests that requiring platforms to publicly disclose their moderation and content curation practices should qualify as "compelled speech," which is likewise anathema under the First Amendment.

The Fourth Circuit's recent decision in Washington Post v. McManus seems to support these concerns. McManus involved a Maryland statute that extended the state's advertising disclosure-and-recordkeeping regulations to online platforms, requiring that they make certain information available online (such as purchaser identity, contact information, and amount paid) and that they collect and retain other information and make it available to the Maryland Board of Elections upon request. In response, a group of news organizations, including The Washington Post and The Baltimore Sun, filed suit challenging the requirements as applied to them. In his opinion for the court holding the law unconstitutional as applied to the plaintiffs, Judge Wilkinson concluded that the statute was a content-based speech regulation that also compelled speech and that these features of the law "pose[] a real risk of either chilling speech or manipulating the marketplace of ideas."

[* * *]

The McManus case casts a shadow over state laws that seek to impose broad recordkeeping and disclosure requirements on online platforms. More narrowly tailored transparency laws directed at election misinformation on social media platforms, however, may pass constitutional muster. The McManus court did not strike down the Maryland statute outright, but merely held that it was unconstitutional as applied to the plaintiff news organizations. Moreover, as Victoria Ekstrand and Ashley Fox note, "given the unique position of the plaintiffs in the case, it is currently unclear how far this opinion will extend, if at all, to online political advertising laws that target large platforms like Facebook." Nevertheless, they write that "McManus suggests that governments will likely be unable to take a wide approach by imposing record-keeping requirements on all or nearly all third parties that distribute online political advertising."

Regardless of what level of First Amendment scrutiny the courts apply to mandatory recordkeeping and disclosure laws, the reality is that neither the federal nor state governments can simply legislate misinformation out of elections. Government efforts to ensure free and fair elections must account for—and should seek to leverage—the influential role online platforms, especially social media, play in facilitating and shaping public discourse. Because these private entities are not state actors, their choices to prohibit election misinformation are not subject to First Amendment scrutiny.

[* * *]

Counterintuitively, one way that government can facilitate the efforts of online platforms to address election misinformation is by retaining Section 230's immunity provisions. These protections grant platforms the "breathing space" they need to experiment with different self-regulatory regimes addressing election misinformation. Under Section 230(c)(1), for example, Internet services can police third-party content on their sites without worrying that by reviewing this material they will become liable for it. This allows social media companies to escape the "moderator's dilemma": any attempt to review third-party content may give the company knowledge of that content's tortious or illegal nature, and with it potential liability for everything on its service; to avoid this liability, the rational response is to forgo reviewing third-party content entirely, creating a strong disincentive to moderation.

Section 230(c)(2) also immunizes platforms from civil claims arising from a platform's removal of misinformation or its banning of users who post such content. Although platforms undoubtedly enjoy a First Amendment right to choose what speech and speakers to allow on their services, this provision is a highly effective bar to claims brought by users of social media platforms who have been suspended or banned for violating a platform's acceptable use policies. Indeed, after having one of his posts on Twitter labeled as misinformation, then-President Donald Trump sought to eviscerate this very provision in an executive order aimed at limiting the ability of platforms to remove or flag controversial speech.

As the states have shown, there is no one-size-fits-all approach to addressing election misinformation. Although many feel that social media providers are not doing enough to remove election misinformation from their platforms, others argue that the major platforms are too willing to restrict political discourse and to ban controversial speakers. The benefit of Section 230 is that it lets platforms take different approaches to navigating this challenging and contentious topic. As Mark Lemley points out, "[t]he fact that people want platforms to do fundamentally contradictory things is a pretty good reason we shouldn't mandate any one model of how a platform regulates the content posted there—and therefore a pretty good reason to keep section 230 intact."