Can the State Regulate Content Moderation?
It's hard to argue that providing a pipe constitutes a speech act.
(This post is part of a five-part series on regulating online content moderation.)
Before we dream up policy recommendations for how the state might intervene when private content moderation runs amok, we should probably figure out whether the state can intervene at all. The First Amendment limits the power of governments to regulate online speech, either directly or indirectly by regulating intermediaries that host others' speech. Since most online content constitutes speech, does the First Amendment completely bar the state from regulating in this space?
That indeed is the view most academics seem to take. In support of that view—let's call it the "strong editorial rights" position—adherents often point to two cases. First, in Miami Herald v. Tornillo, the Court unanimously struck down a Florida law that required newspapers to print responses from political candidates who were criticized within their pages. Noting that a newspaper is more than a "passive receptacle for news, comment, and advertising," the Court explained that the choice of what "material [should] go into a newspaper … constitute[s] the exercise of editorial control and judgment." Interfering with that judgment, therefore, violated the First Amendment's guarantee of a free press. Nor could it be assumed that newspapers retained their editorial judgment just because the right-of-reply statute required them merely to append a small amount of additional material. Since newspapers offer only a limited number of physical pages, printing mandatory rebuttals would "tak[e] up space that could be devoted to other material the newspaper may have preferred to print." Let's call this the "space constraints" principle.
Second, in Hurley v. Irish-American Gay, Lesbian & Bisexual Group, the Court held that private organizers could not be compelled to include groups or messages they disapproved of in a public parade, since doing so would alter the overall message the organizers wished to convey. Eugene refers to this as the "coherent speech product" doctrine. If an entity hosts or distributes a collection of third-party content that, together, conveys an overall theme or message, then that hosting or distribution itself becomes a speech act, and the entity becomes a speech participant. The state, therefore, cannot compel the entity to host or distribute additional content if doing so would alter the overall message the entity seeks to express.
The strong editorial rights camp believes that forced carriage laws aimed at social media companies are constitutionally infirm for the same reasons. Social media companies exercise "editorial control and judgment" in deciding which user content to allow and which content to remove. Moreover, in deciding to permit certain viewpoints on their platforms (e.g., "trans women are women") while proscribing others (e.g., "a man cannot get pregnant"), these companies are expressing an overall theme or message (e.g., trans pride or pro-LGBT sentiment) and are therefore creating a coherent speech product. Requiring them to permit trans-critical speech would effectively prevent them from communicating their preferred message and, thus, would violate their First Amendment rights.
By contrast, critics of the strong editorial rights position—let's call them the "weak editorial rights" camp—believe that certain forms of forced carriage regulation may indeed be constitutional. It isn't that these critics necessarily disagree with Tornillo or Hurley, but they aren't convinced that those precedents apply to social media. Eugene's article, Treating Social Media Platforms Like Common Carriers?, makes the skeptics' case in far greater detail, so I'll highlight just a couple of distinctions here.
First, unlike a newspaper, which might be able to publish a mandatory rebuttal only by dropping another piece it would prefer to print, social media platforms face no comparable space constraints. With the possible exception of spam or bot-generated content (which providers would presumably block for viewpoint-neutral reasons), laws requiring social media companies to host all lawfully expressed viewpoints would not force providers to make difficult decisions about which other posts to cut in order to make everything fit.
Second, because of their capacity to host an effectively infinite amount of user content, social media companies don't appear to put out anything approximating a coherent speech product. Not only would it be impossible to view or consume the entire universe of user content made available on, say, Reddit in order to discern an overall theme or message, but it's hard to argue that any overall theme or message currently exists on such platforms. Sure, providers may clearly prohibit certain viewpoints while permitting others, but does a collection of unrelated "can't say this" rules communicate any particular message? While platforms could argue that they present overarching themes like "decency," "dignity," or simply being on the "right side of history," skeptics say such themes are far too diffuse to constitute a concrete message under Hurley.
The weak editorial rights camp also points to Pruneyard Shopping Center v. Robins, in which the Supreme Court upheld an interpretation of the California Constitution requiring shopping centers to allow members of the public to distribute leaflets and gather signatures on their property. Although a shopping center might have disagreed with the messages it was effectively forced to host, that disagreement did not constitute compelled speech for First Amendment purposes. Nor was the shopping center's desire to provide a generically appealing environment for patrons (e.g., a non-political or family-friendly space) sufficiently concrete to constitute an overall message or speech act.
Likewise, in Turner Broad. Sys., Inc. v. FCC ("Turner II"), the Supreme Court upheld the statutory "must-carry" rules requiring cable television providers to carry local broadcast television channels, even though such providers might not agree with the content of those channels. And in Rumsfeld v. FAIR, the Court found no First Amendment violation in the Solomon Amendment, which required universities to host military recruiters despite those universities' moral objection to the military's then-existing "Don't Ask, Don't Tell" policy on homosexuals serving in the armed forces.
Taken together, and as cabined by Tornillo and Hurley, these cases can be said to stand for the proposition that the state can force a private company to carry third-party speech it dislikes as long as (1) the speech medium does not qualify as a coherent speech product, (2) the carried speech is not likely to be attributed to the regulated provider, and (3) the provider is not prevented from disavowing or distancing itself from the speech it is forced to carry. Given that today's large social media platforms arguably meet each of these requirements, skeptics of the strong editorial rights position believe that the state could constitutionally compel such platforms to moderate user content in a viewpoint-neutral manner, perhaps even through laws like those enacted by Texas and Florida.
So, which side is right in this debate? Unfortunately, the current caselaw doesn't unambiguously boil down to either position. Social media companies would seem to have some editorial interests in what content they choose to allow on their platforms (Tornillo) and might have some expressive interests in permitting only user speech that corresponds with their values (Hurley). But I agree with Eugene that these companies' ad hoc, automated, and ex post removal of only a fraction of content that violates their acceptable use policies seems like a far cry from the kind of careful, ex ante curation that a newspaper or a parade organizer typically undertakes. Likewise, Pruneyard, Turner, and Rumsfeld do indeed establish that the state may sometimes require a provider to host another party's speech. But I agree with Ashutosh Bhagwat that Pruneyard and Rumsfeld did not involve communications platforms, and although Turner did, the Court ultimately upheld the must-carry rules only after concluding that the regulated cable companies had certain editorial rights.
In the end, I think Jane Bambauer has the best take on the issue when she says that "there are no close analogies in First Amendment precedent for internet platforms." Instead, "online platforms are their own free speech beast." And even if there are decent arguments for applying the Pruneyard line of cases to social media, it seems more likely than not that the Court will ultimately side with the strong editorial rights camp if and when it reviews the Texas and Florida laws. Kavanaugh showed his hand when, as a D.C. Circuit judge, he argued that the FCC's net neutrality rules violated ISPs' First Amendment rights. And Breyer, who was famously warm to forced carriage, has yielded his seat to Jackson, who I think is unlikely to vote to allow red states to force social media companies to carry racist, sexist, or homophobic speech.
Fortunately, for my purposes, I don't need to take a position on whether social media companies possess strong or weak editorial rights over their users' content. When it comes to viewpoint foreclosure—the ability to boot a person, viewpoint, or group from the internet entirely—the entities that have the power to effect such a result operate in a very different space from social media companies, or any other website operators for that matter.
As I'll explain in more detail in tomorrow's post, viewpoint foreclosure can occur only when those entities that operate the internet's core resources—the resources that make internet speech possible in the first place—start to delve into content moderation. Those resources include IP addresses, domain names, and the actual wires and airwaves that carry internet communications. Such providers operate at a far greater distance from actual expression on the internet, functioning more like phonebooks, telephone switches, and conduits. Neither courts nor the public are likely to attribute an internet troll's racist screed to the operator of subsea cables that carry his packets across the ocean any more than they would attribute the views expressed in an Antifa pamphlet to the utility companies that provide water and electricity to the premises where the pamphlet was printed. And it would seem that the operator of the .com registry has as much editorial interest in suspending a domain name associated with a controversial website as AT&T has in revoking a phone number used by a pro-Israel (or pro-Palestine) interest group. Which is to say, not much.
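To make the phonebook analogy concrete: a domain-name lookup maps a name to a set of numeric addresses and nothing more. The sketch below (purely illustrative, written in Python using only the standard library, with the IANA-reserved example.com domain standing in for any website) shows everything a resolver at this layer ever handles; the speech that later flows over the resulting connection is invisible to it.

    import socket

    def resolve(domain: str) -> list[str]:
        """Return the IP addresses that a domain name maps to."""
        # getaddrinfo yields (family, type, proto, canonname, sockaddr)
        # tuples; the first element of sockaddr is the IP address string.
        addresses = []
        for info in socket.getaddrinfo(domain, None):
            addr = info[4][0]
            if addr not in addresses:  # deduplicate, preserving order
                addresses.append(addr)
        return addresses

    # A name goes in; numbers come out. No content, no viewpoint.
    print(resolve("example.com"))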
Imposing forced carriage on the entities that operate the internet's core functions would seem to satisfy both the strong editorial rights camp and the weak editorial rights camp. It would satisfy the weak rights camp because the greater includes the lesser. If the state can regulate websites like social media platforms, which operate close to actual user expression, then it can obviously regulate core intermediaries, which operate much further from that expression. And it would likely satisfy the strong rights camp because even those who take a broad view of editorial discretion draw a line when it comes to ISPs. Their embrace of net neutrality rules, which prevent ISPs from blocking access to websites or services they dislike, shows they don't believe that editorial rights should extend to all communications media, especially not to those that operate as mere "passive receptacle[s]."
Thus, whether or not the state can regulate content moderation where most moderation happens—namely, on websites like social media platforms—the First Amendment likely doesn't prevent the state from regulating "content moderation" at the hands of the internet's core intermediaries. Administering foundational resources like IP addresses, domain names, and fiber lines seems far more akin to publishing a phonebook, providing a utility, or maintaining a pipe. And it's hard to argue that providing a pipe constitutes a speech act.