When Should the Law Regulate Content Moderation?
Only when necessary to protect five basic internet rights.
Thanks to Eugene for inviting me to guest-blog this week about my new article, The Five Internet Rights. The article endeavors to answer the (internet) age-old question: When, if ever, should the law intervene in how private entities moderate lawful online user content?
This question has taken center stage as debates rage over the proper role of social media companies in policing online speech. For example, Twitter's decision to suppress the Hunter Biden laptop story alarmed many people about the power large platforms exercise, or could exercise, over the dissemination of news and, thus, over electoral outcomes. It also spurred Florida to pass a law that attempts to prevent social media companies from discriminating against "journalistic enterprises."
Similarly, social media companies' decision to prevent users from discussing the COVID "lab leak" theory—a decision they later reversed after the theory achieved mainstream status—caused many to wonder if social media companies may be hamstringing the search for truth by arrogating to themselves the power to determine what constitutes valid scientific inquiry. It too generated a legislative response—this time, from Texas, which went even further by prohibiting social media companies from discriminating against any users based on their viewpoints.
Still others fear that social media companies are being too lax in moderating user content. They believe that disinformation and hate speech pose a much greater danger to society, and even to free speech, than does private "censorship." These concerns have prompted other states like New York to consider legislation that would force social media companies to take down problematic user content or else face legal consequences.
But even if these concerns are valid—even if the content moderation practices of large online platforms present serious risks to society—is regulation the answer? Or would government involvement only introduce larger problems? And even if state intervention into private content moderation could improve matters, would the Constitution permit it? That too remains an open question after the Eleventh Circuit enjoined Florida's forced carriage law while the Fifth Circuit upheld Texas's similar law.
In the face of these thorny issues, it might seem impossible to answer the foundational question of whether the law should ever intervene in private content moderation. Still, I believe an answer (or at least a partial answer) can be found by examining the topic from a different angle.
To date, much, if not all, of the analysis of whether the state should intervene in private content moderation has focused on the actions of social media companies. That's not surprising, since most online speech these days occurs on social media, and social media companies make most content moderation decisions. But social media companies aren't the only game in town when it comes to limiting user speech. And, I would argue, far more concerning than whether Facebook permits a particular news story or scientific theory to be shared is the fact that content moderation is now moving deeper down the internet stack.
For example, after it was discovered that some of the January 6 rioters used Parler to amplify Donald Trump's "Stop the Steal" rhetoric, Amazon Web Services, a cloud computing provider, famously booted Parler from its servers, causing the website to go down for weeks. Other infrastructural providers, such as Cloudflare and Joyent, have similarly revoked hosting services from websites that permitted offensive (albeit lawful) user speech. Registrars like GoDaddy and Google have taken to revoking domain names associated with lawful websites whose viewpoints they oppose. And even internet service providers may be getting into the content moderation game by blocking their subscribers' access to websites for ideological reasons.
Still, most concerning of all was a development that received scarcely any attention in the press. Shortly after Parler managed to migrate to an alternate host, it went dark again, but this time for a different reason. After complaints reached LACNIC, one of the five regional internet registries responsible for managing the world's network identifiers, LACNIC revoked more than eight thousand IP addresses used by Parler and its new hosting provider, taking Parler offline once more. A year later, Ukraine sent a similar request to RIPE NCC, Europe's regional internet registry, asking it to revoke Russian IP addresses.
Given how foundational these core resources are to the operation of the internet—and the fact that a deplatformed website or user cannot simply build his or her own alternative Internet Protocol or Domain Name System—these developments point to a world in which it may soon be possible for private gatekeepers to exclude unpopular users, groups, or viewpoints from the internet altogether. I call this phenomenon viewpoint foreclosure.
Understanding viewpoint foreclosure provides the key to determining when and how the law should regulate content moderation. It suggests that intervention should start with basic viewpoint access—the right of all users to self-publish their lawfully expressed viewpoints on the public internet. It also suggests that when private intermediaries deny internet resources to users for ideological reasons, the case for state intervention should ultimately turn on whether users can realistically create substitute resources to stay online. As I'll explain when I unpack internet architecture, the law can ensure viewpoint access by guaranteeing five basic internet rights: the rights of connectivity (connecting to the internet), addressability (maintaining a publicly reachable IP address), nameability (maintaining a stable domain name), routability (having one's packets faithfully routed through intervening networks), and accessibility (not having one's users blocked from accessing one's content).
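To make the layering concrete, here is a minimal sketch of my own (not from the article) that probes a few links in that chain for a placeholder domain, example.org. In this illustration, DNS resolution stands in for nameability, and a successful TCP connection stands in for connectivity, addressability, and routability together; accessibility is about blocking on the user's side, which a script like this cannot distinguish from a failure at the server.

```python
# Illustrative sketch only: each failure mode below corresponds roughly to the
# loss of one or more of the five rights. "example.org" is a placeholder domain.
import socket

def check_reachability(domain: str, port: int = 443) -> None:
    # Nameability: the domain must still resolve to an IP address via DNS.
    try:
        ip = socket.gethostbyname(domain)
    except socket.gaierror:
        print(f"{domain}: DNS lookup failed (nameability lost)")
        return

    # Connectivity, addressability, routability: the host must be online,
    # hold a publicly reachable IP address, and have packets routed to it.
    try:
        with socket.create_connection((ip, port), timeout=5):
            print(f"{domain} ({ip}): reachable on port {port}")
    except OSError:
        print(f"{domain} ({ip}): connection failed "
              "(connectivity, addressability, or routability lost)")

    # Accessibility is the mirror image: even a reachable host is effectively
    # offline for users whose own providers block the name or the address.

check_reachability("example.org")
```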
Put differently, when it comes to content moderation, the only regulation we clearly need (although we may indeed need more) is this set of five basic and irreducible rights. In this series, I'll show why that's the case by answering the following four questions:
- Can the state regulate content moderation?
- Can a controversial user really be kicked off the internet?
- Should a website have the right to exist?
- How can the state prevent viewpoint foreclosure?
Tomorrow, I'll tackle the first question.