Should a Website Have the Right to Exist?
Even outcasts should be able to subsist on their own land.
(This post is part of a five-part series on regulating online content moderation.)
In Part III, I showed how it is technically possible to boot an unpopular speaker or her viewpoints from the internet. Such viewpoint foreclosure, as I call it, can be accomplished when the private entities that administer the internet's core resources—domain names, IP addresses, networks—deny those resources to websites that host offensive, albeit lawful, user content.
In Part II, I explained that even if the Supreme Court finds that social media platforms and other websites have a First Amendment right to moderate user content however they like, the entities that administer the internet's core resources probably do not. As such, if the state wanted to prevent core intermediaries from discriminating against lawful user speech, it probably could.
But should it? Or, left to itself, would the online marketplace of ideas strike a better balance between free expression and a cleaner internet? Another way of framing this issue is to ask whether a lawful website should have the right to exist. In this post, I argue that it should. I offer four reasons.
Reason #1: The Internet Is Different
Here's a common objection I've received when presenting my paper (on which this series is based): "Why should anyone have a right to speak on the internet? No one has the right to have his op-ed published in a newspaper, his voice broadcast over the radio, or his likeness displayed on TV. If newspapers, radio stations, and cable companies have the freedom to refuse to host your speech, why shouldn't every private intermediary on the internet, including core intermediaries, have the same freedom?"
My response to this objection is that the internet is different from these other kinds of media. It's different in two important ways.
First, unlike traditional media, the internet is architected in a way that makes total exclusion possible. As I explained in Part III, internet speech depends on core resources, such as domain names, IP addresses, and networks. These resources are administered by a small group of private entities, such as regional internet registries, ISPs, and the Internet Corporation for Assigned Names and Numbers (ICANN). If any one class of these resources were denied to an unpopular speaker ("Jane" in my case study), she would have no hope of creating substitute resources to get back online. Even if she had limitless funds, creating an alternative domain name system, address space, and terrestrial network would be tantamount to building an entirely new Internet 2.0 and would not gain her readmission to the Internet 1.0 that everyone else uses. In other words, she could still be excluded from the internet.
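To see how deep that dependency runs, consider the first step of every visit to a website: translating its domain name into an IP address. The sketch below is a minimal illustration in Python, using only the standard library; the domain names are illustrative, and the point is simply that if the entities who administer the DNS delete a site's records, the lookup fails for every user at once.

```python
import socket

def resolve(domain: str) -> str | None:
    """Return an IPv4 address the local resolver reports for `domain`,
    or None if the name does not resolve (for example, because the
    registry or registrar has removed its DNS records)."""
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None

# A browser can connect to a website only after a lookup like this
# succeeds. Remove the records at the registry level and the name
# stops resolving everywhere, for everyone, simultaneously.
print(resolve("example.com"))           # e.g., "93.184.216.34"
print(resolve("no-such-site.invalid"))  # None: ".invalid" is reserved and never resolves
```

The same single-point dependency repeats one layer down: without an IP address allocation, or a network willing to route traffic to it, there is no address for the lookup to return in the first place.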
Not so for other forms of media. No single entity or group of entities has the power to deny a person access to every radio or TV station. And nothing would prevent Jane from creating and circulating her own newspaper. LACNIC might be able to deny Jane an IP address, and Verisign might refuse her a .com domain name, but no one could prevent her from procuring paper and ink for Jane's Free Press and distributing copies by hand. As I've put it before, the terms "online" and "offline" have meaning only because a clear line separates these two states. No such dividing line gives the terms "on-book" or "off-book" any intelligible meaning.
Second, lacking the ability to express oneself on the internet may have represented only a minor inconvenience in times past. But to be shut out of the internet today is tantamount to being shut out of society. The internet, quite simply, is where the people and the content are. As Jack Balkin has argued, the public sphere is not fixed in its definition or destination but is instead best understood as "the space in which people express opinions and exchange views that judge what is going on in society." It is for this reason that internet access, a concept that encompasses online expression, is increasingly being recognized as a human right in other countries.
Reason #2: Intellectual Humility
Intellectual humility supports recognizing a right to basic internet expression for the simple reason that we can never conclude with absolute certainty that an idea is completely without merit, at least for purposes of discussion. Some might interpret this statement as an articulation of the marketplace of ideas metaphor: the best test of truth is the power of the thought to get itself accepted in the competition of the market; the cure for bad speech is more speech; and so on.
But mine is not an argument for free speech absolutism or a belief that the best ideas will ultimately win out. If anything, the ease with which demagogues, sophists, and cranks can build massive online followings has shaken my faith in the marketplace of ideas metaphor, at least when it comes to online speech. The fact that free-speech-absolutist platforms quickly become Nazi sewers shows why content moderation within individual sites produces a far better result than simply standing back and letting the free market rip. I agree with the sentiment that "free speech doesn't mean free reach," and I don't believe that each viewpoint should be entitled to the same tools of distribution and promotion as any other.
What I mean instead is that when it comes to online expression, the appropriate response to bad ideas is marginalization, not banishment. In the offline world, at least in the United States, misfits and heretics might not be welcome in polite society; they might be shunned and treated as pariahs. But as a last resort, persecuted groups can always purchase their own land and establish their own outcast communities. Essentially, they can vertically integrate down to the land itself to become what I like to call "full-stack rebels." Full-stack rebels have a long history in the offline space, from the early Mormons to the Zoarites of Ohio to Black Wall Street in Tulsa, Oklahoma.
One feature of this ability to break away and establish a self-supporting community is that such communities need not remain insular forever. If their ideas have merit, they can attract new members and even expand their borders. Eventually, ideas that once were fringe may become mainstream, or at least begin to influence the broader discourse. Even the silliest conspiracy theory may have a kernel of truth that, left to germinate in its own humble plot, might grow and bear fruit.
For recent evidence of this phenomenon, one need look no further than the COVID lab leak theory. That theory—the idea that COVID may have originated in a lab or otherwise been altered through human intervention—was initially scorned as a crackpot conspiracy theory, to the point that it was banned from major social media platforms. The fact that the theory eventually graduated into respectable scientific inquiry, and that the same platforms were forced to reverse course, is a testament to the importance of practicing intellectual humility before declaring a viewpoint or theory out of bounds. But it also illustrates another point central to my thesis. When it comes to online discourse, the lab leak theory was marginalized; it was not banished.
Although major platforms actively stamped out debate on the topic and certain respectable online publications wouldn't touch it, the theory continued to be discussed in less mainstream venues—blogs, personal websites, and the like. It was within these online ghettos that knowledge was shared, flaws were worked out, and converts were slowly won over.
Had the lab leak theory instead been exiled from the internet altogether, it is not certain that this epistemic process would have played out. That is why the version of the marketplace of ideas metaphor that I subscribe to is scoped not to individual sites but to the internet as a whole. It posits that each idea should at least be given the opportunity to plant itself somewhere in the internet, even if I shouldn't be forced to give it space on my website.
Reason #3: No One Need Be Exposed to Undesirable Content
Some may fear that toxic ideas, if permitted, could sully the internet just as certainly as they could sully any individual site, making the internet an unsafe place for those who wish to avoid toxic content. But remember: others on the internet remain free to marginalize or ignore those ideas. Social media companies can continue to ban discussion of forbidden topics, search engines can exclude disreputable sites from their indexes, and each site can choose to link only to sites it agrees with. This marginalization acts as a counterweight to the fact that bad ideas can remain on the internet and, theoretically, spread.
Moreover, unlike forced carriage laws aimed at social media companies, which could leave users hard-pressed to avoid objectionable content on their favorite sites, a right of websites to exist means no user need ever visit a site he would prefer to avoid. If Facebook were forced to host misogyny, misandry, or Holocaust denial, as could happen under Texas's HB 20, it might be difficult for users to avoid all exposure to statements and opinions they dislike. While modern platforms may offer tools to help users filter out certain categories of content (e.g., nudity, gambling), such tools are poor at distinguishing among viewpoints.
But if controversial speakers are permitted no more than the right to express themselves on their own websites, the story is quite different. A user who wishes not to be exposed to antisemitic remarks can simply refrain from visiting The Daily Stormer or its ilk. She is not forced to encounter tawdry websites the way a pedestrian might be unable to avoid the sight of strip clubs en route to another venue in the same neighborhood.
Reason #4: Nuclear Disarmament
One admittedly embarrassing aspect of my thesis is that it forces me to defend the right of the worst possible figures to stay online. After all, it isn't those fighting for tolerance, equality, and moderation who are teetering on the edge of viewpoint foreclosure. It's neo-Nazis, white supremacists, and conspiracy peddlers. Some might understandably say that if permitting viewpoint foreclosure means that QAnon and Holocaust denial can't find their way onto the internet, that'd be just fine by them. Isn't it only the truly execrable who run the risk of being foreclosed from the internet?
If that outcome could be guaranteed, then I might consider adopting the same permissive attitude. But as Paul Bernal put it, "when you build a censorship system for one purpose, you can be pretty certain that it will be used for other purposes." As evidence for that phenomenon, consider Ukraine's attempt to revoke Russia's top-level domains and IP addresses or African ISPs' suggestion to use network routing as a litigation tactic. If Nazis and white supremacists should be kicked off the internet, why not pro-Russian bloggers? Why not those who endanger public health by attempting to link autism to childhood vaccines? Why not climate skeptics?
At the end of the day, when we talk about viewpoint foreclosure, what we're really talking about is a nuclear option—the ability to nuke a viewpoint off the face of the internet. Nuclear weapons are great, I suppose, if only responsible parties possess them. But one of the central concerns that animates nuclear disarmament is that weapons of mass destruction can fall into the wrong hands.
So it is with viewpoint foreclosure. There is simply no guarantee that today's orthodoxies (equality, human dignity, tolerance) won't become tomorrow's heresies (they once were) and eventually be forced offline. Much better to ensure that no private entities can use their power over internet architecture to exile their ideological opponents. The law should therefore recognize a basic right to lawfully express oneself on the internet—at least on one's own website.