The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Should a Website Have the Right to Exist?
Even outcasts should be able to subsist on their own land.
(This post is part of a five-part series on regulating online content moderation.)
In Part III, I showed how it is technically possible to boot an unpopular speaker or her viewpoints from the internet. Such viewpoint foreclosure, as I call it, can be accomplished when the private entities that administer the internet's core resources—domain names, IP addresses, networks—deny those resources to websites that host offensive, albeit lawful, user content.
In Part II, I explained that even if the Supreme Court finds that social media platforms and other websites have a First Amendment right to moderate user content however they like, the entities that administer the internet's core resources probably do not. As such, if the state wanted to prevent core intermediaries from discriminating against lawful user speech, it probably could.
But should it? Or, left to itself, would the online marketplace of ideas do a better job of finding the right balance between free expression and cleaning up the internet? Another way of framing this issue is to ask whether a lawful website should have the right to exist. In this post, I argue that it should. I offer four reasons.
Reason #1: The Internet Is Different
Here's a common objection I've received when presenting my paper (on which this series is based): "Why should anyone have a right to speak on the internet? No one has the right to have his op-ed published in a newspaper, his voice broadcast over the radio, or his likeness displayed on TV. If newspapers, radio stations, and cable companies have the freedom to refuse to host your speech, why shouldn't every private intermediary on the internet, including core intermediaries, have the same freedom?"
My response to this objection is that the internet is different from these other kinds of media. It's different in two important ways.
First, unlike traditional media, the internet is architected in a way that makes total exclusion possible. As I explained in Part III, internet speech depends on core resources, such as domain names, IP addresses, and networks. These resources are administered by a small group of private entities, such as regional internet registries, ISPs, and the Internet Corporation for Assigned Names and Numbers (ICANN). If any one class of these resources were denied to an unpopular speaker ("Jane" in my case study), she would have no hope of creating substitute resources to get back online. Even if she had limitless resources, creating an alternative domain name system, address space, and terrestrial network would be tantamount to building an entirely new Internet 2.0 and would not gain her readmission to the Internet 1.0 that everyone else uses. In other words, she could still be excluded from the internet.
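The dependence on allocated address space can be illustrated with a toy sketch using Python's standard ipaddress module. The specific ranges below are reserved-for-documentation blocks (not real allocations), used here purely for illustration: a host cannot simply mint an address outside the space the registries have delegated and expect the rest of the internet to route to it.

```python
import ipaddress

# Every reachable public IP address falls inside some block that a
# regional internet registry (RIR) has delegated down the chain.
# 203.0.113.0/24 is TEST-NET-3, reserved for documentation examples.
allocated_block = ipaddress.ip_network("203.0.113.0/24")

site_address = ipaddress.ip_address("203.0.113.5")      # inside the block
outside_address = ipaddress.ip_address("198.51.100.5")  # outside the block

# Membership in a delegated block is what makes an address "real":
assert site_address in allocated_block
assert outside_address not in allocated_block
```

If the registry chain declines to delegate any block containing your address, there is no membership test you can pass: the exclusion is architectural, not merely contractual.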
Not so for other forms of media. No single entity or group of entities has the power to deny a person access to every radio or TV station. And nothing would prevent Jane from creating and circulating her own newspaper. LACNIC might be able to deny Jane an IP address, and Verisign might refuse her a .com domain name, but no one could prevent her from procuring paper and ink for Jane's Free Press and distributing copies by hand. As I've put it before, the terms "online" and "offline" have meaning only because a clear line separates these two states. No such dividing line gives the terms "on-book" or "off-book" any intelligible meaning.
Second, lacking the ability to express oneself on the internet may have represented only a minor inconvenience in times past. But to be shut out of the internet today is tantamount to being shut out of society. The internet, quite simply, is where the people and the content are. As Jack Balkin has argued, the public sphere is not fixed in its definition or destination but is instead best understood as "the space in which people express opinions and exchange views that judge what is going on in society." It is for this reason that internet access, a concept that encompasses online expression, is increasingly being recognized as a human right in other countries.
Reason #2: Intellectual Humility
Intellectual humility supports recognizing a right to basic internet expression for the simple reason that we can never conclude with absolute certainty that an idea is completely without merit, at least for purposes of discussion. Some might interpret this statement as an articulation of the marketplace of ideas metaphor. The best test of truth is the power of the thought to get itself accepted in the competition of the market, the cure for bad speech is more speech, etc.
But mine is not an argument for free speech absolutism or a belief that the best ideas will ultimately win out. If anything, the ease with which demagogues, sophists, and cranks can build massive online followings has shaken my faith in the marketplace of ideas metaphor, at least when it comes to online speech. The fact that free speech-absolutist platforms quickly become Nazi sewers shows why content moderation within individual sites produces a far better result than simply standing back and letting the free market rip. I agree with the sentiment that "free speech doesn't mean free reach," and I don't believe that each viewpoint should be entitled to the same tools of distribution and promotion as any other viewpoint.
What I mean instead is that when it comes to online expression, the appropriate response to bad ideas is marginalization, not banishment. In the offline world, at least in the United States, misfits and heretics might not be welcome in polite society, might be shunned and treated as pariahs. But as a last resort, persecuted groups can always purchase their own land and establish their own outcast communities. Essentially, they can vertically integrate down to the land itself to become what I like to call "full-stack rebels." Full-stack rebels have a long history in the offline space, from the early Mormons to the Ohioan Zoarites to even Black Wall Street in Tulsa, Oklahoma.
One feature of this ability to break away and establish a self-supporting community is that such communities need not remain insular forever. If their ideas have merit, they can attract new members and even expand their borders. Eventually, ideas that once were fringe may become mainstream, or at least begin to influence the broader discourse. Even the silliest conspiracy theory may have a kernel of truth that, left to germinate in its own humble plot, might grow and bear fruit.
For recent evidence of this phenomenon, one need look no further than the COVID lab leak theory. That theory—the idea that COVID may have originated in a lab or otherwise undergone man-made transformations—was initially scorned as a crackpot conspiracy theory, to the point that it was banned from major social media platforms. The fact that the theory eventually graduated into respectable scientific inquiry, and that the same platforms were forced to reverse course, is a testament to the importance of practicing intellectual humility before declaring that a viewpoint or theory is out of bounds. But it also illustrates another important point that is central to my thesis. When it comes to online discourse, the lab leak theory was marginalized; it was not banished.
Although major platforms actively stamped out debate on the topic and certain respectable online publications wouldn't touch it, the theory continued to be discussed in less mainstream venues—blogs, personal websites, and the like. It was within these online ghettos that knowledge was shared, flaws were worked out, and converts were slowly won over.
Had the lab leak theory instead been exiled from the internet altogether, it is not certain that this epistemic process would have played out. That is why the version of the marketplace of ideas metaphor that I subscribe to is scoped not to individual sites but to the internet as a whole. It posits that each idea should at least be given the opportunity to plant itself somewhere in the internet, even if I shouldn't be forced to give it space on my website.
Reason #3: No One Need Be Exposed to Undesirable Content
Some may fear that toxic ideas, if permitted, could sully the internet just as certainly as they could sully any individual site, making the internet an unsafe place for those who wish to avoid toxic content. But remember: others on the internet remain free to marginalize or ignore those ideas. Social media companies can continue to ban discussion of forbidden topics, search engines can exclude disreputable sites from their indexes, and each site can choose to link only to sites it agrees with. This marginalization acts as a counterweight to the fact that bad ideas can remain on the internet and, theoretically, spread.
Moreover, unlike what might result from forced carriage laws aimed at social media companies—where users might find it hard to avoid seeing objectionable content on their favorite sites—no user need ever visit a site he would prefer to avoid. If Facebook were forced to host misogyny, misandry, or Holocaust denial, as could happen under Texas's HB 20, it might be difficult for users to completely avoid being exposed to statements and opinions they dislike. While modern platforms may offer tools to help users filter out certain categories of content (e.g., nudity, gambling), such tools are highly imperfect at distinguishing between different viewpoints.
But if controversial speakers are permitted no more than the right to express themselves on their own websites, the story is quite different. A user who wishes not to be exposed to antisemitic remarks can simply refrain from visiting The Daily Stormer or its ilk. She is not forced to encounter tawdry websites the way a pedestrian might be unable to avoid the sight of strip clubs en route to another venue in the same neighborhood.
Reason #4: Nuclear Disarmament
One admittedly embarrassing aspect of my thesis is that it forces me to defend the right of the worst possible figures to stay online. After all, it isn't those fighting for tolerance, equality, and moderation who are teetering on the edge of viewpoint foreclosure. It's neo-Nazis, white supremacists, and conspiracy peddlers. Some might understandably say that if permitting viewpoint foreclosure means that QAnon and Holocaust denial can't find their way onto the internet, that'd be just fine by them. Isn't it only the truly execrable that run the risk of being foreclosed on the internet?
If that outcome could be guaranteed, then I might consider adopting the same permissive attitude. But as Paul Bernal put it, "when you build a censorship system for one purpose, you can be pretty certain that it will be used for other purposes." As evidence for that phenomenon, consider Ukraine's attempt to revoke Russia's top-level domains and IP addresses or African ISPs' suggestion to use network routing as a litigation tactic. If Nazis and white supremacists should be kicked off the internet, why not pro-Russian bloggers? Why not those who endanger public health by attempting to link autism to childhood vaccines? Why not climate skeptics?
At the end of the day, when we talk about viewpoint foreclosure, what we're really talking about is a nuclear option—the ability to nuke a viewpoint off the face of the internet. Nuclear weapons are great, I suppose, if only responsible parties possess them. But one of the central concerns that animates nuclear disarmament is that weapons of mass destruction can fall into the wrong hands.
So it is with viewpoint foreclosure. There is simply no guarantee that today's orthodoxies (equality, human dignity, tolerance) won't become tomorrow's heresies (they once were) and eventually be forced offline. Much better to ensure that no private entities can use their power over internet architecture to exile their ideological opponents. The law should therefore recognize a basic right to lawfully express oneself on the internet—at least on one's own website.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
"If Facebook were forced to host misogyny, misandry, or Holocaust denial, as could happen under Texas's HB 20, it might be difficult for users to completely avoid being exposed to statements and opinions they dislike."
Why would I be Facebook friends with misogynists, misandrists, and Holocaust deniers?
I don't buy this "sullying" concept. There's a lot of gross stuff on Twitter or YouTube, for instance, that doesn't break any rules, yet I would never encounter unless I were actively looking for it.
I used to be a member of a private group on FB, just some old engineers and college professors who'd been arguing with each other for years on one platform after another. None of us were Holocaust deniers, or anything that could fairly be characterized that way. We weren't even all conservatives. But we didn't particularly respect the left's shibboleths, that much is true.
Everything was fine for years, then out of the blue our group started getting locked, and the administrator would get demands that we take down content that violated the TOS. We were never told exactly how it violated the TOS. And certainly nobody was offended by it, it was a private group!
Eventually we had to jump ship to MeWe, because it just became insupportable to try to keep discussing things on FB while trying to guess what would set off the PC enforcer they'd sic'd on us.
So, yeah, this is NOT about people on the platforms being offended. Heck, the only reason you even see anything on FB that you didn't sign up to see, is that FB itself insists on shoving things in front of your eyeballs unsolicited!
The only people being offended here are the censors themselves...
This depends on how the site works. If a site uses a recommended reading algorithm (which I believe Facebook does), you could have the stuff delivered to your feed whether you want it or not. The primary customers for sites like Facebook are firms with advertising dollars to spend. We've already seen what a decrease in moderation and increase in vile content has done to Twitter/X's revenue. Their customers have abandoned them. Further, their primary revenue source--Twitter users--are slowly evaporating. (14% of Android users and 15% of iOS users.) Inversely, Parler, Truth Social, Gab, and similar "free speech" services never really attracted anything other than the worst the right-wing has to offer; the general public opted to avoid associating with the social sludge those services attracted.
I can't speak for everyone, but I've never seen anything on either Facebook or Twitter that wasn't at least somewhat related to my selected interests or designated friends/followers. If "Recommended" means either curated or by popularity, then it doesn't seem likely that it would include content that is almost universally condemned.
From reading reports from people who had bad advertising experiences on Twitter, my impression is that the liaison staff were cut and so it was much more difficult for advertisers to see how well their ads were working or how best to place them. The lack of moderation angle has never made much sense for the same reasons I mentioned above; the people who see ads next to bad content are looking for bad content (not to mention that the consumer also has to misunderstand how advertising works to impute ad placement on a feed or results page to an endorsement of the content appearing either above or below it).
Facebook is not a closed electronic mailing list. Users by design see things on Facebook that are not posted by their Facebook friends in addition to things that are.
The internet is the modern forum and the streets leading thereto. Blocking someone’s physical access to the forum would have been understood to have been an infringement on rights. And so…
The internet is a collection of devices and protocols on an interconnected network. It provides a myriad of services ranging from banking to media consumption to signaling a door lock or garage door opener. That doesn't really fit the concept of "forum," modern or otherwise.
Some of the services provided on the internet are forums, many of them privately owned and managed. Using your analogy, these are private homes and businesses along "the streets leading" to the public forum. The folks owning those private spaces should be allowed to eject rude and unruly people.
Was there ever a time when the public forum was 100% free speech safe? When you could do anything or say anything you wanted without repercussion? Or were there rules, social or legal, that governed acceptable behavior? I think for your analogy to work, the concept of public forums needs to have been near rule-less; otherwise, one's rights in that forum were moderated in some manner.
One basic issue is that there is no “marketplace” for core internet infrastructure. Domain name registration is a legal oligopoly, with only a small number of companies legally authorized to register domain names. Legal monopoly or oligopoly status normally comes with common carrier restrictions, and the First Amendment does not prevent those restrictions. Television is a good example, since TV stations clearly supply content, not just infrastructure, and the First Amendment more clearly applies. Yet television broadcasting can be regulated much more than a market with no legal barriers to entry like newspapers. Although many former regulations on television broadcasting (where channels are limited by law) have been repealed, the Supreme Court upheld most of them.
A seminal example is the Supreme Court’s upholding of the Fairness Doctrine in Red Lion Broadcasting v. FCC. The fairness doctrine could be characterized as a limitation on the ability of TV stations to use editorial control to restrict viewpoints. If legal oligopoly status renders the Fairness Doctrine constitutional for TV stations, it most assuredly renders legal limitations on viewpoint restriction constitutional for internet domain registration companies, whose relevance to disseminating ideas is far more attenuated.
Domain registrations are fairly competitive at this point. There's hundreds of retail registrars spread across countries and continents, and there's now a lot of alternative top-level registries so we're not just dependent on VeriSign and .com.
The RIRs are still a chokepoint for IP address allocations, and at the end of the day ICANN is responsible for the administration of all of these unique identifiers. But these entities are inherently multinational, so thinking about applying US legal principles to them starts to get complicated (although both ICANN and ARIN are corporations domiciled in the US so they are subject to US law).
If we look at deplatforming examples from the past, DDoS protection is probably the biggest vulnerability for a deplatforming campaign. You can set up your own server and almost certainly find some ISP willing to host you, and these are things that are not prohibitively expensive. But it would be trivial for an adversary to take such a site down through a denial of service attack, and the infrastructure to combat this sort of attack is both beyond the reasonable technical sophistication and financial wherewithal of almost everyone. That's why Cloudflare dropping Parler was such a big deal, and why folks who wanted them offline pushed to get LACNIC to take back DDoS-Guard's IP addresses.
While I agree that DDoS protection services are a weak point in the service chain for large online services, there are international protection services, including, for example, CloudMTS in Russia, that can provide that service. Also, a minor quibble with your comment regarding how trivial it is to take down a site with a denial of service attack: if it's a simple DoS and the target has some basic but well-configured security and load balancers, it's likely not going to be effective. Whereas a distributed denial of service (DDoS) attack requires far higher computing resources on both ends, which makes it non-trivial to execute and non-trivial to defend against.
Going after Cloudflare in Parler's case was especially effective because Parler is/was a site for extremists and it attracted a lot of unwanted attention. If one thinks of DDoS protection services as a type of insurance, Cloudflare is as unlikely to want to do business with Parler as most insurance companies want to write hurricane policies in Florida or fire policies in rural, mountainous California. DDoS protection is absolutely not necessary to run an online service as long as you don't make a point of pissing people off.
There are over 1,000 ICANN registrars and several times that many domain resellers. Is it still an oligopoly if the number of competitors is that high?
Most of those are not in fact independent. You can buy a .COM domain at any of hundreds of registrars but they all answer by contract to Verisign. If Verisign decides to cut you off, the fact that you actually bought your domain from GoDaddy is no protection at all.
jb has a slightly stronger point that there are other top-level-domains than just .COM but when you exclude all the restricted ones (country-specific and controlled such as .BANK) and the jointly-controlled TLDs (Verisign also controls .NET, .NAME, .EDU, etc), the case against oligopoly is not as strong as you two make it out. There may be a dozen practical alternatives.
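The registry/registrar relationship described above can be modeled as a toy hierarchy. This is purely an illustrative sketch, not how actual DNS provisioning systems work, and the registrar assignments are made up: the point is that the registrar you bought from is irrelevant once the registry operator drops the name from its zone.

```python
# Toy model: one registry operates a TLD's zone; many registrars
# sell names within it, but the registry controls the zone itself.
# Registrar names here are illustrative only.
registry_zone = {
    "example.com": "GoDaddy",    # registrar of record
    "sample.com": "Namecheap",
}

def is_resolvable(domain: str) -> bool:
    """A name 'exists' only while the registry keeps it in the zone."""
    return domain in registry_zone

assert is_resolvable("example.com")

# The registry (not the registrar) removes the name:
registry_zone.pop("example.com")

# The registrar contract is now irrelevant; the name is gone.
assert not is_resolvable("example.com")
```

In other words, hundreds of retail registrars provide competition at the sales layer but no redundancy at the registry layer, which is the layer that matters for exclusion.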
I can see your point there on the oligopoly, though they're under contract from a quasi-governmental non-profit (ICANN) and that contract could always not be renewed or could be modified to reduce the likelihood Verisign and other registrars would abuse their position. (It might already be in the contract; I haven't read it nor am ever likely to.)
There's been some consolidation in the registry business so you're right that the range of options has shrunk down to the dozen-ish you mention. In most markets, including this one, that seems like reasonable evidence of robust competition, and ICANN is going to be allowing new TLDs again in a few years so the situation should also improve over time.
Besides, in this case ICANN is basically acting as the oligopoly regulator. One of its functions is explicitly to encourage competition in the domain name space.
The declaratory sections (a & b) of 47 U.S. Code § 230 declare the Internet to be a public forum, and the definition (f)(1) of the Internet explicitly tells us that the Internet is a network of federal and non-federal networks.
The Internet is substantially composed of federal networks and technology that a social medium platform intrinsically uses in creating its forum within the public forum of the Internet. A social medium platform is inextricably intertwined with the government if it operates within the Internet. It is the situation of Burton v. Wilmington Parking Authority at both the state and also at the federal level.
A social medium platform does not have to host unprotected speech like obscenity, and it could restrict its forum to children (but not specifically to white children).
To be more precise, Section 230 addresses:
• (a) (1) “our citizens”,
• (a) (3) “forum for a true diversity of political discourse”, and
• (a) (4) “benefit of all Americans”.
According to the ordinary definition of English words, (a) (1), (3), and (4) refer to the Internet and seem to designate the Internet to be a forum for the American public, to wit, a public forum. (SCOTUS must rule.)
Sometimes the following incorrect argument is made.
Social media platforms do not “operate within the Internet.” That phrase has no meaning. Every retail store in the country relies upon public roadways, both to obtain its products and to receive its customers. A retail store also relies on other government services.
This reliance does not mean that all stores are “inextricably intertwined with the government.”
In point of fact, the Internet is a big machine or device. Every device connected to the Internet is part of the bigger machine or device.
The analogy to the phone network should be obvious. The US phone network is a big machine or device. Every device (including a customer premises handset) connected to the phone network is part of the bigger machine or device.
In contrast, the road, which passes by the supermarket, is not part of the supermarket. These issues were all settled decades ago — in some cases over a century ago.
The folks who actually wrote the statute you reference would likely agree with your statement that "These issues were all settled decades ago — in some cases over a century ago." The overarching goal of the statute you cite was to change nothing -- rather to reiterate that new, distributed machinery was similar to old, distributed libraries, to old inns and their keepers, and to other seemingly unrelated concepts.
You should spam this technologically and legally illiterate nonsense in every thread on the VC. It will become true if you keep repeating it. In fact, if you do it enough, courts will go back in time and rule in your favor on the frivolous lawsuits you filed based on these illiterate theories.
Iirc the Internet has been different from the very beginning. It connects networks, not really individual computers. As part of the ground rules, a company couldn't insinuate itself there then start blocking stuff.
Given that, it becomes a non issue. If you want to join as a backbone provider, then start blocking traffic, well, I don't know what you thought you were joining, but it sure as hell wasn't the Internet.
Iirc the Internet has been different from the very beginning. It connects networks, not really individual computers. As part of the ground rules, a company couldn’t insinuate itself there then start blocking stuff.
Like Sarcastr0 wrote yesterday, if the outcome of the conservative circle jerk about Big Tech censorship is that we can all finally have network neutrality (which is exactly the principle you’re describing) codified as a ground rule, that will be pretty great.
For now, though, there’s no such rules. First, companies have filtered traffic for decades. Generally not because of what content it’s carrying but to mitigate DDoS attacks (probably desirable) or because they might not like the general category of traffic (e.g. streaming video in places where bandwidth is scarce).
Second, it turns out that the peering relationships between individual networks are opt-in, meaning that you and your neighbor network need to agree to pass traffic between each other. There’s no inherent expectation that all traffic goes between all networks, either as a matter of technical design or of business practice.
Finally, countries and other political entities impose filtering requirements all the time. See China’s Great Firewall as the most obvious example, but also places filtering Twitter during the Arab Spring or Iran trying to be in their citizens’ business all the time. None of this breaks how the Internet works even if we may find it objectionable.
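The opt-in nature of peering can be sketched as a toy graph: traffic between two networks flows only if some chain of voluntary peering agreements connects them. The network names below are invented for illustration; real interconnection also involves transit contracts and routing policy, which this sketch ignores.

```python
from collections import deque

# Each entry lists the networks a given network has *agreed* to peer
# with. Peering is opt-in: no agreement, no link.
peers = {
    "NetA": {"NetB"},
    "NetB": {"NetA", "NetC"},
    "NetC": {"NetB"},
    "NetD": set(),  # NetD has no peering agreements at all
}

def reachable(src: str, dst: str) -> bool:
    """Breadth-first search over voluntary peering links."""
    seen, queue = {src}, deque([src])
    while queue:
        net = queue.popleft()
        if net == dst:
            return True
        for nxt in peers.get(net, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

assert reachable("NetA", "NetC")      # reachable via NetB's agreements
assert not reachable("NetA", "NetD")  # no peering chain exists
```

Nothing in the protocol forces NetD's neighbors to carry its traffic; connectivity is the accumulated product of bilateral choices, which is why "the Internet routes around censorship" is a norm, not a guarantee.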
Sigh. My pretty blockquote got deleted when I did a grammar edit. First paragraph above is from Krayt if that's not obvious.
I'm a massive fan of net neutrality but I don't believe that concept really covers the larger issue conservatives are upset about which is the right of some private services to ban them in whole or part for being jerks.
Oh, I agree. But if you take Krayt's post at face value he's saying that network infrastructure providers shouldn't get to pick and choose what traffic to carry which is, you know, net neutrality.
He's not the first to note that irony, but they're not even sane about it. They want to end up with no net neutrality for the networks, but net neutrality for the applications.
The "internet is different" argument doesn't work for me. Jane can hand out "Jane's Free Press" in the internet age, too. What she cannot do is get on the radio waves (except ham radio, maybe) or TV. But radio and TV are "where the people were" for the past several decades. That didn't create an entitlement to television access. So the internet is different in degree of accessibility and availability, but I'm not convinced it is different in kind from the airwaves and cable bandwidth.
But can't this argument be applied just as easily against allowing Jane to write and hand out "Jane's Free Press"? Why can't the government legitimately prohibit stationers from selling her paper and ink? After all, she can still convey her views by buttonholing people on the street, or by standing on the corner and shouting; and oral communication was the way people relayed information before the invention of writing.
TV and radio are a bad example here insofar as without government regulation of frequency use, the useful radio spectrum would be subject to interference and rendered useless. That isn’t the case for Jane and her printing press or Janet and her online blog. As it pertains to the internet, it isn’t the government (in the US at least) that is blocking speakers from using private internet resources but the resource owners themselves–often as a necessary aspect of their business in order to maintain a stable user community.
"But radio and TV are 'where the people were' for the past several decades. That didn’t create an entitlement to television access. "
In fact, it DID create an entitlement to television access -- and on multiple levels. Sec. 611 of the Communications Act was revised in 1976 to require that cable operators provide "PEG channels." And today, taxpayers subsidize both cell phone and cable television access for those in (and not in) need.
More interestingly, the 1976 amendments were considered on the record when first discussing Internet regulation.
There is a difference between publication and distribution. Newspapers can control what they publish, and they also control the distribution when they deliver the physical paper to the reader's doorstep.
When they use outside entities to distribute their content, they run the risk of being denied distribution due to the rules of the distributors. The rules of the distributor can control access rights, movement of goods, and distribution placement within the system. This is where the rules of the on-line platforms control the distribution. A case can be made that this is an oligopoly, and the government is reviewing those rules in certain cases.
Could you be more detailed on what you mean by "distributor" in the context of internet publications? From my perspective, the distributor for Facebook is Facebook. Same with YouTube and the majority of sites. There are some content delivery network (CDN) services that can be used to distribute content from nodes closer to the consumers but beyond that, I'm not sure what you mean by "distributor" such that they could impose control over online content services.
"After all, it isn't those fighting for tolerance, equality, and moderation who are teetering on the edge of viewpoint foreclosure."
Would that it were so, Professor Nugent! Unfortunately, what many people refer to as "tolerance" actually means approval of certain favored positions.
Allow me to quote a sentence from a discussion in which I was recently involved. Another participant wrote, "I will actively respect and celebrate differences of opinion, if those opinions don't disrespect anyone's existence."
There's a wide gap between tolerating something and celebrating it, but all too many people are ignoring that gap: if they don't regard something with approval, they don't believe that it warrants tolerance. There's no room in their minds for "This frankly disgusts and appalls me, but I have no right to tell other people how to lead their lives, however little I like their choices."
So, for instance, we should "tolerate"—which is to say, celebrate—the demands made by transgender activists; but we should suppress the expressions of those who disagree.
Let me substitute Jews for transgendered persons. Should we tolerate requests for civility and equality towards Jews but not tolerate requests to deny civility and equality to Jews? Yes. Yes we should.
You've skated awful close to the Tolerance Paradox.
Also, "demands made" versus "expressions" are interesting choices and placement of words in your transgender example.
"Should we tolerate requests for civility and equality towards Jews but not tolerate requests to deny civility and equality to Jews? Yes. Yes we should."
No. No, we shouldn't. Not if "not tolerate" means "use any means possible to keep such people from expressing their views".
I'm opposed to anti-discrimination laws, for instance. While governments should not be allowed to discriminate on the basis of race, sex, ethnicity, etc., private actors, including providers of public accommodations, should be free to do so if they so choose. I think there's a reasonable libertarian argument to be made for this position.
However, this might well be seen as a request to "deny equality" to these classes, including Jews. Does Shawn_dude believe that I should therefore be prevented from making my case against anti-discrimination law?
He obviously does. The “Tolerance Paradox,” as I understand it, is the idea that it’s OK for society to be “intolerant” toward (i.e., to suppress) the intolerant. But, of course, once you get to define your opponents as “intolerant” — as Democrats are currently trying to do to Republicans — it’s game over! How convenient!
The author goes on to argue that we shouldn’t allow such “viewpoint foreclosure” because doing so creates the risk that, some day, even “non-execrable” persons/ideas might be shut down/out. He argues for the right result, but his reasoning is off. The reason “viewpoint foreclosure” must be avoided is that no one should be in a position to define some ideas as “good” and others as “bad” (and then suppress the latter), not even “those [who claim to be] fighting for tolerance, equality, and moderation”!
https://reason.com/volokh/2020/08/23/putting-limits-on-acceptable-scholarship/?comments=true#comment-8419330
Popper's notion was about people who wanted to violently impose their will on others, not people who had intolerant ideas.
"A Web site has the right to exist, unless it's the .il domain." - Ilhan Omar
Should a Website Have the Right to Exist?
That headline creates too strong a tendency to back the discussion away from Nugent's purported low-level infrastructure target, and to focus attention instead on current controversies about regulating websites. I am unable to guess whether or not that is an effect Nugent intends.