The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
The Return to Intermediary Control
I'm continuing to serialize my forthcoming UC Davis Law Review article What Cheap Speech Has Done: (Greater) Equality and Its Discontents; you can read the Introduction, but in this post I'm talking about how "cheap speech" has led to a revival of calls for restrictions imposed by intermediaries. Recall that the article is mostly descriptive, focusing on what's happening, for better or worse.
[* * *]
Cheap speech, as the Introduction noted, has made it easier for people to spread their own views, good or evil, and their own understandings of the facts, true or false. And the Internet has in many ways made it easier to speak anonymously, or otherwise in ways that hide one's identity. Foreign governments can take advantage of this, too, and so can foreign groups that might be under the influence of a foreign government. That too was much harder under the old media system, for better or for worse.
The spread of such bad ideas and factual falsehoods — or things that people think are bad ideas and factual falsehoods — may be constitutionally protected, but that doesn't mean the public and Congress have to like it. As a result, there has been pressure on intermediaries to "voluntarily" police supposed "hate speech," "fake news," and the like — policing that the First Amendment precludes the government from doing itself. And even as to speech that the government might itself be able to restrict, such as revenge porn, intermediaries have been providing much prompter takedown procedures than the legal system can practically provide.
Curiously, then, we seem to be reinventing, and many of us seem to be approving of, intermediary control: it's just that instead of newspaper and broadcaster editors choosing what to block, we're having that done by Facebook, Twitter, and occasionally other companies.
In a sense, one can imagine four different approaches to control of public speech:
- control by being regulated expressly by the government,
- control by being too expensive for ordinary people,
- control by private intermediaries, and
- no real control (at least of people's viewpoints and broad factual claims, as opposed to, say, of spam).
Modern First Amendment law largely precludes option (1), so as option (2) has retreated in significance, option (3) is being promoted as a substitute by those who find option (4) unacceptable.
On one hand, this form of Internet intermediary power is a less categorical control — if your speech is banned from Facebook, you can still get it out through other platforms (at least for now, while the infrastructure companies, such as hosting companies and search engines, police things only rarely). Such intermediary power also sweeps less broadly: Facebook excludes only a tiny fraction of the content that people try to post, while traditional editors excluded everything except what they chose to fit on their limited pages.
On the other hand, the control is more oligarchical than ever: a huge share of it is in the hands of the people running three companies (Facebook, Google, and Twitter). In the past, the control was more broadly shared among executives and editors at broadcast networks, local broadcasters, national magazines, and national and (mostly) local newspapers.
And, unsurprisingly, this sort of oligarchical control is leading to resentment among many users who had gotten used to the early Internet's more egalitarian model. Why should Mark Zuckerberg get to say what's on my Facebook page, they might think, rather than my having exclusive control over that?
They might not have thought that back in the pre-Internet era, when of course the local newspaper editor got to say what was in the newspaper, even on the letters-to-the-editor page. But give people a taste of the power to publish, and some of them won't be happy to give it up.
Some have remarked on a certain degree of ideological reversal that seems to be happening here. These days, it is (some) conservatives who, perceiving that the platforms are run by liberals, are worried about the platforms' restricting conservative speech. As a result, some conservatives are calling for extra regulation of privately owned businesses, something that conservatives generally tend to oppose.
Likewise, these days it is generally (some) liberals who enthusiastically support the power of large corporations — indeed, among the largest of corporations — to influence political speech. Ten years ago, many liberals sharply condemned the Supreme Court's decision in Citizens United v. FEC, which held that corporations and unions have a First Amendment right to speak about political candidates (independently of those candidates' campaigns). Thus, for instance, from one 2012 article from liberal think tank Demos, titled 10 Ways Citizens United Endangers Democracy: "[C]oncentrated wealth has a distorting effect on democracy, therefore, winners in the economic marketplace should not be allowed to dominate the political marketplace."
Yet urging Facebook, Twitter, and similar companies to restrict alleged "hate speech" and to police alleged "fake news" involves some of the biggest "winners in the economic marketplace" using their power to affect "the political marketplace." And while of course that power is limited, since Facebook and Twitter are indeed far from the whole of the Internet, corporate advertising about candidates after Citizens United was also comparatively modest.
According to OpenSecrets.org's More Money, Less Transparency: A Decade Under Citizens United, corporations contributed about $300 million to outside spending groups in the 2012–18 federal election campaign cycles, and unions contributed about $275 million. The corporate contributions "made up 10 percent of funding to these groups in the 2012 cycle, a high water mark," falling to 5% in 2018. And "[w]hile corporations and unions gained potential political power as a result of Citizens United, it's individual donors who are fueling the explosion of money in recent elections." Even taking into account the fact that the platforms generally don't overtly endorse one or another political candidate as such, their content policing likely affects politics at least as much as does the corporate political advertising protected by Citizens United.
Now neither some conservatives' support for restraining private platforms' policing power, nor some liberals' support for increasing the political influence of giant corporations, necessarily reflects logical inconsistency. Few conservatives are categorical foes of all regulation of private business. (Indeed, the most libertarian conservatives, who are the most skeptical of regulation, tend also to oppose regulation of platforms.) And few liberals are categorical foes of all corporate influence on the political process.
Most such political principles are, quite sensibly, presumptions rather than categorical rules. The conservatives who back regulation and the liberals who back platform power may simply see those presumptions as being rebutted by sufficiently strong countervailing interests (whether in protecting user speech, or in fighting "hate speech" and "fake news"). But in both cases, it seems that we are seeing a reaction to the advent of cheap speech, and a reaction to that reaction.
Reno v. ACLU; Ashcroft v. ACLU (I); United States v. American Library Association; Ashcroft v. ACLU (II); Packingham v. North Carolina. Perhaps Elonis v. United States (if you focus on the facts of that case rather than the legal issue). Those are the Internet First Amendment cases that the Supreme Court has considered, mostly dealing with shielding children from sexually themed material, but also, in Elonis, online threats.
But this is not where most of the interesting recent Internet free speech developments have arisen. Rather, they have come in surprising places:
- the survival and perhaps resurgence of criminal libel law;
- trial courts' broad acceptance of anti-libel injunctions;
- trial courts' willingness to issue remarkably broad bans on public online speech about people, in the name of preventing "harassment" or "stalking";
- the criminalization of the disclosure of private facts, whether through outright criminal laws or through injunctions enforced using the threat of contempt;
- the enactment or broader application of narrower restrictions on specific kinds of false statements and disclosure of private facts, such as impersonation and nonconsensual porn;
- the growth of calls for greater policing of online speech by the platforms.
For decades, the main lever for dealing with libel and disclosure of private facts has been the threat of civil damages liability. As that lever has become increasingly irrelevant for many speakers, the legal system has had to grasp for other levers, odd as they might have seemed in 1993. Likewise, for decades, the main lever for dealing with extremist speech and with conspiracy theories has been the control exerted by media intermediaries. As that lever has fallen away, people have called for the platforms to step into the gap.
Some of these developments have been promising. Some have been misguided. But they all represent, I think, the legal system's largely bottom-up struggle with the dark side of cheap speech and of the democratization of mass communications.
 I set aside here intermediaries providing extra speech, such as pointing to fact-checks of posts, cf. Dawn Carla Nunziato, Cheap Speech and Counterspeech by the New Intermediaries, 54 UC Davis L. Rev. (manuscript at 22-25) (2021); that does not involve restrictions (private or governmental) on speech, and indeed the government could itself publish such fact-checks (though it likely couldn't require platforms to publish them).
 There's debate about the degree to which the platforms' editing does target conservative speech. But it's of course human nature for people faced with a massive, largely hidden editing process to assume the worst about the process, especially when it is run by those who are largely on the other side of the political aisle.