The Volokh Conspiracy


The Sec. 230 Temperature is Rising

Liability safe harbors for Internet intermediaries are not responsible for Internet "hate speech"


The Business Section of Tuesday's NY Times print edition blared out, in gigantic type not a whole lot smaller than "MAN WALKS ON MOON," this headline:

WHY HATE SPEECH ON THE INTERNET IS A NEVER-ENDING PROBLEM

Why? "BECAUSE THIS LAW SHIELDS IT" – referring to Section 230 of the Communications Decency Act of 1996, which provides that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

That is some serious nonsense—somewhere between terribly misleading and completely wrong.  Section 230—which I've blogged about many times before [see e.g. here and here], and which has, without question, been an indispensable pillar of the Internet's growth, the "twenty-six words that created the Internet," as the title of Prof. Jeff Kosseff's recent book has it—provides that website operators can't be held civilly liable** for "speech" that originates from "another information content provider," i.e., for user-generated content. It is impossible to imagine what the Internet information ecosystem would look like without it—a world in which Amazon, Facebook, YouTube, Instagram, SoundCloud, Twitter, Reddit, Medium, and literally hundreds of millions of other sites where users can exchange information with one another would face civil liability arising from the postings of their users.

** Section 230 expressly exempts federal criminal law from the scope of the immunity, meaning that websites obtain no special protection if their actions, in connection with the speech in question, constitute criminal activity.

There are legitimate concerns about Section 230's scope and the way it has been interpreted by the courts (see below).  But the notion that it is somehow responsible for "hate speech" on the Net—and, by extension, for the rising tide of gun violence—is ridiculous.

It is ridiculous because, most fundamentally, and depending of course on how it is defined, most of what we call "hate speech" is, however loathsome it may be, constitutionally protected. Section 230 doesn't provide Facebook et al. with immunity from liability for publishing their users' "hate speech"; the Constitution, in the First Amendment, does that.***

*** Interestingly, the Times itself rather quickly recognized its error.  A correction was appended to the online version of the article:

"An earlier version of this article incorrectly described the law that protects hate speech on the internet. The First Amendment, not Section 230 of the Communications Decency Act, protects it."  Oops.

And with respect to the small subset of "hate speech" that is not constitutionally protected—speech constituting incitement to imminent lawless action under the standard set forth in Brandenburg v. Ohio (1969)—any criminal penalties that may be imposed for such speech are completely unaffected by Sec. 230.

So removing Section 230 tomorrow would do nothing to deal with the "hate speech" problem.

But pointing the finger in Section 230's direction is part of what is, to put it mildly, a growing trend to lay all of the Internet's ills—all the hate speech, the revenge porn, the child porn, the terrorist information exchanges, the fake news, the general dumbing-down of the entire planet—at Section 230's door, part of a more generalized attack, from the political left and right, on the giant Internet platforms. [See Elizabeth Nolan Brown's article here on Reason.com, "Section 230 Is the Internet's First Amendment. Now Both Republicans and Democrats Want To Take It Away."]

Then, in an op-ed in Thursday's NY Times, Jonathan Taplin adds his voice to the chorus of those seeking to "change safe harbor laws [i.e., Sec. 230] to ensure that social media platforms are held accountable."

I believe we can all agree that mass murder, faked videos and pornography should not be broadcast — not by cable news providers, and certainly not by Facebook and YouTube…. Changing the safe harbor laws so that social media platforms are held accountable for the content their users post would incentivize Facebook and YouTube to take things like the deep-fake video of Nancy Pelosi and the Christchurch shooting videos more seriously. Congress must revisit the Safe Harbor statutes so that active intermediaries are held legally responsible for the content on their sites.

Superficially appealing, but a terrible idea.  To begin with, there's that darn Constitution again—much of the material falling into the categories of "faked videos" and "pornography" is, again, constitutionally protected, which severely limits lawmakers' ability to get it off Internet websites.

Furthermore, I most emphatically do not agree that "mass murder, faked videos, and pornography should not be broadcast." It's not just that they're overwhelmingly constitutionally protected speech; it's that these categories contain immense amounts of valuable and/or harmless material. Would liability for hosting videos of "mass murder" include videos posted by one of the victims or potential victims, or an innocent bystander, or only those posted by the perpetrator? And what about videos of "mass murder" perpetrated by government troops (e.g., a video of the massacre in Tiananmen Square, or the murder of the Rohingya in Burma, or Serbian atrocities in Bosnia, or a police shooting in New York City)?  And if, as I suspect is the case, there are some videos documenting murder or other violent crimes that are "OK" and some that are "not OK," how are we to distinguish between them? And more to the point, how are YouTube or Instagram, with over 100 million uploads a day, to distinguish between them?

And really—pushing "faked videos" off the Net?! All those gifs of politicians or celebrities spouting idiotic slogans or assuming idiotic positions? All those cats playing the piano? All to be banned? Or, again, only the "bad" ones, and not the "good" ones? And which, exactly, are the bad ones? And who gets to decide that?

Section 230 has proved to be an enormously valuable engine of free expression, enabling billions of people to communicate with one another every day. Some of those people say, and do, ghastly things, and there may be sensible, Constitution-respecting ways to tweak Section 230 to target them and reduce their incidence.

But repealing, or otherwise dismantling, the immunity scheme set up by Section 230 will do little if anything to curb any of the truly objectionable content, while doing considerable damage to the Internet's ability to sustain civil discourse of all kinds. An Internet without Section 230 will, among other things, pose insurmountable obstacles to any new entrants seeking to gain a foothold in the social media universe; holding them "legally responsible for the content on their sites" will virtually guarantee that only the existing Internet giants will have pockets deep enough to withstand the impact of vast and probably incalculable potential liability.

Last month, as part of a group of several dozen scholars and Internet public policy advocates, I helped to draft a set of "Principles for Lawmakers" who might be thinking about tinkering with Section 230 (or eliminating it entirely).  Some change is almost certain to come, and a great deal depends on what it looks like.