Internet

The Sec. 230 Temperature is Rising

Liability safe harbors for Internet intermediaries are not responsible for Internet "hate speech"

The Volokh Conspiracy

The Business Section of Tuesday's NY Times print edition blared out, in gigantic type not a whole lot smaller than "MAN WALKS ON MOON," this headline:

WHY HATE SPEECH ON THE INTERNET IS A NEVER-ENDING PROBLEM

Why? "BECAUSE THIS LAW SHIELDS IT" – referring to Section 230 of the Communications Decency Act of 1996, which provides that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

That is some serious nonsense—somewhere between terribly misleading and completely wrong.  Section 230—which I've blogged about many times before [see e.g. here and here], and which has, without question, been an indispensable pillar of the Internet's growth, the "twenty-six words that created the Internet," as the title of Prof. Jeff Kosseff's recent book has it—provides that website operators can't be held civilly liable** for "speech" that originates from "another information content provider," i.e., for user-generated content. It is impossible to imagine what the Internet information ecosystem would look like without it—where Amazon, and Facebook, and YouTube, and Instagram, and SoundCloud, and Twitter, and Reddit, and Medium, and literally hundreds of millions of other sites where users can exchange information with one another, would face civil liability arising from the postings of their users.

** Section 230 expressly exempts federal criminal law from the scope of the immunity, meaning that websites obtain no special protection if their actions, in connection with the speech in question, constitute criminal activity.

There are legitimate concerns about Section 230's scope and the way it has been interpreted by the courts (see below).  But the notion that it is somehow responsible for "hate speech" on the Net—and, by extension, for the rising tide of gun violence—is ridiculous.

It is ridiculous because, most fundamentally and depending of course on how it is defined, most of what we call "hate speech" is, however loathsome it may be, constitutionally protected. Section 230 doesn't provide Facebook et al. with immunity from liability for publishing their users' "hate speech"; the Constitution does that, in the First Amendment.***

*** Interestingly, the Times itself rather quickly recognized its error.  A correction was appended to the online version of the article:

"An earlier version of this article incorrectly described the law that protects hate speech on the internet. The First Amendment, not Section 230 of the Communications Decency Act, protects it."  Oops.

And with respect to the small subset of "hate speech" that is not constitutionally protected—words that are an "incitement to violence" under the standard set forth in Brandenburg v. Ohio (1969)—any criminal penalties which may be imposed for such speech are completely unaffected by Sec. 230.

So removing Section 230 tomorrow would do nothing to deal with the "hate speech" problem.

But pointing the finger in Section 230's direction is part of an increasing trend, to put it mildly, to lay all of the Internet's ills—all the hate speech, the revenge porn, the child porn, the terrorist information exchanges, the fake news, the general dumbing-down of the entire planet—at Section 230's door, part of a more generalized attack, from the political left and right, on the giant Internet platforms. [See Elizabeth Nolan Brown's article here on Reason.com "Section 230 Is the Internet's First Amendment. Now Both Republicans and Democrats Want To Take It Away"]

Then, in an op-ed in Thursday's NY Times, Jonathan Taplin adds his voice to the chorus of those seeking to "change safe harbor laws [i.e., Sec. 230] to ensure that social media platforms are held accountable."

I believe we can all agree that mass murder, faked videos and pornography should not be broadcast — not by cable news providers, and certainly not by Facebook and YouTube…. Changing the safe harbor laws so that social media platforms are held accountable for the content their users post would incentivize Facebook and YouTube to take things like the deep-fake video of Nancy Pelosi and the Christchurch shooting videos more seriously. Congress must revisit the Safe Harbor statutes so that active intermediaries are held legally responsible for the content on their sites.

Superficially appealing, but a terrible idea.  To begin with, there's that darn Constitution again—much of the material falling into the categories of "faked videos" and "pornography" is, again, constitutionally protected, which severely limits lawmakers' ability to get it off Internet websites.

Furthermore, I most emphatically do not agree that "mass murder, faked videos, and pornography should not be broadcast." It's not just that they're overwhelmingly constitutionally protected speech; it's that they're categories that contain immense amounts of valuable and/or harmless material. Would liability for hosting videos of "mass murder" include videos posted by one of the victims or potential victims, or an innocent bystander, or only those posted by the perpetrator? And what about videos of "mass murder" perpetrated by government troops (e.g., a video of the massacre in Tiananmen Square, or the murder of the Rohingya in Burma, or Serbian atrocities in Bosnia, or a police shooting in New York City)?  And if, as I suspect is the case, there are some videos documenting murder or other violent crimes that are "OK" and some that are "not OK," how are we to distinguish between them? And more to the point, how are YouTube or Instagram, with over 100 million uploads a day, to distinguish between them?

And really—pushing "faked videos" off of the Net?! All those gifs of politicians or celebrities spouting idiotic slogans or assuming idiotic positions? All those cats playing the piano? All to be banned? Or, again, only the "bad" ones, and not the "good" ones? And which, exactly, are the bad ones? And who gets to decide that?

Section 230 has proved to be an enormously valuable engine of free expression, enabling billions of people to communicate with one another every day. Some of those people say, and do, ghastly things, and there may be sensible, Constitution-respecting ways to tweak Section 230 to target them and reduce their incidence.

But repealing, or otherwise dismantling, the immunity scheme set up by Section 230 will do little if anything to curb any of the truly objectionable content, while doing considerable damage to the Internet's ability to sustain civil discourse of all kinds. An Internet without Section 230 will, among other things, pose insurmountable obstacles to any new entrants seeking to gain a foothold in the social media universe; holding them "legally responsible for the content on their sites" will virtually guarantee that only the existing Internet giants will have pockets deep enough to withstand the impact of vast and probably incalculable potential liability.

Last month, as part of a group of several dozen scholars and Internet public policy advocates, I helped to draft a set of "Principles for Lawmakers" who might be thinking about tinkering with Section 230 (or eliminating it entirely).  Some change is almost certain to come, and a great deal depends on what it looks like.



  1. Regarding 8chan, Wikipedia tells us the following:
    “Since as early as March 2014, its FAQ has stated only one rule that is to be globally enforced: ‘Do not post, request, or link to any content illegal in the United States of America. Do not create boards with the sole purpose of posting or spreading such content.’”

    Seems the founder was familiar with Section 230.

  2. “So removing Section 230 tomorrow would do nothing to deal with the “hate speech” problem.”

    This is a rather broad overstatement.
    Repeal Section 230, and many publishers, fearful (reasonable or not) of civil liability, will restrict or eliminate the ability of users to interact… either by interposing moderation (human or automated), or by removing features that let users post content.

    Either of those actions would do something to deal with the “hate speech” problem, and both would be widespread.

    1. It’s already happening, despite Section 230.
      As far as the ‘woke’ are concerned, any speech that isn’t in lockstep with them is by definition ‘hate speech.’

      1. “As far as the ‘woke’ are concerned, any speech that isn’t in lockstep with them is by definition ‘hate speech.’”

        So? According to the nice folks on the right, when tech companies apply their rules to people on the right, it’s because bias. Turns out taking partisans’ words for what is true is unwise, because they see the world through a distorted partisan lens.

    2. “Repeal Section 230, and many publishers, fearful (reasonable or not) of civil liability, will restrict or eliminate the ability of users to interact… either by interposing moderation (human or automated), or by removing features that let users post content.”

      If Section 230 were repealed, I cannot imagine many providers “interposing moderation” because that would invite liability that 230 otherwise foreclosed.

  3. Mmm, “The rising tide of gun violence” … Are you sure you didn’t mean “the rising tide of bloodsport reporting on a relatively small amount of gun violence”?

    1. “the rising tide of bloodsport reporting on a relatively small amount of gun violence”

      Relative to what?

        1. There’s less malaria, too. Should we stop trying to prevent that, too?

          1. You can claim we should try to prevent it, just don’t insist on calling it a ‘rising tide’ while the tide is going out.

            1. The tide is high, but I’m moving on.
              Something, something, number one.

  4. A rather superficial analysis.

    The CDA was enacted in response to the “Prodigy Services” case in New York, to provide immunity for website hosts for allegedly defamatory materials posted by third parties. Conceptually, that is a good idea. The problem is with the implementation; the CDA went too far. Section 230(c)(2)(A) provides immunity for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Note that last clause: “whether or not such material is constitutionally protected.” By this clause the federal government is denying access to its courts for clear 1st Amendment violations; that makes it a party to the violation. This aspect of the CDA violates the 1st Amendment.

    Website hosts should be immune to suit only if they provide truly neutral forums for the expression of ideas. But when a site chooses to exercise editorial control, and excludes material which its owners or moderators consider to be objectionable (but not objectively illegal), they effectively become publishers of those materials which they permit to appear. As such, they should be responsible for its content. Immunity should exist only for truly neutral sites which exercise no editorial control over the content published by others.

    1. “By this clause the federal government is denying access to its courts for clear 1st Amendment violations”

      By which you mean denying access to the courts for no first amendment violations, because private actors cannot violate the first amendment… only Congress can.

      “Website hosts should be immune to suit only if they provide truly neutral forums for the expression of ideas.”

      No. Just, no. Websites for rape survivors should have to allow pro-rape messages? Memorials for 9/11 have to allow pro-al qaeda messages? Fuck that noise. If you want a forum to express your ideas, knock yourself out. Buy a server, configure it with software, and put out whatever message you like. Demanding a right to use other people’s stuff to put your message out? Hell, no.

      1. Yeah, the point you’re missing is that, per Section 230, you as a site administrator have to make a choice. You can be a content neutral conduit for other people’s content, and immune to civil lawsuits for that content. Or you can make editorial decisions, and be civilly liable for that content.

        Nothing wrong with a site deciding to make that latter choice. They just lose the special protection Section 230 provides.

        1. “Yeah, the point you’re missing is that, per Section 230, you as a site administrator have to make a choice. You can be a content neutral conduit for other people’s content, and immune to civil lawsuits for that content. Or you can make editorial decisions, and be civilly liable for that content.”

          That choice existed before Section 230 came along. All S230 does is let you choose to remove some content without adopting the rest as your own.

          “Nothing wrong with a site deciding to make that latter choice. They just lose the special protection Section 230 provides.”

          Not at present.

        2. Once again, Brett, that is a lie. There is no such “choice” required by § 230; the entire point of § 230 is to eliminate the need to make that choice.

    2. “By this clause the federal government is denying access to its courts for clear 1st Amendment violations; that makes it a party to the violation.”

      Could you explain what you mean?

  5. “provides that website operators can’t be held civilly liable** for “speech” that originates from “another information content provider,”

    Only so long as their moderation is done in good faith. The courts have created the problem we face today, of politically discriminatory “deplatforming”, by failing to enforce the “good faith” clause of Section 230, which reserves its protection for sites that moderate content only if that moderation is done in good faith.

    Without that failure by the courts, companies like Facebook would have to refrain from their political censorship, because engaging in it would lose them their shield against civil suits.

    1. “which reserves its protection for sites that moderate content only if that moderation is done in good faith.”

      It allows for removal of some messages (for a long list of reasons) without requiring the information service provider to adopt all the messages that are left.

      It has nothing to do with “politically discriminatory deplatforming”, whatever the hell that might be (in the eye of the beholder).

    2. Again, assuming for the sake of argument that Facebook is discriminating politically, what does that have to do with whether they are doing so in good faith?

      1. “If you’re against MY side, you MUST be acting in bad faith.”

  6. Mitch McConnell was blocked on Twitter for showing a video of ‘protesters’ screaming death threats at him in front of his house.

    1. Fake news, out of context, and what about the shootings in Texas?

    2. “Mitch McConnell was blocked on Twitter for showing a video of ‘protesters’ screaming death threats at him”

      Because Twitter has a rule against attaching videos of people making death threats, and whoever runs Mitch’s Twitter feed broke the rule. When you break rules, you get blocked.

      1. So why do news organizations and other political campaigns get to post videos of violence and threats?

        In case you actually didn’t know, it’s because Twitter’s rules have an exemption for ‘legitimate purposes’, same as nudity.

        1. So stop using Twitter. They’ll never censor you again.

          Problem solved.

  7. “And what about videos of “mass murder” perpetrated by government troops (e.g., a video of the massacre in Tiananmen Square, or the murder of the Rohinga in Burma, or Serbian atrocities in Bosnia, or a police shooting in New York City)?”

    The NYPD has legal power and moral duty to use lethal force to defend the citizens of New York against criminals and lunatics. In 2016 and 2017 they used it 18 times, always against armed criminals and lunatics. (One off-duty officer killed a man who attacked him in a road-rage incident.)

    Casually equating the NYPD to genuine mass murderers is a vicious and baseless drive-by libel; the sort of rhetoric that inspired the assassination of Detective Miosotis Familia.

    But hey, if you issue a popular but vicious drive-by libel against the NYPD, why not include an even more popular and equally vicious and baseless libel against the IDF? You could add “…Nazis exterminating the Warsaw Ghetto, or Israelis bombing and shelling Palestinian hospitals” to your list of mass murders potentially on video.
