The Volokh Conspiracy

No, a Web Platform's Decision to Restrict Speech Doesn't Strip It of 47 U.S.C. § 230 Immunity

Lots of people have argued that, even if online platforms should often be immune from liability for speech by their users, they should lose that immunity if they decide to restrict some users' speech. Here's a sample of this argument:

In contrast [to platforms that allow people to post whatever they want], here [Twitter] has virtually created an editorial staff … who … spend time censoring [user posts]. Indeed, it could be said that [Twitter's] current system of [editing] may have a chilling effect on freedom of communication in cyberspace, and it appears that this chilling effect is exactly what [Twitter] wants, but for the legal liability that [should attach] to such censorship…. [Twitter's] conscious choice, to gain the benefits of editorial control, [should open] it up to a greater liability than … other computer networks that make no such choice.

Now you might think that's a good argument, or a bad argument. But it is precisely the argument that Congress rejected in passing 47 U.S.C. § 230. That quote is from a 1995 case called Stratton Oakmont v. Prodigy, which held that Prodigy could be sued for its users' libelous posts because Prodigy edited submissions; the references to Twitter in the block quote above are to Prodigy in the original. Congress enacted § 230 to (among other things) overturn Stratton Oakmont.

And in addition to providing immunity to platforms that edit (alongside those that don't), Congress expressly protected their right to edit, notwithstanding any state laws that might aim to restrict that right (not that state laws generally do that):

No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected ….

This protects providers' ability to restrict any material that they consider to be, among other things, "harassing" or "otherwise objectionable" (whether or not a court agrees with that view). When a service's operators restrict material that they view as offensive to themselves or to some users, they are restricting material that they consider—in perfect good faith—to be objectionable.

We might think that the service is wrong to consider some ideologies objectionable, that it is being unduly narrow-minded, or that it is acting in a way that harms public debate. But if Twitter is censoring some conservative messages, it's doing that precisely because it considers them to be "otherwise objectionable." (One can imagine non-good-faith restrictions, such as a service restricting messages not because it considers them objectionable but simply because it's competing financially with their authors and wants to use its market share to block the competition; but that doesn't seem to be happening in any of the recent blocking controversies.)

Maybe Congress erred; maybe § 230 should be revised. I'm inclined to think it's on balance a good idea, but we can certainly debate whether and how it should be changed. Either way, we should recognize that § 230 does indeed provide immunity to platforms that restrict material they consider objectionable (whether for political or other reasons) as well as to platforms that don't.