The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
No, a Web Platform's Decision to Restrict Speech Doesn't Strip It of 47 U.S.C. § 230 Immunity
Lots of people have argued that, even if online platforms should often be immune from liability for speech by their users, they should lose that immunity if they decide to restrict some users' speech. Here's a sample of this argument:
In contrast [to platforms that allow people to post whatever they want], here [Twitter] has virtually created an editorial staff … who … spend time censoring [user posts]. Indeed, it could be said that [Twitter's] current system of [editing] may have a chilling effect on freedom of communication in cyberspace, and it appears that this chilling effect is exactly what [Twitter] wants, but for the legal liability that [should attach] to such censorship…. [Twitter's] conscious choice, to gain the benefits of editorial control, [should open] it up to a greater liability than … other computer networks that make no such choice.
Now you might think that's a good argument, or a bad argument. But it is precisely the argument that Congress rejected in passing 47 U.S.C. § 230. That quote is from a 1995 case called Stratton Oakmont v. Prodigy, which held that Prodigy could be sued for its users' libelous posts because Prodigy edited submissions; the references to Twitter in the block quote above are to Prodigy in the original. Congress enacted § 230 to (among other things) overturn Stratton Oakmont.
And in addition to providing immunity to platforms that edit (alongside those that don't), Congress expressly protected their right to edit, notwithstanding any state laws that might aim to restrict that right (not that state laws generally do that):
No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected ….
This protects providers' ability to restrict any material that they consider to be, among other things, "harassing" or "otherwise objectionable" (whether or not the court agrees with that view). When a service's operators restrict material that they view as offensive to themselves or to some users, they are restricting material that they consider—in perfect good faith—to be objectionable.
We might think that the service is wrong to consider some ideologies to be objectionable, or unduly narrow-minded, or acting in a way that harms public debate. But if Twitter is censoring some conservative messages, it's doing that precisely because it considers them to be "otherwise objectionable." (One can imagine non-good-faith restrictions, such as if a service restricts messages not because it considers them objectionable but simply because it's competing financially with their authors, and wants to use its market share as a way to block the competition; but that doesn't seem to be happening in any of the recent blocking controversies.)
Maybe Congress erred; maybe § 230 should be revised; I'm inclined to think it's on balance a good idea, but we can certainly debate about whether and how it should be changed. But we should recognize that § 230 does indeed provide immunity to platforms that restrict material they consider objectionable (whether for political or other reasons) as well as to platforms that don't.
You seem to be depriving that “in good faith” of any function. But we have to assume Congress put those words in the statute for a reason: That particular guarantee was only supposed to apply to good faith moderation.
Interpreting “or otherwise objectionable” to include anything the carrier dislikes for any reason whatsoever would render the “in good faith” meaningless.
Exactly this. They are definitely not protected, even under section 230.
Sue them out of existence.
Indeed, and I would also point out that one of the most objectionable aspects of Twitter’s conduct in this entire affair is its failure to suppress the calumnious account purporting to be posted by Devin’s mother. This is surely an act of criminal mimicry if there ever was one. See the documentation of our nation’s leading criminal “parody” case at:
https://raphaelgolbtrial.wordpress.com/
230 is about immunity from liability, which only matters if there is liability in the first place. The First Amendment already recognizes that people have the right to decide what content is on their platforms for any reason whatsoever. The point of 230 is that the people who commit defamation are the ones who can be sued, unless the platform was actually involved in the specific acts of defamation. You don't sue the people who built the roads when someone drives to your house and robs you, on the theory that they should have built the road so it couldn't be used for robbery. Conservatives arguing for some form of a "fairness doctrine" on the internet in the age of Trump is one of the most pathetically embarrassing things in the history of the universe.
The “good faith” requirement means that the reason is not pretextual. EV has written extensively about fake take down orders. If services deleted comments based on payments from “reputation management” companies, and a not a good faith belief that the posts were objectionable, there would be liability. And, as noted elsewhere, if a take-down was based on a desire to gain a competitive advantage, it would also be in bad faith. In short, “good faith” has a function, just not the function you would like.
Your argument would have to be that political motivation is a pretext similar to those examples. The problem is that the line between political viewpoints and good-faith moral outrage is exceedingly thin, as many of your own posts demonstrate. Generally, “good faith” is a subjective rather than objective standard. It therefore seems to me that GAB could take down posts advocating inter-racial marriage because of the “moral offense” it gave, and the message board for Operation Rescue could take down offensive screeds in favor of “baby-killing.”
Now, it seems plausible to me that there could be times where there is a subjective "bad faith" political motivation, but the proof, not to mention pleading under Twombly and Iqbal, would be difficult and would only extend to the take-down of nearly content-free, innocuous posts.