The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Adam Candeub & Eugene Volokh, "Interpreting 47 U.S.C. § 230(c)(2)"
Still more from the free speech and social media platforms symposium in the first issue of our Journal of Free Speech Law; you can read the whole article (by Michigan State law professor Adam Candeub and me) here, but here's the abstract:
Section 230(c)(2) immunizes platforms' decisions to block material that they "consider[] to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." The ejusdem generis interpretive canon suggests that "otherwise objectionable" should be read "to embrace only objects similar in nature to those objects enumerated by the preceding specific words."
In this instance, the similarity is that all those words refer to material that was traditionally viewed as regulable in electronic communications media—and was indeed regulated by the Communications Decency Act of 1996, as part of which § 230 was enacted. And restrictions on speech on "the basis of its political or religious content" were not viewed as generally permissible, even in electronic communications.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
Even if true, what is the cause of action for censoring other content?
OT: It seems that the links to the articles on this post and the previous one about another article from Professor Candeub both go to the article by Professor Bhagwat.
Well, if you live in Louisiana or Alabama you can call the hotlines set up by the Attorneys General to report 'Social Media Censorship'. Not that the AGs can do anything for you, but it's a place to complain.
https://www.techdirt.com/articles/20210810/17230447337/louisiana-alabama-attorneys-general-set-up-silly-hotline-to-report-social-media-censorship-they-cant-do-anything-about.shtml
According to this line of reasoning, platforms are not allowed to remove spam (unsolicited commercial messages), off-topic messages, or pictures of things that are unpleasant but not of a sexual or violent nature, e.g., garbage or cat vomit.
So, do I have the constitutional right to post my multi-level marketing come-ons here? Or fill up the comments section with lurid, explicit, detailed descriptions of my cat's vomit? And are you "censoring" me if you or Reason removes comments of that nature?
Is a conservative Christian site prohibited from removing material promoting LGBT lifestyle? Is a Jewish site prohibited from disallowing Nazi propaganda? Is a medical site unable to take down posts promoting quackery like taking horse dewormer? Is this site required to allow users to promote Sovereign Citizen nonsense?
EV and Candeub may be correct about the application of ejusdem generis here, but assuming they are, what would be the course of action on the part of someone who thinks their material was illegally removed?
No, following this reasoning, platforms would not be immunized against lawsuits premised on their having removed such content. This doesn't imply they'd lose such lawsuits. Just that you could bring them, and not automatically have them thrown out on the basis of Section 230.
Yes, that is my understanding as well.
The thing is, without immunization platforms may be subjected to numerous suits. I think we both know that a long expensive lawsuit is a loser for the defendant even if they "win", which is why sec 230 was written the way it was. The point was not to allow the platform to prevail at trial, but to prevent a trial in the first place so as to encourage the free exchange of ideas.
Take away that immunization and you'll find that only the big players (e.g., Facebook, Google, Twitter) can afford the legal bills associated with hosting user-provided content.
The legal bills are incurred as a result of moderation, not hosting. Section 230 would still immunize hosting. It's moderation that would be perilous.
You'd likely see a lot of websites that aren't comfortable with refraining from stringent moderation just closing their comments, but Reason, for instance, could easily afford to keep theirs.
Agree that this interpretation would make moderation a minefield, and that many sites would simply close comments rather than be forced to carry content that they find "otherwise objectionable".
You may be correct that Reason has deep enough pockets to withstand the legal onslaught. Or not. I'm unaware of how financially viable they are; in any case, do we really want "free expression for those platforms that can afford it"?
What legal onslaught? Moderating opens you to lawsuits, but Reason doesn't moderate the comments, so they wouldn't be facing any onslaught.
Reason does in fact moderate the comments. If they didn't it would be even worse than it is.
Try posting a comment with multiple hyperlinks and see what happens, as an example.
They do not moderate the comments on any basis ejusdem generis would militate against.
Is there some specific piece of misinformation that causes you to believe that?
From the Reason TOS:
Reason Foundation may disable your user ID and password at Reason Foundation's sole discretion without notice or explanation.
Reason Foundation may terminate or restrict your use of the Websites or any part of the Websites at any time for any reason without notice. In the event of termination, you are no longer authorized to access the part of the Websites affected by such termination or restriction.
And from the notice at the top of this page:
We reserve the right to delete any comment for any reason at any time. Report abuses.
Notice the Flag Comment link attached to every post.
Granted, Reason does not "moderate" comments in the sense of reviewing them before they are posted*, but they will definitely take things down if they want to.
*There is apparently some automated moderation: for instance if you have multiple hyperlinks in a comment it goes into a holding queue, which is apparently not monitored.
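For what it's worth, a filter like the one described is trivial to build. Here's a minimal sketch; the one-link threshold, the function name, and the queue behavior are my assumptions, not anything Reason has documented:

```python
import re

# Hypothetical sketch of an automated pre-moderation filter of the kind
# described above: comments containing more than one hyperlink are diverted
# to a holding queue instead of being posted. Threshold and names assumed.
MAX_LINKS = 1

def should_hold(comment_text: str) -> bool:
    """Return True if the comment should be diverted to the holding queue."""
    links = re.findall(r"https?://\S+", comment_text)
    return len(links) > MAX_LINKS

# Example:
# should_hold("one link: https://example.com")              -> False
# should_hold("https://a.example plus https://b.example")   -> True
```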
Well, sure, they reserve the right to do it, standard boilerplate, but in practice they don't take things down except in extreme cases, and certainly not just out of disagreeing with them. Not at all like the many sites that will take comments down just because they disagree with them.
I'm not sure an automated filter that just stops comments with more than one link, and throttles really rapid commenting, is the sort of thing that gets you in trouble here. Does it even count as moderation, for legal purposes?
I've flagged obvious spam and seen it removed. The fact that there's little spam here says that they are at least moderating that.
Note that under EV's reading of sec 230, moderating for spam is not covered.
" in practice they don’t take things down except in extreme cases, and certainly not just out of disagreeing with them "
False.
Carry on, clingers.
So if I am Facebook and, say, I edit out pictures of barfing cats, and one slips through, and some lawyer can find a dweeb who shudders and shakes and loses their job because they saw it unexpectedly, I can get sued now?
Heck, right now, if your cat got lost, and you posted a picture of it on the local telephone poles, and some phobic ran their car off the road, you could be sued. Doesn't mean they'd win.
I would interpret those examples as "harassing". If a site clearly specifies the topic for a particular forum, then repeatedly posting adversarial or off-topic messages is harassing to the users of that forum.
alt.politics.flamewars does not belong in comp.arch
I think you are misreading the argument. This article is not about content already being illegal to remove, but about federal law not preempting state laws that would make it illegal. One hopes that state laws would preserve the ability of platforms to reject messages that involve off-topic or unwelcome advertisements. It should be sufficient, for example, to make platforms liable for applying their claimed rules in a discriminatory way, holding people to a higher standard based on their political positions.
"ejusdem generis"
Them's fighting words in some parts.
Perhaps a nickel tour for us non-lawyers?
It's the rule that, when a list of items is given, with a catchall at the end, only things that are like the listed items can be included under the catchall.
So, if you say, "Apples, oranges, bananas, and so forth" in a law, chainsaws are right out, kumquats in, and you could argue about tomatoes.
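For the non-lawyers, the canon can be pictured as a membership test. Here's a toy sketch; the trait labels and classifications are illustrative assumptions, not legal tests:

```python
# Toy model of ejusdem generis: the catchall ("and so forth") reaches only
# items sharing the enumerated items' common nature. The trait labels here
# are illustrative assumptions, not legal conclusions.
ENUMERATED = ["apples", "oranges", "bananas"]
SHARED_NATURE = "fruit"  # the nature the listed items have in common

TRAITS = {
    "kumquats": "fruit",            # clearly in
    "chainsaws": "tool",            # clearly out
    "tomatoes": "fruit/vegetable",  # arguable either way
}

def covered_by_catchall(item: str) -> bool:
    """An item falls under the catchall only if it shares the listed nature."""
    return TRAITS.get(item) == SHARED_NATURE

# covered_by_catchall("kumquats")  -> True
# covered_by_catchall("chainsaws") -> False
# covered_by_catchall("tomatoes")  -> False here, but only because we had to
# assign a trait first, which is where the argument lives.
```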
Section 230 immunizes moderation of "material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable". So "otherwise objectionable" has to be read to mean material that is of the same nature: Sexual, or harassing.
And, since Section 230 is section 230 of the Communications Decency Act, which was rather explicitly aimed at protecting against obscenity and harassment, NOT political dissent... You have a really tough time justifying moderation of things on a political basis as "otherwise objectionable".
I know what the term means. What I'm not sure about is how controversial it is, hence BoedLawyer's use of the term "fighting words".
I've noted that those who want to use the power of government to threaten billions in losses to Facebook have moved on from "harassment" to "dangerous" as the favored descriptor du jour when talking-heading and tweeting.
Whether that's to shoehorn it into what's allowed under Section 230, as per above, or to wish really hard that it's akin to lawless action, I don't know.
Anyway, it shows the effort behind the scenes to justify government censorship of their political opponents.
Feel free to jump off a cliff, brilliant lawyers who shifted their talking-head speak a few months back to "dangerous". May your children live under a nasty dictatorship brought about by your attempts to enable censorship.
That is not remotely what the word "otherwise" means. "Otherwise" is a word of expansion. Combined with the fact that 230 expressly makes it a subjective determination (material "that the provider or user considers to be" objectionable), your argument is not tenable. Which is why courts have not interpreted it in the manner you propose. (And why the authors of the text in question have explained that you are misreading it.)
Really, all you're saying here is that you don't want to apply ejusdem generis, but instead want the "or otherwise objectionable" to swallow the list and render it redundant.
What purpose did the list serve, in your approach? None. You've transformed what was clearly intended to be a limited carve-out into complete editorial control.
A "limited carve-out" from what, Brett? Your position is nonsensical because you don't start from an understanding of what the law was.
How dare you.
Why link to the front page of the Journal and not directly to the article itself?
Well, some journals actually insist on that, or so I understand.
I'm not a lawyer, but a quick read on ejusdem generis is that it's not dispositive, and is frequently cast aside if the "legislative intent" indicates a different interpretation.
Fortunately, the legislative intent can be examined for this 24-year-old law, as per the EFF:
"Section 230 had two purposes: the first was to "encourage the unfettered and unregulated development of free speech on the Internet," as one judge put it; the other was to allow online services to implement their own standards for policing content and provide for child safety."
In the wake of Stratton Oakmont, Inc. v. Prodigy Servs. Co., which held that once Prodigy started moderating anything, they were responsible for everything, the legislature carved out an exception that allowed "services to implement their own standards for policing content..."
Restricting the coverage of sec 230 to "...material that was traditionally viewed as regulable in electronic communications media..." is simply ahistorical.
But in terms of legislative intent, you'd have to take into account that it's Section 230 of The Communications Decency Act, not The Avoid Annoying Mark Zuckerberg Act. And so take a look at what that Act was dealing with, which wasn't everything Mark Zuckerberg found annoying.
That's not what the text says.
I have to side with the Ninth Circuit on this.
Ejusdem generis limits the construction of a general term to things of similar nature to the listed items. "Regulated elsewhere in the Communications Decency Act" doesn't evoke the common nature of communications that are obscene, lewd, lascivious, filthy, excessively violent, or harassing; it just describes a shared circumstance.
Similar nature, eh? For instance: "red, yellow, green, blue, purple"?
Any of those not belong? Depends. What axis of comparison applies? Colors regarded as primaries in one system or another? Purple is out, the others stay. Colors which are printers' primary colors? They are all out, except yellow. Colors which are primaries based on the biological basis of human vision? Red, green, and blue are in, yellow and purple are out. Artists' primary colors? Red, blue, and yellow are in, green and purple are out.
I have never understood how ejusdem generis can be useful as a method to say which items belong to a group, without independent verification of what group is intended. The notion that anyone can reliably use group members presented in a list to deduce the standard for belonging strikes me as unreliable, and an invitation to arbitrary interpretation.
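That objection can be made concrete with a short sketch. The groupings below are standard color systems; using them as candidate readings of the list is purely my illustration:

```python
# Sketch of the point above: the same enumerated list is consistent with
# several candidate "shared natures," and each axis keeps a different subset
# of the very same list.
LISTED = {"red", "yellow", "green", "blue", "purple"}

CANDIDATE_AXES = {
    "printer primaries (CMY)": {"cyan", "magenta", "yellow"},
    "light primaries (RGB)": {"red", "green", "blue"},
    "artists' primaries": {"red", "yellow", "blue"},
}

for axis, members in CANDIDATE_AXES.items():
    kept, dropped = LISTED & members, LISTED - members
    print(f"{axis}: in={sorted(kept)}, out={sorted(dropped)}")

# The list alone cannot tell you which axis of similarity was intended,
# which is exactly the objection: you need the group before the list helps.
```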
Sometimes it doesn't. As the Ninth recognized in Enigma Software, if there is no unifying characteristic among the preceding list members, then the final general term won't be limited by them.