The Volokh Conspiracy
Platform Immunity and "Platform Blocking and Screening of Offensive Material"
"Good faith," "otherwise objectionable," and more.
In an earlier post, I talked about the big picture of 47 U.S.C. § 230, the federal statute that broadly protects social media platforms (and other online speakers) from lawsuits for the defamatory, privacy-violating, or otherwise tortious speech of their users. Let's turn now to some specific details of how § 230 is written, and in particular its key operative provision:
(c) Protection for "Good Samaritan" blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1). [Codifier's note: So in original [as enacted by Congress]. Probably should be "subparagraph (A)."]
Now recall the backdrop in 1996, when the statute was enacted. Congress wanted both to promote the development of the Internet, and to protect users from offensive material. Indeed, § 230 was part of a law named "the Communications Decency Act," which also tried to ban various kinds of online porn; but such a ban was clearly constitutionally suspect, and indeed in 1997 the Court struck down that part of the law.
One possible alternative to a ban was encouraging service providers to block or delete various materials themselves. But a then-recent court decision, Stratton Oakmont v. Prodigy, held that service providers that engage in such content removal become "publishers" who are therefore liable for tortious speech (such as libel) that they don't remove. Stratton Oakmont thus created a disincentive for service provider content control, including content control of the sort that Congress liked.
What did Congress do?
[1.] It sought to protect "blocking and screening of offensive material."
[2.] It did this primarily by protecting "interactive computer service[s]"—basically anyone who runs a web site or other Internet platform—from being held liable for defamation, invasion of privacy, and the like in user-generated content whether or not those services also blocked and screened offensive material. That's why Twitter doesn't need to fear losing lawsuits to people defamed by Twitter users, and I don't need to fear losing lawsuits to people defamed by my commenters.
[3.] It barred such liability for defamation, invasion of privacy, and the like without regard to the nature of the blocking and screening of offensive material (if any). Note that there is no "good faith" requirement in subsection (1).
So far we've been talking about liability when a service doesn't block and screen material. (If the service had blocked an allegedly defamatory post, then there wouldn't be a defamation claim against it in the first place.) But what if the service does block and screen material, and then the user whose material was blocked sues?
Recall that in such cases, even without § 230, the user would have had very few bases for suing. You generally don't have a legal right to post things on someone else's property; unlike with libel or invasion of privacy claims over what is posted, you usually can't sue over what's not posted. (You might have breach of contract claims, if the service provider contractually promised to keep your material up, but service providers generally didn't do that; more on that, and on whether § 230 preempts such claims, in a later post.) Statutes banning discrimination in public accommodations, for instance, generally don't apply to service providers, and in any case don't generally ban discrimination based on the content of speech.
Still, subsection (2) did provide protection for service providers even against these few bases (and any future bases that might be developed)—unsurprising, given that Congress wanted to promote "blocking and screening":
[4.] A platform operator was free to restrict material that it "considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
- The material doesn't have to be objectionable in some objective sense—it's enough that the operator "consider[ it] to be" objectionable.
- The material isn't limited to particular speech (such as sexually themed speech): It's enough that the operator "consider[ it] to be" sexually themed or excessively violent or harassing or otherwise objectionable. If the categories were all of one sort (e.g., sexual), then "otherwise objectionable" might be read, under the legal principle of ejusdem generis, as limited to things of that sort: "when a generic term follows specific terms, the generic term should be construed to reference subjects akin to those with the specific enumeration." But, as the Ninth Circuit recently noted,
- [T]he specific categories listed in § 230(c)(2) vary greatly: Material that is lewd or lascivious is not necessarily similar to material that is violent, or material that is harassing. If the enumerated categories are not similar, they provide little or no assistance in interpreting the more general category…. "Where the list of objects that precedes the 'or other' phrase is dissimilar, ejusdem generis does not apply[.]" …
- What's more, "excessively violent," "harassing," and "otherwise objectionable" weren't defined in the definitions section of the statute, and (unlike terms such as "lewd") lacked well-established legal definitions. That supports the view that Congress didn't expect courts to have to decide what's excessively violent, harassing, or otherwise objectionable, because the decision was left for the platform operator.
[5.] Now this immunity from liability for blocking and screening was limited to actions "taken in good faith." "Good faith" is a famously vague term.
But it's hard to see how this would forbid blocking material that the provider views as false and dangerous, or politically offensive. Just as providers can in "good faith" view material that's sexually themed, too violent, or harassing as objectionable, so I expect that many can and do "in good faith" find to be "otherwise objectionable" material that they see as a dangerous hoax, or "fake news" more broadly, or racist, or pro-terrorist. One way of thinking about it is to ask yourself: Consider material that you find to be especially immoral or false and dangerous; all of us can imagine some. Would you "in good faith" view it as "objectionable"? I would think you would.
What wouldn't be actions "taken in good faith"? The chief example is likely actions that are ostensibly aimed at "offensive material" but that are really motivated by a desire to block material from competitors. Thus, in Enigma Software Group USA v. Malwarebytes, Inc., the Ninth Circuit reasoned:
Enigma alleges that Malwarebytes blocked Enigma's programs for anticompetitive reasons, not because the programs' content was objectionable within the meaning of § 230, and that § 230 does not provide immunity for anticompetitive conduct. Malwarebytes's position is that, given the catchall, Malwarebytes has immunity regardless of any anticompetitive motives.
We cannot accept Malwarebytes's position, as it appears contrary to CDA's history and purpose. Congress expressly provided that the CDA aims "to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services" and to "remove disincentives for the development and utilization of blocking and filtering technologies." Congress said it gave providers discretion to identify objectionable content in large part to protect competition, not suppress it. In other words, Congress wanted to encourage the development of filtration technologies, not to enable software developers to drive each other out of business.
The court didn't talk about "good faith" as such, but its reasoning would apply here: Blocking material ostensibly because it's offensive but really because it's from your business rival might well be seen as being not in good faith. But blocking material that you really do think is offensive to many of your users (much like sexually themed or excessively violent or harassing material is offensive to many of your users) seems to be quite consistent with good faith.
I'm thus skeptical of the argument in President Trump's "Preventing Online Censorship" draft Executive Order that,
Subsection 230 (c) (1) broadly states that no provider of an interactive computer service shall be treated as a publisher or speaker of content provided by another person. But subsection 230(c) (2) qualifies that principle when the provider edits the content provided by others. Subparagraph (c)(2) specifically addresses protections from "civil liability" and clarifies that a provider is protected from liability when it acts in "good faith" to restrict access to content that it considers to be "obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable." The provision does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform's terms of service. When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. By making itself an editor of content outside the protections of subparagraph (c)(2)(A), such a provider forfeits any protection from being deemed a "publisher or speaker" under subsection 230(c)(1), which properly applies only to a provider that merely provides a platform for content supplied by others.
As I argued above, § 230(c)(2) doesn't qualify the § 230(c)(1) grant of immunity from defamation liability (and similar claims)—subsection (2) deals with the separate question of immunity from liability for wrongful blocking or deletion, not with liability for material that remains unblocked and undeleted.
In particular, the "good faith" and "otherwise objectionable" language doesn't apply to § 230(c)(1), which categorically provides that, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," period. (Literally, period.)
Removing or restricting access to content thus does not make a service provider a "publisher or speaker"; the whole point of § 230 was to allow service providers to retain immunity from claims that they are publishers or speakers, regardless of whether and why they "block[] and screen[] offensive material."
Now this does leave the possibility of direct liability for "bad-faith" removal of material. A plaintiff would have to find an affirmative legal foundation for complaining that a private-company defendant has refused to let the plaintiff use the defendant's facilities—perhaps as Enigma did with regard to false advertising law, or as someone might do with regard to some antitrust statute. The plaintiff would then have to show that the defendant's action was not "taken in good faith to restrict access to or availability of material that the provider … considers to be … objectionable, whether or not such material is constitutionally protected."
My sense is that it wouldn't be enough to show that the defendant wasn't entirely candid in explaining its reasoning. If I remove your post because I consider it lewd, but I lie to you and say that it's because I thought it infringed someone's copyright (maybe I don't want to be seen as a prude), I'm still taking action in good faith to restrict access to material that I consider lewd; likewise as to, say, pro-terrorist material that I find "otherwise objectionable." To find bad faith, there would have to be some reason why the provider wasn't in good faith acting based on its considering material to be objectionable—perhaps, as Enigma suggests, evidence that the defendant was just trying to block a competitor. (I do think that a finding that the defendant breached a binding contract should be sufficient to avoid (c)(2), simply because § 230 immunity can be waived by contract the way other rights can be.)
But in any event, the enforcement mechanism for such alleged misconduct by service providers would have to be a lawsuit for wrongful blocking or removal of posts, based on the limited legal theories that prohibit such blocking or removal. It would not be a surrender of the service provider's legal immunity for defamation, invasion of privacy, and the like based on posts that it didn't remove.
My gratitude, as always, for your legal analysis notwithstanding, three consecutive posts on Josh's blog seems pretty nervy.
Is he going to have a contrarian take on this? Or maybe David? I’m pretty sure that Orin, Jonathan, and Ilya will agree with Eugene.
Guess Stewart Baker is the answer.
Stewart Baker is never the answer.
First, I'd take issue with 4 C: that list is more like "apples, kumquats, bananas, and so forth" than it is "apples, elephants, palindromes, and so forth." Political views really do NOT look like they would fit in that list.
Second, "If I remove your post because I consider it lewd, but I lie to you and say that it's because I thought it infringed someone's copyright (maybe I don't want to be seen as a prude), I'm still taking action in good faith to restrict access to material that I consider lewd; "
Literally, you're claiming that lying about the basis for moderation is moderation in good faith. I think at this point you might as well just admit that you want to write that "in good faith" language out of the statute, because you've now emptied it of all meaning.
Your first argument is about what the law is, though it seems unsupported opinion without more.
Your second argument is about what the law ought to be, not what it is. Prof. Volokh presents a case that explains what good faith means. It's broader than you think makes sense. But you need to present more than your unhappiness to prove that that's not the law.
"Your first argument is about what the law is; though it seems unsupported opinion without more."
No more or less unsupported opinion than our genial host's. It's not the sort of thing you can resolve objectively, it's largely a matter of opinion, and I'm disagreeing with his.
Wait, wait, wait. Wait. Wait. W. a. i. t.
EV posts how you get to the reading you want but points out that it requires recourse to a particular canon of statutory construction. EV then cites judicial authority rejecting that very recourse.
But you hold your good-faith (ha) belief to the contrary is equally as supported as EV's based on the way it "seems" to you and nothing else.
Law! How not to do it in the time of coronavirus.
On the other hand, you handsome devil, the 9th Circuit also disclaimed that it was deciding the question about the relations of the terms:
Good point, you excellent and honorable gentleman!
I don't know about you, but when a competitor is benefited, I find it objectionable! In very good faith, too!
So there are limits, but that court didn't ascertain them, beyond this example.
Yes. It went out of its way to use semaphores and hilltop cauldron-fires to announce that it was deciding only this here little case, dammit. Which makes me a bit skeptical, honestly, of EV's recourse to it as authority on anything other than punting.
Not all opinions are equal, and his is much better informed and supported than yours.
Unlike your list of things that are all fruits, violence and sex are not related, so the list cannot logically be read in that manner.
Point 1.
I think we're playing fast and loose with ejusdem generis here.
If the terms "obscene, lewd, lascivious, filthy, excessively violent, and harassing" do not constrain the term "or otherwise objectionable" in any way, then there would have been no reason for the statute to have included any of those terms in the first place.
The statute could have been drafted to say:
"No provider or user of an interactive computer service shall be held liable on account of ... any action voluntarily to restrict access to restrict access to or availability of material that the provider or user considers to be objectionable."
Applying ejusdem generis in this manner simply writes the terms "obscene, lewd, lascivious, filthy, excessively violent, and harassing" out of the statute. This cannot be the correct approach.
Point 2.
//Removing or restricting access to content thus does not make a service provider a "publisher or speaker"; the whole point of § 230 was to allow service providers to retain immunity from claims that they are publishers or speakers, regardless of whether and why they "block[] and screen[] offensive material."//
But this does not necessarily preclude the finding that the provider was a publisher if, in fact, it publishes materials. And, of course, there is an argument to consider whether it is possible to moderate content to such a degree that it is akin to publication.
Can Twitter manipulate the content of tweets under the guise of moderation? If so, does it become a publisher?
"Applying ejusdem generis in this manner simply writes the terms “obscene, lewd, lascivious, filthy, excessively violent, and harassing” out of the statute."
Indeed. This list seems to break down into two broad categories: Obscenity/profanity, and threats. It's not actually that hard to discern a common thread such that you can see that political opinion or supposed falsity are not of the same sort.
Say I consider all calls for war excessively violent and I want to remove those comments. Explain how I’m not acting in good faith by considering advocacy of war excessively violent?
That's not really the point of his objection to EV on good faith. It would be more like, you really hate weapons manufacturers because they fired your family dog, but you say you're removing comments calling for war because you find them all to be excessively violent.
But how on earth would you prove that? Much like courts don’t typically question the sincerity of religious conviction, even of a company like Hobby Lobby, I don’t see how they can determine something like that.
I think you could actually get away with that. But then you start picking and choosing among wars.
To give you an example, FB bans the Proud Boys, and does not ban the Antifa, on the basis of violence. But they're both violent, so that's a pretext.
Volokh has emptied "bad faith" of meaning to the point where even this wouldn't be bad faith.
The problem here is that the platforms have chosen sides in our political disputes. I don't think that would be a problem if we had multiple competing platforms, even. But we don't, thanks to a combination of network effects and, bluntly, antitrust violations.
The Republican Party championing against antitrust violations since 1901 (years 1910-2020 excluded; 2020– very specifically).
Say you're a racist, and you find African Americans objectionable. Isn't it good faith to remove all African American content and viewpoints?
Yes it would be. You can’t show that racist views aren’t held sincerely and don’t fit into the dictionary definition of obscene as applied subjectively to the user.
Yet, somehow I doubt the court would see it that way
It sounds like you're saying that the NAACP would have an airtight case against Stormfront.org. Are you calling their attorneys lazy?
Does ‘otherwise objectionable’ mean ‘objectionable for reasons of the same sort?’
Should we take Congress as having meant similarly when it used the word otherwise?
It should be, but the categories listed are undefined and pretty expansive. And notably it makes it a point to say that constitutional protections don't matter, so I don't have to stick to what SCOTUS says is obscene. I mean there are a lot of things people say are obscene. Look at the dictionary definition of obscene; a lot of things can offend one's moral principles or be disgusting to the senses.
“obscene, lewd, lascivious, filthy, excessively violent, and harassing”
Things that are obscene, lewd, lascivious, and filthy could be grouped together, but not with excessive violence, or with harassment. Three separate categories are being listed, which do not share a common thread among them, other than that someone might find them objectionable. The "otherwise objectionable" cannot be constrained by them, since it is already implied that those 3 categories of potential objectionable content could be blocked.
Therefore, the mention of this phrase necessarily means that other categories of things that a provider finds objectionable can also be blocked. If not, then the "otherwise objectionable" would be meaningless.
What it means is that this is one of the many times when Congress creates an inexhaustive list of examples, which are there as a guide but not as a hard limit. If the only terms listed related to sexual conduct, or violence, or harassment, respectively, then you would be correct, but they aren't, so you are not.
I don't think you need worry too much about that, being a former clerk of Judge Ko———ohh, look at the time.
Bing. Bang. Bong.
Yowza.
Awooga.
God, I need to get a life.
Thanks for fighting an uphill battle, Professor.
On what legal basis did Congress act when it preempted state libel laws? I get that there is a role for the courts to protect press freedom. Where does Congress get a constitutional hook for Section 230? Or has libel regulation always been a federal power and I just did not know about it?
Well, internet communication is among the most interstate-commerce things imaginable. They didn't want businesses in one state being liable in another state for possibly anonymous comments that may have been posted from yet another state (or even overseas). There would be an endless amount of uncertainty in how to conduct your internet business without a uniform federal standard.
Few people realize this but the original meaning of the commerce clause was actually that Congress can do whatever it wants.
I thought that was the original meaning of the necessary and proper clause. Dang Con Law.
It's the original meaning of all the clauses, I've been given to understand: They replace the Articles of Confederation with the Constitution to make the central government more powerful, and you can't get more powerful than omnipotent, after all.
You're not seriously suggesting that Congress lacks the authority to regulate the Internet, are you?
"regulate the Internet"
How vague.
The Internet is interstate commerce. And that's enough under Article I.
You CAN argue that there are a couple of traditionally state functions that might touch upon interstate commerce but, because of tradition, aren't included in the commerce power (e.g., divorce cases). But even if that's true, defamation would not be one of them; it's been a matter of joint state and federal concern at least since the Federal Communications Act in the 1930's.
Dilan, I expected interstate commerce responses, and I got them. I know that right wingers—among whom I do not number you—hate the interstate commerce clause, and perversely delight in finding uses for things they hate, to discomfit folks they take to be adversaries. I reply to you instead of the others because I know you are not doing that. Please try to give me a thoughtful response, because I need help here.
On what basis does anyone conclude that the subject of Section 230 is really interstate commerce? I get that there is a lot of non-operative language in the section which refers to federal policy and interstate commerce. I take that as indication that Congress knew it was about to do something dicey, and was deploying an especially dense cloud of squid ink preemptively.
After emitting the ink, Section 230 gets down to business, and says this:
(3) State law
Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section. No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.
If that says anything about interstate commerce, it says that Congress just vaporized state defamation law, because it concluded state defamation law was inconvenient for interstate commerce. And Congress did so despite the fact that there is no federal law at all which covers defamation. (Please correct me if I am mistaken on that last point.)
Do you know of any precedent for using interstate commerce powers to preempt an entire class of state laws? That has happened time and again in defense of individual rights, but for interstate commerce?
The Section says to states that when it comes to defamation, your laws are off the books, and the feds take over. And then the feds do not take over. There is still no federal law of defamation, right? Unless you count Section 230's abolition of defamation law as defamation law.
What would the public response have been if the blather about the wonderful internet had been omitted, and Section 230 had just stated, forthrightly:
"State defamation law is hereby abolished."
That is what Section 230 did. Why not say so? I think that ought to be said in court, in a federalism-based challenge to Section 230.
I wonder why on a blog so sensitive to issues of federalism, there has not been more reflection on that. I suspect the answer is because this is a blog—seen as dependent for its very existence on the absence of the laws Section 230 abolished. That is motivated reasoning of the broadest sort. I suggest that reasoning is also mistaken.
I do not suppose blogging depends on freedom from defamation law. Private editing—reading before publishing—was practiced universally in publishing for more than a century, before Section 230 largely wiped it out online. No one ever said, "My god, this editing is so onerous it makes publishing impossible."
What the practice of private editing did make impossible was publishing giantism. A practical requirement for private editing assured that publishing remained—to a greater degree than many other industries, and for a long time—locally diversified.
That was a good thing. It greatly promoted the objective of fostering press freedom. There was little problem maintaining a press industry free of government censorship. Private editing—because of its diversity, and because the need for it kept unlimited publishing-business expansion in check—kept the populace from supposing that its reading choices were dictated from some inaccessible seat of unified private power.
With that public bulwark knocked down by a Congressional blunder, the populace now supposes the opposite—with what justification nobody really knows. The remains of the previously-thriving publishing industry lie everywhere in smoking ruins, while citizens clamor for government intervention in publishing to assuage their fears—their various and opposing fears—that only the opinions of their enemies will find published outlet, because of the connivance of the few giants who control everything.
I wonder why consequences as dramatic and disturbing as those have not made bloggers more reflective in a forum such as this one. Is it really because of the interstate commerce clause, or are other motivations actually more influential, however reticently held? I suggest that it is the latter. Perhaps it is because among the opponents of defamation law, there are a great many who wish to publish defamation, or who suppose doing so can be politically useful.
Defending that may not be what Congress intended when it passed Section 230, but it is what Section 230 delivered.
Do you know of any precedent for using interstate commerce powers to preempt an entire class of state laws?
Isn't that what tort reform would be? (To pick a conservative hobby horse.)
https://www.everycrsreport.com/reports/95-797.html#_Toc252790128
Sure. The PLCAA. The NCVIA. The FDCA.
And, you know, the vast bulk of economic regulation.
“State defamation law is hereby abolished.”
That is what Section 230 did. Why not say so? I think that ought to be said in court, in a federalism-based challenge to Section 230.
Whoops. Don’t know what happened there. Why oh why is there no edit function?
Because it would be a Trumpian-caliber lie. Defamation law is fully in force.
Yes, but you’re the Dr. Ed of law talking people.
Nieporent, you have a nasty habit. In blog comments you do stuff I doubt any court would let you get away with. You assert your side of a question in controversy as if it were legal fact. I thought lawyers had to be more forthright, or risk annoying people, including judges. By the way, I know already what to expect from any reply, if you bother with one. You will say there is no possibility of any controversy, which will be more of the same. While maybe throwing in another sneering ad hominem.
Right. There isn’t any possibility of any controversy. One can imagine as an academic exercise a hypothetical argument about interstate commerce, but it wouldn’t get even one vote at SCOTUS.
The reason I am nasty to you is because you are dishonest. You keep pulling the “We need to destroy the village in order to save it” shtick against free speech, while pretending that you aren’t always speaking out in favor of censorship. Every time I point out that keeping people from publishing through the threat of defamation liability - which is absolutely what you endorse - is censorship, you run and hide, only to pop up later and again falsely claim that 230 is encouraging anti-speech attitudes.
Interstate commerce, obviously. Maybe just keep your head down so the other kids can learn.
Very good post.
I have reservations about whether "otherwise objectionable" should really be read to mean anything and everything. This renders "obscene, lewd, lascivious, filthy, excessively violent, harassing" superfluous. Brett has a point about this list being not totally dissimilar. It also renders "in good faith" largely superfluous if the basis can be anything.
But the bigger issue pointed out here is that (c)(1) is not qualified or contingent on a platform refraining from moderating content that falls outside of (c)(2)(A). The EO's reasoning seems weak here, as EV points out.
Instead, the latter section seems to be a safe harbor of sorts, providing that there won't be any liability (of whatever kind) "on account of" these actions. Acting outside of this safe harbor is not necessarily wrongful nor does it result in forfeiting the protection of (c)(1).
But I don't think I agree that the only possible result of moderation outside the safe harbor would be "a lawsuit for wrongful blocking or removal of posts, based on the limited legal theories that prohibit such blocking or removal." What's not analyzed here is that (c)(1) is actually qualified by the requirement that the information in question is "provided by another information content provider." "Information content provider" means any person or entity that is responsible, in whole or in part, for the creation or development of information. So the protection from liability under (c)(1) is conditioned on the provider or user having no part in the creation or development of the information. As an example, YouTube pays large sums of money to some of its information content providers, and smaller sums to many, generally offering remuneration or the possibility of it to all uploaders. It's possible YouTube might have a hand in the "development" of some of its content. When it comes to moderation outside of the safe harbor, it's conceivable that various activities, whether removing content or restricting access to it in various ways, would support a claim that a platform is actively curating and helping to develop the information that actually ends up being published.
Read that 9th Circuit case he cites (Enigma Software v. Malwarebytes). It also expresses concern about an "unbounded" reading of "otherwise objectionable," although its concerns are more focused on anti-competitive content blocking (given the case they're deciding).
About district courts reading a previous decision, Zango, as providing providers with unlimited discretion: "We find these decisions recognizing limitations in the scope of immunity to be persuasive. The courts interpreting Zango as providing unlimited immunity seem to us to have stretched our opinion in Zango too far."
Zango, chained.
Good one.
Personally, I think EV's reasoning against the executive order when it comes to subsection 2 is a bit weaker. It's moot to the current controversy since Twitter wasn't removing a post but adding its own additional speech (so it wouldn't fall under subsection 2) and also because there aren't any likely causes of action they need to claim immunity from here. But I do also think ejusdem generis (difficult to spell) should be applied to the categories private entities have immunity for based on their direct actions.
Here’s the thing: if you want to use Brett “IANAL” Bellmore’s exceedingly narrow interpretation of (2) to refer basically only to a few types of speech like obscenity or harassment, then something as basic as filtering spam wouldn’t be protected. Does anyone think Congress wanted there to be potential liability for doing that?
Separately, I suspect there could be unfair and deceptive trade practice claims if an online platform is engaged in a broad pattern of deception, such as giving pretextual claims for removal of content, violating its TOS, or generally making misleading claims such as claiming to be consistent with regard to its content policies, etc.
It would not be a surrender of the service provider's legal immunity for defamation.
I thought (apparently incorrectly) that it might be in this hypothetical:
The provider deletes a comment which says "Josh R is a pedophile," but knowingly and intentionally does not delete a comment which says, "Eugene Volokh is a pedophile." Perhaps the provider could be sued for not deleting the latter comment?
no, i think that's been resolved in caselaw. incomplete moderation doesn't incur liability. something similar was argued but failed - the argument was that platform chose to moderate X (in good faith), but refused to moderate Y, so they're now a publisher and liable to Y. c(1) bars that. the only leaky part, under current law, is the good faith requirement, and so far only limited to direct marketplace competitors.
The statute as quoted does seem to leave the door open to a defamation case arising because the forum owner modifies someone's post or comment so that the poster appears to say something he didn't say; or because the forum owner inserts a false "fact check" which amounts to defamation. (Although paragraph (1) might mean that Trump must sue CNN, and not Twitter, for calling him a liar about the likelihood of vote fraud.)
For what it's worth, thousands of court decisions convicting people of vote fraud in recent years can be viewed on heritage.org.
Can you cite the caselaw?
The order tasks the Secretary of Commerce with filing a petition with the FCC clarifying when blocking does not constitute good faith. Given that the remedy for not blocking in good faith is a civil penalty, why does the FCC have any say in the matter? Shouldn't it be up to the courts?
He has Article II.
Some say so, I mean maybe a lot of people say so, smart people, I don't know, a lot of people agree he has Article II.
Thinking that the censorship shield of the second part doesn't implicate the broad shield ("26 words") of the first was my first instinct as well, but it has to surmount the legislative history of the court case that it was clearly written to reverse. Clearly, there was a case where a BBS edited, and the court held that as a result, they were liable. There is a tenable argument that an interactive computer service becomes the speaker, say if they screen out every comment other than ones which favor companies in their personal stock portfolio. (Think sites with one or two glowing testimonials in the right menubar.)
Better, perhaps to point out that the act then explicitly allows the _users_ of the site to censor any content that they wish, which means that "objectionable" in the prior provision doesn't have an objective meaning that protects any critical speech. The act seems to specifically condone petulant censorship in the second protection, so it's difficult to read a morals clause into the powers it gives the site owners. (Pace the 9th Circuit precedent in the comments above.)
My guess is that this is a memo to the SG to find a vehicle to challenge the exemption on constitutionally protected speech, given all the talk about how essential the forums are to democracy. But I'm not a 1A expert, or even reasonably well informed in the area, and in the words of the Dead: "Please don't dominate the rap, Jack, if you've got nothing new to say."
Not legal advice. In fact this entire comment was composed on a Ouija board, which is alone liable for any delict.
Mr. D.
Wasn't the issue in that case that the court held, by virtue of their editing, they were liable for all posts on their board, not just for the act of editing? Subsection 1 overturns that case directly. Subsection 2 provides them immunity from any kind of lawsuit for their direct action.
Is it just me who notices that the conduct which so angered the President was not suppression of speech, and that in fact the posts of the President have not been removed and there is no indication that future posts will be removed?
The action that Trump so objects to was the attempt by Twitter to recognize that its readers should look at an alternative opinion when, in this case, the position of Trump was so at odds with truth that if Twitter took no action they were participating in the spreading of a lie.
Is Trump so afraid of the truth that he would invoke the power of the Feds to attack the platform? The answer would seem to be a resounding yes.
Jack Dorsey should delete Trump's account just to see what happens.
That would certainly trigger the good faith test, especially if Dorsey did not delete comparable accounts.
Which comparable accounts?
Kevin, did you read the post about what the good faith test means?
Oh, hey, it's THIS THREAD!
It (arguably) would remove immunity from liability, but what cause of action would be created?
this passage was hard to read, and it might be in error as written:
"[3.] It barred such liability for defamation, invasion of privacy, and the like without regard to the nature of the blocking and screening of offensive material (if any). Note that there is no "good faith" requirement in subsection (1)."
i think the referential language is confusing. i would edit to:
[3.] It barred liability for defamation, invasion of privacy, and the like, regardless of any blocking and screening of offensive material. Note that there is no "good faith" requirement in subsection (1).
separately, i'm still confused as to what you mean. narrowly, you could just be making the point that (c)(1) bars liability for hosting someone else's content, and that's true, but mentioning "the nature of the blocking and screening" confuses that point. more broadly, you could be trying to argue that selective blocking or screening (like turning all cuss words in a textual post to ***s) doesn't incur liability, but that point requires analysis of (c)(2).
which brings me to another good example of what could constitute bad faith. let's say i post a comment that says "EV is a great professor and has helped federal prosecutors convict a rapist by providing technical legal support." If you then selectively screened my post to remove only the words "a great professor and has helped federal prosecutors convict" and "by providing technical legal support," that selective screening would change the meaning of my post, effectively to mean the opposite of what i initially wrote. that selective screening would likely be bad faith, and you'd be potentially liable to EV, and maybe even to me (if my moniker was still linked to the selectively screened post). there's still the issue whether your selective screening could be considered otherwise objectionable, but i don't think you get past the good faith test.
finally, i'm interested in how this all plays with affixing a badge of misinformation onto a presidential tweet. does a badge constitute a mechanism protected by (c)(2)? if so, does making it a link to information that counters the tweets theme take it out from the umbrella of (c)(2)?
my take is that if a platform can delete a post under (c)(2), it can certainly mark it with an objectively inoffensive badge, especially where the original content is left intact, and still be protected under (c)(2)'s "any action voluntarily taken in good faith to restrict access to or availability of material." this is just too similar to replacing cuss words with stars - a relatively minor change to the full scope of how the content is presented. at worst, it triggers self-restriction in a reader of the tweet.
but, what about making the badge a link to corrective text? i think the badge itself is still protected, but the linked corrective text is probably not protected. it's perhaps too fine a distinction, but the act of affixing the badge, even if the badge includes a link to other text, is still protected under (c)(2), but the content of the linked corrective text isn't blanket protected and is susceptible to analysis as by itself constituting a tortious act, such as defamation.
so, under the current section 230 analysis, twitter shouldn't be liable just because the badge labels his post a lie or an uninformed position (opinion anyway, so not defamation), or because the badge links to other content, but twitter could in some universe be liable under the hypothesis that the linked content, by itself, somehow constituted defamation.
anyway, i guess my question is - what are the metes and bounds of the nature of the moderations platforms can employ while enjoying immunity under 230?
Personally, I think (c)(2) makes more sense in the context of actively editing posts, rather than blocking them outright.
For example, replacing all F-bombs with "[expletive deleted]" is fine and an example of an "action voluntarily taken in good faith."
However, replacing all references to Jews or African Americans with racial slurs would be bad faith.
“I approve of X but disapprove of Y” is not the definition of good faith.
I don't understand why Eugene thinks that Enigma was about good faith, when it *says* it's about the meaning of objectionable:
"Enigma alleges that Malwarebytes blocked Enigma's programs for anticompetitive reasons, not because the programs' content was objectionable within the meaning of § 230,..."
So Enigma implies that "otherwise objectionable" doesn't include everything a platform provider may wish to restrict (or even everything such a provider objects to, since a platform provider may object to competitors' content because they are competitors). And this seems consistent with the text of 230, because (as other commenters have observed) the first 6 categories listed in (c) (2) (A) would otherwise be redundant.
so, i have a problem with that argument. the root reason why you include certain language in a statute is so that the statute applies to !at least! the items you explicitly list. after that explicit list, you can very naturally end with a catch-all so that anything you haven't thought of, or anything you don't have room for, priority wise, is still included, but at the very least, you have your explicit list. there's no redundancy issue - they listed explicitly what they considered the bare minimum of the items constituting justified moderation, and then added the catch-all so that the statute was very broad.
after that root understanding, then you can get into the issue of how broad that catch-all is. and, the scattershot nature of the explicit list of items naturally breathes tremendous breadth into the catch-all. there's even a good argument that it's expansive enough to cover both objectively reasonable AND subjective objectionable-ness.
and, that's why eugene thinks Enigma is about good faith, because there's no meaningful limit to "otherwise objectionable" presented in Enigma. that leaves good faith. at least, that's the argument, i think.
I don't think the listed items are all that broad. They all fall within the criteria used to rate motion pictures. "The following film contains scenes of nudity and extreme violence." And as some rando guy said about something else, "it's right there in the name." Decency, not politics I don't like or people I don't like.
Trump’s politics are certainly indecent.
C'mon man!
If "otherwise objectionable" includes objections based the platform owner's political/editorial viewpoint , then there is no difference between a platform and a publisher. One cannot have his cake and eat it too.
I can’t tell whether this is trolling. There is no difference, and you can have your cake and eat it too.