The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Justices Thomas and Gorsuch Argue Court Should Review Scope of 47 U.S.C. § 230 Immunity
But, at least in this case, this view didn't get the four votes necessary to grant review.
From today's opinion by Justice Thomas, joined by Justice Gorsuch, dissenting from the denial of review in Doe v. Snapchat, L.L.C.:
When petitioner John Doe was 15 years old, his science teacher groomed him for a sexual relationship. The abuse was exposed after Doe overdosed on prescription drugs provided by the teacher. The teacher initially seduced Doe by sending him explicit content on Snapchat, a social-media platform built around the feature of ephemeral, self-deleting messages. Snapchat is popular among teenagers. And, because messages sent on the platform are self-deleting, it is popular among sexual predators as well.
Doe sued Snapchat for, among other things, negligent design under Texas law. He alleged that the platform's design encourages minors to lie about their age to access the platform, and enables adults to prey upon them through the self-deleting message feature. The courts below concluded that §230 of the Communications Decency Act of 1996 bars Doe's claims. The Court of Appeals denied rehearing en banc over the dissent of Judge Elrod, joined by six other judges.
The Court declines to grant Doe's petition for certiorari. In doing so, the Court chooses not to address whether social-media platforms—some of the largest and most powerful companies in the world—can be held responsible for their own misconduct. Section 230 of the Communications Decency Act states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In other words, a social-media platform is not legally responsible as a publisher or speaker for its users' content.
Notwithstanding the statute's narrow focus, lower courts have interpreted §230 to "confer sweeping immunity" for a platform's own actions. Malwarebytes, Inc. v. Enigma Software Group USA (2020) (statement of Thomas, J., respecting denial of certiorari). Courts have "extended §230 to protect companies from a broad array of traditional product-defect claims." Even when platforms have allegedly engaged in egregious, intentional acts—such as "deliberately structur[ing]" a website "to facilitate illegal human trafficking"—platforms have successfully wielded §230 as a shield against suit. See Doe v. Facebook (2022) (statement of Thomas, J., respecting denial of certiorari).
The question whether §230 immunizes platforms for their own conduct warrants the Court's review. In fact, just last Term, the Court granted certiorari to consider whether and how §230 applied to claims that Google had violated the Antiterrorism Act by recommending ISIS videos to YouTube users. See Gonzalez v. Google LLC (2023). We were unable to reach §230's scope, however, because the plaintiffs' claims would have failed on the merits regardless. This petition presented the Court with an opportunity to do what it could not in Gonzalez and squarely address §230's scope.
Although the Court denies certiorari today, there will be other opportunities in the future. But, make no mistake about it—there is danger in delay.
Social-media platforms have increasingly used §230 as a get-out-of-jail free card. Many platforms claim that users' content is their own First Amendment speech. Because platforms organize users' content into newsfeeds or other compilations, the argument goes, platforms engage in constitutionally protected speech. See Moody v. NetChoice (2024).
When it comes time for platforms to be held accountable for their websites, however, they argue the opposite. Platforms claim that since they are not speakers under §230, they cannot be subject to any suit implicating users' content, even if the suit revolves around the platform's alleged misconduct.
In the platforms' world, they are fully responsible for their websites when it results in constitutional protections, but the moment that responsibility could lead to liability, they can disclaim any obligations and enjoy greater protections from suit than nearly any other industry. The Court should consider if this state of affairs is what §230 demands.
I love it when motivations are so transparent.
This is just a screed, devoid of analysis. For example, if someone wants to have a parade, then they can choose not to have certain groups in their parade (the choice is their "speech"). On the other hand, no one doubts that the people in the parade have their own speech as well.
Or if you write a letter to the newspaper, the newspaper can run it or not as their choice, because that decision is their speech. But guess what? That letter is your speech as well!
These are simple, well-known concepts, and yet we have actual Supreme Court justices who seem baffled by them!
Moreover, they are deliberately conflating a First Amendment issue with a statutory issue. It's like, how much more stupid can this be? None. None more stupid.
You are literally agreeing with Thomas and Gorsuch here, even though you appear to believe otherwise. Their whole point is that when a platform decides to publish (or not publish) user content, it is simultaneously true that the content is the user’s speech and that the editorial decision is the publisher’s speech. That’s what you’re saying. Complete agreement.
Then they point out the consequences: if the speech is the sort which can result in liability for anyone (which is a very narrow but extant portion of all speech), then it can result in liability for the publisher platform, insofar as the decision to publish is their speech. Under this reading, section 230 only shields the publisher from liability for the speech (if any) insofar as it *isn’t* the publisher’s speech.
Publishers have been arguing that when they make an editorial decision, that isn’t speech, but then flip-flopping and claiming 1a rights to editorial decisions. The problem Gorsuch and Thomas have with this isn’t the 1a rights, it’s the flip-flopping. Just because one of the prongs is true (the 1a rights) doesn’t mean that the whole flip-flopping position is correct or coherent, and it especially doesn’t mean that the opposite prong is true (“editorial decisions aren’t speech for which we can be liable”).
Actually, I am not agreeing with what they are saying ... at all.
Instead, I am making two basic points that should be obvious:
1. Speech can be both yours and not-yours depending on the context. Not a hard thing to understand. Happens a lot with the FA.
2. The statutory language of 230 is not the same as the FA. So when you're looking at the protection of 230, you're analyzing the statute's terms. That's different than the FA application.
So you can get statutory immunity (via preemption and a federal statute) based on the definition in the statute, which will be different than the FA principle involved.
Basic points, ignored and conflated in this.
You seem to be the one confused by what your own point is. In your first comment you were focusing on the speech being both parties' speech, as evidenced by your repetition of the phrase "as well". But now that you've learned this is actually Thomas and Gorsuch's position, you flop over to a completely different position, which is that speech can be both yours and not-yours simultaneously. That may well be true, but you're the one conflating two very different points here, not Thomas and Gorsuch.
Anyhow, T&G's argument is that the former point, that editorial decisions mix user and platform speech, is not an extraordinary situation and is not evidence of the latter point, that editorial decisions can be both speech for 1a purposes and also not-speech for 230 purposes. If you wish to contradict T&G's implied ultimate conclusion, you should provide some better evidence of the latter point. You have argued that analysis *could* lead to different answers to the two questions, but not that they *should*.
Hope that helps.
Are you at all familiar with the actual text of 230?
Have you read Zeran and its progeny?
Have you litigated a 230 case?
Based on what you have written, it appears that you have no idea what you are talking about.
There is no "flip flopping." It's two entirely separate issues.
1) It's their speech for 1A purposes. That's a constitutional issue, and true regardless of what congress or a legislature says.
2) But a law passed by Congress — Section 230 — says that they're not responsible for it; for the purposes of liability, it's not their speech.
T&G appear to be inviting an argument that 230 does not in fact say that, because 230 only applies to purely user speech. They're being more than a bit coy about it because the best argument is not textual, but purposive: 230 is not meant to immunize platforms for their own speech, but only for users'. Insofar as editorial decisions are platform speech, then 230 is not meant to immunize them. That it's their speech for 1a purposes is evidence of this claim that it's also their speech for 230 purposes: not conclusive, but evidence nonetheless.
There has yet to be presented any evidence to the contrary - that editorial decisions are not platform speech for 230 purposes, despite being platform speech for 1a purposes. All the arguments so far are that it might not be. But if it's a question of law, that's not good enough.
The problem with that argument — assuming that's what they're saying — is that it's based on a nonsensical premise, because there's no such thing as liability for curation of content independent of the content itself.
> there’s no such thing as liability for curation of content independent of the content itself
“Independent” is doing a lot of work in that proposition, and it is deeply ambiguous. If it means we ignore the underlying content, then the claim is uninteresting, because this removes all the meaning from the editorial decision. But that’s not how we analyze indexical speech acts. If the next VC post includes the claim “From all the evidence I have seen, I am confident David Nieporent is a serial killer,” and I comment underneath, “Yes, based on what I’ve seen, I’m in absolute agreement,” then this would probably be libelous. This, even though my comment, read alone, doesn’t even refer to you.
While this goes beyond what we can impute to T&G, I think the argument continues: Editorial decisions can be indexical endorsements of the underlying speech. As such they may generate liability for the editor. The question is whether 230 eliminates this kind of liability; to me it obviously doesn’t, because the liability for the indexical speech act does not in any way hinge upon treating the editor as the “publisher or speaker” of the original speech. Again, that’s not how indexical speech acts work: in my hypothetical above, I could not truthfully claim to be a co-blogger just because I endorsed the post. When I publish my comment, I am the only person who could (under 230) be considered the publisher or speaker of my content, but it's libelous because it endorses some other libelous content that someone else published (and maybe suggests I have an independent reason for endorsing it; but that may be just a quirk of libel statutes and may not apply for other types of liability). That all seems obvious and very sound; the one weak link in the chain is whether editorial decisions are or are not indexical endorsements (in a particular case).
Yes. That's what it means.
That's a linguistic term, not a legal one. Under § 230, that's exactly how we analyze it. You're liable for your own words, but not for distributing other people's words.
The Hoover Institution seems quite interested in the views of one or two right-wing justices (the Crow-Leo Caucus) who can't persuade other conservatives to join them.
I doubt many law professors -- especially mainstream law professors -- share that interest.
So, given the confusion, I thought I’d add a little more.
In 1996, Congress enacted the Communications Decency Act ("CDA", 47 U.S.C. § 230). Congress specifically found that "the rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens … [t]he Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation …" 47 U.S.C. § 230(a)(1), (4). In passing the CDA, Congress made its objectives clear: "to promote the continued development of the Internet and other interactive computer services and other interactive media," and to preserve a "vibrant and competitive free market" for them, "unfettered by Federal or State regulation[.]" Id. § 230(b)(1), (2).
The CDA mandates that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” 47 U.S.C. § 230(c)(1), and “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section,” 47 U.S.C. § 230(e)(3).
Are we all clear on that? That means that the provider (such as a website) cannot be held liable for the content made by third parties. That’s it. This isn’t about whose speech this is, or the First Amendment.
Yes, it gets more complicated from there, but that’s the gist. Good? Good.
This is why this is so annoying. It’s nothing more than the Supreme Court trying to change the plain meaning of a statute and, in effect, make websites liable for the exercise of a publisher’s traditional editorial functions, something the statute expressly prohibits.
"Notwithstanding the statute's narrow focus, lower courts have interpreted §230 to "confer sweeping immunity" for a platform's own actions. Malwarebytes, Inc. v. Enigma Software Group USA (2020) (statement of Thomas, J., respecting denial of certiorari). Courts have "extended §230 to protect companies from a broad array of traditional product-defect claims."
No one disagrees with the "gist", I think we're all clear on what it means if read narrowly. The objection is that companies are reading it narrowly when it might cost them, but expansively when it benefits them.
And again, no, they're not. You're conflating two different things entirely (just as we discussed in one of the previous threads on this): the first amendment and 230. 230 cannot override the 1A, so any activity protected by the 1A is protected regardless of how broadly or narrowly one reads 230.
230 says that they aren't liable for other people's content. That's not a broad reading; it's just a reading. The 1A says that they have the right to decide which speech to disseminate; that's not a narrow reading of 230, because it's not a reading of 230 at all.
Security is not a product defect. Sec. 230 gets rid of the case at the motion-to-dismiss stage, but that shouldn't matter, because Snapchat should win on the merits. The fact that a product can be misused by bad people does not ever count as a product defect.
I would agree that misuse is not a product defect, but I would disagree that security designs are necessarily benign. In the software world, “dark patterns” (design choices that intentionally mislead and entrap users) are definitely a thing. We have entire communities dedicated to finding these patterns and shaming the companies that use them by publicizing their evil tricks.
If people are harmed by a dark pattern and you can prove it, the courts sound like exactly where you ought to go to be made whole.
> The fact that a product can be misused by bad people does not ever count as a product defect.
That points toward an aspect of publishing which generally gets too little consideration in internet-centric discussions. Publishing—by any means, including internet means—is an activity with potentially potent consequences. Do it wrong, and the publisher itself inflicts damages on innocent third parties. Those publisher-created damages are likely to be far greater than those a contributor could typically inflict on his own.
The notion of publishing activity in fact encompasses an assortment of other distinct activities. And as it happens, the distinct activities which multiply damage potential to innocent third parties are inseparable from the various activities which economically enable publishing, such as: recruitment and curation of audience; multiplication of consumers for particular information; assembly, operation, and management of means to achieve message distribution; expansion of the geographic reach of content distribution; creation of a content record which is for practical purposes irretrievable after publishing occurs, and potentially perpetual; monetization of all that to support the costs associated with practicing those activities.
Because those activities are indispensable to support institutional publishing economically, the law has treated them all as proper objects of 1A protection. If government attempted to ban sale of advertising, for instance, that would be a 1A violation. Likewise if government imposed a licensing requirement for use of printing paper. Nor could government demand that a publisher distribute content without cost, or insist instead that published content must be charged for. All of those would rightly be ruled as 1A violations.
Contributors to internet platforms typically do not engage at all in any of those publishing-specific activities. Publishers typically do them all, and thus may profit from the 1A's protection of the means publishers use.
Publishers may even choose to share some of the money they make with would-be contributors, to encourage availability of the kinds of content which the publisher reckons will help curate the desired audience. Or publishers may, as so often happens now, rely on the unpaid contributions of folks like us, who perhaps foolishly ignore the maxim attributed to Samuel Johnson: "None but a blockhead ever wrote, except for money." Either way, those practices too rightly get 1A protection.
Thus, over centuries, a legal insight both practical and logical developed—that publishing activity itself creates joint liability as a matter of fact, between content creators who risk creating defamatory content, and publishers who risk distributing far and wide that defamatory content, to the detriment of innocent third parties entitled legally to be recompensed for their damages.
On the basis of that traditional insight, the law of defamation, for at least a century prior to 1996, treated publication of defamation as a matter of joint liability, shared alike between publishers and contributors. That worked well, because it was in close accord with what was actually happening.
A notable consequence of that traditional legal regime was that publishers took on the expense to edit privately all content before publishing it, lest defamatory content slip by and expose the publisher to liability. That practice had the collateral benefit of protecting content providers, many of whom would typically lack skills necessary for such judgments.
It had other collateral benefits as well, which delivered society-wide public goods which typically went unreckoned. Those in the aggregate were worth more to society than even the suppression of defamation which the private editing was put in place to accomplish. Such public goods included: suppression of deceptive and damaging, but non-defamatory material; prevention of election frauds; promotion of science; reduction in the publication of scurrilous sensationalism; a broadened sense of enterprise in terms of content variety; much better and more objective coverage of local and community news; more attention to culturally interesting content; greater public insight into public policy; greater attention to economic theory; a practical reduction in the frequency of racist and bigoted content; heightened awareness of social and regional diversity; and an overall increase in means and attention to the notion of the public life of the nation.
All of that came essentially for free, as a byproduct of a need to edit everything prior to publication. With the cost to do that already a practical mandate, the opportunity to leverage that editing effort to achieve competitive advantages drove the rest.
Best of all, it was a system which left every such determination entirely to the judgment of competing private parties. It mostly held at bay the inextinguishable impulse in government to meddle and censor.
Then came Section 230. Enacted with the best of intentions, it turned out to be a legislative blunder of historic proportions. Because almost no one in Congress gave a thought to the collateral public-policy advantages being delivered by private editing, no one anticipated what public costs might occur after the practical legal requirement for private editing was struck down.
The legal system was caught flat-footed. It too had never noticed legal difficulties avoided because private editing kept so many troublesome issues from rising to salience. Now, with private editing largely gone, the law struggles to cope with previously unheard-of phenomena, such as libel-for-profit business models, which game the legal system successfully, despite what seem like ruinous judgments against them. And those, of course, have similarly novel counterparts in the political system, which reward politicians who serve up lies made monetizable by mass cost-free circulation among now-unedited platforms.
Nor did anyone in Congress reckon in advance the public-policy cost in lost national news-gathering capacity. Internet platforms, it turned out, did not care to practice news gathering. Nor did they need to, advantaged as they were by statutory relief from the costs of private editing.
So the platforms systematically drove out of business the media which did gather news, and which thus could not escape the cost to continue to edit prior to publication. Of course, that process is not quite yet finished. Until it is completed, the inevitable mass disorientation the platforms promise will not inflict its most baleful effects. Those will be along shortly.
For reasons mentioned above, Rossami, I suggest the Section 230 discussion needs broader insight than the legal community acting on its own is prepared to give it. Nieporent and Loki are keen readers of law, and admirable sources of purely legal insight. I do not think in this case they adequately understand the activities the law purports to govern.
Perhaps none of us did, as we contemplated the internet in the belief that it was akin to a newly-discovered natural resource—a bonanza of unprecedented opportunities so novel that all of us ought to urge government to make up policies from scratch, and mold this new discovery to optimize each of us in our attempts to satisfy particular private ambitions. The attempt to do that has been continuous since 1996, and I think if anyone cares to look around, it ought to be evident it is not working.
Hey, at least this anti-230 screed, while completely wrong, is related to the thread!
Is there a fundamental misunderstanding or confusion about the Internet, Internet service providers, and Internet content providers? As best as I can tell, Section 230, "Protection for private blocking and screening of offensive material," was meant to protect Internet service providers from liability for the information users found on the internet from information content providers. It was originally meant to protect companies like Earthlink, AOL, etc., that users dialed into to access the internet; those have since transformed into Internet service providers such as ATT, Verizon, etc., where users connect through internet modems to access the internet.
Some are now claiming that Section 230 also protects information content providers, which could range from individual websites to large internet-based social media platforms.
To me it seems wrong to use the original Section 230 law, which protected internet access providers, to now shield companies that actively promote or hide specific information from being treated as publishers.
Section 230 is a statute, not a concept or a policy. You have to read the actual words of it, not the idea you assume was behind it. (The latter can help resolve ambiguity, but cannot override the actual text.)
The original cases that provoked Section 230 involved the forums those ISPs operated.