The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Supreme Court Remands Texas and Florida Social Media Cases - But Strongly Suggests the States' Laws Violate the First Amendment
The majority opinion makes clear that social media content moderation is an activity protected by the First Amendment. That likely dooms large parts of the state laws restricting content moderation.

In today's ruling in Moody v. NetChoice, addressing challenges to Texas and Florida laws severely limiting social media content moderation, the Supreme Court declined to issue a final ruling on the merits, for procedural reasons. But in remanding the cases to the lower courts, Justice Elena Kagan's majority opinion also established standards under which the major provisions of the two laws would almost certainly have to be ruled unconstitutional. I was highly critical of last week's decision denying standing to plaintiffs challenging the federal government's efforts to pressure social media firms to take down posts. Today's ruling is far better. Hopefully, the Court will eventually make clear that the government is presumptively barred from either forcing social media providers to take down posts it disapproves of or forcing them to post material the website owners object to.
The reason why the Court decided not to issue a final decision is that the lower courts did not engage in extensive enough fact-finding and analysis to consider a facial challenge to the constitutionality of the laws as a whole:
Today, we vacate both decisions for reasons separate from the First Amendment merits, because neither Court of Appeals properly considered the facial nature of NetChoice's challenge. The courts mainly addressed what the parties had focused on. And the parties mainly argued these cases as if the laws applied only to the curated feeds offered by the largest and most paradigmatic social-media platforms…. But argument in this Court revealed that the laws might apply to, and differently affect, other kinds of websites and apps. In a facial challenge, that could well matter, even when the challenge is brought under the First Amendment. As explained below, the question in such a case is whether a law's unconstitutional applications are substantial compared to its constitutional ones. To make that judgment, a court must determine a law's full set of applications, evaluate which are constitutional and which are not, and compare the one to the other. Neither court performed that necessary inquiry….
To succeed on its First Amendment claim, NetChoice must show that the law at issue (whether from Texas or from Florida) "prohibits a substantial amount of protected speech relative to its plainly legitimate sweep." Hansen, 599 U. S., at 770. None of the parties below focused on that issue; nor did the Fifth or Eleventh Circuits. But that choice, unanimous as it has been, cannot now control. Even in the First Amendment context, facial challenges are disfavored, and neither parties nor courts can disregard the requisite inquiry into how a law works in all of its applications. So on remand, each court must evaluate the full scope of the law's coverage. It must then decide which of the law's applications are constitutionally permissible and which are not, and finally weigh the one against the other. The need for NetChoice to carry its burden on those issues is the price of its decision to challenge the laws as a whole.
But in remanding the cases, the majority lays out "relevant constitutional principles, and explain[s] how" the Fifth Circuit "failed to follow them" when it upheld the Texas social media law (the Eleventh Circuit had invalidated most of Florida's law). The Court's three principles are devastating to the states' laws:
First, the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others' speech, is directed to accommodate messages it would prefer to exclude. "[T]he editorial function itself is an aspect of speech." Denver Area Ed. Telecommunications Consortium, Inc. v. FCC, 518 U. S. 727, 737 (1996) (plurality opinion)…. And that is as true when the content comes from third parties as when it does not. (Again, think of a newspaper opinion page or, if you prefer, a parade.) Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product. When the government interferes with such editorial choices—say, by ordering the excluded to be included—it alters the content of the compilation. (It creates a different opinion page or parade, bearing a different message.) And in so doing—in overriding a private party's expressive choices—the government confronts the First Amendment…
Second, none of that changes just because a compiler includes most items and excludes just a few…. That was the situation in Hurley. The St. Patrick's Day parade at issue there was "eclectic": It included a "wide variety of patriotic, commercial, political, moral, artistic, religious, athletic, public service, trade union, and eleemosynary themes, as well as conflicting messages." 515 U. S., at 562. Or otherwise said, the organizers were "rather lenient in admitting participants." Id., at 569. No matter. A "narrow, succinctly articulable message is not a condition of constitutional protection." Ibid. It "is enough" for a compiler to exclude the handful of messages it most "disfavor[s]." Id., at 574….
Third, the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas. Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That's the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access…. But in case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm.
Central elements of the Texas and Florida laws are unconstitutional under this approach. Social media firms are undeniably "compiling and curating others' speech," and under the state laws they are "directed to accommodate messages [they] would prefer to exclude." The firms may choose to exclude only a small percentage of the vast range of speech users might want to post. But the Court's second principle rightly says that doesn't matter.
Finally, if "the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas," that destroys the central rationale for the two state laws. As the Court notes later in its opinion, "improving" or "better balancing" the "marketplace" of ideas is precisely the objective of Texas's law, which was largely motivated by concerns that the social media platforms were biased against various types of right-wing speech.
Later in the opinion, Justice Kagan notes the implications for the Texas law:
The platforms may attach "warning[s], disclaimers, or general commentary"—for example, informing users that certain content has "not been verified by official sources." Id., at 75a. Likewise, they may use "information panels" to give users "context on content relating to topics and news prone to misinformation, as well as context about who submitted the content…."
But sometimes, the platforms decide, providing more information is not enough; instead, removing a post is the right course. The platforms' content-moderation policies also say when that is so. Facebook's Standards, for example, proscribe posts—with exceptions for "news-worth[iness]" and other "public interest value"—in categories and subcategories including: Violence and Criminal Behavior (e.g., violence and incitement, coordinating harm and publicizing crime, fraud and deception); Safety (e.g., suicide and self-injury, sexual exploitation, bullying and harassment); Objectionable Content (e.g., hate speech, violent and graphic content); Integrity and Authenticity (e.g., false news, manipulated media). Id., at 412a–415a, 441a–442a…. The platforms thus unabashedly control the content that will appear to users, exercising authority to remove, label or demote messages they disfavor….
Except that Texas's law limits their power to do so. As noted earlier, the law's central provision prohibits the large social-media platforms (and maybe other entities) from "censor[ing]" a "user's expression" based on its "viewpoint." §143A.002(a)(2); see supra, at 7. The law defines "expression" broadly, thus including pretty much anything that might be posted. See §143A.001(2). And it defines "censor" to mean "block, ban, remove, deplatform, demonetize, deboost, restrict, deny equal access or visibility to, or otherwise discriminate against expression." §143A.001(1). That is a long list of verbs, but it comes down to this: The platforms cannot do any of the things they typically do (on their main feeds) to posts they disapprove—cannot demote, label, or remove them whenever the action is based on the post's viewpoint….
And we have time and again held that type of regulation to interfere with protected speech. Like the editors, cable operators, and parade organizers this Court has previously considered, the major social-media platforms are in the business, when curating their feeds, of combining "multifarious voices" to create a distinctive expressive offering. Hurley, 515 U. S., at 569. The individual messages may originate with third parties, but the larger offering is the platform's. It is the product of a wealth of choices about whether—and, if so, how—to convey posts having a certain content or viewpoint. Those choices rest on a set of beliefs about which messages are appropriate and which are not (or which are more appropriate and which less so). And in the aggregate they give the feed a particular expressive quality.
I think the Court's principles are broad enough to justify facial invalidation of the Texas and Florida laws, because ruling that the restrictions on social-media content moderation are unconstitutional is enough to show that the laws "prohibit… a substantial amount of protected speech relative to [their] plainly legitimate sweep." But even if the facial challenges fail, the social media firms could easily file as-applied challenges focusing more narrowly on content moderation. And those would almost certainly succeed.
In his opinion concurring in judgment, Justice Samuel Alito (joined by Gorsuch and Thomas) claims the Court's discussion of First Amendment standards is merely nonbinding dicta. But it pretty obviously sets out principles the lower courts must follow on remand.
Alito also argues that not enough is known about the firms' content moderation policies and how they work, in part because the firms have not fully revealed how their algorithms function. But, as the majority shows, we do know enough to see that the major social media firms do restrict posts based on content, and that they favor some messages while disfavoring others. That's exactly why the states decided to enact the challenged laws in the first place!
The concurrence's argument that there are different social media platforms with different content also doesn't do much to undercut the majority. All of the major platforms have extensive expressive content, and all impose editorial restrictions based at least in part on subject matter and viewpoint. Perhaps this is less true of some platforms (such as Etsy) that mainly let users sell products rather than convey messages. But the Texas and Florida laws cover enough political and social commentary that they clearly "prohibit… a substantial amount of protected speech relative to [their] plainly legitimate sweep."
Justice Alito also alludes, briefly, to the major social media platforms' extensive reach and influence. Interestingly, this issue - much focused on by commentators on these cases - plays almost no role in the majority's analysis, and only a minor one in the concurrence. The same goes for the argument that social media firms' content moderation policies can be regulated because the firms are similar to "common carriers." The majority doesn't explicitly mention this theory, though much of its analysis implicitly rebuts it, by pointing out the many ways in which social media firms do not simply serve all comers. Justice Alito only briefly mentions the common carrier theory in a footnote. I criticized the common carrier and influence arguments in detail here.
In sum, while the Court did not reach a decision on the merits, the standards it lays out are an important win for the social media firms - and for freedom of speech.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
It appears to me that the basis for saying curation decisions are 1st amendment protected is in direct conflict with the basis for interpreting Section 230 as extending its shelter to them: That law states that 3rd party content is not the platform's speech for liability purposes, but this ruling declares curated content as absolutely the platform's own speech.
So it got them out from under the law, but also out from under the shelter from liability?
The Constitution prohibits government from requiring Facebook not to curate content (they are like a publisher). The Constitution is silent on whether government, through statute, can nonetheless grant Facebook immunity protection as if they aren't a publisher.
Yes.
I'd also add that it's a pretty weak commitment to free speech to say a publisher should only have a choice of being controlled directly by the police or indirectly by an avalanche of lawsuits. A strong commitment to free speech means the publisher can be biased and unfair about what they choose to publish, and the government-provided recourse for people who don't like it is none except in some very limited cases like original first-author libel.
The "contract" argument has some merit only if it's based on an actual agreement to which the publisher and users were voluntary parties. The Texas and Florida legislatures don't get to regulate speech by dictating the terms of that contract.
I said "interpretation" for a reason. Section 230 doesn't SAY that EVERYTHING on a platform is user content. It doesn't protect them from liability for their own content.
But this ruling does say that curated content is platform content for 1st amendment purposes.
So, the reasoning conflicts with interpreting section 230 to treat the results of curation as user content.
"I’d also add that it’s a pretty weak commitment to free speech to say a publisher should only have a choice of being controlled directly by the police or indirectly by an avalanche of lawsuits."
Look, I'm talking about the logical implications of a ruling, not my personal policy preferences. Nonetheless, you're wrong: Under this reasoning a platform (not "publisher"; are you a Lathrop sock puppet or something?) has a third alternative: Stop curating. Become a common carrier.
To be clear, if I'm expressing my own policy preferences, and not just reasoning out the implications of a ruling...
No, I don't think either publishers or platforms should be directly controlled by the police. I take the 1st amendment a lot more seriously than the Court does. Nor am I in favor of an avalanche of lawsuits. What the government can't do directly, it should not be permitted to do indirectly by imposing liability rules and outsourcing the prosecution.
But this doesn't mean that, if a platform or publisher up and decides to defame somebody, they get a perfect shield, now, does it?
I'd argue for a policy where you can be, legally, either a platform, which is to say a common carrier, where the user generated content is the user's, if you don't like it take it up with them, or a publisher, which exercises full editorial control, and has full responsibility for content on account of that control. Or, I suppose, a mosaic of both, but with each 'tile' of the mosaic one or the other.
As it is, under Section 230 jurisprudence, internet platforms get the best of both worlds: Liability is reserved for users, editorial control for the platform. Power without responsibility, which famously is a recipe for abuse, and it shows. Man, does it ever show.
What you're arguing for is the current situation. The curation is almost entirely automated; the companies don't have any idea what any individual user is seeing.
Do you think the phone company should be liable if I text you some defamation? What about if you read a defamatory tweet while on the toilet?
You're expecting an impossible standard of omniscience here.
No, that's bullshit. The platforms pretend that curation is almost entirely automated, and at scale there does have to be an extensive use of automation, but we have whistleblower testimony that FB, for instance, maintains an extensive and continually updated system of white and black lists that override the algorithms.
And even the purely algorithmic portion of the curation is a learning algorithm that learns to imitate manual curation decisions made by people. Automated amplification of their own biases.
Users get the best of both worlds. We can generally post what we want, with no prior approval required, but without having to wade through a cesspool of nazis and spam.
Like Lathrop, what you're advocating for is the repeal of 230. And like Lathrop, what you don't understand is that this is not a viable business model. (Well, Lathrop's misunderstanding of that is different than yours, plus he wants to destroy social media platforms in order to protect the jobs of dinosaur editors.) A common carrier model for a one-to-one communications medium is fine; for a broadcast medium it is unworkable. And a publisher-with-full-responsibility model doesn't scale.
It doesn't say that anything on a 'platform'¹ is "user content." 230 just says that an interactive computer service² isn't liable for information provided by another.
There's no such thing as "the results of curation" that's entirely independent of user content. It's nonsensical to say, "This bookstore isn't liable for the content of any of the books contained therein, but it's liable for the overall selection of those books."
¹Note that although people frequently use this term in discussing 230, that word is not a legal term and does not appear in the statute.
²This is the statutory term, and it's far broader than "platform."
The basis for interpreting Section 230 as extending its shelter to them is that Section 230's text says so. What the 1A says is irrelevant to that. 230's immunity from liability does not depend on how the speech is characterized.
No. Nothing in this ruling somehow strikes down Section 230.
So I guess the SCOTUS rule is that only the federal government can suppress the First Amendment.
Sure, but I don't see how anyone can seriously believe that "the major social-media platforms are in the business, when curating their feeds, of combining 'multifarious voices' to create a distinctive expressive offering." As if Twitter has any discrete, expressive offering when you can find celebrations of Hezbollah and tips on knitting on the same platform.
If you compare Twitter to Truth Social to Parler to Threads, you'll find that each one is a distinctive expressive offering.
It's not too remarkable that circuit courts get reversed from time to time.
It is somewhat surprising, though, how much the 5th Cir. has been getting bench slapped recently by the justices. FDA v AHM shouldn't have needed unanimous reversal on something as basic as standing, but the 5th Cir. wanted to reach a result rather than apply the law. And now the NetChoice opinion for the court repeatedly calls out the 5th Cir. for being thoroughly off the rails in their analysis.
Majority opinion (Kagan plus Roberts, Kavanaugh, Sotomayor, Barrett, and Jackson):
That's pretty unflattering to see in any judicial opinion.
Barrett:
“The firms may exclude only a small percentage of the vast rage of speech users might want to post.”
Rage? An amusing typo?