The Volokh Conspiracy
Free Speech and Private Power: The Right to "Present[] a Curated Compilation of Speech"
[I am serializing my short Harvard Law Review Forum essay titled "Free Speech and Private Power", responding to the Harvard Law Review's publication of Evelyn Douek & Genevieve Lakier's excellent new article, Lochner.com? I actually agree with much of what Douek & Lakier say, but offer a somewhat different perspective on the matter, mostly asking what the Court's recent cases mean going forward, rather than trying to critique them. Here is the section on Moody's reaffirming the right to "present[] a curated compilation of speech."]
[1.] The Majority.—To begin with, as Douek and Lakier note, the Moody majority strongly reaffirmed private entities' power to exclude speech from their "curated compilation[s]" that make up "a single speech product," such as news feeds, parades, and newspapers. That remains true even when the private entities have a great deal of influence over the public sphere.
And this makes sense, partly because we rely on private entities to provide us as readers some valuable services that the First Amendment disables the government from providing. For instance, the government's power to restrict misinformation is sharply limited. But we of course count on newspapers and other publishers to avoid misinforming their readers, including by screening third-party submissions (such as op-eds) for accuracy.
Indeed, it would be hard to have effective democratic self-government or search for truth without some private entities—newspapers, scientific journals, book publishers—that help us sort the true from the false and good ideas from bad ones. The Court concluded that the same principles that protect newspaper publishers, parade organizers, and the like also protect social media platforms. A magazine might want to present a conservative view or a liberal view. A parade organizer might want to organize a parade that conveys a particular theme and not other messages that the organizer views as inconsistent with the theme. Likewise for social media platforms striving to create particular "curated speech products" for their users.
Private entities can also help promote useful discussions by trying to shape a pleasant environment for participants, readers, and listeners. Historically, many newspapers have had editorial policies aimed at satisfying what was seen as editors' and readers' preference for decency and propriety. Likewise, the moderator of an online discussion group may want to block people or posts that are unduly vulgar, menacing, or otherwise offensive, and that risk leading potential participants to leave. Indeed, without this, useful conversation might become difficult for all but the thickest-skinned.
A social media platform might likewise try to block such material from comments posted on users' pages, or from the items it includes in its news feed, in order to keep those pages and news feeds valuable to its users. It is especially important for such platforms to block spam, or else their products would become unusable. But even blocking offensive ideas may help them create a speech product that more readers will want to consume.
Douek and Lakier suggest the Court spoke too categorically in foreclosing the future viability of even modest right-of-access mandates such as "relatively modest nondiscrimination obligations" that require some degree of equal treatment of the speech of political candidates. (Think the narrow and precise obligations imposed on broadcasters by the candidate equal opportunity and noncensorship rule, rather than the broad, vague, and discretionarily applied obligations imposed by the old fairness doctrine.) Perhaps there should be some more latitude for narrow laws that aim to limit "the capacity of the powerful tech companies to, for example, sway an election if they desire to do so." But on balance, I think, the Court was right to conclude that, as to their curated feeds, platforms have the same broad curatorial power that newspapers do.
[2.] Possible Departures?—But there is a complication: One of the five Justices who joined the majority in full, Justice Barrett, filed a concurrence flagging questions for the future, and suggesting that certain kinds of "curation" might not be fully protected by the First Amendment after all. These might include:
- platforms' using algorithms that "just present[] automatically to each user whatever the algorithm thinks the user will like—e.g., content similar to posts with which the user previously engaged" (a toy sketch of such engagement-based ranking follows this list);
- platforms' "hand[ing] the reins to an AI tool and ask[ing] it simply to remove 'hateful' content," based "on large language models" that "determine what is 'hateful'"; and
- "foreign … corporations" making decisions "at the direction of foreign executives."
Justices Alito, Thomas, and Gorsuch were even more broadly open to certain kinds of restrictions on platforms. While some of their analysis was squarely rejected by the majority, some of it fits Justice Barrett's reservations: They, too, expressly noted that "when AI algorithms make a decision, 'even the researchers and programmers creating them don't really understand why the models they have built make the decisions they make,'" and asked, "[a]re such decisions equally expressive as the decisions made by humans?"
Finally, Justice Jackson's minimalist concurrence suggested she may be saving such questions (among others) for a later day: "Faced with difficult constitutional issues arising in new contexts on undeveloped records, this Court should strive to avoid deciding more than is necessary." There thus may be at least five Justices who are open to some limitations even on "[a] private party's collection of third-party content into a single speech product."
To be sure, the majority rejected one possible basis for such limitations: the claim that such collection loses First Amendment protection "just because a compiler includes most items and excludes just a few." But, as noted above, other bases may yet be available.
"Indeed, it would be hard to have effective democratic self-government or search for truth without some private entities—newspapers, scientific journals, book publishers—"
I think you're skating into Lathrop territory here, treating the platforms as publishers. They're actually closer to common carriers, like a telephone company: The users aren't generally looking for curated content, they're looking to communicate with each other.
In fact, the curation isn't done for the benefit of the users, it's how the platform monetizes the users. The curation is used to deliver advertising.
And the platforms really didn't start intensively curating content, and heavily moderating, until they had enough market power that annoying their users wouldn't drive too many away.
None of that is correct. They are nothing like common carriers, which hold themselves out to the public as delivering goods, people, and/or messages indiscriminately, and generally privately.
The users are not "looking to communicate with each other." If they were, they'd be using telephones, or email, or mail. The whole point is that these companies provide value added rather than just communications services.
Which is for the benefit of the users, since otherwise the companies would need to charge the users instead of allowing them to use the services for free. (Of course, I don't pretend that the primary motivation is altruism, but that's true for every corporation in existence.)
As the saying goes, if they're not charging you for the product you are the product.
There once was a time when we recognized that excessive market power threatened self-government. In response, we did not curtail freedom of speech but we did enact and enforce the Sherman Act and later the Clayton Act. Although the Biden administration got many things wrong (all administrations of both parties do), one thing it got right was the resurrection of antitrust law. I don't expect Trump to continue down that path, but perhaps the Steve Bannons will trump the Elon Musks.
Here's my opinion: common carriers are the ones expected to deliver something to a specific place without change. They are carriers. If you board a train, you know that it's going to a specific destination. If you send snail mail or a package, you know that it's going to the addressee, and that the carrier does not alter what you sent.
Now, let's say there is a service that would collect letters and display them publicly in a distant location. I don't think this is a common carrier; the letter is not addressed to a specific recipient. Similarly, if the service translated the message before sending it to a named recipient, I don't think it would qualify as a common carrier.
Under this definition, social media obviously fail the first test, and any reasonable one would fail the second test. They are not common carriers.
“Expected to deliver something to a specific place without change.”
Whose expectations matter? The company’s? Customers’? Legislatures’? Judges’?
Probably customers - but that doesn't matter in the social media context because any reasonable person would know that Facebook or X posts can be seen by anyone. In contrast, Gmail or WhatsApp could be common carriers.
" because any reasonable person would know that Facebook or X posts can be seen by anyone."
Well, you know, except reasonable persons who actually use Facebook, and know that they can set their posts to be "private", so that only people they invite can see them.
Exactly. Facebook and other social media platforms apply the “curated speech” doctrine to contexts that are nothing like general publishing, indeed contexts that much more closely resemble a conference call or a mass mailing, where there are multiple recipients but they are selected by the “content provider.”
Hence my comment about the Post Office.
Good point, though I doubt they're moderating private posts in practice. I think the best way is to look at which service they are predominantly offering: are the private posts the main service, or the public ones?
My "opinion" is actually part of the test used in Japan for whether someone is subject to common-carrier regulations in Japan, including the "fairness in use" provision. Under that rule, Gmail, WhatsApp, and Zoom are common carriers, while Google Search, Twitch, or non-DM parts of social media aren't.
Although the law also has a provision banning "censorship" that applies to non-common carriers, it has been interpreted to apply only to state actors, and what we call "content moderation" is probably not censorship under Japanese law. (To give you an idea of how narrowly "censorship" is read in Japan, a preliminary injunction against publishing defamatory speech before it reaches the public was held not to be "censorship".)
"Good point, though I doubt they're moderating private posts in practice."
Your doubt is unjustified. I was part of a private group that got chased out of FB by FB's third party moderators. Just a bunch of old friends who'd met online years ago, and liked to discuss the issues of the day. Not Aryan Nation, just cranky old engineers for the most part.
Said moderators never bothered telling us WHAT they kept locking the group for; we were supposed to guess.
And in case you're wondering about Section 230: as I understand it, the purpose was to eliminate the distributor-publisher distinction that was considered harmful (as any efforts to remove inappropriate-but-constitutionally-protected content, also known as "editorializing", could subject them to liability). I doubt that it ever intended to make distributors common carriers.
Well, yes: Under the legal precedent prior to Section 230, a platform could escape liability for content only by refraining from proactive moderation; to the extent it moderated, it became responsible for what it left up.
Section 230 of the Communications Decency Act was intended to create a limited carve-out from that doctrine, for moderation of content that was "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." Porn and harassment. Not political dissent.
The problem is that the courts read that "or otherwise objectionable" to swallow the whole rule, and hand the platforms total editorial control. They could remove ANYTHING they claimed THEY found objectionable. If they liked vanilla and you lauded chocolate ice cream, the courts said they could delete your post.
That was never the way Section 230 was supposed to work. They actually anticipated limited consensus moderation, combined with user selected filters. But the platforms found that user filters got in the way of pushing advertising, and failed to cooperate with their use.
Yes, it was. I know that not only because the text says so, but because that's the way it has consistently been interpreted for 30 years without any pushback from the people who enacted it. Indeed, the primary authors of the bill have expressly said so.
Moreover, your uninformed Just So story continues to confuse (c)(1) and (c)(2)(A). Non-liability for other people's content is (c)(1), and is not contingent on anything and contains no limitations — express or implied — on the type of content it pertains to. (Except that it doesn't apply to IP.)
(c)(2)(A) is the part you're quoting, but even if that paragraph explicitly said, "only porn or harassment" — which it does not, no matter how many times you want to read that imaginary limitation in there — it would have no bearing whatsoever on companies' non-liability for user content. It's an entirely independent provision.
It's not "swallowing" the rule; it is the rule. And the platforms already have total editorial control. That's the 1A, not 230.
"Non-liability for other people's content is (c)(1),"
If your reading of (c)(1),
"(1)Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
were correct, then (c)(2)
"2)Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B)any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)."
would be redundant. And so you're not correct.
(c)(1) establishes that the platform is not "the publisher or speaker" of user content. That doesn't imply lack of liability for moderation decisions; it merely makes such liability a bit harder to establish.
1) The canon against superfluousness is a presumption. It cannot overcome plain text.
2) Your interpretation of my reading of the statute is incorrect; my reading does not render (c)(2) redundant, so the canon doesn't come into play anyway. The two sections cover different things.
3) What you call "my reading" is the same as the courts' reading.
To put it in the simplest terms:
(c)(1) governs liability to third parties for a user's speech.
(c)(2) pertains to liability to the speakers themselves for the company's moderation decisions.
To make that concrete, if I buy Twitter from Musk (at which point I would immediately change the official name back to Twitter!) and you tweet, "Eugene Volokh molests puppies":
1. (c)(1) says that Prof. Volokh can't sue me for that tweet. Period. It's not contingent on anything at all. It has nothing to do with "moderation decisions" — a facet not found in (c)(1).
2. (c)(2) says that if I decide to delete your tweet, you can't sue me. That's a moderation decision, and that's what (c)(2) applies to.
Suppose the Post Office opens your mail, reads it, throws letters it doesn’t agree with in the trash, and, for the rest, stuffs in a bunch of advertising you don’t really want to receive, for products its advertisers think people who write letters on subjects like yours tend to like.
Is it providing you a curated speech product?
It seems to me that it is performing all the elements of curation.
" the Moody majority strongly reaffirmed private entities' power to exclude speech from their "curated compilation[s]" that make up "a single speech product," such as news feeds,"
I'll throw in my $0.02:
I'm fine with this only as long as no one (private or government) is trying to pressure payment processors (VISA for example) or ISPs to de-monetize or deplatform anyone who tries to run uncurated compilations.
Justices Alito, Thomas, and Gorsuch were even more broadly open to certain kinds of restrictions on platforms. While some of their analysis was squarely rejected by the majority, some of it fits Justice Barrett's reservations: They, too, expressly noted that "when AI algorithms make a decision, 'even the researchers and programmers creating them don't really understand why the models they have built make the decisions they make,'" and asked, "[a]re such decisions equally expressive as the decisions made by humans?"
This might be naive, but why not ask for the decision tree that the AI algorithm made? Sort of an "explain your reasoning" request. That is what we are talking about here, correct? The specific set and sequence of decision rules.
"Explain your reasoning" is a current goal in AI research, but the LLM's that are so popular now aren't really good at it, because they don't actually reason.
AIs don't have "decision rules"; they have probabilistic models that they develop based on the data set input to them. There's likely nothing that they could disclose that would be helpful to a human. (To be sure, there may be some rules hard coded in, such as "No mentions of the word such-and-such." But that's a trivial part of their "decisionmaking.") Also, not really sure why the information you're describing would be relevant legally in this context.
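To illustrate the distinction the last two replies are drawing (a hand-written rule can report the exact condition it applied, while a learned classifier can only report a score), here is a hedged sketch in Python; the function names, the classifier callable, and the threshold are illustrative assumptions, not a description of any real platform's moderation system.

```python
# Toy contrast between a hand-coded rule and a learned probabilistic classifier.
# Everything here is illustrative; it does not describe any real platform's pipeline.

BANNED_PHRASES = {"example banned phrase"}  # a trivially explainable, hard-coded rule

def rule_based_decision(post: str) -> tuple[bool, str]:
    """A hand-written rule can point to the exact condition that fired."""
    for phrase in BANNED_PHRASES:
        if phrase in post.lower():
            return True, f"matched banned phrase: {phrase!r}"
    return False, "no banned phrase matched"

def model_based_decision(post: str, classifier, threshold: float = 0.9) -> tuple[bool, str]:
    """A learned classifier yields only a score; the weights that produced that
    score do not translate into a human-readable decision tree."""
    p_objectionable = classifier(post)  # assumed: any callable returning a float in [0, 1]
    return p_objectionable >= threshold, f"score {p_objectionable:.2f} vs. threshold {threshold}"
```

The first function can "explain its reasoning" in the sense the question above asks about; the second can, at best, disclose a number and the training process that produced the model.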