
Four Justices in NetChoice Flag Question Whether First Amendment Protects AI-Curated Materials


From Justice Barrett's concurrence in today's Moody v. NetChoice, LLC:

Consider, for instance, how platforms use algorithms to prioritize and remove content on their feeds. Assume that human beings decide to remove posts promoting a particular political candidate or advocating some position on a public-health issue. If they create an algorithm to help them identify and delete that content, the First Amendment protects their exercise of editorial judgment—even if the algorithm does most of the deleting without a person in the loop. In that event, the algorithm would simply implement human beings' inherently expressive choice "to exclude a message [they] did not like from" their speech compilation.

But what if a platform's algorithm just presents automatically to each user whatever the algorithm thinks the user will like—e.g., content similar to posts with which the user previously engaged? The First Amendment implications of the Florida and Texas laws might be different for that kind of algorithm.

And what about AI, which is rapidly evolving? What if a platform's owners hand the reins to an AI tool and ask it simply to remove "hateful" content? If the AI relies on large language models to determine what is "hateful" and should be removed, has a human being with First Amendment rights made an inherently expressive "choice … not to propound a particular point of view"? In other words, technology may attenuate the connection between content-moderation actions (e.g., removing posts) and human beings' constitutionally protected right to "decide for [themselves] the ideas and beliefs deserving of expression, consideration, and adherence." So the way platforms use this sort of technology might have constitutional significance.

Likewise, see Justice Alito's concurrence in the judgment, joined by Justices Thomas and Gorsuch:

[C]onsider how newspapers and social-media platforms edit content. Newspaper editors are real human beings, and when the Court decided Miami Herald Publishing Co. v. Tornillo (1974) (the case that the majority finds most instructive), editors assigned articles to particular reporters, and copyeditors went over typescript with a blue pencil. The platforms, by contrast, play no role in selecting the billions of texts and videos that users try to convey to each other. And the vast bulk of the "curation" and "content moderation" carried out by platforms is not done by human beings. Instead, algorithms remove a small fraction of nonconforming posts post hoc and prioritize content based on factors that the platforms have not revealed and may not even know.

After all, many of the biggest platforms are beginning to use AI algorithms to help them moderate content. And when AI algorithms make a decision, "even the researchers and programmers creating them don't really understand why the models they have built make the decisions they make." Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?

My coauthors Mark Lemley and Peter Henderson and I have argued that AI output is generally protected by the First Amendment (though we didn't focus specifically on AI curation). But the Justices certainly raise important questions, which lower courts are now especially likely to consider.