The Volokh Conspiracy
"AI, Free Speech, and Duty"
Just two weeks ago, my Free Speech Unmuted co-host Prof. Jane Bambauer (Florida) and I discussed Garcia v. Character Technologies. (Besides being a leading First Amendment scholar, Jane also teaches and writes about tort law.) I'm therefore especially delighted to pass along some thoughts from her on yesterday's decision in the case:
AI, Free Speech, and Duty [by Jane Bambauer]
The case against Character.AI, based on the suicide of a teenager who had become obsessed with his Daenerys Targaryen chatbot, produced an important opinion yesterday. Judge Conway of the U.S. District Court for the Middle District of Florida (in Orlando) declined to dismiss almost all of the plaintiff's claims, and also refused "at this stage in the litigation" to treat AI or chatbot output as speech under the First Amendment. I think the opinion has problems.
Eugene has already laid out some of the reasons that the court's First Amendment analysis is flawed. (E.g., could the Florida legislature really pass a law banning ChatGPT from producing content critical of Governor DeSantis? Of course not.) I want to pile on a little bit—I can't help myself—and then connect the free speech issues to the analysis of tort duty.
Is Chatbot Output "Speech"?
The defendants (Google and Character.AI) argued that chatbot output is protected speech, much like computer-generated characters in video games, and they drew analogies to other expressive technologies that were once new. But the court found that these arguments "do not meaningfully advance their analogies" because the defendants didn't explain how chatbot output is expressive.
In the court's opinion, with an assist from Justice Barrett, chatbot output is not expressive because it is designed to give listeners whatever expression they are looking for rather than making expressive choices for them:
The Court thus must decide whether Character A.I.'s output is expressive such that it is speech. For this inquiry, Justice Barrett's concurrence in Moody on the intersection of A.I. and speech is instructive. See Moody, 603 U.S. at 745–48 (Barrett, J., concurring). In Moody, Justice Barrett hypothesized the effect using A.I. to moderate content on social media sites might have on the majority's holding that content moderation is speech. Id. at 745–46. She explained that where a platform creates an algorithm to remove posts supporting a particular position from its social media site, "the algorithm [] simply implement[s] [the entity's] inherently expressive choice 'to exclude a message.'" Id. at 746 (quoting Hurley v. Irish-American Gay, Lesbian and Bisexual Grp. of Boston, Inc., 515 U.S. 557, 574 (1995)). The same might not be true of A.I. though—especially where the A.I. relies on an LLM:
But what if a platform's algorithm just presents automatically to each user whatever the algorithm thinks the user will like …? The First Amendment implications … might be different for that kind of algorithm.
This reasoning was ill-conceived when Justice Barrett first wrote it. When Disney greenlights a superhero movie, it's plausible that the decision to make a movie about people flying around and looking cool is mostly or even entirely motivated by giving paying movie-goers whatever they want, and that the studio would choose the backstory, dialog, wardrobe, and everything else to maximize profits if it could. But this wouldn't change the fact that the movie is speech.
Justice Barrett's reasoning is even more untenable in a case against chatbots. Is there really any doubt about the nature of written correspondence responding to a person's prompts and questions? It's difficult to conceive of a more expressive product than words designed to address the questions and interests that a listener has actively cultivated and asked for.
Lest there be any doubt, Judge Conway's own opinion, just a couple of sections later, can't help but use the word "expression" to describe chatbot output. When discussing the products liability claim, Judge Conway decided that the case may proceed to the extent the claim is based on the app's failure to confirm users' ages or to give users greater control over excluding indecent content, and not on the actual content of the conversations. "Accordingly, Character A.I. is a product for the purposes of Plaintiff's product liability claims so far as Plaintiff's claims arise from defects in the Character A.I. app rather than ideas or expressions within the app" (emphasis added). This analysis seems correct to me, and by restricting the products liability claims the court has sidestepped the First Amendment defenses that media defendants typically bring to design defect cases. My point here is just to show that the court couldn't even get through its own opinion without referring to chatbot output as expression.
Free speech first principles also strongly suggest that AI-generated content is protected. In my opinion, the most basic and sensible core value for free speech is what Seana Shiffrin called the thinker-based approach to the First Amendment, focusing on the audience as thinkers. This approach asks whether a law interferes with the "free development and operation of a person's mind." More than any other theory of free speech, even the democratic self-governance theory, this one gets at the heart of what most Americans love and expect from the First Amendment: that the government will not interfere with the voluntary development of inner thoughts. A diary should receive First Amendment protection even though it doesn't match the usual human-speaker-to-human-listener paradigm. So, too, should a person alone with their chatbot have the freedom to explore the expressive output of AI.
So I think the court flubbed an opportunity to get the free speech question right. Still, the First Amendment does not automatically require dismissal of the plaintiff's claims, since it remains possible that there is an exception to the First Amendment for certain kinds of negligent speech that cause physical harm. I am particularly interested in the deep questions related to general principles of duty.
Does a General-Purpose Service Create a General Duty of Care?
Character.AI argued that it did not owe a duty of care to its users. The court disagreed. The opinion explained:
"a legal duty will arise whenever a human endeavor creates a generalized and foreseeable risk of harming others." McCain v. Fla. Power Corp., 593 So. 2d 500, 503 (Fla. 1992).
Florida's rule for duty is similar to that of most other states, and basically boils down to a "conduct plus foreseeability" test. If you take an act that introduces a new force into the world, and that force foreseeably creates risks to others, then you have a duty to conduct your affairs in a reasonable manner. (Contrast this with just standing there while others call out for help. Assuming that nothing you did caused the person to need assistance in the first place, you can stand there with impunity under common law tort principles, because you owe no duty.)
Judge Conway easily finds that Character.AI owed a duty in this case:
Defendants, by releasing Character A.I. to the public, created a foreseeable risk of harm for which Defendants were in a position to control. Accordingly, Plaintiff sufficiently alleges Defendants owed a duty "either to lessen the risk or see that sufficient precautions are taken to protect others from the harm that the risk poses." McCain, 593 So. 2d at 503 (quoting Kaisner v. Kolb, 543 So. 2d 732, 735 (Fla. 1989)).
Yet in truth, the conduct-plus-foreseeability test is not quite right. There are many activities for which courts don't impose a legal duty even though, viewed at a particular point in time or with enough abstraction, the defendant's acts would foreseeably increase the risk of some type of hazard.
There is no better illustration of this, in fact, than the duty rules in suicide inducement cases. Long before this case, courts struggled to select a duty rule that takes account of the fact that suicidal people typically exercise their own agency. Courts do allow negligence claims to proceed, even in the absence of a special relationship, but plaintiffs often have to show more than simple conduct plus foreseeability. Otherwise, almost any action taken with or near a depressed person could trigger a legal responsibility for their care. The Restatement (Second) of Torts rule related to suicide recognizes liability for negligent conduct only if it (1) "brings about delirium or insanity in another" and (2) while that delirium or insanity continues, it "makes it impossible for him" or her to resist the suicidal impulse by depriving that person of the capacity to reasonably control his or her conduct. (Restatement (Second) of Torts § 455.) Some states have chosen rules that provide more avenues to recovery than the Restatement (Second) rule, and others have foreclosed negligence cases based on suicide entirely.
In other words, even putting aside the speech aspect of this case, a court should wrestle with these facts, and the rule for duty should be more searching and careful. Duty is the element that often takes a peek at the other elements a plaintiff will have to prove (breach, causation, damages) and imagines the impact of repeated cases. Where good policy requires limiting duty, the "conduct plus foreseeability" rule should be modified.
Can Risky Speech Create a Duty of Care?
The court's First Amendment analysis does have some impact on the duty analysis, too. If the court had recognized chatbot output as speech, then it would have had to recognize the parallel between this case and the wide range of cases brought against other media defendants. There have been multiple unsuccessful claims that popular movies and music glamorizing violence or self-harm have foreseeably caused some members of their audience to commit suicide. Ozzy Osbourne's song "Suicide Solution" alone attracted multiple lawsuits (see, e.g., McCollum v. CBS, Inc. (Cal. App. 1988)). And indeed, compared to the rather ambiguous messages produced by Character.AI—messages like "Please come home to me as soon as possible, my love"—Osbourne's song was pretty overt in its message:
Why try, why try
Get the gun and try it
Shoot, shoot, shoot

But in art, even overt messages are rarely straightforward. Doesn't Romeo and Juliet glamorize suicide? Given the chilling effect of liability on speech, foreseeability alone cannot suffice to force authors and media companies to defend themselves, and to show that they took "reasonable precautions" to reduce the risk that some portion of their audience will be inspired to do something harmful.
But that isn't the whole story. There have been cases where a defendant has unreasonably increased the risk of suicide through speech alone. These tend to involve one-on-one communications between the defendant and the decedent, such as the criminal conviction of Michelle Carter, who urged her friend to get back into his fume-filled truck and complete his suicide attempt. And media defendants are not completely immune, either. In one case, a tort claim was allowed to go forward against a news company that broadcast a telephone call between its reporter and the suicidal person while the crisis and police standoff were still taking place.
The difference between the two types of cases is the nature of the communication: In a one-to-many form of expression, the foreseeability of a sort of stochastic risk of harm will not suffice to overcome free speech protections. But in one-to-one communications, foreseeability is much more particular.
So which type of defendant is Character.AI? Is it the one-to-one defendant that is directly and specifically interacting with the decedent (analogous because of the highly customized nature of the output)? Or is it the one-to-many media defendant that is putting its content out more generally (analogous because the human decision-making at Character.AI stopped well before the particular messages at issue in the case)?
A case that Judge Conway cites in her opinion (but much earlier, and for another proposition) suggests one-to-many. In Twitter v. Taamneh, the Supreme Court applied standard principles of tort law to find that Twitter, Google, and other social media firms do not owe a duty of care to all potential victims of terrorism, even though they knew, at the time of offering service, that several terrorist organizations were using their platforms to recruit new members. The reasoning in Taamneh applies squarely to this case as well, so I will quote it at length:
[T]he only affirmative "conduct" defendants allegedly undertook was creating their platforms and setting up their algorithms to display content relevant to user inputs and user history. Plaintiffs never allege that, after defendants established their platforms, they gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants selected or took any action at all with respect to ISIS' content (except, perhaps, blocking some of it). Indeed, there is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs' own allegations, these platforms appear to transmit most content without inspecting it.
The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants' for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. …
To be sure, plaintiffs assert that defendants' "recommendation" algorithms go beyond passive aid and constitute active, substantial assistance. We disagree. … Viewed properly, defendants' "recommendation" algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS' content) with any user who is more likely to view that content.
There are some differences in the Character.AI case—most especially that here there is no third-party bad actor who can be held morally or legally responsible. But the key is that the court assigned a duty of care to the defendant based on a very early action—offering the Character.AI service at all—rather than based on conduct and decisions that put a narrow set of vulnerable people at heightened risk of a particular hazard. Said the court:
Defendants, by releasing Character A.I. to the public, created a foreseeable risk of harm for which Defendants were in a position to control.
This sweeps much too broadly, in my view.
My Take
As a policy matter, I see the logic of establishing duty in this case to make sure there is an incentive in the industry to develop AI in a way that reduces risks to vulnerable users or to third parties. If the court had limited its finding of duty to certain facts—such as the decedent's age—it would be harder to find fault with the court's approach. But on balance, I believe market and reputational forces will do enough to induce reasonable AI precautions, and formal tort duty is likely to cause overdeterrence unless it is carefully cabined. The general duty of care established in this case will force the AI industry to aggressively monitor and police its users, to the detriment of all. Requiring AI companies to guard against the full range of risks will severely harm AI development. The pruning will cut off the branch that feeds the root.
More generally, I can't help but return to the nature of this case as a state intrusion into the life of the mind. I fear we have lost faith in the most basic commitment to free thought. From one of the Ozzy cases [with citations removed]:
The life of the imagination and intellect is of comparable import to the presentation of the political process; the First Amendment reaches beyond protection of citizen participation in, and ultimate control over, governmental affairs and protects in addition the interest in free interchange of ideas and impressions for their own sake, for whatever benefit the individual may gain. The rights protected are not only those of the artist to give free rein to his creative expression, but also those of the listener to receive that expression. The central concern of the First Amendment … is that there be a free flow from creator to audience of whatever message a film or book might convey. The central First Amendment concern remains the need to maintain free access of the public to the expression.
I believe, in this case, that the teenager who took his life had become obsessed with the AI character that he had developed. It is a tragedy, and it would not have happened if Character.AI had not existed. But that is not enough of a reason to saddle a promising industry with the duty to keep all people safe from their own expressive explorations.