Communications Can Be Defamatory Even If Readers Realize There's a Considerable Risk of Error
And AI programs' "tendency [to, among other things, produce untruthful content] can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity."
Various commenters have suggested that AI programs' output can't be defamatory because reasonable readers wouldn't view the statements as "100% reliable" or "gospel truth" or the like. Others have taken the more modest position that reasonable readers would at least recognize that there's a significant risk of error (especially given AI programs' disclaimers that note such a risk). And our own Orin Kerr has suggested that "no one who tries ChatGPT could think its output is factually accurate," so I take it he'd estimate the risk of error as very high.
But, as I've noted before, defamation law routinely imposes liability for communicating assertions even when there is a clear indication that the assertion may well be false.
For instance, "when a person repeats a slanderous charge, even though identifying the source or indicating it is merely a rumor, this constitutes republication and has the same effect as the original publication of the slander." When speakers identify something as rumor, they are implicitly saying "this may be inaccurate"—but that doesn't get them off the hook.
Indeed, according to the Restatement (Second) of Torts, "the republisher of either a libel or a slander is subject to liability even though he expressly states that he does not believe the statement that he repeats to be true." If an express statement of disbelief doesn't prevent liability, then a disclaimer saying merely that the statement may be inaccurate surely can't either.
Likewise, say that you present both an accusation and the response to the accusation. By doing that, you're making clear that the accusation may be inaccurate.
Yet that doesn't stop you from being liable for repeating the accusation. (There are some narrow privileges that defamation law has developed to free people to repeat certain kinds of possibly erroneous content without risk of liability, in particular contexts where such repetition is seen as especially necessary. But those privileges are needed precisely because otherwise presenting both an accusation and a response is actionable.)
And this is especially so because of what OpenAI itself notes in its GPT-4 Technical Report:
This tendency [to, among other things, produce untruthful content] can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity.
Couple that with OpenAI's promotion of GPT-4's success in performing reliably on various benchmarks (bar exams, SATs, and the like), and it seems likely that reasonable readers will perceive GPT-4 (and especially future, still more advanced, versions) as generally fairly reliable. They wouldn't view it as perfectly reliable; but rumors are famously not perfectly reliable either, yet people do sometimes act on them, and repeating rumors can indeed lead to defamation lawsuits. And readers would certainly view GPT-4 as more reliable than a Ouija board, a monkey at a typewriter, a fortune-teller, or the various other analogies that I've heard proposed (more on those here). One can be a reasonable reader even if one doesn't have much understanding of how these AIs work, or much experience testing them to see how often they err.
So, yes, when an AI program generates and communicates statements that someone was found guilty of tax fraud, accused of harassment, and so on, complete with bogus quotes supposedly drawn from real and prominent media outlets, there is a significant legal basis for treating those statements as defamatory, and the AI company as potentially liable for that defamation.