OpenAI Wins Libel Lawsuit Brought by Gun Rights Activist Over Hallucinated Embezzlement Claims
In yesterday's decision by Judge Tracie Cason (Ga. Super. Ct. Gwinnett County) in Walters v. OpenAI, L.L.C., gun rights activist Mark Walters sued OpenAI after journalist Frederick Riehl ("editor of AmmoLand.com, a news and advocacy site related to Second Amendment rights") received an AI-generated hallucination from ChatGPT that alleged Walters was being sued for embezzlement. The court granted OpenAI summary judgment, concluding that OpenAI should prevail "for three independent reasons":
[1.] In context, a reasonable reader wouldn't have understood the allegations "could be 'reasonably understood as describing actual facts,'" which is one key element of a libel claim. The court didn't conclude that OpenAI and other such companies are categorically immune whenever they include a disclaimer, but stated only that "Disclaimer or cautionary language weighs in the determination of whether this objective, 'reasonable reader' standard is met," and that "Under the circumstances present here, a reasonable reader in Riehl's position could not have concluded that the challenged ChatGPT output communicated 'actual facts'":
{Riehl pasted sections of the Ferguson complaint [a Complaint in a civil case that Riehl was researching] into ChatGPT and asked it to summarize those sections, which it did accurately. Riehl then provided an internet link, or URL, to the complaint to ChatGPT and asked it to summarize the information available at the link. ChatGPT responded that it did "not have access to the internet and cannot read or retrieve any documents." Riehl provided the same URL again. This time, ChatGPT provided a different, inaccurate summary of the Ferguson complaint, saying that it involved allegations of embezzlement by an unidentified SAF Treasurer and Chief Financial Officer. Riehl again provided the URL and asked ChatGPT if it could read it. ChatGPT responded "yes" and again said the complaint involved allegations of embezzlement; this time, it said that the accused embezzler was an individual named Mark Walters, who ChatGPT said was the Treasurer and Chief Financial Officer of the SAF.}
In this specific interaction, ChatGPT warned Riehl that it could not access the internet or access the link to the Ferguson complaint that Riehl provided to it, and that it did not have information about the period of time in which the complaint was filed, which was after its "knowledge cutoff date." Before Riehl provided the link to the complaint, ChatGPT accurately summarized the Ferguson complaint based on text Riehl inputted. After Riehl provided the link, and after ChatGPT initially warned that it could not access the link, ChatGPT provided a completely different and inaccurate summary.
Additionally, ChatGPT users, including Riehl, were repeatedly warned, including in the Terms of Use that govern interactions with ChatGPT, that ChatGPT can and does sometimes provide factually inaccurate information. A reasonable user like Riehl—who was aware from past experience that ChatGPT can and does provide "flat-out fictional responses," and who had received the repeated disclaimers warning that mistaken output was a real possibility—would not have believed the output was stating "actual facts" about Walters without attempting to verify it….
That is especially true here, where Riehl had already received a press release about the Ferguson complaint and had access to a copy of the complaint that allowed him immediately to verify that the output was not true. Riehl admitted that "within about an hour and a half" he had established that "whatever [Riehl] was seeing" in ChatGPT's output "was not true." As Riehl testified, he "understood that the machine completely fantasized this. Crazy." …
Separately, it is undisputed that Riehl did not actually believe that the Ferguson complaint accused Walters of embezzling from the SAF. If the individual who reads a challenged statement does not subjectively believe it to be factual, then the statement is not defamatory as a matter of law.… [Riehl] knew Walters was not, and had never been, the Treasurer or Chief Financial Officer of the SAF, an organization for which Riehl served on the Board of Directors….
[2.a.] The court also concluded that Walters couldn't show even negligence on OpenAI's part, which is required for all libel claims on matters of public concern:
The Court of Appeals has held that, in a defamation case, "[t]he standard of conduct required of a publisher … will be defined by reference to the procedures a reasonable publisher in [its] position would have employed prior to publishing [an item] such as [the] one [at issue. A publisher] will be held to the skill and experience normally exercised by members of [its] profession. Custom in the trade is relevant but not controlling." Walters has identified no evidence of what procedures a reasonable publisher in OpenAI's position would have employed based on the skill and experience normally exercised by members of its profession. Nor has Walters identified any evidence that OpenAI failed to meet this standard.
And OpenAI has offered evidence from its expert, Dr. White, which Walters did not rebut or even address, demonstrating that OpenAI leads the AI industry in attempting to reduce and avoid mistaken output like the challenged output here. Specifically, "OpenAI exercised reasonable care in designing and releasing ChatGPT based on both (1) the industry-leading efforts OpenAI undertook to maximize alignment of ChatGPT's output to the user's intent and therefore reduce the likelihood of hallucination; and (2) providing robust and recurrent warnings to users about the possibility of hallucinations in ChatGPT output. OpenAI has gone to great lengths to reduce hallucination in ChatGPT and the various LLMs that OpenAI has made available to users through ChatGPT. One way OpenAI has worked to maximize alignment of ChatGPT's output to the user's intent is to train its LLMs on enormous amounts of data, and then fine-tune the LLM with human feedback, a process referred to as reinforcement learning from human feedback." OpenAI has also taken extensive steps to warn users that ChatGPT may generate inaccurate outputs at times, which further negates any possibility that Walters could show OpenAI was negligent….
In the face of this undisputed evidence, counsel for Walters asserted at oral argument that OpenAI was negligent because "a prudent man would take care not to unleash a system on the public that makes up random false statements about others…. I don't think this Court can determine as a matter of law that not doing something as simple as just not turning the system on yet was … something that a prudent man would not do." In other words, Walters' counsel argued that because ChatGPT is capable of producing mistaken output, OpenAI was at fault simply by operating ChatGPT at all, without regard either to "the procedures a reasonable publisher in [OpenAI's] position would have employed" or to the "skill and experience normally exercised by members of [its] profession." The Court is not persuaded by Plaintiff's argument.
Walters has not identified any case holding that a publisher is negligent as a matter of defamation law merely because it knows it can make a mistake, and for good reason. Such a rule would impose a standard of strict liability, not negligence, because it would hold OpenAI liable for injury without any "reference to 'a reasonable degree of skill and care' as measured against a certain community." The U.S. Supreme Court and the Georgia Supreme Court have clearly held that a defamation plaintiff must prove that the defendant acted with "at least ordinary negligence," and may not hold a defendant liable "without fault." …
[2.b.] The court also concluded that Walters was a public figure, and therefore had to show not just negligence, but knowing or reckless falsehood on OpenAI's part (so-called "actual malice"):
Walters qualifies as a public figure given his prominence as a radio host and commentator on constitutional rights, and the large audience he has built for his radio program. He admits that his radio program attracts 1.2 million users for each 15-minute segment, and calls himself "the loudest voice in America fighting for gun rights." Like the plaintiff in Williams v. Trust Company of Georgia (Ga. App.), Walters is a public figure because he has "received widespread publicity for his civil rights … activities," has "his own radio program," "took his cause to the people to ask the public's support," and is "outspoken on subjects of public interest." Additionally, Walters qualifies as a public figure because he has "a more realistic opportunity to counteract false statements than private individuals normally enjoy"; he is a radio host with a large audience, and he has actually used his radio platform to address the false ChatGPT statements at issue here…. [And] at a minimum, Walters qualifies as a limited-purpose public figure here because these statements are plainly "germane" to Walters' conceded "involvement" in the "public controvers[ies]" that are related to the ChatGPT output at issue here….
Walters' two arguments that he has shown actual malice fail. First, he argues that OpenAI acted with "actual malice" because OpenAI told users that ChatGPT is a "research tool." But this claim does not in any way relate to whether OpenAI subjectively knew that the challenged ChatGPT output was false at the time it was published, or recklessly disregarded the possibility that it might be false and published it anyway, which is what the "actual malice" standard requires. Walters presents no evidence that anyone at OpenAI had any way of knowing that the output Riehl received would probably be false…. [The] "actual malice" standard requires proof of the defendant's "subjective awareness of probable falsity" ….
Second, Walters appears to argue that OpenAI acted with "actual malice" because it is undisputed that OpenAI was aware that ChatGPT could make mistakes in providing output to users. The mere knowledge that a mistake was possible falls far short of the requisite "clear and convincing evidence" that OpenAI actually "had a subjective awareness of probable falsity" when ChatGPT published the specific challenged output itself….
[3.] And the court concluded that in any event Walters had to lose because (a) he couldn't show actual damages, (b) he couldn't recover presumed damages, because here the evidence rebuts any presumption of damage, given that Riehl was the only person who saw the statement and he didn't believe it, and (c) under Georgia law, "[A]ll libel plaintiffs who intend to seek punitive damages [must] request a correction or retraction before filing their civil action against any person for publishing a false, defamatory statement," and no such request was made here.
An interesting decision, and it might well be correct (see my Large Libel Models article for the bigger legal picture), but it's tied closely to its facts: In another case, where the user didn't have as many signals that the assertion was false, or where the user more broadly distributed the message (which may have produced more damages), or where the plaintiff wasn't a public figure, or where the plaintiff had indeed alerted the defendant of the hallucination and yet the defendant didn't do anything to try to stop it, the result might well be different. For comparison, check out the Starbuck v. Meta Platforms, Inc. case discussed in this post from three weeks ago.
Note that, as is common in some states' courts, the decision largely adopts a proposed order submitted by the party that prevailed on the motion for summary judgment. The judge has of course approved the order, and agrees with what it says (since she could have easily edited out parts she disagreed with); but the rhetorical framing in such cases is often more the prevailing party's than the judge's.
OpenAI is represented by Stephen T. LaBriola & Ethan M. Knott (Fellows LaBriola LLP); Ted Boutrous, Orin Snyder, and Connor S. Sullivan (Gibson, Dunn & Crutcher LLP); and Matthew Macdonald (Wilson Sonsini Goodrich & Rosati, P.C.).