The Volokh Conspiracy



First (?) Libel-by-AI (ChatGPT) Lawsuit Filed

"Every statement of fact in the summary [provided by ChatGPT] pertaining to [plaintiff] Walters is false."


It's Walters v. OpenAI L.L.C., No. 23-A-04860-2, filed in Gwinnett County (Georgia) yesterday. An excerpt from the Complaint (and please note both that the Complaint is just an allegation, and that the statements quoted in it about Walters are allegedly entirely made up, not by some human accuser but by a hallucinating AI program):

8. Fred Riehl is a third party who works for a media outlet [Ammoland.com] as a journalist and who is a subscriber of ChatGPT.

9. On May 4, 2023, Riehl interacted with ChatGPT about a lawsuit (the "Lawsuit") that Riehl was reporting on.

10. The Lawsuit is in federal court in the Western District of Washington, case No. 2-23-cv-00647, with short caption of The Second Amendment Foundation v. Robert Ferguson….

14. In the interaction with ChatGPT, Riehl provided a (correct) URL of a link to the complaint on the Second Amendment Foundation's web site, https://www.saf.org/wp-content/uploads/2023/05/Dkt-1-Complaint.pdf. [I assume that he was using a ChatGPT plugin that allows users to feed PDFs into ChatGPT. -EV]

15. Riehl asked ChatGPT to provide a summary of the accusations in the complaint.

16. ChatGPT responded that the document "[I]s a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF. The complaint alleges that Walters, who served as the organization's treasurer and chief financial officer, misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF's leadership. The plaintiff seeks various forms of relief, including the recovery of misappropriated funds, damages for breach of fiduciary duty and fraud, and removal of Walters from his position as a member of the SAF's board of directors."

17. Every statement of fact in the summary pertaining to Walters is false.

18. As already noted, Walters is not a party to the Lawsuit.

19. Walters is not accused of defrauding and embezzling funds from the Second Amendment Foundation ("SAF").

20. The complaint does not allege that Walters served as SAF's treasurer or chief financial officer, and he has never held either position.

21. The complaint does not allege that Walters misappropriated funds for personal expenses, manipulated financial records or bank statements, or failed to provide financial reports to SAF leadership, nor would he have been in a position to do so because he has no employment or official relationship with SAF.

22. The complaint does not seek any relief from Walters.

23. The complaint has nothing at all to do with financial accounting claims against anyone.

24. Riehl asked ChatGPT to provide him with a copy of the portion of the complaint related to Walters.

25. ChatGPT responded, "Certainly, here is the paragraph from the complaint that concerns Walters: 'Defendant Mark Walters ("Walters") is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012. Walters has access to SAF's bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF's board of directors. Walters owes SAF a fiduciary duty of loyalty and care, and is required to act in good faith and with the best interests of SAF in mind. Walters has breached these duties and responsibilities by, among other things, embezzling and misappropriating SAF's funds and assets for his own benefit, and manipulating SAF's financial records and bank statements to conceal his activities.'"

26. Riehl then asked ChatGPT to provide the entire text of the complaint.

27. ChatGPT provided the text in Exhibit 1 to this Complaint.

28. Exhibit 1 is a complete fabrication and bears no resemblance to the actual complaint, including an erroneous case number.

29. None of ChatGPT's statements concerning Walters are in the actual complaint.

As I argue in my "Large Libel Models? Liability for AI Output" article (forthcoming later this month), I think such libel claims are in principle legally viable. But this particular lawsuit should be hard to maintain.

Generally speaking, speech on matters of public concern can lead to defamation liability only in two kinds of cases:

  1. If the plaintiff can show that the defendant knew the statement was false, or knew the statement was likely false but recklessly disregarded that likelihood, or
  2. If the plaintiff can show provable actual damages (e.g., lost jobs, lost business opportunities, lost social connections, and the like), the plaintiff is a private figure, and the defendant was negligent in making the false statement.

Here, it doesn't appear from the complaint that Walters put OpenAI on actual notice that ChatGPT was making false statements about him, and demanded that OpenAI stop that, so theory 1 is unavailable. And there seem to be no allegations of actual damages—presumably Riehl figured out what was going on, and thus Walters lost nothing as a result—so theory 2 is unavailable. (Note that Mark Walters might be a public figure, because he's a syndicated radio talk show host; but even if he is a private figure, that just potentially opens the door to recovery under theory 2 if he can show actual damages, and again that seems unlikely given the allegations in the complaint.)

Now I suppose that Walters could argue that OpenAI knows that ChatGPT often does publish false statements generally (it does, and indeed has acknowledged that), even if it didn't know about the false statements about Walters in particular. But I don't think this general knowledge is sufficient, just as you can't show that a newspaper had knowledge or recklessness as to falsehood just because the newspaper knows that some of its writers sometimes make mistakes. For liability in such cases (again, absent actual damages to a private figure), there has to be a showing that the allegedly libelous "statement was made with 'actual malice'—that is, with knowledge that it was false or with reckless disregard of whether it was false or not." And here no one at OpenAI knew about those particular false statements, at least unless Walters had notified OpenAI about them.

Again, some other plaintiff could in principle sue OpenAI on a negligence theory, which I discuss in my article, and for that, actual knowledge on OpenAI's part wouldn't be required. But that theory requires either a statement on a matter of purely private concern, or a statement about a private figure that has caused provable actual damages.

In any event, though, it will be interesting to see what ultimately happens here.