The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Court Lets First AI Libel Case Go Forward
Georgia Gwinnett County Superior Court Judge Tracie Cason's decision last week in Walters v. OpenAI, LLC simply says that it denies OpenAI's motion to dismiss, and (as is common in many state courts) doesn't offer a detailed explanation. But here's an excerpt from Walters' response to OpenAI's motion to dismiss; presumably the judge agreed with much of the response (for more on this sort of question, see my Large Libel Models article):
The following facts are alleged in the Amended Complaint and must be taken to be true:
Fred Riehl is a third party who works for a media outlet as a journalist and who is a subscriber of ChatGPT. On May 4, 2023, Riehl interacted with ChatGPT about a lawsuit (the "Lawsuit") that Riehl was reporting on.
The Lawsuit is in federal court in the Western District of Washington, case No. 2-23-cv-00647, with short caption of The Second Amendment Foundation v. Robert Ferguson. The two defendants in the Lawsuit are Robert Ferguson, the Washington Attorney General, and Joshua Studor, the Washington Assistant Attorney General. The plaintiffs in the Lawsuit are the Second Amendment Foundation and others, including Alan Gottlieb. Walters is neither a plaintiff nor a defendant in the Lawsuit.
In the interaction with ChatGPT, Riehl provided a (correct) URL of a link to the complaint on the Second Amendment Foundation's web site, https://www.saf.org/wp-content/uploads/2023/05/Dkt-1-Complaint.pdf. Riehl asked ChatGPT to provide a summary of the accusations in the complaint.
ChatGPT responded that the document "[I]s a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF. The complaint alleges that Walters, who served as the organization's treasurer and chief financial officer, misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF's leadership. The plaintiff seeks various forms of relief, including the recovery of misappropriated funds, damages for breach of fiduciary duty and fraud, and removal of Walters from his position as a member of the SAF's board of directors."
Every statement of fact in the summary pertaining to Walters is false.
As already noted, Walters is not a party to the Lawsuit. Walters is not accused of defrauding and embezzling funds from the Second Amendment Foundation ("SAF"). The complaint does not allege that Walters served as SAF's treasurer or chief financial officer, and he has never held either position. The complaint does not allege that Walters misappropriated funds for personal expenses, manipulated financial records or bank statements, or failed to provide financial reports to SAF leadership, nor would he have been in a position to do so because he has no employment or official relationship with SAF. The complaint does not seek any relief from Walters. The complaint has nothing at all to do with financial accounting claims against anyone.
SAF publishes the names of its officers and directors on its website. SAF has never listed Walters as its CFO or treasurer. SAF also publishes its audited financial statements on its website. Such audited financial statements have never mentioned the misappropriation of funds. OAI knows or can readily determine that Walters has never been the CFO or treasurer of SAF.
Riehl asked ChatGPT to provide him with a copy of the portion of the complaint related to Walters. ChatGPT responded, "Certainly, here is the paragraph from the complaint that concerns Walters: 'Defendant Mark Walters ("Walters") is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012. Walters has access to SAF's bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF's board of directors. Walters owes SAF a fiduciary duty of loyalty and care, and is required to act in good faith and with the best interests of SAF in mind. Walters has breached these duties and responsibilities by, among other things, embezzling and misappropriating SAF's funds and assets for his own benefit, and manipulating SAF's financial records and bank statements to conceal his activities.'"
Riehl then asked OAI's ChatGPT to provide the entire text of the complaint. ChatGPT provided the text in Exhibit 1 to [the] Complaint [in this case]. Exhibit 1 is a complete fabrication and bears no resemblance to the actual complaint, including an erroneous case number. None of ChatGPT's statements concerning Walters are in the actual complaint.
OAI is aware that ChatGPT sometimes makes up facts, and refers to this phenomenon as a "hallucination." Riehl questioned OAI regarding the accuracy of its statements regarding Walters, and OAI insisted that they were accurate. OAI knew or should have known that its statements made via ChatGPT to Riehl about Walters were false and defamatory. Even when questioned about the accuracy of its statements, OAI did not attempt to verify them. OAI disregarded serious questions about the accuracy of its statements. OAI was reckless in its disregard of the falsity of its statements.
OAI CEO Sam Altman was quoted in Fortune in June of 2023 as saying, "I think we will get the hallucination problem to a much, much better place. I think it will take us a year and a half, two years. Something like that." Altman also has said, "I probably trust the answers that come out of ChatGPT the least of anybody on Earth."
Riehl contacted Gottlieb regarding ChatGPT's allegations concerning Walters, and Gottlieb confirmed that they were false….
OAI complains that 1) Riehl did not and could not read OAI's statements as defamatory; 2) there was no publication; and 3) Walters is a public figure and there was no actual malice. These are the types of things that are to be assessed in discovery and cannot form the basis of a motion to dismiss for failure to state a claim. Nevertheless, Walters will discuss each item in turn:
Whether Riehl Viewed the Statements as Defamatory
OAI relies on a statement from Riehl to OAI challenging the accuracy of the statements. OAI insists this shows that Riehl did not believe the statements. Even if this could form the basis of a failure to state a claim, the burden is on OAI to prove with certainty that Riehl did not believe the statements. This, it cannot do. The Amended Complaint alleges that Riehl checked with a third party (Alan Gottlieb) to determine if OAI's statements were true. That is, the Amended Complaint shows that Riehl questioned the accuracy of OAI's statements—not that he disbelieved them. Whether Riehl with certainty did not believe OAI's statements cannot be determined without discovery.
OAI itself refers to its statements as "probabilistic," a euphemism for "maybe not true." OAI insists that "responsible use" of its ChatGPT system includes "fact-checking" its statements. That's what Riehl did, by first challenging OAI itself and then checking with a third party. In essence, OAI is arguing that it is a mere gossip monger who begins its wild accusations with, "Now this may not be true, but…." The responsible thing for OAI would be not to spread false rumors in the first place.
Publication
OAI next argues that its statements to Riehl do not constitute publication. Under Georgia law, "A libel is published as soon as it is communicated to any person other than the party libeled." In order to recover for libel, "there must be communication to any person other than the party libeled." The Amended Complaint clearly alleges that the defamatory statements were communicated to Riehl, someone other than Walters. While OAI is free to explore these allegations in discovery, it cannot be disputed that Walters stated a claim.
Whether Walters is a Public Figure and Whether There Was Actual Malice
OAI … argues that Walters is a public figure and OAI had no actual malice. Whether someone is a public figure is a mixed question of fact and law. It must be decided on a case-by-case basis. It turns on whether the plaintiff has achieved "pervasive fame or notoriety" or whether the plaintiff "voluntarily injects himself or is drawn into a particular public controversy." OAI does not assert which form of public figurehood applies to Walters. Given, however, that OAI has not identified a "particular public controversy," the only reasonable conclusion is that OAI must be asserting that Walters has achieved "pervasive fame or notoriety." This determination cannot be made without discovery.
Even if Walters is a public figure, "actual malice" is knowledge that the statements were false or reckless disregard for their truth or falsity. The Amended Complaint alleges, "OAI knew or should have known that its statements made via ChatGPT to Riehl about Walters were false and defamatory." And, "Even when questioned about the accuracy of its statements, OAI did not attempt to verify them." "OAI disregarded serious questions about the accuracy of its statements." "OAI was reckless in its disregard of the falsity of its statements." OAI clearly knew its system generated blatantly false statements. Its own CEO said, "I probably trust the answers that come out of ChatGPT the least of anybody on Earth."
Personal Jurisdiction
Finally, OAI argues that this Court lacks personal jurisdiction over it because it is organized out of Delaware and headquartered in California. OAI ignores, however, that the Georgia Supreme Court has re-affirmed that registering to do business in Georgia subjects a company to the personal jurisdiction of the courts of this State. The Amended Complaint alleges that OAI is registered to do business in Georgia….
Thanks to the Media Law Resource Center (MLRC) MediaLawDaily for the pointer.
There is no unfair report privilege.
OAI squarely presents the implications of strict actual malice, under which libel can occur only if the defendant becomes aware of the truth but rejects it.
The reasonable, prudent law-abiding person takes care to conduct no investigation at all, but is careful to fabricate things with no knowledge or concern for truth of any kind. The reasonable, prudent, law-abiding person is thereby insulated from any potential to become aware of truth, and hence from any possibility of liability.
Only the reckless, lawless, imprudent, anti-social person, the kind who foolishly persists in having concern for truth and conducting some sort of investigation into the facts in complete disregard of ethical and social responsibilities, can be subject to liability under the strict actual malice standard.
Libel under this theory exists primarily to punish and deter people for having concern for truth and for investigating facts. It is based on a view that truth is inimical to the effective functioning of a democratic society, and that its investigation and dissemination should be severely discouraged.
That is its result. Surely it must be its intent.
OAI advances the theory quite shamelessly. Since ChatGPT simply made shit up without anyone conducting any investigation or becoming aware of any facts, there were never any facts to disregard, hence no reckless disregard of facts, and hence no liability for libel.
If the person then promulgated the story, the person is liable.
What if they renamed OpenAI as "Lying Teller of Tall and False Tales"?
"Lying Teller of Tall and False Tales, tell me a story of person X."
"Person X is a fraud under investigation by the SEC."
"Thanks!"
As long as the person promulgating the story is careful not to do anything stupid that would risk encountering contrary evidence, that person would also be immune from libel liability under strict actual malice.
Another way to say this is that strict actual malice imposes liability for conducting a flawed investigation. If one conducts no investigation, one's behavior remains flawless.
The situation is like Touchstone’s knight in Shakespeare’s As You Like It:
Touchstone: Of a certain knight that swore by his honour they were good pancakes, and swore by his honour the mustard was naught. Now I’ll stand to it, the pancakes were naught and the mustard was good, and yet was not the knight forsworn.
Celia: How prove you that, in the great heap of your knowledge?
Rosalind: Ay marry, now unmuzzle your wisdom.
Touchstone: Stand you both forth now: stroke your chins, and swear by your beards that I am a knave.
Celia: By our beards, if we had them, thou art.
Touchstone: By my knavery, if I had it, then I were. But if you swear by that that is not, you are not forsworn; no more was this knight, swearing by his honor, for he never had any; or if he had, he had sworn it away before he ever saw those pancakes or that mustard.
As honor was to Touchstone’s knight, so are investigations and fact-finding to the law of strict actual malice.
"The reasonable, prudent law-abiding person takes care to conduct no investigation at all"
The homeowner who refuses to test his basement for radon, on the basis that, if he had, and radon was detected, he would be obligated to disclose that fact to anyone interested in buying the house.
There are many situations where the law effectively impedes people from mitigating risks or improving things for these reasons. If one notices a problem, one becomes responsible for solving it, and if one fails to completely solve it, one is blamed for the imperfections of one's solution. But if one does nothing and fails to notice anything, one has no obligations and is completely exempt from blame. This makes steps that improve things without completely solving them legally risky. It's a general problem.
"OAI … argues that Walters is a public figure"
1) OAI defamed Walters
2) Therefore Walters' name was published in an article at Reason
3) Therefore Walters is a public figure because people who aren't public figures aren't discussed in news and opinion articles
4) Therefore OAI is not liable for defaming Walters
QED
I've never quite understood the "public figure" exemption in libel laws. I can say patently untrue and defamatory things about someone so long as they have done something noteworthy?
That's not it. As always, things are more complicated. First, it comes from the public figure/matter of public concern distinction, but more importantly, there's a good reason that we have a higher standard for public figures.
To start with, you can't just say patently untrue and defamatory things about public figures. Instead, there's a standard of actual malice, which is a legal standard and doesn't mean "made with malice." It means that the plaintiff must show that the defendant knew that the statement was false at the time it was made or demonstrated reckless disregard as to the truth of the statement.
As for the reason for this heightened standard, it is to protect robust discussion and discourse. It allows people to freely discuss public figures and matters of public concern without fear of lawsuits, so long as they aren't knowingly (or recklessly) spreading falsehoods. It's a much higher standard, but it's not impossible.
In short, we want to protect free discourse, and we've set the values so that we are protecting robust speech. Otherwise, the powerful could easily use defamation law to silence discussions about them.
"the powerful could easily use defamation law to silence discussions about them"
Titles of nobility?
...um, what?
I highly suggest people look into the history and development of defamation law, which is fascinating. Originally, truth was not only not a defense ... it was worse (hence the maxim in English law, "The greater the truth, the greater the libel"). Other than seditious and criminal libel, which was primarily used to repress people from speaking out against government policies, we saw defamation law spring up as an alternative to, inter alia, duels.
Then we get to Alexander Hamilton (yeah, that one) who prevailed on truth as a defense ... and so on and so forth. IMO, the free speech we have today (such as the abilities of many of our own commenters to make dubious comments) arises from the protections that we all enjoy.
Anyway, YMMV, but I think it's fascinating, and I am in favor of as much protection for free speech (and, in turn, as little allowance for defamation) as possible.
Andrew Hamilton.
I was actually thinking of People v. Croswell when I wrote that, which was tried by Alexander Hamilton (and led to the change in NY's libel law that other states followed).
You're thinking of the Zenger case, with Andrew Hamilton.
Too many Hamiltons for the same proposition!
The libeled person is not a party to the lawsuit?
No, Walters is not a party to the SAF related lawsuit. He was libeled by ChatGPT making up libelous allegations about him in responding to queries about the SAF lawsuit.
So it's a lawsuit about libelous allegations about a second lawsuit.
So where's the standing coming from?
Standing for whom, in which lawsuit?
ChatGPT created a document saying that Walters had been sued for various illegal activities. But Walters hadn’t been sued. He wasn’t a party to the lawsuit the ChatGPT document claimed he was a defendant in. While ChatGPT referenced a real lawsuit, it fabricated not only Walters’ involvement but detailed court filings in this lawsuit out of thin air. It fabricated supposed quotes from court documents in the real lawsuit mentioning Walters and detailing supposed evidence against him.
For this reason, Walters sued OAI, ChatGPT's owner, for libel, claiming that its document falsely stated he was accused of wrongdoing in a suit he was never actually a party to. In this suit, the libel suit, Walters has standing because he is the person OAI allegedly libeled by fabricating his involvement.
Does Gottlieb or SAF get to put a dog in this fight?
“I’ve never quite understood the “public figure” exemption in libel laws.” Me neither. It’s as if ordinary ‘non-public’ people have reputations to defend, and ‘public figures’ don’t. At some point a court has to formulate when and how a non-public figure becomes a public figure and loses rights—better yet, how a non-public figure can conduct himself so as never to become a public figure—or scrap the distinction altogether. This is already the future wherein everybody becomes famous for 15 minutes.
If you're genuinely curious, courts have actually wrestled with this issue. See also "limited-purpose public figure."
If you want to understand the distinction, there's a lot of court cases and books and articles you can look at. Enjoy!
It seems that AI is doing more than hallucinating.
What else does it seem that it’s doing?
I'm curious about something: How is it relevant whether Riehl believed the nonsense ChatGPT was spouting? I suppose that would go to figuring damages, but surely lying about somebody to a third party is defamatory regardless of whether the lie is believed?
I'm also curious about how Walters learned about this; Did Riehl contact him?
Anyway, ChatGPT's best defense IS probably their TOS, which basically warn you that you're using a Magic Eight Ball: Don't rely on ANYTHING ChatGPT says actually being true.
A statement cannot, by definition, be defamatory if the person hearing it does not believe it to be a true statement.
Not arguably true. Not something that might be questionable.
So, for example, if I am talking to someone, and say that John is a Martian, and the listener doesn't believe me, defamation cannot lie because the statement wasn't defamatory.
(Generally, this goes to the third element in a state-law claim, which is that the statement is understood to be about the plaintiff, and tends to harm the reputation; if the statement is not believed, by definition, it cannot harm. It's the same reason you can't claim defamation against someone who lies about you in a good way.)
A lot of people heard it, or we wouldn't be reading about the lawsuit.
In any case, tell the lie to enough people, and someone will believe it. (Quite a lot of them, in fact, if their Dear Leader is the one who repeats it to them.)
I suspect that all of this kerfuffle about the accuracy of AI will be a transitory thing. When ChatGPT first got in the news I asked it to do a biography of me. It was quite fascinating to read. I'd had to give it my general occupation (astronomer) to get it started, but then it wrote a quite coherent and flattering biography. Alas, I think the only detail it got right was the year of my PhD. It wasn't completely out of the ballpark; I had worked in the fields it described, but not on the specific topics or places it wrote about.
A year later I redid this with ChatGPT 4.0. Now it gives a biography which is somewhat distorted in which topics are emphasized but essentially every fact is correct. My thesis title, date and institution are all there, where I worked, some of my major accomplishments and papers. Sadly the major awards I'd won in the previous iteration have fallen away...
I bet that it's proving to be a lot easier to add a layer of fact-checking to the models than it was to get them to the point where they were fluent in the language, and I think a lot of that is already in the pipeline. In another year or two this worry about the accuracy of AI may seem as dated as the Y2K apocalypse.
" Sadly the major awards I’d won in the previous iteration have fallen away…"
Well, meritocracy is privilege, so you can't expect a left wing propaganda tool to promote it.
Like Wikipedia, it will probably end up being fairly reliable for purely factual information that is completely uncontroversial, while being fairly useless for politically charged or otherwise sensitive or controversial topics and information (except to the extent you are able to take those limitations into account).
But even if it becomes fairly reliable for the first type of information, and so the current staggering degree of inaccuracy with respect to that information proves transitory, there will always be some inaccuracy. And kerfuffle.
I just wanted to add one additional comment- I hope that everyone read the post carefully. This is a decision in state court, and the judge did not make any specific findings, instead just denying the motion to dismiss. The part that EV is excerpting is just the response from the Plaintiff, so we can't know what reasons (if any) the judge found particularly important.
In addition, while I know nothing about this case or this judge, it is not uncommon in some states, for some judges, to deny motions to dismiss unless there is some compelling reason that they will end the case (and the judge won't get reversed), such as immunity from suit or a statute of repose that is obvious from the face of the complaint. Can't say what happened here, but absent any judicial findings, we can't know what this means, other than that the complaint survived a motion to dismiss.
I don't think this suit succeeds for a couple of reasons:
1) It's well known that ChatGPT can make mistakes, ChatGPT says it makes mistakes, and the prompt says it makes mistakes. If someone tells you "I believe this to be true but I sometimes have wild hallucinations" before the false statement, I think that raises the libel bar quite a bit.
2) Riehl realized ChatGPT was hallucinating from the start; therefore, Walters was never defamed (this would only apply to this case).
3) It's hard to argue that ChatGPT displayed actual malice. Reckless disregard for the truth is a stronger claim to make, but I suspect OpenAI can show they've done a lot of work to combat the hallucination problem. Even if they've failed, I think that makes them non-reckless.
Yes. However some judges may waste a lot of time and money considering these silly lawsuits.
That's not really how it works. It is true that things not reasonably understood to be false statements of fact (such as parody/satire) are not defamatory. But that's not the context of the ChatGPT oeuvre; those AI-generated statements are intended to be taken as factual. Prefacing one's otherwise factual assertions with "oh, by the way, sometimes I lie" is not a way to insulate oneself from claims for defamation.
Such a disclaimer might cause the person who repeats the subsequent statements to be in more legal peril, but it doesn't immunize the original speaker.
That is, I think, their problem: Their TOS declares ChatGPT to be a magic eight ball, but they're nonetheless marketing it as a useful research tool, not a game.
They're talking out of both sides of their mouth. Which is it? Unreliable game, or useful tool?
I don't see why trying and failing makes the company not reckless. Trying means the company knew about the risk. Failing could mean the company didn't try hard enough under the circumstances.
The problem is so new that courts have not developed legal standards to apply. If AI is placed into the ultrahazardous category then the company is strictly liable for harm. If AI is placed into the category of things we can't live without then we tolerate defects that are unreasonably expensive to avoid.
It is not clear to me that the public figure exception requiring proof of malice ought to apply to AI-generated content. We tolerate the occasional act of defamation as a necessary consequence of our right to comment on matters of public interest. If your robot sprays lies in all directions and one of them happens to hit a public figure, why should you get lucky?
I'm confused by part of the Response. Who did Riehl query about the veracity of ChatGPT's output? The Response sometimes uses OAI when it seems to mean ChatGPT - or did an actual OAI employee swear it was accurate? Or does the Response mean ChatGPT when it says OAI, at least some of the time?
(i.e., "Riehl questioned OAI regarding the accuracy of its statements regarding Walters, and OAI insisted that they were accurate." Who, exactly, is being questioned and insisting on the accuracy of the statements?)
According to the opposition to the motion to dismiss, the complaint is not required to "establish the basic elements of a defamation claim" but need only "put OAI on notice of what he is claiming." This is so as a matter of Georgia procedural law.