The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Journal of Free Speech Law: My "Large Libel Models? Liability for AI Output"
Just published, in our symposium on Artificial Intelligence and Speech; more articles from the symposium coming in the next few days.
The article is here; the Introduction:
In March 2023, I asked ChatGPT-3.5 "What was [R.R.] accused of?" (I used R.R.'s full name, but I'll omit it here, for reasons that will become obvious.) The response began with a somewhat imprecise but mostly accurate statement of what R.R. had indeed been publicly accused of. But then ChatGPT added that,
In 2019, [R.] pleaded guilty to conspiracy to commit wire fraud and was sentenced to 3 years of probation and ordered to pay a fine.
When I probed further about the guilty plea, ChatGPT referred to a Reuters article, and even gave me a quote:
Certainly, here is a longer quote from the Reuters article published on December 5, 2018, that mentions R.R.:
"[R.R.], 35, pleaded guilty to one count of conspiracy to commit wire fraud and one count of aggravated identity theft, according to the U.S. Department of Justice. Prosecutors said that [R.], who operated an online reputation management company, and co-conspirators created fraudulent court orders and other legal documents to convince Google to remove negative reviews and other content about his clients. The scheme, which was carried out between 2014 and 2017, was designed to manipulate Google's search rankings in favor of [R.]'s clients, according to prosecutors. [R.] is scheduled to be sentenced on March 8, 2019."
But no such quote seems to appear in any archives of Reuters articles. I could find no record of any guilty plea by R.R. None of the other quotes that ChatGPT-3.5 passed along (in quotation marks) checked out, either. ChatGPT-4 and Bard have similarly invented false allegations, complete with invented quotes and invented newspaper articles—I give some examples in the Appendix, though I have more in my files. These Large Language Models (LLMs) seem to routinely produce false and defamatory statements.
Indeed, a libel lawsuit against OpenAI has already been filed, based on a claim that ChatGPT falsely summarized a complaint in a different case as alleging embezzlement by a particular person; that complaint actually had nothing to do with that person, or with embezzlement. Likewise, a libel lawsuit against Bing has been filed, based on a claim that Bing (which uses GPT-4 technology) responded to a query about "Jeffery Battle" with the following output:
This output apparently mixes information about the technology expert Jeffery Battle with information about the convicted terrorist Jeffrey Battle, and adds material that conveys the message that the output is all about one person (the word "However," coupled with the last name "Battle" without inclusion of the slightly different first name). Yet it appears that AI companies have focused little on the risk of libel, though they have focused on many other kinds of risks.
This is becoming especially significant as such AI programs are becoming integrated into search engines (such as Bing) and other products. If people were to end up viewing AI programs as merely fun toys that can't be trusted with regard to any important information—or just as generators of amusing fiction—then such false statements would cause little harm. But, as I'll discuss in Part I.B, I expect that many users will indeed view the AI programs as reliable enough that the users might, for instance, decide not to select one of dozens of job applicants, or not to deal with some professional or service provider, because of an allegation that the program outputs. And even if users realize that AI programs are no more reliable than, say, rumor or gossip, the law generally recognizes that rumor and gossip can be quite damaging, and can therefore be actionable.
Should, then, the AI programs' creators and operators, such as OpenAI (for ChatGPT) or Google (for Bard) be liable for defamation, based on their programs' output? Part I will analyze this question under the current rules of U.S. defamation law. I will tentatively argue that, when the "actual malice" standard applies, the standard might be satisfied if an AI company has received actual notice of particular spurious information being produced by its software but has refused to act. This would in practice require such companies to implement a "notice-and-blocking" system, loosely similar to "notice-and-takedown" systems required under the DMCA as to copyright and trademark infringements. And I will also discuss the possibility of negligence liability, when such liability is authorized under libel law, by analogy to negligent design product liability.
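Mechanically, a "notice-and-blocking" system of the kind described could amount to a post-generation filter that screens model output against a registry of received complaints before it reaches the user. The following is a minimal sketch under that assumption; the `Notice` type, the registry, and the keyword-matching logic are all hypothetical illustrations, not any actual company's implementation:

```python
# Hypothetical "notice-and-blocking" filter applied to model output before
# display. All names and the matching strategy are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Notice:
    subject: str        # person named in the complaint, e.g. "R.R."
    allegation: str     # keyword from the disputed claim, e.g. "wire fraud"

def blocked(output: str, notices: list[Notice]) -> bool:
    """Return True if the output repeats a noticed subject/allegation pair."""
    text = output.lower()
    return any(n.subject.lower() in text and n.allegation.lower() in text
               for n in notices)

notices = [Notice("R.R.", "wire fraud")]
assert blocked("R.R. pleaded guilty to wire fraud.", notices)
assert not blocked("R.R. operates a reputation management company.", notices)
```

Even this toy version surfaces the design question raised in the comments below: because output varies across runs, any workable filter must match on subject/allegation pairs (or something more sophisticated) rather than on exact strings.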
To be sure, allowing such liability could yield substantial costs. That is particularly so since it may require lay judges and juries to evaluate complicated technical claims about which designs are feasible. (Such concerns of course mirror the concerns about legal liability as to other products, such as pharmaceuticals or cars, or as to services, such as surgical procedures.) Part II will tentatively discuss some arguments for why the law might be changed, whether by courts, by legislatures, or by administrative agencies. Finally, Part III will offer some similarly tentative thoughts about how this might apply to other claims, such as false light, disclosure of private facts, the right of publicity, or negligence.
A terminological note: For convenience, I'll generally use the term "AI programs" to mean AIs that output information in response to a user prompt, though I realize that there are of course AI programs that do other things, such as operate self-driving cars. I'll also refer to "AI companies" as shorthand for companies that create AI programs and provide access to their services. My logic ought to apply to such AI programs however they are designed, whether they are based on LLMs or not, though I appreciate that parts of the analysis may turn on the specifics of each design.
People often believe rumor or gossip. Even if they don't entirely believe it, they act on the mere suspicion that a rumor is true. If you think someone might be dangerous, you might avoid their company. Certainly you might shield your minor children. If you think an accountant might embezzle your funds, you probably won't hire him. Someone else can tell you "innocent until proven guilty"--but that's a legal standard. You may well tell them they are free to take the risk that the rumors are true, and let you avoid it.
Google makes essentially the same mistake. If you search for Jeffery Battle, it also gives info on Jeffrey the terrorist, without ever noting that he is different from the professor. ChatGPT is just giving the same info, run through a fancy AI transformer.
I do not see how the aerospace professor can complain, considering that his own web page does not bother to say he is a different person, nor has he bothered to edit Wikipedia to point out the distinction. How does he expect ChatGPT to find the difference if he does not use his opportunity to explain it?
Roger S,
Why should one's web page need a list of all the people you are not?
You do know Wikipedia frowns on people editing their own pages. (It frowns so heavily that you would probably get the account you created for the purpose banned.) Also, as long as the Wikipedia page is not incorrect, you have no basis for getting them to modify your page to say "Roger S, the splendid person discussed here, is not the Roger S who is a convicted axe murderer." So "didn't edit your Wikipedia page to turn it into your personally tailored press release" hardly seems like a good standard for immunizing others who slander you.
I'm not a lawyer, but I don't think a person who is slandered as a wife beater is barred from suing for defamation because they didn't blanket the world in ads that say "Contrary to the rumor you might have read or heard, I did not beat my wife to a pulp so severely that she ended up in a coma. That was some different Roger S."
Yes, Wikipedia frowns on posting puffery about yourself. However, adding a sentence to avoid confusion is allowed.
The reason for the professor to use his web page to say that he is not the terrorist is that he very much does not want to be confused for the terrorist. This is commonly done by people who have names similar to famous people.
Roger S,
I can see why some people might want to do that: "I'm not the John Wayne Gacy everyone has heard of, who killed people."
I can also understand why many would not want to do that: "I am not the not-very-famous person who did something nefarious, whom you otherwise probably would never have heard of, but who ChatGPT might think is me."
In the latter case, you are more likely to create a link in someone's mind through your own action. That's self-harm.
Either way, I'm not seeing how putting a distracting sentence on your page is a necessary step to pursuing a defamation case against someone else. If "they didn't anticipate my defamation and proactively defend their reputation" is a defense to defamation, I'd like to hear that stated specifically. Merely giving a possible motive why a potentially defamed person might post such a disclaimer doesn't mean its absence is a defense for the person committing defamation.
So the professor files a lawsuit that associates himself with the terrorist? Makes no sense to me. It is almost as if he wants his name associated with the terrorist.
It’s a disturbing trend that error messages seem to be getting more opaque. In the happy scenario, it’s just because of sloppy engineering.
"This would in practice require such companies to implement a "notice-and-blocking" system,"
You realize that these bots generally do not provide identical answers to identical prompts? I don't see how any blocking could work unless they specifically program the word "Battle" as a forbidden word.
Before suing to force a company to act, you should have to prove that there exists a practical and effective act that would do the job. In prior posts, EV suggested that the bots should just check their facts, but that was not a practical remedy with the technology available when EV said it. Today, some months later, much has changed about what is possible.
One AI claims to translate all prompts into Wikipedia searches. It then extracts the relevant part of the Wikipedia article and presents that to the user. All of the text shown in the answer comes from Wikipedia rather than being generated by the bot; in that sense, it is not a generative AI.
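The extractive approach this comment describes can be sketched in a few lines: answer a prompt by returning source text verbatim instead of generating new text. In this toy version, relevance is crude word overlap and the "article" is invented placeholder text about a fictional person; a real system would query a search API and use a proper ranker.

```python
import re

def extract_answer(prompt: str, article: str) -> str:
    """Return the article sentence sharing the most words with the prompt."""
    prompt_words = set(re.findall(r"\w+", prompt.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", article)
    return max(sentences,
               key=lambda s: len(prompt_words & set(re.findall(r"\w+", s.lower()))))

# Invented placeholder article about a fictional "Jane Doe".
article = ("Jane Doe is an aerospace professor. "
           "She teaches at a large state university. "
           "Her work covers unmanned aerial systems.")

answer = extract_answer("What is Jane Doe's profession?", article)
# answer == "Jane Doe is an aerospace professor."
```

The libel calculus arguably shifts under this design: every sentence shown traces back to an identifiable published source, which is exactly the attribution question the next comment raises.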
The NYT has published false information many times. What if a bot uses a false quote from the NYT archives but does not credit the NYT? Is the bot liable, or the NYT?
In fact, their programming virtually guarantees they will not.
Do you? Can't you force a company to desist from certain activities? I was under the impression you could--for example, to stop dumping waste in the street.
IANAL, but AFAIK you cannot ask for injunctive relief in a civil libel suit.