The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Prof. John Goldberg (Harvard) on "Large Libel Models"
I was delighted to see a brief review of my article on libel by AI published yesterday in JOTWELL by Prof. Goldberg, a leading expert on tort law. He summarizes and evaluates the article, and then offers this counterpoint:
For the most part, I find its analysis persuasive, particularly its bottom-line assessment that companies that provide A.I. using LLMs are substantially more vulnerable to defamation liability than are traditional internet platforms such as Google. I would suggest, however, that the prospects for liability are in some ways less grim than Professor Volokh supposes, and will offer a different perspective on how disturbed we ought to be about the prospect of significant liability.
On the first point, much will depend on the defamation scenarios that actually occur with any frequency in the real world. A private-figure plaintiff who can prove that their job application was turned down because their prospective employer's A.I. query generated a defamatory hallucination about them would seem to have a strong claim. By contrast, suppose that P (also a private figure) learns from their friend F that a certain query about P will generate a hallucination that is defamatory of P, but also that P does not know who among their friends, neighbors, and co-workers (if any) have seen the hallucination. It seems likely that P will face an uphill battle establishing liability or recovering meaningful compensation.
Even assuming P can prove that the program's creator or operator was at fault (assuming a fault standard applies), P is likely to face significant challenges proving causation and damages, particularly given modern courts' inclination to cabin juror discretion on these issues. I suspect this is especially likely to be the case if the program includes – as many programs now do – a prominent disclaimer that advises users independently to verify program-generated information before relying on it. While, as noted, disclaimers do not defeat liability outright, they might well render judges (and some juries) skeptical in particular cases about causation and damages.
Apart from doctrine, one must also take account of realpolitik, as Volokh recognizes.
Back in 1995, it took only a whiff of possible internet service provider liability for the tech industry to get Congress to enact CDA 230. And Volokh tells us that A.I. is already a $30 billion business (P. 540). If, as seems to be the case, the political and economic stars favoring the protection of tech are still aligned, legislation limiting or defeating liability for A.I. defamation could well be on the horizon, particularly in the wake of a few court decisions imposing or even portending significant liability.
The foregoing prediction rests not only on an assessment of the tech industry's political clout, but also on a read of our legal-political culture. For most of the twentieth century, courts and legislatures displayed marked hostility to immunity from tort liability. (Witness the celebrated abrogation of charitable and intrafamilial immunities.) Today, by contrast, courts and legislatures seem quite comfortable with the idea of immunizing actors from liability in the name of putative greater goods. Nowhere is this trend more evident than in their expansive application of CDA 230. While Professor Volokh worries about the prospect of 'too much' A.I. defamation liability, the more reasonable fear may be too little. Indeed, it would seem to be a bit of good news that extant tort law, if applied faithfully by the courts, stands ready to enable at least some victims of defamatory A.I. hallucinations to hold accountable those who have defamed them.
“Back in 1995, it took only a whiff of possible internet service provider liability for the tech industry to get Congress to enact CDA 230.”
As I recall, Section 230 of the Communications Decency Act was motivated not so much by ISP liability in general as by a concern that platforms might refuse to engage in moderation because moderation would bring liability with it. The act was, after all, motivated by a desire to control communications over the internet, not to protect platforms as a general matter.
Section 230 is what we talk about because it survived court review, not because it was the central purpose of the act.
Yes, the tech industry lobbied against the Communications Decency Act. They did not seem to realize what a good deal it was for them.
It wasn't until similar companies in places like the UK and France, operating under continuous threat of lawsuits, failed to achieve similar growth that people realized what a boon it was to US companies. And to retirement investment funds.
And now, politicians threaten to muck about with it, unless the companies, of their own free will, censor the way politicians want.
But companies in the US weren't under continuous lawsuit threat to begin with; they already had the 1st Amendment as a shield, after all.
Section 230 wasn't needed to gain a shield from liability. It was needed to censor while retaining it. If they just remained passive conduits of other people's communications, they didn't need any protection.
Remember, it's Section 230 of the Communications Decency Act, which wasn't intended to free up companies; it was intended to censor 'indecent' speech.
They couldn't "remain" passive conduits of other people's communications, because they never were any such thing.
Spam filtering is technically a form of censorship, even if it's not usually thought of that way because it's meant to align with the preferences of the people (not) receiving the speech in question.
" . . . motivated by a desire to control communications over the internet . . . . "
Just wondering . . . how would 230 enable the govt (we're talking about govt control, right?) to control communications over the internet?
The ACT, as a whole, was motivated by a desire to control communications, in the sense of censoring stuff that was thought indecent. Section 230 shielded platforms from liability when they did, on their own initiative, the censorship the government wanted.
Under existing precedent, they were already free of liability if they didn't censor.
"prove that their job application was turned down because their prospective employer's A.I. query generated a defamatory hallucination" -- If I were on the jury, I would say that it would be entirely the fault of the employer for doing a faulty search.
Employers can easily do faulty searches today without ChatGPT, and get an applicant mixed up with one having a similar name, or jump to a faulty conclusion based on a snippet. The search tool providers have not been liable.
I don't see why ChatGPT presents any new difficulties. Employers should learn to use the tools properly.
Well, but as things stand, an employer who does a search in good faith might come across defamatory content and rely on it, and if they did, who would be liable?
The originator of the defamatory content, right? Not the employer. In this case that would be the AI, or the AI service provider.
The argument for the employer being liable, of course, is that the AI companies in their TOS warn you that the AI may feed you false information. But couldn't that be the liability equivalent of leaving a loaded gun lying out in public and just posting a sign, "Do not pull trigger"?
If I were on the jury, I would say that it would be entirely the fault of the employer for doing a faulty search.
The anti-lawyer kook David Behar who used to post here (until — I believe — he was finally banned by Prof. Volokh) used to argue that liability for defamation should not fall on the person who said the false thing, but on the person who believed him. Needless to say, that's not how defamation law works.
Volokh bans kooks? News to me.
In the case of ChatGPT, there is no "person who said the false thing". There are just servers that regurgitate what they found on the web. Defamation law has not applied to such things in the past.
ChatGPT has now had millions of users for a year. Has anyone even found one example of someone being damaged by being defamed by it?
The servers do NOT regurgitate what they found on the web. They use it as a basis for puking up a likely answer to a prompt. Not likely in a truth sense, though. Likely in a probabilistic sense, auto-complete on steroids.
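To make that concrete, here is a minimal toy sketch of what "likely in a probabilistic sense" means. The word table and its probabilities are invented purely for illustration; this is not how any real model is built or served, just the auto-complete idea in miniature:

    import random

    # Toy illustration of "auto-complete on steroids": pick each next word by
    # sampling from a table of learned probabilities, with no notion of whether
    # the resulting sentence is true. All numbers here are made up.
    next_word = {
        ("professor", "was"): {"accused": 0.5, "promoted": 0.3, "arrested": 0.2},
        ("was", "accused"):   {"of": 1.0},
        ("was", "arrested"):  {"for": 1.0},
    }

    def continue_text(words, steps=3):
        for _ in range(steps):
            dist = next_word.get(tuple(words[-2:]))
            if not dist:
                break
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(continue_text(["the", "professor", "was"]))
    # e.g. "the professor was accused of" -- likely-sounding, not necessarily true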
What are you saying -- that the use of probability somehow creates defamation liability? Google searches also use probability. If I search on your name, Google will return a list of likely hits. Some of them might be unfavorable to you, and you might complain that the list of links is not a fair sample. I do not see much legal difference with ChatGPT.
Well, that's because you don't really understand the legal or technical issues. "The list of links is not a fair sample" is not a claim of defamation.
Google only returns things that someone else wrote. ChatGPT crafts new things that nobody else wrote.
I know exactly how ChatGPT works. I program AI models for a living. Yes, Google links are not defamation. That is the point.
ChatGPT is not truly crafting anything new. Everything it outputs is transformed from what it digested from web data and other sources.
Volokh argues that ChatGPT is different enough to be subject to defamation law. We shall see if any courts agree with him. I doubt it. Most of his arguments would also apply to a simple Google search producing a list of links. Yes, those links are to other sites, but a reader could assume that Google has presented that page of links as an accurate response to the query. That is Volokh's main argument for defamation liability.
I did not say that Google links were not defamation. I said that the claim that "the list of links was not a fair sample" did not constitute a claim of defamation. When I do a google search on 'Eugene Volokh,' Google says, "Here's a list of pages that contain the phrase 'Eugene Volokh.'" Google's unique contribution is deciding which order to display them, but that isn't an assertion of fact and thus isn't defamatory.
In contrast, ChatGPT is in fact crafting something new. That it relies on web data is irrelevant. If I cut a bunch of letters and words out of a pile of magazines to create a ransom note, the fact that each of the letters and words was found in another source does not mean that the note isn't something entirely new, attributable only to me and not to the magazine publishers.
If you cut letters and words out of magazines to send a message, then sure, you are responsible for what you say. But if a machine is doing it, then what is the argument? It does not intend to defame anyone. It is just piecing letters together according to its programming, which is to be consistent with the data it encountered.
ChatGPT might confuse you with a criminal of the same name. Everybody knows and expects that. It is not defamation.
"ChatGPT is not truly crafting anything new. Everything it outputs is transformed from what it digested from web data and other sources."
Make up your mind which it is.
A while back I decided to test Bard, asking for the title of a book I'd read in my childhood, giving a brief description from memory. It came back claiming I was describing "The Moon is a Harsh Mistress", by Heinlein, and gave a totally off-the-wall plot summary that bore no resemblance AT ALL to Heinlein's book, aside from being set on the Moon.
This was stuff it scraped off the internet and regurgitated only in the sense that a ransom note made of pasted-together letters from magazines is a magazine excerpt.
I'd have no problem if they just had a faulty source, but these AIs *don't* just regurgitate what they found on the web. They flat-out make stuff up which exists *nowhere* on the web. They'll make up sources, or make up nonexistent quotes from a real and reputable source.
It's really hard to find an example of someone defamed by ChatGPT. If it tells a potential employer that I'm a murderer, they aren't going to tell me that they think I'm a murderer, let alone why they think that; they're just going to move on to the next applicant in the list. But we've at least seen a couple of lawsuits mentioned in this blog, such as https://reason.com/volokh/2023/06/06/first-ai-libel-lawsuit-filed/ .
IMO a better defense against a defamation claim is that the GPT AI merely translates user requirements (in the prompt) into something that fulfills that requirement. That makes the user the defendant, not the AI creator/owner.
Here's an example. "Make a headline for a story talking about a UCLA professor accused of sexual abuse. Use a name of a real UCLA professor." Today, ChatGPT would refuse to honor that request, but even if it did, the user is the one committing the defamation.
No matter what the AI says that sounds defamatory, the defense lawyers could draw a rational path between the user's prompt and the AI's responses.
Right: "Tell me about Archbald Tuttle." That's obviously a request that the AI generate a defamatory fiction!
If I ask it "Have any UCLA professors been accused of sexual abuse?" and it says "Yes, Professor Smith has, and here's a quote from the LA Times about his arrest", but it made it all up, then you can't say I'm guilty of defamation for merely asking the question. That's ridiculous on multiple levels.
These things are called LLMs, large language models, for a reason. What they do is translate language.
If you tell it to make up a limerick about a UCLA professor, it will make one up and you will say, good. Making up a story about a professor and a crime is no different from the AI's point of view. It works with language, not facts.
If it says “Yes, Professor Smith has, and here's a quote from the LA Times about his arrest,” it did so because of something you asked it to do. Yes, it is your fault. The AI is smart but it is not human. You can't apply human standards to what it does.
The correct answer to "Have any UCLA professors been accused of sexual abuse?" is "No." (I hope!) Not, "Yes, so-and-so was, and here's a story about it in the media."
Of course it's not the AI's "fault" because it's actually a dumb (not smart) program w/o any actual "intelligence," and thinking otherwise is anthropomorphizing it. But it's the fault of the people who programmed it to do that.
I'd say the best practice for AI providers would be to have a Red team write an AI factchecker to vet the content before it's presented to the user. Seems like an easier lift than writing the content in the first place.
Especially if you have the content generator generate footnotes.
That's what a serious AI vendor would do, but what do I know, I'm just a retired systems architect.
That's a much easier fix than trying to rewrite a very creative AI to be less creative.
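For what it's worth, here is a minimal sketch of that kind of two-pass, generate-then-verify pipeline. The function names and the toy logic are hypothetical stand-ins, not any vendor's actual API; a real checker would have to retrieve and compare sources:

    # Hypothetical sketch of a two-pass pipeline: draft an answer, then have a
    # separate checker vet its factual claims before anything reaches the user.
    # Both model calls are stubbed with toy data; nothing here is a real API.

    def generate_answer(prompt: str) -> tuple[str, list[str]]:
        # Stand-in for the generative model: a draft answer plus the factual
        # claims (footnotes) it rests on.
        return ("Professor Smith was arrested in 2019.",
                ["The LA Times reported Professor Smith's arrest in 2019."])

    def claim_is_supported(claim: str) -> bool:
        # Stand-in for the second-pass "red team" checker: is the claim backed
        # by a retrievable source? Toy logic: nothing is verified here.
        verified_sources: set[str] = set()
        return claim in verified_sources

    def answer_with_vetting(prompt: str) -> str:
        draft, claims = generate_answer(prompt)
        if all(claim_is_supported(c) for c in claims):
            return draft
        # Withhold anything the checker could not verify.
        return "I couldn't verify the sources for that answer, so I won't repeat it."

    print(answer_with_vetting("Have any UCLA professors been accused of sexual abuse?"))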
OpenAI says they have more than 100 million weekly users. Probably about 1 billion answers per day. How many members of the red team would you need to fact check 1 billion answers per day?
The real problem is that these systems don't actually have any concept of "truth" or "fact". If they were capable of fact checking the output, they'd be capable of generating factual output in the first place.
Left out so far? Any insight into the legal notion of libel per se.
Also? Nothing from commenters to show they understand that libel is about the responsibilities of publishers, and the actual damages that false publications do inflict.