The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Possible Damages in Lawsuits Against AI Companies for Defamatory Communications by Their Products
This week and next, I'll be serializing my Large Libel Models? Liability for AI Output draft. For some earlier posts on this (including § 230, disclaimers, publication, and more), see here; in particular, the two key posts are Why ChatGPT Output Could Be Libelous and An AI Company's Noting That Its Output "May [Be] Erroneous" Doesn't Preclude Libel Liability. Here, I want to say just a few words about damages.
[* * *]
The majority view in the states is that "One who falsely publishes matter defamatory of another in such a manner as to make the publication a libel is subject to liability to the other although no special harm results from the publication."[1] To have a case, then, a plaintiff need not prove any particular financial loss. The First Amendment limits this doctrine in private figure/public concern cases that are premised on a showing of mere negligent falsehood (as opposed to reckless or knowing falsehood): In such cases, some showing of damage to reputation, and consequent financial loss or emotional distress, is required.[2] But in cases brought based on speech on matters of private concern, or in cases where reckless or knowing falsehood is shown (more on that below), damages need not be shown.
In any event, though, damages could often be shown, especially once the AI software is integrated into widely used applications, such as search engines. To be sure, the results of one response to one user's prompt will likely cause at most limited damage to the subject, and might thus not be worth suing over (though in some situations the damage might be substantial, for instance if the user is deciding whether to hire the subject, or do business with the subject). But of course what one person asks, others might as well; and a subpoena to the AI company, seeking information from any search history logs that the company may keep for its users (as OpenAI and Google do), may well uncover more examples of such queries. Moreover, as these AIs are worked into search engines and other products, it becomes much likelier that lots of people will see the same false and reputation-damaging information.
But beyond this, libel law has long recognized that a false and defamatory statement to one person will often be foreseeably repeated to others—and the initial speaker could be held liable for harm that is thus proximately caused by such republication.[3] In deciding whether such repetition is foreseeable, the Restatement tells us, "the known tendency of human beings to repeat discreditable statements about their neighbors is a factor to be considered."[4] Moreover, if the statement lacks any indication that the information should "go no further," that lack "may be taken into account in determining whether there were grounds to expect the further dissemination."[5]
[1] Restatement (Second) of Torts § 569.
[2] Gertz v. Robert Welch, Inc., 418 U.S. 323 (1974).
[3] Restatement (Second) of Torts § 576(c) (1977); see, e.g., Oparaugo v. Watts, 884 A.2d 63, 73 (D.C. 2005) ("The original publisher of a defamatory statement may be liable for republication if the republication is reasonably foreseeable."); Schneider v. United Airlines, Inc., 208 Cal.App.3d 71, 75, 256 Cal.Rptr. 71 (1989) ("the originator of the defamatory matter can be liable for each repetition of the defamatory matter by a second party, if he could reasonably have foreseen the repetition" (cleaned up)); Brown v. First National Bank of Mason City, 193 N.W.2d 547, 555 (Iowa 1972) ("Persons making libelous statements are, and should be, liable for damages resulting from a repetition or republication of the libelous statement when such repetition or republication was reasonably foreseeable to the person making the statement."). The law of some states seems to reject this theory, see, e.g., Fashion Boutique of Short Hills, Inc. v. Fendi USA, Inc., 314 F.3d 48, 60 (2d Cir. 2002), but it appears to be the majority view.
[4] Restatement (Second) of Torts § 576(c) cmt. d (1977).
[5] Id. cmt. d.
re: “especially once the AI software is integrated into widely used applications, such as search engines.”
A proposed legal regime for dealing with AI needs to realistically take into account its impact on that regime. If the notion of holding AI responsible for false statements and suing for libel actually is adopted and damages start flowing, it’ll shut down the use of current AI tech in mainstream companies. Contrary to the delusions of some who have posted here, this isn’t some simplistic “design negligence” problem: as far as most in the field are aware, there are major hurdles to making the tech factual. Yes, there could be some researcher someplace who has it figured out and we don’t know it yet, and a fix could be out soon; or it could take many years.
In that case, as I just posted on the prior post on this topic (I hadn’t checked for responses to my comments yet today): Pandora’s box is already open; this won’t shut down AIs. What’ll happen is the spread of user-run AIs or AIs from other countries. Vast numbers of people will still use AIs, but they will likely be even lower-quality AIs that are more prone to generating inaccurate information, and you will have made the issue worse for some large group of people. Yup, some won’t use those AIs if the major ones are shut down, so it’s unclear what the overall impact will be, but I’d suspect the likely end result will be more misinformation spread rather than less. It’s like black-market drugs being lower quality than the ones on the legal market. Unfortunately some progressives and anti-tech folk seem likely to adopt anything that fearful folks push to shut down tech, so it’s possible folks here will get their way.
Otherwise, hopefully the tech industry and those who grant human agency will win out. They’ll more productively create a framework that allows for the reality that you can’t close the barn door after AI has already escaped into public use. Perhaps this sort of restrictive framework, paranoid about false information, might have been enacted if it had been proposed a few years back, and might have led AI to develop in a different direction, toward other forms of AI that didn’t hallucinate, likely delaying it for a long time. It’s too late now, after it leaked from labs into the rest of the world.
If society decides it actually wants to embrace reasonable progress instead of fearful “precautionary principle” approaches, something analogous to Section 230 will arise that is more rigorously reasoned: it will grasp that AI systems are *not* people and reason about them from that perspective, rather than being held back by status quo bias that tries to apply flawed analogies from rules built for humans to things that are *not* human agents. Learn from how those who actually use and build these things think about them, just as those who wrote Section 230 seemed to actually try to grasp how tech biz folks thought, codified the way they thought about it, and allowed the net to grow and prosper.
RealityEngineer, formerly, damages for libel were a commonplace risk for all publishers, if they failed to take care with what they published. To reduce that risk, publishers almost universally read and edited contributions before publishing them. That did not, as your comment above implies, "[make] the issue worse for some large group of people." In fact, much less libel damage occurred. What makes you suppose it would work in the opposite way now?
I do not think your commentary is forthright. You are a booster for a currently libel-prone technology. You show concern that libel objections could become a threat to hypothetical development of that technology. Your argument has been, "Damn the libel torpedoes, full speed ahead."
If the aim is to boost AI writing technology in public esteem, why would it not be a wiser tactic to insist it not be adopted publicly until its libel-prone features have been corrected privately? On the basis of your voluminous comments, everyone can already see what your answer would be, "Because I advocate permitting the libel, if that correction proves impossible."
Like internet utopians who abound, you position yourself, not apologetically, but proudly, optimistically, as pro-libel. You candidly expect a public life in which libel becomes commonplace—even more so than it has become at present—where no practical remedy for damaged victims exists—and where nobody but the victims themselves will care. You do worse than expect that outcome; you advocate it. You actively oppose care for libel victims. You insist that everyone be trained not to care.
To be fair, you seem to suppose that if no one cares, that will mean no damage happens. Or perhaps that uncompensated libel damage will merely lose public significance, and thus become an ordinary, undisruptive feature of public life.
You want random victims treated as collateral damage, damage accepted as a cost to promote the technology you champion. You do that uncritically, on the basis of libertarian ideology which you promote as if it were an agreed-upon embodiment of reason. To venerate a technology you favor, you demand public sacrifice—sacrifice to be suffered silently, by particular victims targeted by happenstance, to promote the satisfactions of others.
In short, you argue in a familiar style—the style of a gun nut. This nation has no need to add yet another such scourge—or, more accurately, another such double scourge: the scourge of unjust damage, made worse by callous advocacy to normalize injustice.
Once again, you completely misunderstand applicable legal principles. "It's really hard to solve this problem" is not a reason why the AI companies aren't negligent.
To pick an example that might make it clearer: self-driving cars. A company puts one on the market knowing that, roughly 5% of the time, it will steer into oncoming traffic. "Well, actually, there are major hurdles to this technology" is not an argument that the car company isn't negligent. (Nor would "We disclosed when we sold the car that this risk existed" be a defense to a lawsuit.)
re: "To pick an example that might make it clearer: self-driving cars"
You are describing a harm to a human that happens in the real world where the machine is the only entity involved in the choice that led to an accident.
In this case, the action of believing a false statement is the choice of a human. The AI doesn't force the user to be negligent about evaluating the content.
Just because these machines generate false information doesn't mean they are negligent: the mere existence of a string of letters representing a false statement harms absolutely no one. It only arguably has done "harm" if a user chooses to believe it.
Are mind readers going to somehow determine that this thought crime exists? The only way it'll be known is if a user tells others, in which case *they* are the ones choosing to negligently spread a false statement to other humans, and they should be punished. That will teach people to be less negligent about what they believe, and allow the existence of these things.
Unfortunately you wish to imply that this user can't possibly be held entirely responsible for that, but that doesn't have to be the case. It's a choice the world can make: either treat the human race as lacking agency and unable to ever use anything that might ever be wrong, or hold people responsible for their actions.
The odds are that somehow this will get worked out, either by judges with enough imagination not to mindlessly apply precedents written for humans, or through laws that force a different way of looking at things and make clear that machines aren't humans and lack agency.
re: "Once again, you completely misunderstand applicable legal principles."
No: you completely misunderstand the reality that machines are different from humans, and there is no a priori reason that existing precedents need to be mindlessly applied to them as if they were. New ways of thinking about things that lead to more rational end results are possible.
Not only is this screed borderline sociopathic — sounding like a villain in a Michael Crichton novel is probably not a good thing — but it's misusing terminology. The precautionary principle says that one should assume that new policies or products are harmful until proven otherwise. Here, we already know that the products are flawed; no assumption required.
All we know is that you wish to claim they are harmful because humans should be viewed as a priori unable to take responsibility for the contents of their own minds. They shouldn't be allowed to take responsibility for determining what is fact or fiction.
The precautionary principle is used in cases where people consider only the claimed harm from something new: they don't consider the harm that comes from not allowing that new thing to be used. The "unseen" opportunity cost.
It's likely the opportunity cost will be that people use subpar alternatives from other countries, or open-source ones they run themselves, just like they do with recreational drugs. There may be more misinformation rather than less, because of the simplistic thinking that shuts down better-quality (even if not fully truthful) US tools, or shuts down the evolution of tools to deal with the problems of these tools.
The "borderline sociopathic" view is that humans can't be allowed to take responsibility for being the ones to separate fact from fiction since you deem them unworthy and incapable of being trusted with that choice.
EV, you already saw a large amount of pushback on this thesis. Time will tell how it fares when published.
Perhaps your article might be used to lobby Congress for Section 230-like immunity for AI providers. Personally, I think if it gets to the courts it will create great confusion, splits, and frivolous litigation.
RealityEngineer — if it isn't a bot itself — repeatedly posting the same long uninformed screeds over and over again is not a "large amount of pushback."
I look forward to next week's obsession: "Suing Logitech for the libel committed using its keyboards".
This, of course, is ridiculous. OpenAI didn't achieve the increasingly obnoxious valuations it has (most recently $29B) by casting GPT as some sort of dumb pass-through appliance or lookup tool. Instead, it exuberantly describes a product that has "advanced reasoning capabilities" and can "produce factual responses." With that sort of upside comes this sort of downside. They can't have it both ways.
Sure they can. Overselling your product, collecting a bunch of capital, and then eventually producing something that's much more achievable (but not as attention-getting) that breaks even is a time-tested Silicon Valley strategy.
So is overselling your product, producing nothing, and then running away with the money, but that doesn't appear to be the case here.
But that's not what we're talking about here. What we're talking about here is how Volokh doesn't understand that ChatGPT and whatever aren't oracles, they're mirrors. And because he doesn't understand that, he thinks he can sue them for libel.
I'd imagine the primary reason he doesn't understand that is because it's just not so. From the horse's mouth:
Moreover, if the statement lacks any indication that the information should “go no further,” that lack “may be taken into account in determining whether there were grounds to expect the further dissemination.”
This is where I think the disclaimer could come into play. If the AI provider’s Terms of Use make it clear that the AI is factually unreliable, that could amount to a declaration that the information shouldn’t be passed along publicly.
If blanket disclaimers of accuracy tucked away in a corner could serve to protect a publisher from defamation liability, I'd imagine every single newspaper would have started printing one at the bottom of the classifieds section decades ago.
You're about 10 articles behind in the discussion, so let me catch you up. We already covered liability, and Eugene made the point you're making. So yes, the AI can't avoid liability that way.
But now we're talking about damages. The newspaper analogy doesn't work here, because the newspaper already got published to the public. The damage has been done.
But the AI response only goes to one person. The question is whether the AI provider is responsible for damages that come about from that other person making the statement public.
Stated differently, I was patiently reiterating something that we'd worked through 10 articles ago after you went for the same too-good-to-be-true silver bullet. If you want to try your hand at a set of magic words for a product like ChatGPT that would be, by your own admission, insufficient to avoid liability for the initial defamatory speech but somehow would suffice to cut off liability for reasonably foreseeable republication of that same speech, maybe that would help drive the point home.
Damages follow liability. Hopefully you're just pretending they're separable because you're trying to salvage your point.
Did you even read the post that you're commenting on? Eugene found it necessary to write a whole separate post on damages exactly because it is a different analysis than liability.
So far, all Eugene has managed to show is that AI may be liable for defamation, but with de minimis damages. It's very difficult to construct a theory of significant damages even when there's liability. I think Eugene has failed to do so for the reason above.
Do you understand the word "damages" now? It's what you get when you win the lawsuit. Winning a lawsuit is a bit pointless if there's nothing to collect.
Liability isn't just a theoretical concept -- it's liability for damages. Once you pass that threshold, the amount of damages is a quantitative question. This isn't difficult unless you're trying to make it so.
And Eugene specifically addressed your too-clever, oft-tried "every specific instance of publication is de minimis damages, so there's no actual harm no matter how many republications" argument in his post. One of the specific reasons for vicarious liability doctrines is to allow a party to be made whole when damages are distributed and/or difficult to collect.
Right. And I was pushing back on the argument he made in his post. Now you're caught up.
An objective observer likely would conclude I was there from the beginning, and your ankle-biting posts just serve to display a basic ignorance of the relevant law in this area. But carry on with whatever it takes to save face -- I know you're a last word sort of guy.
The sufficiency of the disclaimer, weighed against other contradictory statements by the company, would be a question for the jury. Maybe the deep-pocketed defendant wins this time.
"… a subpoena to the AI company, seeking information from any search history logs that the company may keep for its users (as OpenAI and Google do), may well uncover more examples of such queries…"
The easiest remedy for this is to have a policy of only keeping that information for a short time. Or implement end-to-end encryption so users may have access to those histories but the provider does not.
But it’s not clear whether either action is economically justifiable given no information about the likelihood or amount of risk of liability. One or both of those remedies may be much more justifiable to protect user privacy in the event of a security breach.
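To make the first remedy concrete, here is a minimal sketch of what a short-retention policy could look like in code; the SQLite database, the query_logs table, its schema, and the 30-day window are all hypothetical placeholders, not any actual provider's system.

```python
# Minimal sketch of a short-retention policy for chat/query logs.
# The database file, the query_logs table, and its created_at column are
# hypothetical; a real provider would adapt this to its own log store.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # illustrative window, not a recommendation


def prune_old_logs(db_path: str) -> int:
    """Delete log rows older than the retention window; return how many were removed."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    with sqlite3.connect(db_path) as conn:
        # Assumes created_at is stored as an ISO-8601 UTC string, so string
        # comparison against the cutoff is chronologically correct.
        cur = conn.execute("DELETE FROM query_logs WHERE created_at < ?", (cutoff,))
        return cur.rowcount


if __name__ == "__main__":
    print(f"Pruned {prune_old_logs('logs.db')} expired log entries")
```

A real system would run something like this as a scheduled job (or rely on the database's own expiration features); the point is only that short retention is mechanically simple, whatever its costs and benefits turn out to be.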
It’s also not clear whether the chatbots produce a variety of statements or whether (or how much) they tend to repeat. Wouldn’t a variety of statements to a variety of people need individual claims and individual court findings for each statement? Say you find 10,000 different statements. Do you track down 10,000 individuals who read each statement and try to calculate damages on an individual basis?