Other Torts (Besides Libel) and Liability for AI Companies
Last week and this, I've been serializing my Large Libel Models? Liability for AI Output draft. For some earlier posts on this (including § 230, disclaimers, publication, and more), see here; one particularly significant point is at Communications Can Be Defamatory Even If Readers Realize There's a Considerable Risk of Error. Today, I close with some thoughts on how my analysis, which has focused on libel, might be generalized to other torts.
[* * *]
[A.] False Light
Generally speaking, false light tort claims should likely be treated the same way as defamation claims. To be sure, the distinctive feature of the false light tort is that it provides for a remedy when false statements about a person are not defamatory, but are merely distressing to that person (in a way the reasonable person test would recognize). Perhaps that sort of harm can't justify a chilling effect on AI companies, even if harm to reputation can. Indeed, this may be part of the reason why not all states recognize the false light tort.
Nonetheless, if platforms are already required to deal with false material—especially outright spurious quotes—through a notice-and-blocking procedure, or through a mandatory quote-checking mechanism, then adapting this to false light claims should likely produce little extra chilling effect on AIs' valuable design features.
[B.] Disclosure of Private Facts
An LLM is unlikely to produce information that constitutes tortious disclosure of private facts. Private information about people covered by the tort—for instance, about sexual or medical details that had not been made public—is unlikely to appear in the LLM's training data, which is largely based on publicly available sources. And if the LLM's algorithms come up with false information, then that's not disclosure of private facts.
Nonetheless, it's possible that an LLM's algorithm will accidentally produce accurate factual claims about a person's private life. ChatGPT appears to include code that prevents it from reporting on the most common forms of private information, such as sexual or medical history, even when that information has been publicized and is thus not tortious; but not all LLMs will include such constraints.
In principle, a notice-and-blocking remedy should be available here as well. And because the disclosure of private facts tort generally requires intentional conduct, negligence liability should largely be foreclosed.
[C.] False Statements That Are Likely to Lead to Injury
What if an LLM outputs information that people are likely to misuse in ways that harm persons or property—for instance, inaccurate medical information?[1]
Current law is unclear about when falsehoods are actionable on this theory. The Ninth Circuit rejected a products liability and negligence claim against the publisher of a mushroom encyclopedia that allegedly "contained erroneous and misleading information concerning the identification of the most deadly species of mushrooms,"[2] partly for First Amendment reasons.[3] But there is little other caselaw on the subject. And the Ninth Circuit decision left open the possibility of liability in a case alleging "fraudulent, intentional, or malicious misrepresentation."[4]
Here too, the model discussed above for libel may make sense. If there is liability for knowingly false statements that are likely to lead to injury, an AI company might be liable when it receives actual notice that its program is producing false factual information, but refuses to block that information. Again, imagine that the program is producing what purports to be a genuine quote from a reputable medical source but is in fact made up by the algorithm. Such information may seem especially credible, which may make it especially dangerous; and it should be relatively easy for the AI company to add code that blocks the distribution of the spurious quote once it has received notice of it.
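To make that concrete, here is a minimal sketch, in Python, of the kind of notice-and-blocking filter the argument contemplates. The names (SPURIOUS_QUOTES, filter_output) and the sample quote are purely hypothetical; this illustrates the design feature being discussed, not any actual company's code.

```python
# Illustrative sketch only: a post-generation filter of the sort the
# notice-and-blocking argument contemplates. All names are hypothetical.

# Quotes the company has received actual notice are fabricated.
SPURIOUS_QUOTES = {
    "According to the 2021 treatment guidelines, the drug is safe at any dose.",
}

def filter_output(generated_text: str) -> str:
    """Withhold any output containing a quote the company has been told is made up."""
    for quote in SPURIOUS_QUOTES:
        if quote in generated_text:
            return "[Response withheld: it contained a quotation reported as fabricated.]"
    return generated_text
```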
Likewise, if there is liability on a negligent design theory, for instance for negligently failing to add code that checks quotes and blocks the distribution of made-up quotes, then that quote-checking obligation would presumably extend to all quotes, not just potentially libelous ones.
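A similarly hedged sketch of that quote-checking design feature: before output is returned, any quotation-marked passage is matched against the source documents the system can actually point to, and unverifiable quotes are suppressed. SOURCE_CORPUS and source_corpus_contains are illustrative placeholders for whatever retrieval layer a real system would use.

```python
import re

# Illustrative sketch only: a quote-checking step of the sort a negligent
# design theory might demand. SOURCE_CORPUS stands in for whatever document
# index a real system would consult.
SOURCE_CORPUS = [
    "Text of the medical references the model is permitted to quote...",
]

def source_corpus_contains(quoted_text: str) -> bool:
    """Return True only if the quoted passage appears verbatim in a known source."""
    return any(quoted_text in doc for doc in SOURCE_CORPUS)

def check_quotes(generated_text: str) -> str:
    """Withhold output containing quotation-marked passages (20+ characters)
    that cannot be matched to any underlying source document."""
    for quoted in re.findall(r'"([^"]{20,})"', generated_text):
        if not source_corpus_contains(quoted):
            return ("[Response withheld: it contained a quotation that could not "
                    "be verified against a source.]")
    return generated_text
```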
[D.] Accurate Statements That Are Likely to Facilitate Crime by Some Readers
Sometimes an AI program might communicate accurate information that some readers can use for criminal purposes. This might include information about how one can build bombs, pick locks, bypass copyright protection measures, and the like. And it might include information that identifies particular people who have done things that may make them targets for retaliation by some readers.
Whether such "crime-facilitating" speech is constitutionally protected from criminal and civil liability is a difficult and unresolved question, which I tried to deal with in a separate article.[5] But, again, if there ends up being liability for knowingly distributing some such speech (possible) or negligently distributing it (unlikely, for reasons I discuss in that article), the analysis given above should apply there.
If, however, legal liability is limited to purposeful distribution of crime-facilitating speech, as some laws and proposals provide,[6] then the company would be immune from such liability, unless the employees responsible for the software were actually deliberately seeking to promote such crimes through the use of their software.
[1] See Jane Bambauer, Negligence Liability and Autonomous Speech Systems: Some Thoughts About Duty, 3 J. Free Speech L. __ (2023).
[2] Winter v. G.P. Putnam's Sons, 938 F.2d 1033, 1034 (9th Cir. 1991).
[3] Id. at 1037.
[4] Id. at 1037 n.9.
[5] Eugene Volokh, Crime-Facilitating Speech, 57 Stan. L. Rev. 1095 (2005).
[6] See id. at 1182–85.