The Volokh Conspiracy

Negligence Theories in "Large Libel Models" Lawsuits Against AI Companies

This week and next, I'm serializing my Large Libel Models? Liability for AI Output draft. For some earlier posts on this (including § 230, disclaimers, publication, and more), see here; in particular, the two key posts are Why ChatGPT Output Could Be Libelous and An AI Company's Noting That Its Output "May [Be] Erroneous" Doesn't Preclude Libel Liability.

Yesterday, I wrote about lawsuits against AI companies claiming that they are knowingly or recklessly publishing, through their software, false and defamatory statements. Today, I'll start on the discussion of similar negligence claims.

[* * *]

[1.] Responsibility for the equipment a company uses

Say that R.R. is a private figure, and can show that the statements about him have caused "actual injury," in the form of "out-of-pocket loss" or emotional distress stemming from damage to reputation.[1] (Perhaps R.R. lost a contract that he was expecting to get, and it eventually came out that the reason was that the other party had looked up his name in ChatGPT.) Or say he can show that the statements about him are on a matter of "private concern" for libel purposes. Can he sue OpenAI, even in the absence of any specific notice to OpenAI that its output was defamatory?

I think so. A business is generally potentially responsible for harms caused by the equipment it uses in the course of business, at least when it negligently fails to take reasonable steps to minimize the risks of those harms. (As I'll turn to shortly, it's also potentially responsible for harms caused by products it sells, though right now AI companies actually directly provide access to the AI software, on their own computers.)

If a company knows that one of its machines sometimes emits sparks that can start fires and damage neighbors' property, the company must take reasonable steps to diminish these risks, even if it didn't deliberately design the machines to emit those sparks. If a company knows that its guard dogs sometimes escape and bite innocent passersby, it must take reasonable steps to diminish these risks (put up better fences, use stronger leashes, train the dogs better).

Likewise, say a newspaper knows that its publishing software or hardware sometimes produces the wrong letters, and those typos occasionally yield false and defamatory statements (e.g., by misidentifying a person who's accused of a crime). I think it may likewise be sued for libel—at least, in private figure cases, where negligence is the rule—on the theory that it should have taken steps to diminish that risk. The negligence standard applies to reporters' and editors' investigative, writing, and editing decisions; why shouldn't it also apply to the newspaper's decision to use tools that it knows will sometimes yield errors? And the same logic applies, I think, to an AI company's producing AI software and offering it for public use, when the company knows that the software often communicates false and defamatory statements.

[2.] The design defect liability analogy

Just to make this extra clear, we're not talking here about strict liability: The AI company wouldn't be liable for all errors in its output, just as newspapers generally aren't liable (under modern defamation law) for all errors in their pages. Rather, the question would be whether the company was negligent, and such a claim would be analogous to a negligent design product liability claim:

A product is defective when, at the time of sale or distribution, . . . the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design . . . and the omission of the alternative design renders the product not reasonably safe.[2]

The analogy is not perfect: Product liability law is limited to personal injury and property damage; it does not extend to economic loss or to emotional distress stemming from damage to reputation.[3] But the premise of negligent design product liability law is that one way that people can negligently injure persons or property is by distributing negligently designed products.[4] Likewise, one way that people can negligently damage reputations is by making available negligently designed software.

Product liability law is also limited to the sale or distribution of products, and excludes the provision of services.[5] But this stems from the fact that, in traditional service arrangements, a court can consider the reasonableness of the service provider's behavior in that particular relationship, while with products a court would generally need to look at the general design of the product. Even if offering an AI program is a service, it's analogous to the sale of a product—the AI company basically makes the design decisions up front and then lets the program operate without direct control, much as buyers of a product use it after it has left the manufacturer's control.

Of course, not all design that causes harm is negligent. Some harms aren't reasonably avoidable, at least without crippling the product's valuable features. Car accidents might be reduced by capping speed at 10 mph, but that's not a reasonable alternative design. Likewise, an AI company could decrease the risk of libel by never mentioning anything that appears to be a person's name, but that too would damage its useful features more than is justified. The design defect test calls for "risk-utility balancing"[6] (modeled on the Hand Formula), not for perfect safety. A company need not adopt an alternative design that "substantially reduc[es the product's] desirable characteristics" to consumers.[7]

Still, there might be some precautions that could be added, even beyond the notice-and-blocking approach discussed above.

[3.] Possible precautions: Quote-checking

One reasonable alternative design would be to have the AI software include a post-processing step that checks any quotes in its output against the training data, to make sure they actually exist—at least if the prompt is calling for fact rather than fiction[8]—and to check any URLs that it offers to make sure that they exist.[9] This may not be easy to do, because the AI software apparently doesn't have ongoing access to all its training data.[10] But that's a design choice, which presumably could be changed; and under design defect law, such a change may be required, depending on its costs and benefits. And if an AI company's competitor successfully implemented such a feature, that would be evidence that the feature is a "reasonable alternative design" and that its absence is unreasonable.[11]
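
To make the shape of such a post-processing check concrete, here is a minimal sketch in Python. Everything in it is a placeholder (the quote- and URL-extraction rules, the hypothetical corpus_index object and its contains method), and nothing here reflects any actual AI company's architecture; whether such a check is feasible at the scale of real training data is exactly the factual dispute discussed below.

```python
import re
import urllib.request


def extract_quotes(text: str) -> list[str]:
    """Pull out passages of meaningful length that the output places in quotation marks."""
    return [m.strip() for m in re.findall(r'"([^"]{15,})"', text)]


def extract_urls(text: str) -> list[str]:
    """Pull out URLs the output offers as sources."""
    return re.findall(r'https?://[^\s)\]"]+', text)


def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Check that a cited URL actually exists (via an HTTP HEAD request)."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False


def check_output(draft: str, corpus_index) -> list[str]:
    """Return warnings for quotes or URLs in a draft output that can't be verified.

    `corpus_index` is a hypothetical full-text index over the training data
    (anything with a `.contains(passage)` method); whether building and
    querying such an index is practical is the question the experts would
    address.
    """
    warnings = []
    for quote in extract_quotes(draft):
        if not corpus_index.contains(quote):
            warnings.append(f"Unverified quote: {quote!r}")
    for url in extract_urls(draft):
        if not url_resolves(url):
            warnings.append(f"Unreachable URL: {url}")
    return warnings
```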

This is especially important because quotes are so potentially reputation-damaging. As the Court explained in Masson v. New Yorker Magazine,

In general, quotation marks around a passage indicate to the reader that the passage reproduces the speaker's words verbatim. They inform the reader that he or she is reading the statement of the speaker, not a paraphrase or other indirect interpretation by an author. By providing this information, quotations add authority to the statement and credibility to the author's work. Quotations allow the reader to form his or her own conclusions and to assess the conclusions of the author, instead of relying entirely upon the author's characterization of her subject.[12]

Literate American readers have spent their lifetimes absorbing and relying on the convention that quotation marks generally mean that the quoted person actually said the particular words. To be sure, there are some exceptions, such as hypotheticals, or quotation marks to mean "so-called." As the Masson Court noted, "an acknowledgment that the work is so-called docudrama or historical fiction, or that it recreates conversations from memory, not from recordings, might indicate that the quotations should not be interpreted as the actual statements of the speaker to whom they are attributed."[13] But those are exceptions. Generally, seeing a quote attributed to, say, Reuters will lead many reasonable readers to assume that Reuters actually wrote this. And that is so even if, faced with the absence of quotes, the readers might be on guard for the possibility that the statement might not properly summarize or paraphrase the underlying sources.

Of course, a company can certainly argue that it would be technically infeasible to check quotes against the training data. Perhaps the training data is too large to host and to quickly search (despite the availability of modern storage and indexing technology). Or perhaps it's impossible to distinguish quotes generated in response to requests for fictional dialogue ("write a conversation in which two people discuss the merits of tort liability") from ones generated in response to requests for real data. Presumably the company would find independent computer science experts who could so testify. And perhaps a plaintiff wouldn't find any independent expert who could testify that such alternative designs are indeed feasible, in which case the plaintiff will lose,[14] and likely rightly so, since expert consensus is likely to be pretty reliable here.

But perhaps some independent experts would indeed credibly testify that the alternatives might be viable. The plaintiff will argue: "The AI company produced an immensely sophisticated program that it has touted as being able to do better than the average human law school graduate on the bar exam. It has raised $13 billion on the strength of its success. It was trained on a massive array of billions of writings. Is it really impossible for it to check all the quotes that it communicates—including quotes that could devastate a person's reputation—against the very training data that the company must have had in its possession to make the program work?" It seems to me that a reasonable juror may well conclude, at least if credible experts so testify, that the company could indeed have done this.

Liability for failing to check quotes might also be available under state laws that, instead of the dominant design defect approach I discuss above, use the "consumer expectations" design defect liability test. Under that test, design defect liability can be established when a product "did not perform as safely as an ordinary consumer would have expected it to perform."[15] For the reasons given in Part I.B, I'm inclined to say that an ordinary consumer would expect outright quotes given by AI software to be accurate (though if the AI producers sufficiently persuade the public that their software is untrustworthy, that might change the legal analysis—and the AI producers' profits).

 

[1] Such liability would normally be consistent with the First Amendment. See Gertz v. Robert Welch, Inc., 418 U.S. 323, 349–50 (1974).

[2] Restatement (Third) of Torts: Product Liability § 2(b).

[3] Id. § 1 & cmt. e; id. § 21.

[4] Restatement (Third) of Torts: Product Liability § 2 cmt. d:

Assessment of a product design in most instances requires a comparison between an alternative design and the product design that caused the injury, undertaken from the viewpoint of a reasonable person. That approach is also used in administering the traditional reasonableness standard in negligence. The policy reasons that support use of a reasonable-person perspective in connection with the general negligence standard also support its use in the products liability context.

[5] Id. § 19.

[6] Restatement (Third) of Torts: Product Liability § 2 cmt. d.

[7] See id. cmt. f & ill. 9 (providing, as an example, that a car manufacturer need not replace all its compact cars with more crashworthy full-sized models, because this would "substantially reduc[e the compact car's] desirable characteristics of lower cost and [higher] fuel economy").

[8] For instance, if an AI program is asked to write dialogue, the quotes in the output should largely be original, rather than accurate quotes from existing sources. This presupposes that it's possible for an AI company to design code that will, with some reasonable confidence, distinguish calls for fictional answers from calls for factual ones. But given the AI program's natural language processing of prompts, such a determination should be feasible.
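
By way of illustration only, here is a minimal sketch of that sort of prompt classification, using an off-the-shelf text classifier (scikit-learn's TfidfVectorizer and LogisticRegression) trained on a handful of invented example prompts. A real system would presumably rely on the language model itself or on far larger labeled datasets; this is just to show that "is this prompt asking for fiction or for fact?" is an ordinary text-classification problem.

```python
# Minimal sketch: classify prompts as fiction-seeking or fact-seeking.
# The example prompts and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

prompts = [
    "Write a conversation in which two people discuss the merits of tort liability",
    "Compose a short story about a dishonest accountant",
    "What has been written about John Smith's embezzlement conviction?",
    "Summarize news coverage of the Acme Corp. fraud allegations",
]
labels = ["fiction", "fiction", "factual", "factual"]

# TF-IDF features plus logistic regression: a standard text-classification pipeline.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(prompts, labels)

print(classifier.predict(["Write a dialogue about a corrupt senator"]))
print(classifier.predict(["What did Reuters report about R.R.?"]))
```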

[9] If the AI program outputs a quote that does appear in the training data, then the AI company would be immune from liability for that output under § 230 even if the quote itself proves to be factually inaccurate (so long as it's correctly rendered by the program). See supra note 17.

[10] [Cite will be added in a later draft.]

[11] See Restatement (Third) of Torts: Product Liability § 2 cmt. d ("How the defendant's design compares with other, competing designs in actual use is relevant to the issue of whether the defendant's design is defective.").

Note that the "open and obvious" nature of the danger shouldn't be relevant here. In some situations, if I'm injured by an open and obvious feature of a product that I'm using, the manufacturer might evade liability (though not always even then, id. & ill. 3), since I would have in effect assumed the risk of the danger. But this can't apply to harm to third parties—such as the victim of an AI program's defamatory output—who did nothing to assume such a risk.

[12] 501 U.S. 496, 511 (1991).

[13] Id. at 513.

[14] See, e.g., Pitts v. Genie Industries, Inc., 921 N.W.2d 597, 609 (Neb. 2019) (holding that expert evidence is required if the question is one of "technical matters well outside the scope of ordinary experience"); Lara v. Delta Int'l Mach. Corp., 174 F. Supp. 3d 719, 740 (E.D.N.Y. 2016) ("In order to prove liability grounded upon a design defect, New York law requires plaintiffs to proffer expert testimony as to the feasibility and efficacy of alternative designs.").

[15] Judicial Council of Cal. Jury Inst. [CACI] No. 1203.