The Volokh Conspiracy


Large Libel Models: The CheckBKG Analogy


To better understand the debate about possible defamation liability for OpenAI, based on its Large Libel Models' tendency to sometimes communicate entirely made-up quotes about people, supposedly (but not actually) drawn from leading media outlets, let's consider this hypothetical:

Say a company called OpenRecords creates and operates a program called CheckBKG, which does background checks on people. You go to CheckBKG.com, enter a name, and the program reviews a wide range of publicly available court records and provides a list of the criminal and civil cases in which the person has been found liable, including quotes from relevant court records. But unfortunately, the program sometimes errs, reporting information from an entirely wrong person's record, or even misquoting a record. OpenRecords acknowledges that the information may be erroneous, but also touts how good a job CheckBKG generally does compared to ordinary humans.

Someone goes to CheckBKG.com and searches for someone else's name (let's say the name Jack Schmack, to make it a bit unusual). Out comes a statement that Schmack has been convicted of child molestation and found liable in a civil case for sexual harassment, with quotes purportedly from the indictment and the trial court's findings of fact. The statement accurately notes Schmack's employer and place of residence, so readers will think this is about the right Schmack.

But it turns out that the statements about the court cases are wrong: The court records actually refer to someone entirely different (indeed, not someone named Schmack), or the software mis-summarized the court records, wrongly reporting an acquittal as a conviction and a dismissal of the civil lawsuit as a finding of liability. The quotes are also entirely made up by CheckBKG. It also turns out that Schmack has informed OpenRecords that its software is communicating false results about him, but OpenRecords hasn't taken steps to stop CheckBKG from doing so.

It seems to me that Schmack would be able to sue OpenRecords for defamation (let's set aside whether there are any specialized statutory schemes governing background checks, since I just want to explore the common-law defamation tort here):

  1. OpenRecords is "publishing" false and reputation-damaging information about Schmack, as defamation law understands the term "publishing"—communication to even one person other than Schmack is sufficient for defamation liability, though here it seems likely that OpenRecords would communicate it to other people over time as well.
  2. That this publication is happening through a program doesn't keep it from being defamatory, just as physical injuries caused by a computer program can be actionable. Of course, the program itself can't be liable, just as a book can't be liable—but the program's developer and operator (OpenRecords) can be liable, just like an author or publisher can be liable.
  3. OpenRecords isn't protected by 47 U.S.C. § 230, since it's being faulted for errors that its software introduces into the data. (The claim isn't that the underlying conviction information in the court records is wrong, but that OpenRecords is misreporting that information.)
  4. OpenRecords' noting that the information may be erroneous doesn't keep its statements from being defamatory. A speaker's noting that the allegation he's conveying is a rumor (which signals a risk of error), or that it's contradicted by the person being accused (which likewise signals a risk of error), doesn't keep the statements from being defamatory; likewise here.
  5. OpenRecords now knows that its software is outputting false statements about Schmack, so if it doesn't take steps to prevent that or at least to diminish the risk (assuming some such steps are technologically feasible), it can't defend itself on the grounds that this is just an innocent error.
  6. Indeed, I'd say that OpenRecords might be liable on a negligence theory even before being alerted to the specific false statement about Schmack (if Schmack isn't a public official or public figure), if Schmack can show that it carelessly implemented algorithms that created an unreasonable risk of error—for instance, algorithms that would routinely make up fake quotes, in a situation where a reasonably effective and less harmful alternative design was available.

If I'm right on these points, then it seems to me that OpenAI is likewise potentially liable for false and reputation-damaging communications produced by ChatGPT-4 (and Google is as to Bard). True, CheckBKG is narrower in scope than ChatGPT-4, but I don't think that matters to the general analysis (though it might influence the application of the negligence test, see below). Both are tools aimed at providing useful information—CheckBKG isn't, for instance, specifically designed to produce defamation. Both may, however, lead to liability for their creators when they provide false and reputation-damaging information.

I say "potentially" because of course this turns on various facts, including whether there are reasonable ways of blocking known defamatory falsehoods from ChatGPT-4's output (once OpenAI is informed that those defamatory falsehoods are being generated), and whether there are reasonable alternative designs that would, for instance, prevent ChatGPT-4's output from containing fake defamatory quotes. But I think that the overall shape of the legal analysis would be much the same.