
Prof. John Goldberg (Harvard) on "Large Libel Models"


I was delighted to see that Prof. Goldberg, a leading expert on tort law, published a brief review in JOTWELL yesterday of my article on libel by AI. He summarizes and evaluates the article, and then offers this counterpoint:

For the most part, I find its analysis persuasive, particularly its bottom-line assessment that companies that provide A.I. using LLMs are substantially more vulnerable to defamation liability than are traditional internet platforms such as Google. I would suggest, however, that the prospects for liability are in some ways less grim than Professor Volokh supposes, and will offer a different perspective on how disturbed we ought to be about the prospect of significant liability.

On the first point, much will depend on the defamation scenarios that actually occur with any frequency in the real world. A private-figure plaintiff who can prove that their job application was turned down because their prospective employer's A.I. query generated a defamatory hallucination about them would seem to have a strong claim. By contrast, suppose that P (also a private figure) learns from their friend F that a certain query about P will generate a hallucination that is defamatory of P, but also that P does not know who among their friends, neighbors, and co-workers (if any) have seen the hallucination. It seems likely that P will face an uphill battle establishing liability or recovering meaningful compensation.

Even assuming P can prove that the program's creator or operator was at fault (assuming a fault standard applies), P is likely to face significant challenges proving causation and damages, particularly given modern courts' inclination to cabin juror discretion on these issues. I suspect this is especially likely if the program includes, as many programs now do, a prominent disclaimer that advises users to verify program-generated information independently before relying on it. While, as noted, disclaimers do not defeat liability outright, they might well render judges (and some juries) skeptical in particular cases about causation and damages.

Apart from doctrine, one must also take account of realpolitik, as Volokh recognizes.

Back in 1995, it took only a whiff of possible internet service provider liability for the tech industry to get Congress to enact CDA 230 the following year. And Volokh tells us that A.I. is already a $30 billion business (P. 540). If, as seems to be the case, the political and economic stars favoring the protection of tech are still aligned, legislation limiting or defeating liability for A.I. defamation could well be on the horizon, particularly in the wake of a few court decisions imposing or even portending significant liability.

The foregoing prediction rests not only on an assessment of the tech industry's political clout, but also on a read of our legal-political culture. For most of the twentieth century, courts and legislatures displayed marked hostility to immunity from tort liability. (Witness the celebrated abrogation of charitable and intrafamilial immunities.) Today, by contrast, courts and legislatures seem quite comfortable with the idea of immunizing actors from liability in the name of putative greater goods. Nowhere is this trend more evident than in their expansive application of CDA 230. While Professor Volokh worries about the prospect of "too much" A.I. defamation liability, the more reasonable fear may be "too little." Indeed, it would seem to be a bit of good news that extant tort law, if applied faithfully by the courts, stands ready to enable at least some victims of defamatory A.I. hallucinations to hold accountable those who have defamed them.