
The First Stab Theory

Last week and this, I've been serializing my Large Libel Models? Liability for AI Output draft. For some earlier posts on this (including § 230, disclaimers, publication, and more), see here; one particularly significant point is at Communications Can Be Defamatory Even If Readers Realize There's a Considerable Risk of Error. Today, I turn to another argument against liability.

[* * *]

Perhaps the strongest argument for immunizing AI companies from libel liability is that AI program output can be a valuable first stab at the user's research. The AI provides a helpful tentative analysis—whether about a particular person, or about a broader subject—that can help guide the user in investigating further. AI companies shouldn't be chilled from providing such valuable initial information by the risk of liability.

This in some measure mirrors certain privileges, for instance the privilege for reports to law enforcement. If I think my neighbor is beating his children, I should be encouraged to convey my suspicions to the police, without fear of liability. Perhaps I might be carelessly misperceiving things. I might not have looked further into the matter to decide whether my initial perception was correct. I might well be wrong.

But that's fine, because it's the job of the police to investigate further. Better that they have as much input as possible (perhaps short of deliberate lies[1]), even if some of the input is mistaken, and mistaken in a way that could put an innocent neighbor in the uncomfortable position of being questioned by the police or even erroneously arrested. Indeed, some courts have recognized absolute privileges (precluding lawsuits even for deliberate lies) for complaints to certain kinds of quasi-judicial bodies.[2]

Just as the police should have maximum input into their investigations, the argument would go, so each of us should be able to gather maximum information for our own investigations of whatever interests us—not just through tools such as Google, which always point to real sources (and are immunized by § 230 from liability for falsehoods in those sources), but also through AI programs that might sometimes generate fictitious quotes and cite fictitious sources. Then we would have the responsibility to investigate the matter further before proceeding, for instance by refusing to do business with someone because of what we've found.

I appreciate the force of this argument, especially since AI programs do seem to be potentially very helpful sources of information, despite their errors. It would be a real loss if their functionality had to be sharply reduced in order to prevent libel. And while negligence liability would in theory balance the loss against the gain, in practice it may be so unpredictable that many AI companies might err on the side of overconstraining their programs.

Nonetheless, I think that we have to acknowledge that, practically speaking, many users will view AI programs' output as the final step in some inquiries, not the first stab. That is especially likely if there's little at stake for the users in the decision (and thus little reason for them to research further), even if a great deal is at stake for the people defamed by an AI program's output.

Say, for instance, that an AI program becomes integrated into search results, and a search for Dr. Jane Schmane (to use a somewhat less common name than the usual Joe Schmoe) routinely yields a false quote reporting on her supposedly having been found guilty of malpractice. A typical prospective patient probably won't follow up to see if the quote is real, because there are lots of doctors the prospective patient could turn to instead. But the aggregate of these decisions could ruin Schmane's practice, based on users' understandable perception that, when an AI produces a quote, it's actually a real quote.

The absolute privileges I describe above also generally apply only under special circumstances that include "safeguards to prevent abuse of the privilege,"[3] such as

  • means for punishing knowingly false statements (even if through criminal punishment for perjury or similar crimes rather than civil liability),[4]
  • means for "strik[ing] from the record" statements that prove to be "improper," so that the false statements aren't further redistributed,[5] and
  • the presence of an adversarial hearing at which the accused (or, better yet, the accused's counsel) could rebut the allegations.[6]

And qualified privileges are limited to situations where the statement is said to one of a particular set of narrow audiences for narrowly defined reasons;[7] if the statement is communicated to a different audience or for a different reason, it loses its immunity for being an "abuse of privilege."[8]

Of course, AI programs' output to their users lacks any such safeguards. There is no risk of any punishment (criminal or administrative) when AI companies repeatedly produce fake quotes; civil liability is the only legal constraint on that. What the AI says cannot be struck from the reader's mind—indeed, readers may well carelessly forward it to others. There is of course no opportunity for the target to rebut the false statement. And the AI program can, over time, distribute the statement to a potentially wide audience, even if just one query at a time.

So, again, I appreciate how AI programs can help users research various matters, including various people. In principle, if users generally viewed them only as a first stab at the end result, and wouldn't precipitously act on the initial response, it might make sense to create a new immunity for such programs. But in practice, I expect that users will often make certain decisions—such as a decision not to consider a prospective service provider or employment applicant—based just on the initial query. Indeed, the more reliable AI programs get (much as ChatGPT-4 has been reported to be much more reliable than ChatGPT-3.5), the more users are likely to do that. I therefore expect that AI programs' errors—such as fake reputation-damaging quotes—would seriously damage many people's reputations. And I'm therefore skeptical that it would be wise to categorically immunize AI companies from liability for such harms, if the harms could have been avoided using reasonable alternative designs.

[1] See, e.g., Cal. Civ. Code § 47(b)(5).

[2] See, e.g., Twelker v. Shannon & Wilson, Inc., 564 P.2d 1131 (Wash. 1977).

[3] Story v. Shelter Bay Co., 760 P.2d 368, 371 (Wash. Ct. App. 1988); Arroyo v. Rosen, 648 A.2d 1074, 1078 (Md. Ct. Spec. App. 1994).

[4] See, e.g., id.; Imperial v. Drapeau, 716 A.2d 244, 249 (Md. 1998); Restatement (Second) of Torts § 588 cmt. a.

[5] Story, 760 P.2d at 371.

[6] Imperial, 716 A.2d at 249; Arroyo, 648 A.2d at 1078.

[7] See, e.g., Restatement (Second) of Torts § 594 (1977) (protection of speaker's interest); id. § 595 (protection of the listener's or a third party's interest); id. § 602 (protection of the listener's interest, when the speaker and the listener have a special relationship).

[8] Restatement (Second) of Torts § 603 (1977) ("One who upon an occasion giving rise to a conditional privilege publishes defamatory matter concerning another abuses the privilege if he does not act for the purpose of protecting the interest for the protection of which the privilege is given."); id. § 604 ("One who, upon an occasion giving rise to a conditional privilege for the publication of defamatory matter to a particular person or persons, knowingly publishes the matter to a person to whom its publication is not otherwise privileged, abuses the privilege . . . .").