The Volokh Conspiracy


AI in Court

$10K Sanction for AI Hallucination in Appellate Brief


From today's decision in Noland v. Land of the Free, L.P., by Justice Lee Smalley Edmon, joined by Justice Anne Egerton and Riverside Superior Court Judge Kira Klatchko:

[N]early all of the legal quotations in plaintiff's opening brief, and many of the quotations in plaintiff's reply brief, are fabricated. That is, the quotes plaintiff attributes to published cases do not appear in those cases or anywhere else. Further, many of the cases plaintiff cites do not discuss the topics for which they are cited, and a few of the cases do not exist at all. These fabricated legal authorities were created by generative artificial intelligence (AI) tools that plaintiff's counsel used to draft his appellate briefs. The AI tools created fake legal authority—sometimes referred to as AI "hallucinations"—that were undetected by plaintiff's counsel because he did not read the cases the AI tools cited.

Although the generation of fake legal authority by AI sources has been widely commented on by federal and out-of-state courts and reported by many media sources, no California court has addressed this issue. We therefore publish this opinion as a warning. Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI or any other source—that the attorney responsible for submitting the pleading has not personally read and verified. Because plaintiff's counsel's conduct in this case violated a basic duty counsel owed to his client and the court, we impose a monetary sanction on counsel, direct him to serve a copy of this opinion on his client, and direct the clerk of the court to serve a copy of this opinion on the State Bar….

To state the obvious, it is a fundamental duty of attorneys to read the legal authorities they cite in appellate briefs or any other court filings to determine that the authorities stand for the propositions for which they are cited. Plainly, counsel did not read the cases he cited before filing his appellate briefs: Had he read them, he would have discovered, as we did, that the cases did not contain the language he purported to quote, did not support the propositions for which they were cited, or did not exist. Counsel thus fundamentally abdicated his responsibility to the court and to his client.

Counsel acknowledges that his reliance on generative AI to prepare appellate briefs was "inexcusable," but he urges that he should not be sanctioned because he was not aware that AI can fabricate legal authority and did not intend to deceive the court. Although we take counsel at his word—and although there is nothing inherently wrong with an attorney appropriately using AI in a law practice—before filing any court document, an attorney must "carefully check every case citation, fact, and argument to make sure that they are correct and proper.

"Attorneys cannot delegate that role to AI, computers, robots, or any other form of technology. Just as a competent attorney would very carefully check the veracity and accuracy of all case citations in any pleading, motion, response, reply, or other paper prepared by a law clerk, intern, or other attorney before it is filed, the same holds true when attorneys utilize AI or any other form of technology."

We note, moreover, that the problem of AI hallucinations has been discussed extensively in cases and the popular press for several years….

In 2013, another appellate court noted that appellate sanctions for frivolous appeals recently had ranged from $6,000 to $12,500, "generally, but not exclusively, based on the estimated cost to the court of processing a frivolous appeal." The costs of processing a frivolous appeal have undoubtedly increased in the intervening 12 years. Nonetheless, because counsel has represented that his conduct was unintentional, and because he has expressed remorse for his actions, we impose a conservative sanction of $10,000….

We conclude by noting that "hallucination" is a particularly apt word to describe the darker consequences of AI. AI hallucinates facts and law to an attorney, who takes them as real and repeats them to a court. This court detected (and rejected) these particular hallucinations. But there are many instances—hopefully not in a judicial setting—where hallucinations are circulated, believed, and become "fact" and "law" in some minds. We all must guard against those instances. As a federal district court recently noted: "There is no room in our court system for the submission of fake, hallucinated case citations, facts, or law. And it is entirely preventable by competent counsel who do their jobs properly and competently."

Thanks to Irwin Nowick for the pointer.