
Six Federal Cases of Self-Represented Litigants Citing Fake Cases in Briefs, Likely Because They Used AI Programs

These are likely just the tip of the fakeberg.


Unsurprisingly, lawyers aren't the only ones to use AI programs (such as ChatGPT) to write portions of briefs, and thus end up filing briefs that contain AI-generated fake cases or fake quotations (cf. this federal case, and the state cases discussed here, here, and here). From an Oct. 23 opinion by Chief Judge William P. Johnson (D.N.M.) in Morgan v. Community Against Violence:

Rule 11(b) of the Federal Rules of Civil Procedure states that, for every pleading, filing, or motion submitted to the Court, an attorney or unrepresented party certifies that it is "not being presented for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation," that all claims or "legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law," and that factual contentions have evidentiary support….

Plaintiff cited to several fake or nonexistent opinions. This appears to be only the second time a federal court has dealt with a pleading involving "non-existent judicial opinions with fake quotes and citations." Quite obviously, many harms flow from such deception—including wasting the opposing party's time and money, the Court's time and resources, and reputational harms to the legal system (to name a few).

The foregoing should provide Plaintiff with enough constructive and cautionary guidance to allow her to proceed pro se in this case. But, her pro se status will not be tolerated by the Court as an excuse for failing to adhere to this Court's rules; nor will the Court look kindly upon any filings that unnecessarily and mischievously clutter the docket.

Thus, Plaintiff is hereby advised that she will comply with this Court's local rules, the Court's Guide for Pro Se Litigants, and the Federal Rules of Civil Procedure. Any future filings with citations to nonexistent cases may result in sanctions such as the pleading being stricken, filing restrictions imposed, or the case being dismissed. See Aimee Furness & Sam Mallick, Evaluating the Legal Ethics of a ChatGPT-Authored Motion, LAW360 (Jan. 23, 2023, 5:36 PM), https://www.law360.com/articles/1567985/evaluating-the-legal-ethics-of-a-chatgpt-authored-motion.

See also Taranov ex rel. Taranov v. Area Agency of Greater Nashua (D.N.H. Oct. 16, 2023):

In her objection, Taranov cites to several cases that she claims hold "that a state's Single Medicaid Agency can be held liable for the actions of local Medicaid agencies[.]" The cases cited, however, do no such thing. Most of the cases appear to be nonexistent. The reporter citations provided for Coles v. Granholm, Blake v. Hammon, and Rodgers v. Ritter are for different, and irrelevant, cases, and I have been unable to locate the cases referenced. The remaining cases are entirely inapposite.

For an earlier federal district court motion pointing out such hallucinated citations in another case (one I hadn't seen mentioned anywhere before, and just learned about Friday), see Whaley v. Experian Info. Solutions, Inc. (S.D. Ohio May 9, 2023). (The software that the pro se litigant in that case later acknowledged using, Liquid AI, appears to be built on top of OpenAI's GPT.)

Unsurprisingly, the same pattern appears in federal appellate courts. Esquivel v. Kendrick (5th Cir. Aug. 29, 2023) states:

[C]iting nonexistent cases, Esquivel argues that the City of San Antonio waived immunity from suit by purchasing liability insurance for its police officers.

The defendants' brief speculated that this, too, was the result of using an AI program. Likewise, a Second Circuit brief in Thomas v. Metro. Transp. Auth. (Sept. 4, 2023) alleges:

[Appellant Thomas's] brief is otherwise chiefly comprised of what appears to be cut-and-pasted ChatGPT conversations reciting the generic elements of various claims, but without actually pointing to any allegations in the SAC that would satisfy those elements. The legal authorities Thomas cites appear to be fabricated or hallucinated by ChatGPT: the case titles do not match their reporter citations, nor do Thomas's descriptions match the contents of the real opinions that the titles and citations come from.

See also the defendants' motion in Froemming v. City of West Allis (7th Cir. Oct. 19, 2023):

Froemming's 49-page brief contains a table of "authorities" with reference to case citations, federal statutes, and an ABA ethics rule. None of these "authorities" serve to aid this Court in the review of this matter and none of them are supportive of Froemming's arguments. First of all, only three of Froemming's fifteen listed cases even exist within the federal reporter. However, those three cases decide topics entirely unrelated to Froemming's arguments. Further, the quotes in his brief do not exist anywhere within those cases.