Six Federal Cases of Self-Represented Litigants Citing Fake Cases in Briefs, Likely Because They Used AI Programs
These are likely just the tip of the fakeberg.
Unsurprisingly, lawyers aren't the only ones to use AI programs (such as ChatGPT) to write portions of briefs, and thus end up filing briefs that contain AI-generated fake cases or fake quotations (cf. this federal case, and the state cases discussed here, here, and here). From an Oct. 23 opinion by Chief Judge William P. Johnson (D.N.M.) in Morgan v. Community Against Violence:
Rule 11(b) of the Federal Rules of Civil Procedure states that, for every pleading, filing, or motion submitted to the Court, an attorney or unrepresented party certifies that "it is not being presented for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation," that all claims or "legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law," and that factual contentions have evidentiary support….
Plaintiff cited to several fake or nonexistent opinions. This appears to be only the second time a federal court has dealt with a pleading involving "non-existent judicial opinions with fake quotes and citations." Quite obviously, many harms flow from such deception—including wasting the opposing party's time and money, the Court's time and resources, and reputational harms to the legal system (to name a few).
The foregoing should provide Plaintiff with enough constructive and cautionary guidance to allow her to proceed pro se in this case. But, her pro se status will not be tolerated by the Court as an excuse for failing to adhere to this Court's rules; nor will the Court look kindly upon any filings that unnecessarily and mischievously clutter the docket.
Thus, Plaintiff is hereby advised that she will comply with this Court's local rules, the Court's Guide for Pro Se Litigants, and the Federal Rules of Civil Procedure. Any future filings with citations to nonexistent cases may result in sanctions such as the pleading being stricken, filing restrictions imposed, or the case being dismissed. See Aimee Furness & Sam Mallick, Evaluating the Legal Ethics of a ChatGPT-Authored Motion, LAW360 (Jan. 23, 2023, 5:36 PM), https://www.law360.com/articles/1567985/evaluating-the-legal-ethics-of-a-chatgpt-authored-motion.
See also Taranov ex rel. Taranov v. Area Agency of Greater Nashua (D.N.H. Oct. 16, 2023):
In her objection, Taranov cites to several cases that she claims hold "that a state's Single Medicaid Agency can be held liable for the actions of local Medicaid agencies[.]" The cases cited, however, do no such thing. Most of the cases appear to be nonexistent. The reporter citations provided for Coles v. Granholm, Blake v. Hammon, and Rodgers v. Ritter are for different, and irrelevant, cases, and I have been unable to locate the cases referenced. The remaining cases are entirely inapposite.
For an earlier federal district court motion pointing out such hallucinated citations in another case, which I hadn't seen mentioned anywhere before and which I just learned about Friday, see Whaley v. Experian Info. Solutions, Inc. (S.D. Ohio May 9, 2023). (The software that the pro se litigant in that case later acknowledged using, Liquid AI, appears to be built on top of OpenAI's GPT.)
Unsurprisingly, the same pattern appears in federal appellate courts. Esquivel v. Kendrick (5th Cir. Aug. 29, 2023) states,
[C]iting nonexistent cases, Esquivel argues that the City of San Antonio waived immunity from suit by purchasing liability insurance for its police officers.
Defendants' brief speculated that this too was a result of using an AI system. Likewise, a Second Circuit brief in Thomas v. Metro. Transp. Auth. (Sept. 4, 2023) alleges,
[Appellant Thomas's] brief is otherwise chiefly comprised of what appears to be cut-and-pasted ChatGPT conversations reciting the generic elements of various claims, but without actually pointing to any allegations in the SAC that would satisfy those elements. The legal authorities Thomas cites appear to be fabricated or hallucinated by ChatGPT: the case titles do not match their reporter citations, nor do Thomas's descriptions match the contents of the real opinions that the titles and citations come from.
See also the defendants' motion in Froemming v. City of West Allis (7th Cir. Oct. 19, 2023):
Froemming's 49-page brief contains a table of "authorities" with reference to case citations, federal statutes, and an ABA ethics rule. None of these "authorities" serve to aid this Court in the review of this matter and none of them are supportive of Froemming's arguments. First of all, only three of Froemming's fifteen listed cases even exist within the federal reporter. However, those three cases decide topics entirely unrelated to Froemming's arguments. Further, the quotes in his brief do not exist anywhere within those cases.
Judge Johnson seems to be handling a challenging problem -- a struggling (at best) pro se litigant -- admirably.
A couple of years from now those systems will be better. And a couple more years after that, most people will easily get adequate legal services for free or for pennies on the dollar compared to what they currently pay.
Looking forward to robot arbitration judges and robot arbitration clauses in contracts.
Society will be a lot richer when it stops paying for arguments made solely so someone can bill hours for arguing.
"Esquivel argues that the City of San Antonio waived immunity from suit by purchasing liability insurance for its police officers."
It's not a silly argument in general, although it may be incorrect in the case cited. My state does waive municipal immunity in some cases conditional on insurance being available to cover the payment.
Pro se litigants have enough trouble already. They should be allowed to use the tools that are available. If the brief is faulty, it is up to the other side to point it out.
I very much disagree. As one of the courts points out, fictitious and irrelevant cites cause extra work for both courts and opposing litigants. Putting all of the cite-checking work on opposing counsel is unfair, and detracts from judicial economy. Rule 11 keeps this in check for represented parties, along with bar discipline for egregious cases.
To be clear, federal Rule 11 applies to self-represented parties the same way it does to attorneys (at least notionally).
If the brief is faulty for a technical legal reason, that's perhaps understandable. If the brief is faulty because it includes an outright lie, that's not excusable. And no, the source of the lie does not matter. If you can't figure out how to properly use the tools available, you should not be using them. Proper use of AI (and Wikipedia and Google and pretty much any other online resource) includes checking your sources.
None of these cases suggest that anyone be forbidden from using anything, as far as I can tell. What they do suggest is that, like any other tool, people who use ChatGPT improperly should and may be held accountable for it.
Pro se litigants have standards they have to meet too. No one would care if they used AI if they weren't lying to the court by doing so. They're responsible for producing legitimate statutes and caselaw in support of their claims. If they're not capable of doing that, they need to find someone who can. You don't get to get away with outright lying by shrugging and saying "I didn't know and I didn't check."
Wrong.
Disaffected, antisocial, uninformed clingers are among my favorite culture war casualties.
I understand that Trump fans endorse his principle that it ain't lying if you don't get caught, but that's not the way the legal system works. It's the job of the other side to explain why your arguments are flawed, not to point out things that you fabricated.
First time I saw this thread, I was scrolling fast. I read the head as:
Six Federal Cases of Self-Represented Litigants Citing Fake Cakes in Briefs, Likely Because They Used AI Programs
Seemed vaguely plausible, and maybe interesting. Maybe there isn’t any kind of AI error I wouldn’t credit.
If supposedly professional lawyers submit briefs written by ChatGPT without checking them, it's probably unreasonable to expect pro se litigants to do any better. Lawyers, even the dumb ones, will probably learn not to do this, but I suppose this will be a perennial problem with pro se litigants.