The Volokh Conspiracy
$7500 Sanctions for Nonexistent Citations in Brief; Magistrate Judge Stresses Cite-Checking Isn't a New Obligation
"Whether a case cite is obtained from a law review article, a hornbook, or through independent legal research, the duty to ensure that any case cited to a court is "good law" is nearly as old as the practice of law."
From the Report and Recommendation in Davis v. Marion County Superior Court Juvenile Detention Center, filed Tuesday by Magistrate Judge Mark Dinsmore (S.D. Ind.):
In [a] brief, Mr. Sture included two citations that he concedes do not exist…. Mr. Sture acknowledged that he was responsible for the errors in the brief that he signed and filed. However, he took the position that the main reason for the errors in his brief was the short deadline (three days) he was given to file it. He explained that, due to the short timeframe and his busy schedule, he asked his paralegal (who once was, but is not currently, a licensed attorney) to draft the brief, and Mr. Sture did not have time to carefully review the paralegal's draft before filing it.
[The Magistrate Judge briefly explained the reason for the unusually short deadline, and noted that, "while Mr. Sture did only have three days to file his response after the motion to compel was filed, he had much more time than that to consider and research the issues that were ultimately addressed in the response brief"; for more on that, read the full opinion. -EV]
Further, while Mr. Sture made much at the hearing about the fact that he filed his response brief literally at the eleventh hour (the brief was, in fact, filed at 11:00 p.m. on the due date), he further represented that he subscribes to LEXIS. It would have taken only a few minutes to check the validity of the citations in the brief using LEXIS before filing it.
Mr. Sture failed to take even that most basic of actions, and therefore did not catch the fact that the brief contained citations that did not exist. The most logical explanation for the citation to non-existent authority is, of course, the use of generative AI to conduct legal research and/or draft the brief. The issue of "hallucinated case[s] created by generative artificial intelligence (AI) tools such as ChatGPT and Google Bard" has been "widely discussed by courts grappling with fictitious legal citations and reported by national news outlets."
The paralegal who drafted the brief did not appear at the hearing. Mr. Sture represented at the hearing that the paralegal did not use generative AI to aid in the drafting of the brief; rather, she told Mr. Sture that she used a legal research product called Fastcase. However, neither Mr. Sture nor the paralegal provided any alternative explanation for the erroneous citations. Further, whether generative AI was or was not used to draft the brief is not particularly relevant to the analysis. There is nothing fundamentally improper in the use of AI tools to draft a brief. Rather, it is counsel's abdication of his responsibility to ensure that the information he provided to the Court was accurate that is the basis for the sanctions recommended….
Courts have consistently held for decades that failing to check the treatment and soundness—let alone the existence—of a cited case warrants sanctions. See, e.g., Salahuddin v. Coughlin, 999 F. Supp. 526, 529 (S.D.N.Y. 1998) (noting that Shepardizing would have led defense counsel to a key case); Brown v. Lincoln Towing Serv., No. 88C0831, 1988 WL 93950 (N.D. Ill. 1988) (imposing sanctions where the attorney filed a claim based on an expired federal statute); Pravic v. U.S. Indus.-Clearing, 109 F.R.D. 620, 623 (E.D. Mich. 1986) (holding that the act of relying on another attorney's memorandum without Shepardizing the cases cited warranted sanctions); Blake v. Nat'l Cas. Co., 607 F. Supp. 189, 191 (C.D. Cal. 1984) (noting that Shepardizing cases already cited would have led to controlling authority).
The advent of modern legal research tools implementing features such as Westlaw's KeyCite and Lexis's Shepard's service has enabled attorneys to easily fulfill this basic duty, and there is simply no reason for an attorney to fail to do so. Such has been the view for decades: "It is really inexcusable for any lawyer to fail, as a matter of routine, to Shepardize all cited cases (a process that has been made much simpler today than it was in the past, given the facility for doing so under Westlaw or LEXIS)." Gosnell v. Rentokil, Inc., 175 F.R.D. 508, 510 n.1 (N.D. Ill. 1997). Confirming that a case is good law is a basic, routine matter and something that is expected from a practicing attorney. As noted in the case of an expert witness, an individual's "citation to fake, AI-generated sources … shatters his credibility." See Kohls v. Ellison, 2025 WL 66514 (D. Minn. Jan. 10, 2025). The same is true even if the fake citations were generated without the knowing use of AI.
Mr. Sture admits that he did not make the requisite reasonable inquiry into the law before filing his brief. Whether or not AI was the genesis of the non-existent citations, Mr. Sture's failure to review them before submitting them to the court was a violation of Rule 11. See Hayes, 763 F. Supp. 3d at 1066-67 ("The Court need not make any finding as to whether Mr. Francisco actually used generative AI to draft any portion of his motion and reply, including the fictitious case and quotation…. Citing nonexistent case law or misrepresenting the holdings of a case is making a false statement to a court. It does not matter if generative AI told you so.") (citations and quotation marks omitted).
The fact that Mr. Sture did not file the brief with the intention of deceiving the court does not excuse his failure to check the citations therein. Whether a case cite is obtained from a law review article, a hornbook, or through independent legal research, the duty to ensure that any case cited to a court is "good law" is nearly as old as the practice of law. As previously noted, the development of resources such as the Shepard's citation system provided lawyers a tool to accomplish that most basic of tasks. {Frank Shepard introduced his print citation index in the 1870s, though other precursor citation series had existed since the early nineteenth century. See Laura C. Dabney, Citators: Past, Present, and Future, 27 Legal Reference Servs. Q. 165, 166 (2008).} It is Mr. Sture's failure to comply with that most basic of requirements that makes his conduct sanctionable.
The Undersigned finds that Mr. Sture violated Rule 11 and that sanctions for that violation are appropriate…. Monetary sanctions ranging from $2,000 to $6,000 have been imposed in similar contexts in the past few years. Given the distressing number of cases calling out similar conduct since the opinions cited above were issued, it is clear that the imposition of modest sanctions has failed to act as a deterrent. {The Undersigned's very quick, certainly non-exhaustive search revealed at least eleven cases noting fictitious citations (to either non-existent cases or non-existent quotations) in federal court filings in the month of August 2025 alone. While most of these cases involved filings by pro se litigants, three of them … involved filings by attorneys.} Accordingly, the Undersigned RECOMMENDS that Mr. Sture be sanctioned $7,500.00 for his Rule 11 violations in this case….
The Magistrate Judge also referred "the matter of Mr. Sture's misconduct in this case to the Chief Judge pursuant to Local Rule of Disciplinary Enforcement 2(a) for consideration of any further discipline that may be appropriate."
Especially given today’s technology, there really is no excuse. I mean, how hard is it to type, “Please cite-check the brief you just wrote”?
The judges and other parties can also cite-check the brief. It should not be a big deal.
Well sure, and the judges and other parties can also find the evidence for you, and decide what the best arguments to make are. Making a legal argument is the lawyer's job. If he's not doing that properly, what is he being paid for?
They did cite-check the brief; that's how the lawyer was caught. But you don't understand ethics (or human decency), so it's no surprise you don't get why it's a big deal.
Maybe they should have included a "disclaimer" stating that the filing "may contain false citations".
There's president for it...
"she used a legal research product called Fastcase," but one of the hallucinated citations had a Westlaw citation. I'm pretty sure you can't get a WL proprietary citation through Fastcase. The judge already found a lack of candor in the offering of hallucinations as relevant law, so didn't burrow down on this inconsistency.
Mr. Sture? Even the guy’s name sounds AI generated!
As a service to the lazy, incompetent lawyers, I asked ChatGPT why it's a bad idea to use Artificial Intelligence to write a legal document that will be submitted to the court. This is how ChatGPT responded (in about four seconds):
"There are several serious risks in using Artificial Intelligence to draft a legal document (like a motion, brief, or complaint) that will actually be submitted to a court:
1. Accuracy and Reliability Issues: AI tools sometimes generate text that sounds correct but is legally inaccurate or even fabricated (“hallucinations”). Courts have sanctioned lawyers for filing briefs that cited non-existent cases generated by AI.
2. Lack of Jurisdiction-Specific Knowledge: Legal documents often require compliance with highly specific rules (local court rules, formatting requirements, deadlines, procedural posture). AI may not reliably apply these rules. A document that doesn’t comply can be rejected or harm your case.
3. Ethical and Professional Responsibility: Lawyers are obligated to provide competent representation under professional conduct rules. Relying blindly on AI can violate those duties. Submitting false or misleading information, even unintentionally, can result in sanctions, fines, or discipline by the bar.
4. Confidentiality Risks: If sensitive client information is entered into an AI tool, it may not be adequately protected. Depending on the tool, information could be stored or used for training. This could breach attorney-client confidentiality.
5. Nuance and Judgment: Legal writing isn’t just about correct grammar and citations—it requires strategic judgment about how to frame arguments, what precedents matter most, and how to tailor arguments to a particular judge or fact pattern. AI cannot fully replace human legal reasoning.
6. Credibility Before the Court: Judges and opposing counsel can quickly spot sloppy or generic AI-generated arguments. This can damage your credibility and weaken your position."
Seems pretty obvious to me.
Using a human lawyer has all those same risks.
AI: GI(exponentially more)GO
The problem with AI hallucinations, I think, isn't "Garbage In, Garbage Out" in the sense of output reproducing the errors of the training data -- my sense is that the main AIs are trained on fairly accurate data (including real court opinions), but the algorithm tends to produce hallucinations even then. It's thus Gold In, Garbage Out, at least sometimes.
A minimum suspension from practicing law might get their attention better than a fine. Like go home, you can't have anything to do with legal work for 6 months.
The common theme in these cases seems to be that when the lawyer gets caught he doesn't say, "Judge, I'll admit, I got sloppy and used ChatGPT. Sorry about that."
Nope, he doubles down and swears that he used Westlaw and that his computer told him it was real. The judge knows he's lying, gets pissed off and levies sanctions.
...thus assuring the lawyer's future employment in the Trump Administration.
"It would have taken only a few minutes to check the validity of the citations in the brief using LEXIS before filing it."
In a recent UK case, a lawyer was reported to her professional standards body for including hallucinated cases. There were quite a number of mitigating circumstances - one of them was that the law firm she worked for couldn't afford a Lexis subscription. This and the quote above made me wonder, though: from a purely doctrinal legal perspective, do Lexis/Westlaw, etc., have a more "authoritative" status? (Bearing in mind that the UK law reports, a relatively recent (19th-century) invention themselves, were also nothing but a private publishing enterprise, though their citations too were treated de facto as authoritative.)
Which raises the more philosophical question: what is the "authoritative" original against which a search result should be compared? Ultimately, I'd guess one would have to go to the issuing court, and some, but not all, courts these days publish their decisions on their websites, but I don't know any lawyer who would consistently check their references there. Repositories such as Bailii/Worldlii/Austlii, if they serve your jurisdictions?
And if Lexis etc. are deemed authoritative, all of the big digital publishers have their own AI products, such as Lexis+ - would a lawyer be allowed to rely on these, hoping that the Lexis development team keeps the temperature of the AI low or uses other guardrails against hallucination?