Repeated "Nonexistent Cases" in Filing From >20-Lawyer Insurance Defense Firm
Lawyers at firms of all sizes, don't let this happen to you.
"Whether a case cite is obtained from a law review article, a hornbook, or through independent legal research, the duty to ensure that any case cited to a court is "good law" is nearly as old as the practice of law."
for "citing to fabricated, AI-generated cases without verifying the accuracy, or even the existence, of the cases" and "misrepresenting to the Court the origin of the AI-generated cases."
Plus, "He claims that, going forward, he will undertake certain 'remedial efforts,' including, inter alia, 'establish[ing] ... database reconciliation procedures involving resolution of discrepancies through direct consultation of archival legal resources and substitution of alternative, verifiable authorities where necessary.' Most lawyers simply call this 'conducting legal research.'"
There have likely been hundreds of filings with AI-hallucinated citations in American courts, but this is the first time I've seen a court note that a judge had included such a citation in an order.
"it is clear that he, at the very best, acted with culpable neglect of his professional obligations."
Class certification requires a finding that proposed class counsel is adequate, and the court declines to so find when counsel filed a brief containing "a wholesale fabrication of quotations and a holding on a material issue" (presumably stemming from using AI without adequately checking its output).
Are human courts the best venue to protect wild animals?
UPDATE 5/15/2025 (post moved up): Anthropic's lawyers filed a declaration stating that the error was not the expert's, but stemmed from the (unwise) use of Claude AI to format citations.
The judge finds "a collective debacle"—possibly caused, I think, by two firms working together and the communications problems this can cause—though "conclude[s] that additional financial or disciplinary sanctions against the individual attorneys are not warranted."
An Arizona trial court judge allowed this innovative approach to presenting a victim impact statement, which seems like a useful step toward justice.
UPDATE: Lawyer's response added; post bumped to highlight the update.
"Lehnert used ChatGPT after he had written his report to confirm his findings, which were based on his decades of experience joining dissimilar materials."
That's likely just the tip of the iceberg.
It's not the hallucination, it's the coverup.
"[A] credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."
Maybe, but not in this particular case, a federal court rules.