AI in Court
Georgia Trial Court Cites Likely AI-Hallucinated Cases (Possibly Borrowed from Party's Filing)
There have likely been hundreds of filings with AI-hallucinated citations in American courts, but this is the first time I've seen a court noting that a judge had included such citations.
"Whoever or Whatever Drafted the Briefs Signed and Filed by Blackburn,"
"it is clear that he, at the very best, acted with culpable neglect of his professional obligations."
"To Certify This Class …, the Court Must Find That the Named Plaintiffs Have Retained Competent Counsel to Represent the Class"
And the court declines to so find when the proposed class counsel filed a brief containing "a wholesale fabrication of quotations and a holding on a material issue" (presumably stemming from using AI and not adequately checking its output).
Cocaine Hippos, Monkey Copyrights, and a Horse Named Justice: The Debate Over Animal Personhood
Are human courts the best venue to protect wild animals?
Seemingly Nonexistent Citation in Anthropic Expert's Declaration [UPDATE: Apparently Caused by Lawyer's Misuse of Claude to Format Citations]
UPDATE 5/15/2025 (post moved up): Anthropic's lawyers filed a declaration stating that the error was not the expert's, but stemmed from the (unwise) use of Claude AI to format citations.
AI Hallucinations in Filings Involving 14th-Largest U.S. Law Firm Lead to $31K in Sanctions
The judge finds "a collective debacle"—possibly caused, I think, by two firms working together and the communications problems this can cause—though "conclude[s] that additional financial or disciplinary sanctions against the individual attorneys are not warranted."
Should a Killer's Victim Be Able to "Speak" at a Sentencing Through AI?
An Arizona trial court judge allowed this innovative approach to presenting a victim impact statement, which seems like a useful step toward justice.
Apparent AI Hallucinations in Defense Filing in Coomer v. Lindell / My Pillow Election-Related Libel Suit
UPDATE: Lawyer's response added; post bumped to highlight the update.
No Problem with Expert's Using ChatGPT to Confirm His Work
"Lehnert used ChatGPT after he had written his report to confirm his findings, which were based on his decades of experience joining dissimilar materials."
11 Court Opinions in the Last 30 Days Mention AI-Hallucinated Material, and …
that's likely just the tip of the iceberg.
Federal Public Defender Submits Brief with Nonexistent Citation, Apparently Refuses to Admit This to the Judge at a Hearing
It's not the hallucination, it's the coverup.
Misinformation Expert's "Citation to Fake, AI-Generated Sources in His Declaration … Shatters His Credibility with This Court"
"[A] credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."
Can You Sue Over Assurances Made by Company's Customer Service AI Chatbot?
Maybe, but not in this particular case, a federal court rules.
Deepfake Crackdowns Threaten Free Speech
From criminal penalties to bounty hunters, state laws targeting election-related synthetic media raise serious First Amendment concerns.
Could LLM AI Technology Be Leveraged in Corpus Linguistic Analysis?
As technology develops, we anticipate the use of LLM AI tools to augment corpus linguistic analysis of ordinary meaning—without outsourcing the ultimate task of legal interpretation.
Corpus Linguistics v. LLM AIs
The selling points of LLM AIs are insufficient; corpus tools hold the advantage.
N.Y. Court Opines on Use of AI by Experts
"[C]ounsel has an affirmative duty to disclose the use of artificial intelligence and the evidence sought to be admitted should properly be subject to a Frye hearing prior to its admission ...."
LLM AIs as Tools for Empirical Textualism?: Manipulation, Inconsistency, and Related Problems
LLM AIs are too susceptible to manipulation—and too prone to inconsistency—to be viewed as reliable means of producing empirical evidence of ordinary meaning.
Corpus Linguistics, LLM AIs, and the Future of Ordinary Meaning
Our draft article shows that corpus linguistics delivers where LLM AI tools fall short—in producing nuanced linguistic data instead of bare, artificial conclusions.
Corpus Linguistics, LLM AIs, and the Assessment of Ordinary Meaning
As we show in a draft article, corpus linguistic tools can do what LLM AIs cannot—produce transparent, replicable evidence of how a word or phrase is ordinarily used by the public.
Minnesota 'Acting as a Ministry of Truth' With Anti-Deep Fake Law, Says Lawsuit
The broad ban on AI-generated political content is clearly an affront to the First Amendment.
Fugees Rapper Pras Michel Not Entitled to New Trial Based on Lawyer's Use of AI to Help Craft Closing Argument
Among other things, "Michel does not explain how ... the [AI-generated] mistaken attribution of a Puff Daddy song in the closing argument" sufficiently undermined his case.