"Whoever or Whatever Drafted the Briefs Signed and Filed by Blackburn,"
"it is clear that he, at the very best, acted with culpable neglect of his professional obligations."
"it is clear that he, at the very best, acted with culpable neglect of his professional obligations."
And the court declines to so find when the proposed class counsel filed a brief containing "a wholesale fabrication of quotations and a holding on a material issue" (presumably stemming from using AI and not adequately checking its output).
Are human courts the best venue to protect wild animals?
UPDATE 5/15/2025 (post moved up): Anthropic's lawyers filed a declaration stating that the error was not the expert's, but stemmed from the (unwise) use of Claude AI to format citations.
The judge finds "a collective debacle"—possibly caused, I think, by two firms working together and the communications problems this can cause—though "conclude[s] that additional financial or disciplinary sanctions against the individual attorneys are not warranted."
An Arizona trial court judge allowed this innovative approach to presenting a victim impact statement, which seems like a useful step toward justice.
UPDATE: Lawyer's response added; post bumped to highlight the update.
"Lehnert used ChatGPT after he had written his report to confirm his findings, which were based on his decades of experience joining dissimilar materials."
that's likely just the tip of the iceberg.
It's not the hallucination, it's the cover-up.
"[A] credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."
Maybe, but not in this particular case, a federal court rules.
From criminal penalties to bounty hunters, state laws targeting election-related synthetic media raise serious First Amendment concerns.
As technology develops, we anticipate the use of LLM AI tools to augment corpus linguistic analysis of ordinary meaning—without outsourcing the ultimate task of legal interpretation.
The selling points of LLM AIs are insufficient to justify relying on them for evidence of ordinary meaning; corpus tools hold the advantage.
"[C]ounsel has an affirmative duty to disclose the use of artificial intelligence and the evidence sought to be admitted should properly be subject to a Frye hearing prior to its admission ...."
LLM AIs are too susceptible to manipulation—and too prone to inconsistency—to be viewed as reliable means of producing empirical evidence of ordinary meaning.
Our draft article shows that corpus linguistics delivers where LLM AI tools fall short—in producing nuanced linguistic data instead of bare, artificial conclusions.
As we show in a draft article, corpus linguistic tools can do what LLM AIs cannot—produce transparent, replicable evidence of how a word or phrase is ordinarily used by the public.
The broad ban on AI-generated political content is clearly an affront to the First Amendment.
Among other things, "Michel does not explain how ... the [AI-generated] mistaken attribution of a Puff Daddy song in the closing argument" sufficiently undermined his case.