Can You Sue Over Assurances Made by Company's Customer Service AI Chatbot?
Maybe, but not in this particular case, a federal court rules.
From criminal penalties to bounty hunters, state laws targeting election-related synthetic media raise serious First Amendment concerns.
As technology develops, we anticipate the use of LLM AI tools to augment corpus linguistic analysis of ordinary meaning—without outsourcing the ultimate task of legal interpretation.
The purported advantages of LLM AIs fall short; corpus tools retain the edge.
"[C]ounsel has an affirmative duty to disclose the use of artificial intelligence and the evidence sought to be admitted should properly be subject to a Frye hearing prior to its admission ...."
LLM AIs are too susceptible to manipulation—and too prone to inconsistency—to be viewed as reliable means of producing empirical evidence of ordinary meaning.
Our draft article shows that corpus linguistics delivers where LLM AI tools fall short—in producing nuanced linguistic data instead of bare, artificial conclusions.
As we show in a draft article, corpus linguistic tools can do what LLM AIs cannot—produce transparent, replicable evidence of how a word or phrase is ordinarily used by the public.
The broad ban on AI-generated political content is clearly an affront to the First Amendment.
Among other things, "Michel does not explain how ... the [AI-generated] mistaken attribution of a Puff Daddy song in the closing argument" sufficiently undermined his case.
It's the twelfth case I've seen this year in which something like this apparently happened.
Lawyers will have to certify either that they did not use AI or that they verified any work produced by AI.
"Spoiler: the robot wins for lack of Article III standing."
Looks like the main problem wasn't the blind reliance, but the coverup.
These are likely just the tip of the fakeberg.
"Duty of care has worked in other areas," the senator said, "and it seems to fit decently well here in the AI model."
"Kenner used an experimental AI program to write his closing argument, which made frivolous arguments, conflated the schemes, and failed to highlight key weaknesses in the Government's case."
"Can you show me the courts opinion in Varghese v China Southern Airlines"? "Certainly! ... I hope that helps!"
The certificates must "attest[] either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being."