Minor Third-Order-Procedure Decision in Walters v. OpenAI Large Libel Models Lawsuit
Procedure about procedure about procedure.
"Overwhelmingly impressed by the technology, I excitedly used it to find case law that supports my client's position, or so I thought."
"I felt ... my efficiency ... could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting."
"Every statement of fact in the summary [provided by ChatGPT] pertaining to [plaintiff] Walters is false."
And AI programs' "tendency [to, among other things, produce untruthful content] can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity."
[An excerpt from my forthcoming article on "Large Libel Models? Liability for AI Outputs."]
My Friday post erroneously stated that I got the bogus results from ChatGPT-4; it turns out they were from ChatGPT-3.5—but ChatGPT-4 does also yield similarly made-up results.
[UPDATE: This article originally said this was ChatGPT-4 doing this, which was my error. But, as I note below in an UPDATE, ChatGPT-4 also erroneously reports supposed criminal convictions and sentences, complete with made-up quotes.]