Large Libel Models
Communications Can Be Defamatory Even If Readers Realize There's a Considerable Risk of Error
And AI programs' "tendency [to, among other things, produce untruthful content] can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity."
Large Libel Models: An AI Company's Noting That Its Output "May [Be] Erroneous" Doesn't Preclude Libel Liability
[An excerpt from my forthcoming article on "Large Libel Models? Liability for AI Outputs."]
Correction re: ChatGPT-4 Erroneously Reporting Supposed Crimes and Misconduct, Complete with Made-Up Quotes?
My Friday post erroneously stated that I got the bogus results from ChatGPT-4; it turns out they were from ChatGPT-3.5—but ChatGPT-4 does also yield similarly made-up results.
Large Libel Models: ChatGPT-3.5 Erroneously Reporting Supposed Felony Pleas, Complete with Made-Up Media Quotes?
[UPDATE: This article originally said that this was ChatGPT-4 doing this, which was my error. But, as I note in an UPDATE below, ChatGPT-4 also erroneously reports supposed criminal convictions and sentences, complete with made-up quotes.]