The Volokh Conspiracy
Lawfare Podcast on AI Libel, with Prof. Alan Rozenshtein and Me
From the Lawfare blog (with the link revised to point to my paper):
If someone lies about you, you can usually sue them for defamation. But what if that someone is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow that? What does it even mean for a large language model to act with "malice"? Does the First Amendment put any limits on the ability to hold these models, and the companies that make them, accountable for false statements they make? And what's the best way to deal with this problem: private lawsuits or government regulation?
On this episode of Arbiters of Truth, our series on the information ecosystem, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, discussed these questions with First Amendment expert Eugene Volokh, Professor of Law at UCLA and the author of a draft paper entitled "Large Libel Models."
Here's today's mostly false ChatGPT take on me. This time it doesn't say, as it previously did, that I was once the President of the Pacific Legal Foundation, or that I represented GWB in Bush v. Gore. (I had nothing to do with Timbs v. Indiana.)
Who is attorney Andrew Dhuey?
Andrew Dhuey is an attorney based in Berkeley, California. He is known for his work as the lead attorney in the case of Timbs v. Indiana, which was heard by the United States Supreme Court in 2018. In that case, Dhuey argued that the Eighth Amendment's Excessive Fines Clause should apply to state governments as well as the federal government. The Supreme Court ultimately agreed with Dhuey's argument in a unanimous decision, which was seen as a major victory for individual rights and civil liberties. Dhuey has also represented clients in a variety of other legal matters, including intellectual property disputes and class action lawsuits.
What does it even mean for a large language model to act with "malice"?
Nothing. The model is not even acting. The party that acts is the party that publishes the defamatory expression, however it was composed.
Think that over for a moment in the context of Section 230. That law says an internet publisher cannot be held liable for content it did not compose itself. That seems to mean one of two things:
1. A party, whether a natural person or a corporation, that operates an AI model and arranges for its output to be published on the internet will always be held liable as the ostensible publisher; or
2. Nobody will be held liable, because if the party operating the AI model is not the publisher, neither is the AI model itself, and the internet publisher cannot be held liable under the law.
Simply put, AI model publishing has brought the Section 230 legal regime to a crisis defined by three alternatives:
1. Section 230 must be modified to make internet publishers liable for defamatory content created by AI models and published by parties the statute previously protected; or
2. Internet contributors must always be held liable, including for everything composed by AI models; or
3. The notion of defamatory content on the internet must be retired, while a flood of purposefully automated defamation makes the internet useless for any serious mass expressive purpose.
Note also that defamatory AI publishing (automated, prolific beyond mere human capacity, and anonymously sourced) will readily be adapted to portray targeted others as the actual sources of any defamatory allegations the AI machine creates.
Computers can't lie. (Which is false: the one or the zero?)
Programmers, however . . .
Eventually, some judge will rule that nothing generated by an AI can be considered truthful, and the problem will go away.
Sort of like any social media post.
Private lawsuits or government regulation?
That's just a question of which branch of government handles the problem. Private lawsuits put the determination of falsity and liability on the judicial branch.