The Volokh Conspiracy
Journal of Free Speech Law: "Bots Behaving Badly: A Products Liability Approach to Chatbot-Generated Defamation,"
by Prof. Nina Brown, just published in our symposium on Artificial Intelligence and Speech; more articles from the symposium coming in the next few days.
The article is here [UPDATE: link fixed]; the Introduction:
Within two months of its launch, ChatGPT became the fastest-growing consumer application in history, with more than 100 million monthly active users. Created by OpenAI, a private company backed by Microsoft, ChatGPT is just one of several sophisticated chatbots made available to the public in late 2022. These large language models generate human-like responses to user prompts based on information they have "learned" during a training process. Ask ChatGPT to explain the concept of quantum physics and it synthesizes the subject into six readable paragraphs. Prompt it with an inquiry about the biggest scandal in baseball history and it describes the Black Sox Scandal of 1919. This is a tool that can respond to an incredible variety of content-creation requests, from academic papers and language translations to explanations of complicated math problems and jokes. But it is not without risk. It is also capable of generating speech that causes harm, such as defamation.
Although some safeguards are in place, there already exist documented examples of ChatGPT creating defamatory speech. And this should not come as a surprise—if something is capable of speech, it is capable of false speech that sometimes causes reputational harm. Of course, artificial intelligence (AI) tools have caused speech harms before. Amazon's Alexa device—touted as a virtual assistant that can make your life easier—has on occasion gone rogue: It has made violent statements to users, and even suggested they engage in harmful acts. Google search's autocomplete function has fueled defamation lawsuits arising from suggested words such as "rapist," "fraud," and "scam." An app called SimSimi has notoriously perpetuated cyberbullying and defamation. Tay, a chatbot launched by Microsoft, caused controversy when just hours after its launch it began to post inflammatory and offensive messages. So the question isn't whether these tools can cause harm. It's when they do cause harm, who—if anyone—is legally responsible?
The answer is not straightforward, in part because in each example of harm listed above, humans were not responsible—at least not directly—for the problematic speech. Instead, the speech was produced by automated AI programs that were designed to generate output based on various inputs. Although the AI was written by humans, the chatbots were designed to collect information and data in order to generate their own content. In other words, a human was not pulling levers behind a curtain; the human had taught the chatbot how to pull the levers on its own.
As the use of AI for content generation becomes more prevalent, it raises questions about how to assign fault and responsibility for defamatory statements made by these machines. With the projected continued growth of AI applications that generate content, it is critical to develop a clear framework of how potential liability would be assigned. This will spur continued growth and innovation in this area and ensure that proper consideration is given to preventing speech harms in the first instance.
The default assumption may be that someone who is defamed by an AI chatbot would have a case for defamation. But there are hurdles in applying defamation law to speech generated by a chatbot, particularly because defamation law requires assessing mens rea that will be difficult to assign to a chatbot (or its developers). This article evaluates the challenges of applying defamation law to chatbots. Section I discusses the technology behind chatbots and how it operates, and why it is qualitatively different from earlier forms of AI. Section II examines the challenges that arise in assigning liability under traditional defamation law when a chatbot publishes defamatory speech. Sections III and IV suggest that products liability law might offer a solution—either as an alternative theory of liability or as a framework for assessing fault in a defamation action. After all, products liability law is well-suited to address who is at fault when a product causes injury, includes mechanisms for assessing the fault of product designers and manufacturers, and easily adapts to emerging technologies because of its broad theories of liability.
Not yet
Yep. That ain't Nina.
Crosby: It's a machine, Skroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes.
Ben and Crosby: It just runs programs.
Why does everyone act like computers were just invented?
AI is no different from a liability standpoint than a general ledger program.
If a company can get sued because it runs a G/L program that underpays invoices, it can be sued for "AI" that defames.
It says: "Tay, a chatbot launched by Microsoft, caused controversy when just hours after its launch it began to post inflammatory and offensive messages. So the question isn't whether these tools can cause harm. It's when they do cause harm".
No, there is no example of these chatbots causing harm. Causing controversy is not causing harm. It was supposed to cause controversy. The whole article has a faulty premise.
Words are violence.
Silence is violence.
Bots are sometimes silent.
Bots sometimes produce words.
Ergo, ipso facto, henceforth thereunto, bots are violent.
And bots must therefore cause harm.
Q.E.D.
Benighted free-speech utopianism leads some to suppose they can answer a question of fact—whether harm has occurred, for instance—by reasoning from an ideological axiom. To do that is always unwise. To insist upon it is worse.