The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Journal of Free Speech Law: "Section 230 Won't Protect ChatGPT," by Prof. Matt Perault
Just published in our symposium on Artificial Intelligence and Speech; more articles from the symposium are coming in the next few days.
The article is here [UPDATE: link corrected]; the Introduction:
The emergence of products fueled by generative artificial intelligence (AI) such as ChatGPT will usher in a new era in the platform liability wars. Previous waves of new communication technologies—from websites and chat rooms to social media apps and video sharing services—have been shielded from legal liability for user-generated content posted on their platforms, enabling these digital services to rise to prominence. But with products like ChatGPT, critics of that legal framework are likely to get what they have long wished for: a regulatory model that makes tech platforms responsible for online content.
The question is whether the benefits of this new reality outweigh its costs. Will this regulatory framework minimize the volume and distribution of harmful and illegal content? Or will it stunt the growth of ChatGPT and other large language models (LLMs), litigating them out of mainstream use before their capacity to have a transformational impact on society can be understood? Will it tilt the playing field toward larger companies that can afford to hire massive teams of lawyers and bear steep legal fees, making it difficult for smaller companies to compete?
In this article, I explain why current speech liability protections do not apply to certain generative AI use cases, explore the implications of this legal exposure for the future deployment of generative AI products, and provide an overview of options for regulators moving forward.