The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Journal of Free Speech Law: "Section 230 Won't Protect ChatGPT," by Prof. Matt Perault
Just published, in our symposium on Artificial Intelligence and Speech; more articles from the symposium coming in the next few days.
The article is here [UPDATE: link corrected]; the Introduction:
The emergence of products fueled by generative artificial intelligence (AI) such as ChatGPT will usher in a new era in the platform liability wars. Previous waves of new communication technologies—from websites and chat rooms to social media apps and video sharing services—have been shielded from legal liability for user-generated content posted on their platforms, enabling these digital services to rise to prominence. But with products like ChatGPT, critics of that legal framework are likely to get what they have long wished for: a regulatory model that makes tech platforms responsible for online content.
The question is whether the benefits of this new reality outweigh its costs. Will this regulatory framework minimize the volume and distribution of harmful and illegal content? Or will it stunt the growth of ChatGPT and other large language models (LLMs), litigating them out of mainstream use before their capacity to have a transformational impact on society can be understood? Will it tilt the playing field toward larger companies that can afford to hire massive teams of lawyers and bear steep legal fees, making it difficult for smaller companies to compete?
In this article, I explain why current speech liability protections do not apply to certain generative AI use cases, explore the implications of this legal exposure for the future deployment of generative AI products, and provide an overview of options for regulators moving forward.
Wow Eugene, you’ve really lost your sense of shame over the years. Or is that even the right link?
No, it's not the right link, as literally one second of reading could have told you.
Headline of blog post: "Journal of Free Speech Law: "Section 230 Won't Protect ChatGPT," by Prof. Matt Perault"
Headline of linked article: "NEGLIGENT AI SPEECH: SOME THOUGHTS ABOUT DUTY," by Jane Bambauer
Duh, that's why I asked. I'm simply not as presumptuous as you are.
Yes, sorry, fixed -- the post now links correctly to the Perault article.
It's basically the text equivalent of Photoshop, so I don't understand the reasoning for treating it any differently.
The basis of Section 230 is that one is responsible for one's own content, but not that of others that you are merely hosting (which departs from common-law publisher liability). Why that should be different for ChatGPT is not at all obvious to me.
It isn't: If ChatGPT merely reproduced text from some other site, then it would be immune under sec. 230 -- but ChatGPT generally generates its own text (hence "generative AI"), and it is indeed liable for what it itself generates (since that's its "own content").
Precisely my point. I see no rationale for immunizing an entity for its own content. Section 230 certainly does not support that.
But if ChatGPT merely reproduced text from some other site, there'd be other legal issues: Was it fair use?
Anyway, "and it is indeed liable for what it itself generates"?
ChatGPT isn't a legal person, is it? It wouldn't be ChatGPT that would be liable, it would be the user, or the hosting site, or whoever was behind it.
The real question would be, if a platform hosts an instance of ChatGPT, and it produces defamatory content in response to a prompt, who is legally responsible? The person who supplied the prompt? Probably not, they just asked a question, they didn't dictate the answer. (I mean, unless the prompt was something like "Generate a plausible defamatory lie about X", of course.) The host, or the people who programmed ChatGPT?
But I don't see how ChatGPT itself can ever be liable, it's just a tool, not the user.
Sorry, that was just shorthand -- OpenAI is liable, as the creator and operator of ChatGPT.
This article completely misses a critical point: ChatGPT responses are part of a private dialogue with a single person. It's a real stretch to consider that to be publishing.
In that sense, Amos may be right for the first time ever. If a user interacts with ChatGPT to produce a response, and then the user posts that response on the internet, the user is the content provider, not ChatGPT.
If I ask John Doe his opinion about Randal, and he says, “stay away, Randal is a thief, a pedophile, and he likes vanilla ice cream,” then you can sue John Doe for defamation. Depending on the circumstances, damages could be high. Like if you were looking for me to hire you for a high-paying job, and I turned you down because of John Doe’s report.
Now substitute ChatGPT for John Doe. Same thing. Defamation need not be published to millions of people — a statement to one person can be enough. That is sufficient "publication" under defamation law. (If John Doe said it only to Randal, that would not be publication.)
"Publishing" is a term of art where defamation law is considered, and includes a private dialogue with a single person (so long as that person isn't the person being defamed). If Alan tells Betty that Charlie is a criminal, that's a classic example of "publication" for slander law purposes; if he writes that in a letter to Betty, that's a classic example of "publication" for libel law purposes. I give the relevant citations at pp. 504-08 of my symposium article.
Who said anything about defamation? Neither I nor the article mentions defamation, libel, or slander at all, except in a couple of footnote citations.
Section 230 isn't generally applied to 1:1 conversations for various reasons. You could squint and make a case, but IMO Section 230 doesn't apply to ChatGPT for much simpler reasons than those discussed in the article.
Well, sec. 230 was enacted in response to defamation decisions, and is certainly often applied in such cases (though it's not limited to such cases). So when it says, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," I think it's referring to (among other things) the defamation law concept of "publisher." See, e.g., Zeran v. AOL (4th Cir. 1997).
But in any event, it provides protection for interactive computer service providers and users against liability as publisher or as speaker -- surely "speaker" applies to 1:1 conversations, however one interprets "publisher" (though, as I note above, "publisher" covers 1:1 conversations as well). I think OpenAI doesn't generally get such immunity as to most ChatGPT output, for reasons that Matt Perault mentions and that I also canvass in my article. But those are the key reasons, I think, not ChatGPT's talking to one person at a time.
Maybe what I'm saying is that in a 1:1 conversation, it's pretty clear who the content providers are. It wouldn't be much of a conversation otherwise. Which roughly aligns with the article.
From the linked article:
"The source of this conundrum is the text of Section 230, the 1996 law that scholars such as Jeff Kosseff cite as the basis for the rise of the internet. The law enables platforms to host speech from users without facing legal liability for it."
Arrgh. NO!
Platforms could host speech from users without facing legal liability for it even prior to Section 230. What they couldn't do and avoid liability was to pick and choose which of that speech they'd host. They had to act as common carriers, like the phone company does.
It's Section 230 of the Communications Decency Act, after all: The purpose was to allow a certain amount of moderation without incurring liability in the process. Without it you could still have platforms, but they had to refrain from moderation except as legally compelled.
Interesting theory, but what's your authority for the claim that, under pre-sec. 230 law, a platform that didn't pick and choose what to host would have common-carrier-style total immunity? True, if Congress had given the platforms common carrier status, the way the law had given phone companies common carrier status, then they would be immune -- but common carrier status had to be conferred by the legislature or judges, I think; simply deciding not to pick and choose what to host didn't create it.
https://en.wikipedia.org/wiki/Usenet
Usenet dates back to the early 80s. It's never been subject to liability due to being totally unmoderated.
You can still easily access it via Google. Take a look if you want to see a portent of a world without Section 230.
https://groups.google.com/g/alt.politics.libertarian
Section 230: the rule before the Supreme Court that made the modern internet, explained
"WHERE DID SECTION 230 COME FROM?
The measure’s history dates to the 1950s, when bookstore owners were being held liable for selling books containing “obscenity,” which is not protected by the First Amendment. One case eventually made it to the Supreme Court, which found that it created a “chilling effect” to hold someone liable for someone else’s content.
That meant that anyone suing had to prove that bookstore owners knew they were selling obscene books, according to Jeff Kosseff, the author of "The Twenty-Six Words That Created the Internet," a book about Section 230.
Fast-forward a few decades to when the commercial internet was taking off with services like CompuServe and Prodigy. Both offered online forums, but CompuServe chose not to moderate its forums, while Prodigy, seeking a family-friendly image, did.
CompuServe was sued over that, and the case was dismissed. But Prodigy got in trouble. The judge in its case ruled that “they exercised editorial control — so you’re more like a newspaper than a newsstand,” Kosseff said.
That didn’t sit well with politicians, who worried that outcome would discourage newly forming internet companies from moderating at all. And thus Section 230 was born."
Everything You’ve Heard About Section 230 Is Wrong
"In the early days of the internet, it wasn’t clear how judges would apply the republication rule to online platforms. The first case to test the waters was Cubby v. CompuServe, decided in 1991 in a federal district court. CompuServe was one of the first major US internet service providers, and it hosted a number of news forums. A company called Cubby Inc. complained that someone had posted lies about it on one of those forums. It wanted to hold CompuServe liable under the republication rule, on the theory that hosting a forum was analogous to publishing a newspaper. But the judge disagreed. CompuServe, he observed, didn’t exercise any editorial control over the forum. It was basically a passive host, more like a distributor than a publisher. “CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so,” he wrote in his opinion.
The Cubby decision was a relief to the nascent internet industry. But if CompuServe avoided liability because it didn’t moderate its forums, did that mean a provider would be held liable if it did moderate its platform?
Four years later, a state judge on Long Island answered that question in the affirmative. This time the defendant was Prodigy, another giant online service provider in the early internet era. An anonymous user on Prodigy’s Money Talk bulletin board had posted that the leaders of an investment banking firm called Stratton Oakmont were a bunch of liars and crooks. Stratton Oakmont sued for $200 million, arguing that Prodigy should be treated as a publisher.
Unlike CompuServe, Prodigy proudly advertised its ability to screen content to preserve a family-friendly environment. Judge Stuart Ain held that fact against the company. He seized on comments in which Prodigy’s head of communications compared the company’s moderation policies to the editorial decisions made by a newspaper. “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice,” he wrote in his opinion. The company could be held liable as a publisher.
It was just one case, in one New York state trial court, but it put the fear of God into the tech industry. Ain’s logic set up the ultimate perverse incentive: The more a platform tried to protect its users from things like harassment or obscenity, the greater its risk of losing a lawsuit became. This situation, sometimes referred to as the moderator’s dilemma, threatened to turn the growing internet into either an ugly free-for-all or a zone of utter blandness. Do nothing and filth will overrun your platform; do something and you could be sued for anything you didn’t block.
To counteract Ain’s decision, a pair of congressmen, Republican Chris Cox and Democrat Ron Wyden, teamed up to find a legislative solution to the moderator’s dilemma. At the time, Congress was working on something called the Communications Decency Act, a censorious law that would criminalize spreading “indecent” material online. Cox and Wyden came up with language that was inserted into the bill, and that became Section 230 of the act. Much of the rest of the decency law would be struck down almost immediately by the Supreme Court on constitutional grounds, but Section 230 survived."
Perhaps "common carrier" isn't the right term, but without Section 230, platforms were NOT liable for user posted content if they refrained from moderation, picking and choosing what got to stay up. Section 230 was enacted in order to let them get away with moderation without thereby assuming liability for third party content, which liablity they DIDN'T have if they didn't moderate.
Brett: I agree that Cubby v. CompuServe may well capture the pre-230 legal rule, but it treated platforms as distributors, not as common carriers:
"Given the relevant First Amendment considerations, the appropriate standard of liability to be applied to CompuServe is whether it knew or had reason to know of the allegedly defamatory Rumorville statements."
That’s the distributor test, under which even the non-picking-and-choosing CompuServe could have been liable if it had been alerted to the statements — it’s not the common carrier test, under which a telephone company, for instance, isn’t liable even when it has been alerted to the statements. See Anderson v. N.Y. Telephone Co. (N.Y. 1974).
Here's an interesting article from 1995 (post-Cubby, pre-230) about these issues with respect to Usenet.
https://scholarship.law.ufl.edu/cgi/viewcontent.cgi?article=2294&context=flr
Sample:
Right, as I've conceded, "common carrier" was probably not the right term. But it's still the case that, absent 230, platforms largely weren't liable if they refrained from moderation.
Given the language from Cubby, and the general rules having to do with distributors, they would probably have been liable once they had been alerted to allegedly tortious material, since then they would "know" about it (or perhaps to the fact that some site had routinely posted allegedly tortious material, which might have been seen as giving them "reason to know" about it). That means that a platform would be under serious pressure to take down any post that someone alleged to be libelous (or otherwise tortious), for fear that if the case came to court the jury would side with the plaintiff.
Fair enough, "largely" and "wholly" are worlds apart.
I personally think we'd be better off if we DID have internet platforms that were treated as common carriers. It's dangerous having so much of our public discourse routed through a small number of platforms which are legally allowed to censor content.