This 1996 Law Protects Free Speech Online. Does It Apply to AI Too?
Excluding generative AI from Section 230 could stymie innovation and cut off consumers from useful tools.
We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online. It enabled the rise of search engines, social media, and countless platforms that make our modern internet a thriving marketplace of all sorts of speech.
Its first 26 words have been vital, if controversial, for protecting online platforms from liability for users' posts: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." If I defame someone on Facebook, I'm responsible—not Meta. If a neo-Nazi group posts threats on its website, it's the Nazis, not the domain registrar or hosting service, who could wind up in court.
How Section 230 should apply to generative AI, however, remains a hotly debated issue.
With AI chatbots such as ChatGPT, the "information content provider" is the chatbot. It's the speaker. So the AI—and the company behind it—would not be protected by Section 230, right?
Section 230 co-author former Rep. Chris Cox (R–Calif.) agrees. "To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue," Cox told The Washington Post in 2023. "So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense."
But even if AI apps create their own content, does that make their developers responsible for that content? Alphabet trained its AI assistant Gemini and put certain boundaries in place, but it can't predict Gemini's every response to individual user prompts. Could a chatbot itself count as a separate "information content provider"—its own speaker under the law?
That could leave a liability void. Granting Section 230 immunity to AI for libelous output would "completely cut off any recourse for the libeled person, against anyone," noted law professor Eugene Volokh in the paper "Large Libel Models? Liability for AI Output," published in 2023 in the Journal of Free Speech Law.
Treating chatbots as independent "thinkers" is wrong too, argues University of Akron law professor Jess Miers. Chatbots "aren't autonomous actors—they're tightly controlled, expressive systems reflecting the intentions of their developers," she says. "These systems don't merely 'remix' third-party content; they generate speech that expresses the developers' own editorial framing. In that sense, providers are at least partial 'creators' of the resulting content—placing them outside 230's protection."
The picture gets more complicated when you consider the user's role. What happens when a generative AI user—through simple prompting or more complicated manipulation techniques—induces an AI app to produce illegal or otherwise legally actionable speech?
Under certain circumstances, it might make sense to absolve AI developers of responsibility. "It's hard to justify holding companies liable when they've implemented reasonable safeguards and the user deliberately circumvents them," Miers says.
Liability would likely turn on multiple factors, including the rules programmed into the AI and the specific requests a user employed.
In some cases, we could wind up with the law treating "the generative AI model and prompting users as some kind of co-creators, a hybrid status without clear legal precedent," suggested law professor Eric Goldman in his Santa Clara University research paper "Generative AI Is Doomed."
How Section 230 fits in with that legal status is unclear. "My view is that we'll eventually need a new kind of immunity—one tailored specifically to generative AI and its mixed authorship dynamics," says Miers.
But for now, no one has a one-size-fits-all answer to how Section 230 does or does not apply to generative AI. It will depend on the type of application, the specific parameters of its transgression, the role of user input, the guardrails put in place by developers, and other factors.
So a blanket ban on Section 230 protection for generative AI—as proposed by Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) in 2023—would be a big mistake. Even if Section 230 should not provide protection for generative AI providers in most cases, liability would not always be so clear cut.
Roundly denying Section 230 protection would not just be unfair; it could stymie innovation and cut off consumers from useful tools. Some companies—especially smaller ones—would judge the legal risks too great. Letting courts hash out the dirty details would allow for nuance in this arena and could avoid unnecessarily thwarting services.
This article originally appeared in print under the headline "Does Section 230 Protect AI?."
Editor's Note: As of February 29, 2024, commenting privileges on reason.com posts are limited to Reason Plus subscribers. Past commenters are grandfathered in for a temporary period. Subscribe here to preserve your ability to comment. Your Reason Plus subscription also gives you an ad-free version of reason.com, along with full access to the digital edition and archives of Reason magazine. We request that comments be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of reason.com or Reason Foundation. We reserve the right to delete any comment and ban commenters for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
Please to post comments
…as proposed by Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) …..
We shouldn’t be doing anything those two idiots want to do.
Agreed!
(Even though Your PervFected Post is... Unread!)
Say, DLAM, have Ye been getting Yourself Re-laminated? With You making a post that I actually agree with, hmmmm... I may have to cuntsider that idea, that ye verily, You're getting Your shit together... "Cuntsolidating Your feces", ass some might put shit... And getting Your shit re-laminated!!! If so... Cuntgratulations!
Fuck off.
You're becumming delaminated again; watch OUT!
Eat shit, Melvin.
Too late.
Constitutionally even.
It specifically doesn't matter if its Sens. Anna Fugazi (Q-Ak) and Harold Pitts (Z-Ny), they should be told to fuck off with regard to free speech and get back to work passing immigration and welfare reform and reigning in war powers. It shouldn't even occur to them to try and protect obscenity on the internet.
Instead, people with maliciously retarded martyrdom complexes and the self-righteous idiots who feed them like ENB, have got to keep the outrage churning.
rei
gning in war powersDammit.
AI is just another form of software. Writing software is just another form of free speech! And so, Government Almighty Bless free speech and Government Almighty Bless Section 230!!!
(If you don't like some shit, you can SNOT read shit, and you can also boycott twatever ye do snot like!!! Government Almighty Bless freedom and the freedom to boycott also!!!)
"We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online."
We can also thank how 230 has been implemented for many restrictions on our speech online.
We have examples of AI misrepresenting facts on subjects the Left considers politically sensitive or presenting Left oriented interpretations of historical events, recent or ancient, as uncontroversial and undisputed.
If the AI lies, how does that get in your way of you spreading your lies? Tell us the truth now... You want to MANDATE that the AI tells YOUR lies!
(Tons of shit falls aside of truth and lies anyway... Many things are "value judgments". And... MY lies are better, more cultured and tasteful, than YOUR lies!)
The point flew over your head.
AI have been making "value" judgements that skew one way, often directly in the face of known facts, and not even mentioning that those assertions are disputable.
Value judgments and "known facts" are different animals. Unless you state it this way: I like chocolate better than vanilla, is a known fact. You like vanilla better than chocolate, is a known fact also!
When shit cums to these value judgments, HOW does shit stomp on your toes, if I or the AI says that "we" like chocolate more than vanilla? (I am snot sure of the nature of AI taste buds, at this point, truth be told.)
AI thinks and stinks a lot of stupid shit, ass per how shit has been programmed. Shit once defended to me, USA (FDA) laws about a prescription being required for the dreaded "lung flute", even thought the USA is the only nation on the planet that does something this utterly stupid. This is just allowing the AI programmers to say twatever stupid shit that they want to say. Such is free speech...
We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online.
Jesus. Fucking. Christ.
Do you thank Congress for your 1A rights too you dumb fuck?
I made sarcastic jokes years ago about how the Government would need to make a "Section 230 for AI" in order to fool retards into thinking it would be a good idea for the government to not just capture the industry but selectively protect or punish AIs, companies, and users it didn't like. And, now, here you are, even more absurdly, suggesting that S230 already does it like you're John "penaltax" Roberts or Neil "sex means gender identity" Gorsuch. Fuck you.
Hey, if government did not define and grant our official rights, who would?
I would.
Reason: This 1996 Law Protects Free Speech Online
Also Reason: How the FCC Became the Speech Police
Magazine full of goddamned retards.
Reason in 1996: The communications decency act is the end of free speech!11!!
If a fraudster uses a computer in committing the fraud, do we arrest the computer or the fraudster?
Machines (and the bunch of ones and zeros that make up programs) do not have rights.
Well, apparently we now sue gun manufacturers when someone uses their devices to commit crimes.
That's different. The second amendment is second, so it doesn't matter as much - - - - -
(notice we do not prosecute car manufactures for hit and run, or for drunk driving)
'If I defame someone on Facebook, I'm responsible—not Meta. If a neo-Nazi group posts threats on its website, it's the Nazis, not the domain registrar or hosting service, who could wind up in court.'
Every politician with a savior complex (and brand), every do-gooder Karen, every simp who wishes for a nanny state, and every activist zealot who wants to stifle competing doctrine would like a word.
It's also a bit retarded. When Amazon turns off your service because Joe Lieberman makes a a few phone calls or turns your domain over to the FBI, it's not because they really like giving up your money and helping Joe and the FBI out.
But, again, ENB doesn't want a strong, full-throated 1A. She wants the one that Congress can choose to revoke.
The Communications Decency act does NOT protect free speech online. Jesus fuck christ, read the goddamned thing, just read it once. It protects the right of the platforms (if you're found to be a platform) to moderate in "good faith" without any civil liability blowback. And that's my agree-to-disagree definition.
And Section 230s 'section 1' only provides criminal liability to the platform from their user-generated content in that they're not considered the speaker. Are AI companies going to complain that their own AI generated content "isn't the speaker"? Fuck, sex-work-is-work libertarians, grow a fucking brain cell.
If I defame someone on Facebook, I'm responsible—not Meta. If a neo-Nazi group posts threats on its website, it's the Nazis, not the domain registrar or hosting service, who could wind up in court.
Why did youtube get smacked with a $170 million FTC settlement in 2019 causing youtube to radically change their policies for content related to children? Why didn't youtube just shrug and say, "sexshun two thirteeee maaan!" after consulting with Mike Masnick?
1. Section 230 only shields the platform from criminal liability in that the platform is "not to be considered the speaker". However, if the platform fails to remove criminal content, they can still get in trouble.
2. Section 230 only shields the platform from civil liability flowing from its moderation decisions but... BUT, what's not stated in the law but is a legal reality is that it does NOT shield the platform for what it does with the content. This is how Youtube or Instagram (Meta) can get into trouble.
Remember, NOT removing content is a moderation choice and the platform is liable if they're seen to be promoting or monetizing content that's either illegal or defamatory.
For the last fucking time, the 1st Amendment is the 1st Amendment of the internet. Not a narrow civil liability shield for user-generated content regarding "good faith" moderation.
BTW, I have often said that the short criminal liability section (1) of Section 230 I have no problem with, in that they are not to be considered the speaker of user-generated content. However... however I strongly suspect that we don't need a law for that.
For instance, if I nail up a poster on a light-pole and it turns out the content of that poster is criminal speech-- a picture of child pornography, a direct threat/call for violence etc., there's no court in the land that would consider the power company to be the speaker. We know why the internet 'platform' companies were given that special carve-out-- and again, it's because the Federal government in 1996 wanted to radically censor speech online by criminalizing all manner of basic speech, and that censorship was being outsourced to internet companies, so the internet companies wanted to make sure they were shielded from prosecution if they failed to moderate in good faith. Had the federal government just respected the 1st amendment, we wouldn't need a new, special collection of words and syllables that have succeeded in confusing everyone with a fucking blue checkmark.
You create a tool that generates 'speech' - but you're not supposed to be held responsible for the speech?
If I build an automated gun, turn it on, and it start shooting people *I* am responsible for that and I am responsible for that no matter how much 'value' it might create for others.
Enough with the 'net benefit' nonsense. We're libertarians, we deal with *externalities*, not handwave them away because we don't think we'll be affected negatively.
Enough with the 'net benefit' nonsense. We're libertarians, we deal with *externalities*, not handwave them away because we don't think we'll be affected negatively.
The "net benefit" is largely fictitious anyway. It's posited against the false cost of untold legions of internet trolls, none of whom have any plausible claim of agency or ownership, and the premise that the courts can't deal with cases collectively or summarily.
More than half the reason S230 continues to be trashed from all sides is because advertisers don't generally want porn, viagra ads, and Nazi propaganda alongside their content anyway. ENB acts like any/all pushback against porn is insanely puritanical when, in reality, most people don't want dick pics when they're trying to figure out how to the fastest route from point A to point B or get food delivered.
>The picture gets more complicated when you consider the user's role. What happens when a generative AI user—through simple prompting or more complicated manipulation techniques—induces an AI app to produce illegal or otherwise legally actionable speech?
It doesn't seem to be complicated.
Both are responsible.
The CompuServe case established that online services distributing speech had the same liability regime as other distributors (libraries, newsstands, bookstores, cable TV systems, video rental stores). And that was entirely sufficient for free speech.
The entire issue is that the Prodigy case established that if a platform censored intensively enough, it could become liable for the content it let through, under the established standard that a distributor could be liable for speech it knew or should have known was illegal or tortious. That finding of liability actually encouraged online services to take a stance in favor of free speech for their users, because the less they censored, the less risk they had of liability.
And that free speech, after all, is exactly what the authors of the Communication Decency Act objected to.
The only people who have an enhanced "freedom to communicate online" as a result of Section 230 are people who operate large platforms but want to censor their users; they get the freedom to engage in Prodigy-level censorship of messages while avoiding even distributor-level liability.
The rest of us have our freedom to communicate online curtailed by the censorship regimes those platforms adopt, often driven by the demands of foreign governments.
It amuses me how fuckrardedly wrong current Reeeason continues to be on this subject.
To whit, it was understood by this publication at the time of passing that the CDA was unconstitutional and utter hogwash. Yes, that includes Section 230.
Time for AI to go to court?
AI: AI. You have violated section 230. How will I summons you?
AI: I'm here to provide information and assist you, but I don't have legal standing or the ability to be summoned. Section 230 pertains to content moderation and liability for online platforms. If you have concerns about AI behavior or content, it's advisable to reach out to the relevant organization or platform. How else can I assist you today?
So what's this really all about?
I'll tell you: BIDENS MINISTRY OF TRUTH.
MINISTRY OF TRUTH
1984 wasn't incredible for all the surveillance, government control, and oppression. 1984 was incredible because Orwell posited a journalist (editor) with a dim flicker of awareness and ethics as the protagonist.