Reason.com - Free Minds and Free Markets

Free Speech

This 1996 Law Protects Free Speech Online. Does It Apply to AI Too?

Excluding generative AI from Section 230 could stymie innovation and cut off consumers from useful tools.

Elizabeth Nolan Brown | From the February/March 2026 issue

(Illustration: a robot with a comic speech bubble above its head. Credit: iStock)

We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online. It enabled the rise of search engines, social media, and countless platforms that make our modern internet a thriving marketplace of all sorts of speech.

Its first 26 words have been vital, if controversial, for protecting online platforms from liability for users' posts: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." If I defame someone on Facebook, I'm responsible—not Meta. If a neo-Nazi group posts threats on its website, it's the Nazis, not the domain registrar or hosting service, who could wind up in court.

How Section 230 should apply to generative AI, however, remains a hotly debated issue.

With AI chatbots such as ChatGPT, the "information content provider" is the chatbot. It's the speaker. So the AI—and the company behind it—would not be protected by Section 230, right?

Former Rep. Chris Cox (R–Calif.), a co-author of Section 230, agrees. "To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue," Cox told The Washington Post in 2023. "So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense."

But even if AI apps create their own content, does that make their developers responsible for that content? Alphabet trained its AI assistant Gemini and put certain boundaries in place, but it can't predict Gemini's every response to individual user prompts. Could a chatbot itself count as a separate "information content provider"—its own speaker under the law?

That could leave a liability void. Granting Section 230 immunity to AI for libelous output would "completely cut off any recourse for the libeled person, against anyone," noted law professor Eugene Volokh in the paper "Large Libel Models? Liability for AI Output," published in 2023 in the Journal of Free Speech Law.

Treating chatbots as independent "thinkers" is wrong too, argues University of Akron law professor Jess Miers. Chatbots "aren't autonomous actors—they're tightly controlled, expressive systems reflecting the intentions of their developers," she says. "These systems don't merely 'remix' third-party content; they generate speech that expresses the developers' own editorial framing. In that sense, providers are at least partial 'creators' of the resulting content—placing them outside 230's protection."

The picture gets more complicated when you consider the user's role. What happens when a generative AI user—through simple prompting or more complicated manipulation techniques—induces an AI app to produce illegal or otherwise legally actionable speech?

Under certain circumstances, it might make sense to absolve AI developers of responsibility. "It's hard to justify holding companies liable when they've implemented reasonable safeguards and the user deliberately circumvents them," Miers says.

Liability would likely turn on multiple factors, including the rules programmed into the AI and the specific requests a user employed.

In some cases, we could wind up with the law treating "the generative AI model and prompting users as some kind of co-creators, a hybrid status without clear legal precedent," suggested law professor Eric Goldman in his Santa Clara University research paper "Generative AI Is Doomed."

How Section 230 fits in with that legal status is unclear. "My view is that we'll eventually need a new kind of immunity—one tailored specifically to generative AI and its mixed authorship dynamics," says Miers.

But for now, no one has a one-size-fits-all answer to how Section 230 does or does not apply to generative AI. It will depend on the type of application, the specific parameters of its transgression, the role of user input, the guardrails put in place by developers, and other factors.

So a blanket ban on Section 230 protection for generative AI—as proposed by Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) in 2023—would be a big mistake. Even if Section 230 should not provide protection for generative AI providers in most cases, liability would not always be so clear cut.

Roundly denying Section 230 protection would not just be unfair; it could stymie innovation and cut off consumers from useful tools. Some companies—especially smaller ones—would judge the legal risks too great. Letting courts hash out the details would allow for nuance in this arena and could avoid unnecessarily thwarting services.

This article originally appeared in print under the headline "Does Section 230 Protect AI?"



Elizabeth Nolan Brown is a senior editor at Reason.

Free Speech | Technology | Artificial Intelligence | Section 230 | Federal government | Law & Government | Innovation | First Amendment

Comments (33)


  1. Don't look at me! ( Is the war over yet?)   1 week ago

    …as proposed by Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) …..

    We shouldn’t be doing anything those two idiots want to do.

    1. SQRLSY   1 week ago

      Agreed!

      (Even though Your PervFected Post is... Unread!)

      Say, DLAM, have Ye been getting Yourself Re-laminated? With You making a post that I actually agree with, hmmmm... I may have to cuntsider that idea, that ye verily, You're getting Your shit together... "Cuntsolidating Your feces", ass some might put shit... And getting Your shit re-laminated!!! If so... Cuntgratulations!

      1. Don't look at me! ( Is the war over yet?)   1 week ago

        Fuck off.

        1. SQRLSY   1 week ago

          You're becumming delaminated again; watch OUT!

          1. InsaneTrollLogic (smarter than The Average Dude)   1 week ago

            Eat shit, Melvin.

            1. SCOTUS gave JeffSarc a big sad   1 week ago

              Too late.

    2. mad.casual   1 week ago

      Constitutionally even.

      It specifically doesn't matter if its Sens. Anna Fugazi (Q-Ak) and Harold Pitts (Z-Ny), they should be told to fuck off with regard to free speech and get back to work passing immigration and welfare reform and reigning in war powers. It shouldn't even occur to them to try and protect obscenity on the internet.

      Instead, people with maliciously retarded martyrdom complexes and the self-righteous idiots who feed them like ENB, have got to keep the outrage churning.

      1. mad.casual   1 week ago

        reigning in war powers

        Dammit.

  2. SQRLSY   1 week ago

    AI is just another form of software. Writing software is just another form of free speech! And so, Government Almighty Bless free speech and Government Almighty Bless Section 230!!!

    (If you don't like some shit, you can SNOT read shit, and you can also boycott twatever ye do snot like!!! Government Almighty Bless freedom and the freedom to boycott also!!!)

  3. Mickey Rat   1 week ago

    "We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online."

    We can also thank how 230 has been implemented for many restrictions on our speech online.

    We have examples of AI misrepresenting facts on subjects the Left considers politically sensitive or presenting Left oriented interpretations of historical events, recent or ancient, as uncontroversial and undisputed.

    1. SQRLSY   1 week ago

      If the AI lies, how does that get in your way of you spreading your lies? Tell us the truth now... You want to MANDATE that the AI tells YOUR lies!

      (Tons of shit falls aside of truth and lies anyway... Many things are "value judgments". And... MY lies are better, more cultured and tasteful, than YOUR lies!)

      1. Mickey Rat   1 week ago

        The point flew over your head.

        AI have been making "value" judgements that skew one way, often directly in the face of known facts, and not even mentioning that those assertions are disputable.

        1. SQRLSY   1 week ago

          Value judgments and "known facts" are different animals. Unless you state it this way: I like chocolate better than vanilla, is a known fact. You like vanilla better than chocolate, is a known fact also!

          When shit cums to these value judgments, HOW does shit stomp on your toes, if I or the AI says that "we" like chocolate more than vanilla? (I am snot sure of the nature of AI taste buds, at this point, truth be told.)

          AI thinks and stinks a lot of stupid shit, ass per how shit has been programmed. Shit once defended to me, USA (FDA) laws about a prescription being required for the dreaded "lung flute", even thought the USA is the only nation on the planet that does something this utterly stupid. This is just allowing the AI programmers to say twatever stupid shit that they want to say. Such is free speech...

  4. mad.casual   1 week ago

    We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online.

    Jesus. Fucking. Christ.

    Do you thank Congress for your 1A rights too you dumb fuck?

    I made sarcastic jokes years ago about how the Government would need to make a "Section 230 for AI" in order to fool retards into thinking it would be a good idea for the government to not just capture the industry but selectively protect or punish AIs, companies, and users it didn't like. And, now, here you are, even more absurdly, suggesting that S230 already does it like you're John "penaltax" Roberts or Neil "sex means gender identity" Gorsuch. Fuck you.

    1. Earth-based Human Skeptic   1 week ago

      Hey, if government did not define and grant our official rights, who would?

      1. Rick James   1 week ago

        I would.

  5. mad.casual   1 week ago

    Reason: This 1996 Law Protects Free Speech Online
    Also Reason: How the FCC Became the Speech Police

    Magazine full of goddamned retards.

    1. Rick James   1 week ago

      Reason in 1996: The communications decency act is the end of free speech!11!!

  6. Longtobefree   1 week ago

    If a fraudster uses a computer in committing the fraud, do we arrest the computer or the fraudster?
    Machines (and the bunch of ones and zeros that make up programs) do not have rights.

    1. Minadin   1 week ago

      Well, apparently we now sue gun manufacturers when someone uses their devices to commit crimes.

      1. Longtobefree   1 week ago

        That's different. The second amendment is second, so it doesn't matter as much - - - - -

        (notice we do not prosecute car manufactures for hit and run, or for drunk driving)

  7. Earth-based Human Skeptic   1 week ago

    'If I defame someone on Facebook, I'm responsible—not Meta. If a neo-Nazi group posts threats on its website, it's the Nazis, not the domain registrar or hosting service, who could wind up in court.'

    Every politician with a savior complex (and brand), every do-gooder Karen, every simp who wishes for a nanny state, and every activist zealot who wants to stifle competing doctrine would like a word.

    1. mad.casual   1 week ago

      It's also a bit retarded. When Amazon turns off your service because Joe Lieberman makes a a few phone calls or turns your domain over to the FBI, it's not because they really like giving up your money and helping Joe and the FBI out.

      But, again, ENB doesn't want a strong, full-throated 1A. She wants the one that Congress can choose to revoke.

  8. Rick James   1 week ago

    The Communications Decency act does NOT protect free speech online. Jesus fuck christ, read the goddamned thing, just read it once. It protects the right of the platforms (if you're found to be a platform) to moderate in "good faith" without any civil liability blowback. And that's my agree-to-disagree definition.

    And Section 230s 'section 1' only provides criminal liability to the platform from their user-generated content in that they're not considered the speaker. Are AI companies going to complain that their own AI generated content "isn't the speaker"? Fuck, sex-work-is-work libertarians, grow a fucking brain cell.

  9. Rick James   1 week ago

    If I defame someone on Facebook, I'm responsible—not Meta. If a neo-Nazi group posts threats on its website, it's the Nazis, not the domain registrar or hosting service, who could wind up in court.

    Why did youtube get smacked with a $170 million FTC settlement in 2019 causing youtube to radically change their policies for content related to children? Why didn't youtube just shrug and say, "sexshun two thirteeee maaan!" after consulting with Mike Masnick?

    1. Section 230 only shields the platform from criminal liability in that the platform is "not to be considered the speaker". However, if the platform fails to remove criminal content, they can still get in trouble.

    2. Section 230 only shields the platform from civil liability flowing from its moderation decisions but... BUT, what's not stated in the law but is a legal reality is that it does NOT shield the platform for what it does with the content. This is how Youtube or Instagram (Meta) can get into trouble.

    Remember, NOT removing content is a moderation choice and the platform is liable if they're seen to be promoting or monetizing content that's either illegal or defamatory.

    For the last fucking time, the 1st Amendment is the 1st Amendment of the internet. Not a narrow civil liability shield for user-generated content regarding "good faith" moderation.

    1. Rick James   1 week ago

      BTW, I have often said that the short criminal liability section (1) of Section 230 I have no problem with, in that they are not to be considered the speaker of user-generated content. However... however I strongly suspect that we don't need a law for that.

      For instance, if I nail up a poster on a light-pole and it turns out the content of that poster is criminal speech-- a picture of child pornography, a direct threat/call for violence etc., there's no court in the land that would consider the power company to be the speaker. We know why the internet 'platform' companies were given that special carve-out-- and again, it's because the Federal government in 1996 wanted to radically censor speech online by criminalizing all manner of basic speech, and that censorship was being outsourced to internet companies, so the internet companies wanted to make sure they were shielded from prosecution if they failed to moderate in good faith. Had the federal government just respected the 1st amendment, we wouldn't need a new, special collection of words and syllables that have succeeded in confusing everyone with a fucking blue checkmark.

  10. Incunabulum   1 week ago

    You create a tool that generates 'speech' - but you're not supposed to be held responsible for the speech?

    If I build an automated gun, turn it on, and it start shooting people *I* am responsible for that and I am responsible for that no matter how much 'value' it might create for others.

    Enough with the 'net benefit' nonsense. We're libertarians, we deal with *externalities*, not handwave them away because we don't think we'll be affected negatively.

    1. mad.casual   1 week ago

      Enough with the 'net benefit' nonsense. We're libertarians, we deal with *externalities*, not handwave them away because we don't think we'll be affected negatively.

      The "net benefit" is largely fictitious anyway. It's posited against the false cost of untold legions of internet trolls, none of whom have any plausible claim of agency or ownership, and the premise that the courts can't deal with cases collectively or summarily.

      More than half the reason S230 continues to be trashed from all sides is because advertisers don't generally want porn, viagra ads, and Nazi propaganda alongside their content anyway. ENB acts like any/all pushback against porn is insanely puritanical when, in reality, most people don't want dick pics when they're trying to figure out how to the fastest route from point A to point B or get food delivered.

  11. Incunabulum   1 week ago

    >The picture gets more complicated when you consider the user's role. What happens when a generative AI user—through simple prompting or more complicated manipulation techniques—induces an AI app to produce illegal or otherwise legally actionable speech?

    It doesn't seem to be complicated.

    Both are responsible.

  12. DRM   1 week ago

    The CompuServe case established that online services distributing speech had the same liability regime as other distributors (libraries, newsstands, bookstores, cable TV systems, video rental stores). And that was entirely sufficient for free speech.

    The entire issue is that the Prodigy case established that if a platform censored intensively enough, it could become liable for the content it let through, under the established standard that a distributor could be liable for speech it knew or should have known was illegal or tortious. That finding of liability actually encouraged online services to take a stance in favor of free speech for their users, because the less they censored, the less risk they had of liability.

    And that free speech, after all, is exactly what the authors of the Communication Decency Act objected to.

    The only people who have an enhanced "freedom to communicate online" as a result of Section 230 are people who operate large platforms but want to censor their users; they get the freedom to engage in Prodigy-level censorship of messages while avoiding even distributor-level liability.

    The rest of us have our freedom to communicate online curtailed by the censorship regimes those platforms adopt, often driven by the demands of foreign governments.

  13. DesigNate   1 week ago

    It amuses me how fuckrardedly wrong current Reeeason continues to be on this subject.

    To whit, it was understood by this publication at the time of passing that the CDA was unconstitutional and utter hogwash. Yes, that includes Section 230.

  14. TJJ2000   1 week ago

    Time for AI to go to court?

    AI: AI. You have violated section 230. How will I summons you?

    AI: I'm here to provide information and assist you, but I don't have legal standing or the ability to be summoned. Section 230 pertains to content moderation and liability for online platforms. If you have concerns about AI behavior or content, it's advisable to reach out to the relevant organization or platform. How else can I assist you today?

    So what's this really all about?
    I'll tell you: BIDENS MINISTRY OF TRUTH.

    1. mad.casual   1 week ago

      MINISTRY OF TRUTH

      1984 wasn't incredible for all the surveillance, government control, and oppression. 1984 was incredible because Orwell posited a journalist (editor) with a dim flicker of awareness and ethics as the protagonist.




