Artificial Intelligence

A.I. Needs Section 230 To Flourish

A new bill from Sens. Josh Hawley and Richard Blumenthal would stifle the promise of artificial intelligence.


Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) want to strangle generative artificial intelligence (A.I.) infants like ChatGPT and Bard in their cribs. How? By stripping them of the protection of Section 230 of the 1996 Communications Decency Act, which reads, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

"Section 230 embodies that principle that we should all be responsible for our own actions and statements online, but generally not those of others," explains the Electronic Frontier Foundation. "The law prevents most civil suits against users or services that are based on what others say." By protecting free speech, Section 230 enables the proliferation and growth of online platforms like Facebook, Google, Twitter, and Yelp and allows them to function as robust open forums for the exchange of information and for debate, both civil and not. Section 230 also protects other online services ranging from dating apps like Tinder and Grindr to service recommendation sites like Tripadvisor and Healthgrades.

Does Section 230 shield newly developed A.I. services like ChatGPT from civil lawsuits in much the same way that it has protected other online services? Jess Miers, legal advocacy counsel at the tech trade group the Chamber of Progress, makes a persuasive case that it does. Over at Techdirt, she notes that ChatGPT qualifies as an interactive computer service and is not a publisher or speaker. "Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user)."

One commenter at Techdirt asked what will happen "when ChatGPT designs buildings that fall down." The apt reply: "The responsibility will be on the idiots who approved and built a faulty building designed by a chatbot." That is roughly the situation of a couple of New York lawyers who recently filed a legal brief compiled by ChatGPT in which the language model "hallucinated" numerous nonexistent precedent cases. The presiding judge is, rightly, holding them responsible and weighing what sanctions they deserve. (Their client might also be interested in pursuing a lawsuit for legal malpractice.)

Evidently, Hawley and Blumenthal agree with Miers' analysis and recognize that Section 230 currently shields the new A.I. services from civil lawsuits. Otherwise, why would the two senators bother introducing a bill that would explicitly amend Section 230 by adding a clause that "strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI"?

"Today, there are tons of variations of ChatGPT-like products offered by independent developers and computer scientists who are likely unequipped to deal with an inundation of litigation that Section 230 typically preempts," concludes Miers. It would be a grave mistake for Congress to strip the nascent A.I. industry of Section 230 protections. Says Miers, "We risk foreclosing on the technology's true potential."