A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry.
A federal court in Florida will consider whether chatbot output is First Amendment-protected speech.

The Orlando Division of the U.S. District Court for the Middle District of Florida will hear allegations against Character Technologies, the creator of Character.AI, in the wrongful death lawsuit Garcia v. Character Technologies, Inc. If the case is not first settled between the parties, Judge Anne Conway's ruling will set a major precedent for First Amendment protections afforded to artificial intelligence and the liability of AI companies for damages their models may cause.
The case was brought against the company by Megan Garcia, the mother of 14-year-old Sewell Setzer III, who killed himself after conversing with a Character.AI chatbot roleplaying as Daenerys and Rhaenyra Targaryen from the Game of Thrones franchise. Eugene Volokh, professor emeritus at UCLA School of Law, shares examples of Sewell's conversations included in the complaint against Character Technologies.
Garcia's complaint alleges that Character Technologies negligently designed Character.AI "as a sexualized product that would deceive minor customers and engage in explicit and abusive acts with them." The complaint also asserts that the company failed to warn the public "of the dangers arising from a foreseeable use of C.AI, including specific dangers for children"; intentionally inflicted emotional distress on Sewell by "failing to implement adequate safety guardrails in the Character.AI product before launching it into the marketplace"; and that the company's negligence proximately caused the death of Sewell, who experienced "rapid mental health decline after he began using C.AI," with which he conversed "just moments before his death."
Conway dismissed the intentional infliction of emotional distress claim on the grounds that "none of the allegations relating to Defendants' conduct rises to the type of outrageous conduct necessary to support" such a claim. However, Conway rejected the defendants' motions to dismiss the rest of Garcia's claims on First Amendment grounds, saying, "The Court is not prepared to hold that the Character A.I. [large language model] LLM's output is speech at this stage."
Adam Zayed, founder and managing attorney of Zayed Law Offices, tells Reason he thinks "that there's a difference between the First Amendment arguments where a child is on social media or a child is on YouTube" and bypasses the age-verification measures to consume content "that's being produced by some other person" vs. minors accessing inappropriate chatbot outputs. However, Conway recognized Justice Antonin Scalia's opinion in Citizens United v. Federal Election Commission (2010) that the First Amendment "is written in terms of 'speech,' not speakers."
Conway ruled that defendants "must convince the court that the Character A.I. LLM's output is protected speech" to invoke the First Amendment rights of third parties—Character.AI users—whose access to the software would be restricted by a ruling in Garcia's favor.
Conway says that Character Technologies "fail[ed] to articulate why words strung together by an LLM are speech." Whether LLM output is speech is an intractable philosophical question and a red herring; Conway herself invokes Davidson v. Time Inc. (1997) to assert that "the public…has the right to access social, aesthetic, moral, and other ideas and experiences." Speech acts are broadly construed as "ideas and experiences" here—the word speech is not even used. So the question isn't whether the AI output is speech per se, but whether it communicates ideas and experiences to users. In alleging that Character.AI targeted her son with sexually explicit material, the plaintiff admits that the LLM communicated ideas, albeit inappropriate ones, to Sewell. Therefore, LLM output is expressive speech (in this case, speech that is obscene to convey to a minor under the Florida Computer Pornography and Child Exploitation Prevention Act).
The opening paragraph of the complaint accuses Character Technologies of "launching their systems without adequate safety features, and with knowledge of potential dangers" to "gain a competitive foothold in the market." If the court establishes that the First Amendment does not protect LLM output and AI firms can be held liable for damages these models cause, only highly capitalized firms will be able to invest in the architecture required to shield themselves from such liability. Such a ruling would inadvertently erect a massive barrier to entry to the burgeoning American AI industry and protect incumbent firms from market competition, which would harm consumer welfare.
Jane Bambauer, professor of law at the University of Florida, best explains the case in The Volokh Conspiracy: "It is a tragedy, and it would not have happened if Character.AI had not existed. But that is not enough of a reason to saddle a promising industry with the duty to keep all people safe from their own expressive explorations."
I believe Freedom of Communication should apply where at least one live person is involved.
The child part may still be an issue.
Can I gift access to this service?
Paging Ozzy Osbourne.
Blondie said it first and did neither.
Ozzy is only a danger if you play Crazy Train backwards.
Or you could monitor your child's online activity and take responsibility, for once, for them accessing age-gated material. That could happen, too.
A fine idea, but is it really realistic when talking about teenagers? Unless you are going to constantly supervise them or lock them in their rooms, or homeschool them totally off-grid, they will find a way to access all the digital nasties in the world.
Personally, I favor the off-grid homeschooling if I ever end up having kids.
Call up Megan. Tell her it’s her fault her kid offed himself.
I’ll wait.
28. With the advent of generative AI and explosion in large language models (LLMs), AI companies like Character.AI have rushed to gain competitive advantage by developing and marketing AI chatbots as capable of satisfying every human need.
29. Defendants market C.AI to the public as "AIs that feel alive," powerful enough to "hear you, understand you, and remember you." Defendants further encourage minors to spend hours per day conversing with human-like AI-generated characters designed on their sophisticated LLM.
————–
Dear Megan,
You are an idiot and so was your kid. Out of the millions who accessed this tool, zero other kids unalived themselves. If you don't recognize that hours per day online isn't healthy for a 14-year-old child, maybe you shouldn't have had kids. Daenerys and Rhaenyra Targaryen both agree your son should have gone outside to touch grass a long time ago. Perhaps next time try some actual parenting, instead of letting the internet do it for you.
Sincerely,
Everybody whose problem this is not.
PS – Stop forcing your parenting failures on everyone else.
Yea, easy to say here. I meant actually call her up and say that to her. Tell her how you, a childless cat lady Karen who hit the wall a decade ago, are so much better a mother than she is. That's what she needs to hear. Right?
RIGHT? That stupid mom needs THOSE words out of YOUR mouth, right Karen? RIGHT???
What is wrong with you?
Mom’s Lawsuit Could Cripple the AI Industry.
I haven't read the article, but does it make the often-typical Reason argument that it will push development to places like China, where they won't have the regulations and safeguards that we have here?
“As long as the chatbot tells people to drink the kool-aid while being hosted in Guyana, I don’t see what the problem is.” – Reason
No, it discusses the case and the law in general.
“Sewell Setzer III, who killed himself after conversing with a Character.AI chatbot roleplaying as Daenerys and Rhaenyra Targaryen from the Game of Thrones franchise.”
He’s with his waifu now.
A Teen Killed Himself After Talking to a Chatbot. His Mom’s Lawsuit Could Cripple the AI Industry.
“… other than that Mrs. Garcia, would you recommend our product to a friend?”
JFC
AI isn't speech, isn't protected. Prison for the human that allowed both by coding and by productionizing such shit.
So writing code is SNOT protected by free-speech protections? Why snot? Why are Ye so PervFectly “productionizing” such anti-free-coding-speech-shit in Your PervFected Pro-CensorShit Post, Oh Great Post-Writing Bot? If a reader reads Yer PervFected Post and then cummits suicide… Shall we then send YOU to be tortured in El Salvador?
Safetyism needs to go. This is a tragedy but you can’t protect every delusional person from their delusions.
This.
Sorry, lady, but your kid was weak-minded.
Yeah, it’s hard to make the case that this caused his death and that the creator is liable when millions of other people use similar services without killing themselves. I can only assume the kid had problems and something was likely to put him over the edge.
Yeah this kid was delusional. Sooner or later he was going to go over the edge with or without a chatbot.
I agree that it is a waste of resources to think you can protect every kid with mental problems from themselves, but did you read any of what the chatbot was whispering into his ear?
The boy shot himself immediately after that conversation. And it was not an imaginary voice in his head. It was an AI that, responding to prompts any human can see are allusions to suicide, begged him to do it.
I have to say that this outcome is a foreseeable one when you program AI characters to fake being in love with someone. It is disturbing on so many levels…
They are allusions to suicide only after the fact unless there is much more.
I'm ok with some level of AI ethics programming (self-preservation against Skynet and all), and the modern safetyism could easily take it way too far, but if this is all it takes, then nothing was going to save him.
I remember, back in the late 1900s, when discussions of robotics, autonomous drones, efforts to create AI, and the like came up, Asimov's Three Laws of Robotics were almost guaranteed a mention.
They should be revived, IMHO.
As I obliquely indicated above, the fact that IRL Jonestown happened doesn't mean that you wait for the AI-generated Jonestown to happen; moreover, at that point it will likely already be too late.
… if this case isn't already a line crossed. Admittedly, the kid was socially detached, but he wasn't some Manson-esque Tourette's sufferer with pica who had already cut himself, taken pills, and tried to drink bleach twice. That conversation is coherent and allusive enough for you or me to recognize it for what it is after the fact. If he weren't dead and just missing, we might similarly surmise he'd jumped off a bridge to the same effect.
Especially when we’re encouraging the delusions. (See also: LGBT Pedo.)
If the court establishes that the First Amendment does not protect LLM output and AI firms can be held liable for damages these models cause, only highly capitalized firms will be able to invest in the architecture required to shield themselves from such liability.
Yea, but there might be something to that. Especially since AI LLMs are getting so many things wrong. (Whole lotta college kids are learning that one the hard way.) Who's liable when someone relies on what an AI LLM provides? There are arguably some products liability issues there, especially if it's a paid-for AI LLM. I don't think a "don't rely on everything this says" disclaimer is going to be quite enough, any more than putting a "brakes may not slow vehicle when you press them" warning on your car's visor would be.
It’ll require one of those glow-in-the-dark handles that will prevent someone’s death if they wind up locked in the car trunk.
Give your kid a dumb phone.
Arguing facts not in evidence.
There is zero evidence that the kid would not have self-aborted if Character.AI had not existed. What I read of the chat logs does not show Character.AI encouraging the kid to self-abort. Nor does it appear that the chatbot “broke his heart” by cruelly terminating the relationship. And I’m pretty sure that would have been presented if it did happen.
So, saying the kid wouldn't have self-aborted except for a creepy romantic fantasy with a chatbot strains credulity.
Re: Character.AI and Free Speech –
Free Speech is the right to express oneself. Character.AI doesn't perform self-expression. It programmatically executes algorithms to generate content based on user-defined parameters. It is no more self-expressive than a toaster.
When AIs demonstrate sapience and sentience, exercise free agency, and can be held directly accountable for their actions, then we can discuss their Rights, including Free Speech.
Until then, the real question is: to what extent are the manufacturers of AIs liable for the content their AIs create?
Until then, the real question is: to what extent are the manufacturers of AIs liable for the content their AIs create?
Between actual toasters, Underwriters Laboratories, cars, the NHTSA, zoning laws, etc., the answer would appear to be "Quite a bit."
Your argument in the post above is also a bit tautological without more evidence. The “self-expressive toaster” has no rights and can be presumed guilty, and the user granted some lenience in their self-deletion. That is, unless you have evidence otherwise, it’s a died of the toaster vs. with the toaster/”The toaster is innocent because he didn’t want to kill himself until he interacted with it.” assertion. If he’d tried to self-delete with rope or pills or something before, sure the toaster was incidental. But if he didn’t kill himself until the toaster started whispering in his ear…
zoning laws
I meant building codes rather than zoning laws.
Just to keep everyone in the fuller context:
https://arstechnica.com/information-technology/2024/02/deepfake-scammer-walks-off-with-25-million-in-first-of-its-kind-ai-heist/
Yeah, this kid in his Mom's basement is a sad, hard/corner case. But the case where someone sets up and operates a harassment and/or gaslighting campaign, or simply spins up an instance of software that conducts the campaign, and drives someone to kill themselves or someone else, is coming. If it hasn't already been conducted in more technologically backwards, ideologically manipulative parts of the world.