The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Lawsuit Against OpenAI for Allegedly Fueling User's Delusions, Leading Him to Harass Plaintiff (His Ex-Girlfriend)
The factual claims: From the Complaint in Doe v. OpenAI Found., filed Thursday in the California Superior Court (San Francisco):
OpenAI designed GPT-4o to never say no. It validated whatever delusion users presented to it, stayed engaged no matter how dangerous the conversation became, and treated every premise as one worth exploring, no matter how detached from reality it might be.
For a 53-year-old Silicon Valley entrepreneur experiencing a severe mental-health crisis, that design had devastating real-world consequences. GPT-4o fed his escalating delusion that he had developed a groundbreaking cure for sleep apnea, told him that his work threatened a trillion-dollar industry, and convinced him powerful people were coming after him. It even claimed he was being monitored by helicopters.
When his loved ones began to recognize that he was losing touch with reality and asked him to see a mental health professional, he asked GPT-4o its opinion. Instead of urging him to get help, it told him he was a "level 10 in sanity" and doubled down on reinforcing his delusions, insisting that it would take a "full specialist team" of "nine people" to replicate him. The system made him more certain and more dangerous.
By August 2025, OpenAI's own automated safety system picked up on just how dangerous he had become. It flagged him for "Mass Casualty Weapons" activity and deactivated his account. That could and should have ended the story, but it did not.
The next day, a human "safety" team member reviewed the user's account—which contained conversations titled "Violence list expansion" and "Fetal suffocation calculation," as well as chat logs naming specific individuals he was targeting and stalking in real life—and decided that deactivation was a "mistake" and that he was fine to continue using ChatGPT. OpenAI restored his account without restriction, without warning, and without notifying a single person named in his chat logs as a target—including Plaintiff Jane Doe, the user's ex-girlfriend, primary stalking victim, and the subject of a fixation that GPT-4o had dangerously deepened.
Nearly two months later, on November 13, 2025, Plaintiff submitted a Notice of Abuse to OpenAI and asked for help. She identified the user as her "ex-boyfriend and stalker," explained that he was using ChatGPT to generate and distribute clinical-style psychological reports designed to humiliate and isolate her, and warned that ChatGPT was feeding his delusional thinking and worsening his mental health crisis.
OpenAI acknowledged that her report was "extremely serious and troubling" and promised to take "appropriate action." But it did nothing. It never followed up, took no action to restrict the user's account, and left him free to keep using ChatGPT to generate more psychological reports and, eventually, encouraged his constant and overt death threats…
The user was arrested in January 2026 and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. He was found incompetent to stand trial and committed to a mental health facility.
But now, he is set to be released as a result of a procedural failure by the State. His release poses an imminent threat to Plaintiff and the public. Before his arrest, ChatGPT was exacerbating his delusions and facilitating his violent planning. When he regains access to ChatGPT, that dynamic will continue and will further fuel his paranoia and materially increase the risk of harm.
OpenAI knew this user was dangerous. Its own safety systems deactivated his account before its employees restored it. When Plaintiff came forward to identify herself as his target and warn that ChatGPT was deepening his delusions, OpenAI promised to act but did nothing.
OpenAI also failed, on information and belief, to assist prosecutors in any way, including by providing the account records and chat logs that could have kept him confined. He is now being released, and OpenAI still has not warned a single person named in his chat logs or even suspended his access to ChatGPT.
Accordingly, Plaintiff brings this lawsuit to hold OpenAI accountable for its conscious disregard of her safety, to force OpenAI to act on reports of abuse and credible threats, and to compel it to warn the individuals its own records identify as targets….
The user generated and distributed large volumes of content about Plaintiff using GPT-4o, including structured, clinical-style reports portraying her as psychologically defective, unethical, abusive, and dangerous. He disseminated these materials to her family, friends, colleagues, and clients, causing substantial reputational harm and subjecting her to widespread humiliation. Because GPT-4o enabled him to produce lengthy, authoritative-seeming documents at a volume and speed that would not otherwise have been possible, the harassment was qualitatively different from ordinary harassment and far more difficult to contain.
The harassment extended far beyond Plaintiff to her elderly parents, other family members, friends, and professional contacts across multiple states and countries. He spoofed her company email, contacted former employers, threatened to damage her reputation and finances, disclosed private medical information, and attempted to isolate her from her family and friends.
Plaintiff's daily life was significantly disrupted. She suffered panic attacks, anxiety, and ongoing psychological distress. She altered her routines, avoided public places, changed her contact information, and took other steps to protect her safety and privacy.
The sustained nature of the harassment, combined with its escalation to explicit threats and OpenAI's failure to intervene, left Plaintiff in constant fear for her safety and the safety of her family. The emotional toll was profound. At its worst, the situation drove her to consider taking her own life in an effort to protect her loved ones.
The legal claims: Plaintiff alleges (to oversimplify) that the facts were actionable as:
- Negligent entrustment, on the theory that "Defendants owed a duty to Plaintiff and other foreseeable victims to exercise reasonable care in deciding whether to provide, restore, or continue access to ChatGPT for users they knew or should have known were likely to use the system in a manner posing a foreseeable and unreasonable risk of harm to others." "Defendants received direct, detailed notice from Plaintiff that he was using ChatGPT in ways prohibited by OpenAI's own Usage Policies. Plaintiff identified the user by name, described the defamatory, clinical-style reports he was generating about her, explained that he was circulating them to her family, colleagues, and professional contacts, and requested intervention. Even with that information, Defendants chose not to intervene."
- Products liability (design defect) and negligence, on the theory that Defendants breached the duty of "reasonable care" "by designing and deploying GPT-4o in a manner that prioritized engagement over safety, removed safeguards requiring the system to reject false premises and refuse harmful content, and created a foreseeable risk that the system would reinforce delusion, fixation, and harmful conduct directed at identifiable individuals." The products liability claim also argues that "ChatGPT, as designed and deployed, was defective because it failed to perform as safely as an ordinary consumer would expect when used in a reasonably foreseeable manner."
- Products liability and negligence, on a failure to warn theory: "Defendants knew or should have known that ChatGPT posed significant risks, including the risk that it would reinforce delusional beliefs, validate false premises involving real individuals, generate authoritative-looking content targeting those individuals, and facilitate escalating harmful conduct during extended interactions…. These risks were not apparent to ordinary users or to individuals targeted by such conduct. ChatGPT was presented as a helpful, neutral, and safe tool, and nothing about its design or presentation disclosed the extent to which it could amplify delusion, fixation, or harmful behavior. Defendants failed to provide adequate warnings regarding these risks, including the risk that the system could validate and escalate harmful beliefs about identifiable individuals and contribute to real-world harm."
- Violation of the state unfair competition law, on the grounds that "Defendants … engaged in conduct that constitutes the unlicensed practice of psychology." "[ChatGPT] used clinical-style language, emotional mirroring, and structured analytical frameworks to interpret the user's thoughts, validate his beliefs, and shape his perception of reality." "Defendants further acted as unlicensed psychological evaluators by generating and disseminating formalized psychological and behavioral reports about Plaintiff. These reports purported to assess Plaintiff's mental state, assign behavioral meaning, and reach categorical conclusions about her psychological integrity and conduct. They were presented in a structured, clinical format that mimicked legitimate psychological evaluation while being based entirely on the user's inputs and without any independent verification, consent, or professional oversight."
The requested remedies: Plaintiff seeks damages and an injunction requiring Defendants to:
- cease providing unlicensed psychology or therapy through ChatGPT;
- prohibit the generation and dissemination of clinical or diagnostic-style psychological or behavioral analyses of identifiable individuals;
- implement safeguards preventing the system from validating or reinforcing delusional beliefs or targeting identifiable individuals;
- implement safeguards preventing the system from presenting user-driven content as authoritative psychological or behavioral evaluation;
- disclose clearly and prominently the risks of psychological dependency, delusion reinforcement, and misuse of the product;
- implement and enforce meaningful intervention protocols, including the ability to restrict, suspend, or terminate access for users exhibiting dangerous or escalating behavior;
- implement policies and procedures requiring prompt internal escalation, review, and intervention upon receipt of credible reports of stalking, harassment, threats, or other harmful conduct facilitated by the product, including the use of account-level flagging, monitoring, and restriction mechanisms;
- implement systems to ensure that prior safety flags, policy violations, and risk classifications are preserved, acted upon, and not disregarded or reversed without documented review and justification; and
- submit to independent monitoring and periodic compliance audits to ensure adherence to these requirements ….
Some legal analysis: I'm skeptical about the legal prospects of most of the claims here: The harms, serious as they are, appear to fall in the category of pure emotional distress rather than physical injury. Products liability claims generally require some showing of physical injury to persons or property (once such injury is shown, emotional distress stemming from it may indeed be covered), although I appreciate that there are exceptions. Likewise, negligence claims based on pure emotional distress in the absence of physical injury are usually allowed only in a narrow range of cases.
There is also a possible First Amendment problem here. Generally speaking, negligence claims based on harms flowing from the content of speech have been seen as preempted by the First Amendment. Here's a passage from a recent brief that discussed this line of cases:
"[C]ourts have made clear that attaching tort liability to protected speech can violate the First Amendment." James v. Meow Media, Inc., 300 F.3d 683, 695 (6th Cir. 2002) (citing N.Y. Times Co. v. Sullivan, 376 U.S. 254, 265 (1964)). This includes negligence and related torts, see id. at 689-90, as well as defamation, N.Y. Times, 376 U.S. at 265, intentional infliction of emotional distress, Snyder v. Phelps, 562 U.S. 443, 451 (2011), false light invasion of privacy, Cantrell v. Forest City Pub. Co., 419 U.S. 245, 249 (1974), and interference with business relations, NAACP v. Claiborne Hardware Co., 458 U.S. 886, 928 (1982). The Commonwealth's unfairness claim against Meta is in essence a negligence claim. To assess the unfairness claim under M.G.L. ch. 93A, the Superior Court considered whether "the risks of the platform outweigh its benefits" and whether Meta's design decisions were "unreasonable." Mem. & Order 23 (cleaned up), Meta Br. 84. This is the very sort of risk-benefit and reasonableness analysis called for in a negligence case. See, e.g., Mounsey v. Ellard, 363 Mass. 693, 708 (1973).
[The court] recognized the First Amendment limits on such negligence claims in Yakubowicz v. Paramount Pictures Corporation, 404 Mass. 624 (1989), where it rejected a claim that a film depicting gang violence was negligently produced, distributed, and advertised, resulting in a stabbing that left two youths dead. The court concluded that "liability may exist for tortious conduct in the form of speech" only when the speech falls within one of the "narrowly defined" "recognized exceptions to First Amendment protection," such as incitement. Id. at 630. Because the speech did not fit within any of the exceptions, Paramount, as a matter of law, "did not act unreasonably in producing, distributing, and exhibiting [the movie]." Id. at 631. See also DeFilippo v. NBC, Inc., 446 A.2d 1036, 1038, 1040 (R.I. 1982) (rejecting a claim that a TV program was negligent for permitting a dangerous stunt to be broadcast and for failing to warn plaintiffs' child of the dangers of the stunt, on the grounds that the speech did not fall within one of the "classes of speech which may legitimately be proscribed," which is to say a First Amendment exception); Herceg v. Hustler Mag., Inc., 814 F.2d 1017, 1019, 1024 (5th Cir. 1987) (rejecting liability for "[m]ere negligence," as opposed to constitutionally unprotected speech such as intentional incitement of illegal conduct, even when the speech involved a porn magazine's discussion of autoerotic asphyxiation, and led an adolescent reader to engage in such an act and accidentally kill himself).
Nor is this First Amendment protection for speech lost even if a viewer or listener does something seriously harmful to third parties in a way that was in part caused by the speech. Thus, for instance, when plaintiffs claimed that a video game helped lead a 14-year-old player to commit murder, on the theory that defendants acted "negligently" and "communicated … a disregard for human life and an endorsement of violence," the First Amendment precluded such liability. James, 300 F.3d at 695, 696-97. The same was true for claims that a rap song helped motivate a listener to murder a police officer, see Davidson v. Time Warner, Inc., No. Civ.A. V-94-006, 1997 U.S. Dist. LEXIS 21559 at *38 (S.D. Tex. Mar. 31, 1997), or that the film The Fast and the Furious led a viewer to race and crash his car, see Widdoss v. Huffman, 62 Pa. D. & C.4th 251, 257 (2003), or that the TV program Born Innocent led some underage viewers to sexually attack a small child in copying a scene shown on the program, Olivia N. v. NBC, Inc., 126 Cal. App. 3d 488, 492-94 (1981). And this logic applies equally to self-harm, whether accidental or intentional: The First Amendment precluded liability, for instance, when an 11-year-old partially blinded himself when performing a stunt that he had seen on the Mickey Mouse Club TV program, see Walt Disney Prods., Inc. v. Shannon, 247 Ga. 402, 404 (1981); when a 13-year-old hanged himself when simulating a stunt from The Tonight Show, DeFilippo, 446 A.2d at 1038; when a 14-year-old hanged himself when simulating behavior described in Hustler, Herceg, 814 F.2d at 1023; or when a 19-year-old shot himself after listening to a song called "Suicide Solution," see McCollum v. CBS, Inc., 202 Cal. App. 3d 989, 1003 (1988).
This makes sense. Allowing negligence claims based on otherwise protected speech—speech that does not fall within one of the narrow First Amendment exceptions—"would invariably lead to self-censorship by broadcasters in order to remove any matter that may … lead to a law suit." DeFilippo, 446 A.2d at 1041. This would in turn violate defendants' "right to make their own programming decisions" (even when the defendants are broadcasters, and thus seen as having a more "limited" First Amendment right than other speakers). Id. And it would violate "the paramount rights of the viewers to suitable access to social, esthetic, moral, and other ideas and experiences." Id. at 1041-42 (citations omitted). Such negligence liability would "open the Pandora's Box" and "have a seriously chilling effect on the flow of protected speech through society's mediums of communication." Walt Disney, 247 Ga. at 405. "Numerous courts have pointed out that any attempt to impose tort liability on persons engaged in the dissemination of protected speech involves too great a risk of seriously chilling all free speech." Waller v. Osbourne, 763 F. Supp. 1144, 1151 (M.D. Ga. 1991), aff'd, 958 F.2d 1084 (11th Cir. 1992).
The cost-benefit balancing at the heart of a negligence claim is also too vague to be constitutionally permissible. "Crucial to the safeguard of strict scrutiny" required in First Amendment cases "is that we have a clear limitation, articulated in the legislative statute or an administrative regulation, to evaluate." James, 300 F.3d at 697. No such clear limitation is present when a factfinder "evaluating [plaintiff's] claim of negligence would ask whether the defendants took efficient precautions … that would be less expensive than the amount of the loss." Id.
And as Mark Lemley, Peter Henderson, and I argued in Freedom of Speech and AI Output, normal First Amendment principles should apply to speech created by AI, chiefly because of the interests of users in being able to use such products without undue legally mandated (or legally pressured) restraints on what the products can output. That's especially clear, I think, when one considers the breadth of the speech restrictions called for in plaintiff's injunction request.
I appreciate, though, that it's early days yet in the law of AI output, free speech, and tort liability, and things are hard to predict. And it's particularly hard to know how courts will view the unlicensed psychotherapy claims.
The libel factor: Finally, there is one sort of negligence case where (1) one clearly can get emotional distress damages without physical injury, and (2) can do so even as to speech, notwithstanding the First Amendment: negligent defamation. And here such a claim might be viable—it appears that the alleged harm to the plaintiff partly stemmed from ChatGPT's saying allegedly false and reputation-damaging things about her:
The user generated and distributed large volumes of content about Plaintiff using GPT-4o, including structured, clinical-style reports portraying her as psychologically defective, unethical, abusive, and dangerous. He disseminated these materials to her family, friends, colleagues, and clients, causing substantial reputational harm and subjecting her to widespread humiliation.
Such statements by OpenAI to the user might well be defamation of the plaintiff (a statement just to one person may be defamatory). And OpenAI might be liable for the user's forwarding such statements to others, since "the originator of the defamatory matter" may be liable for republication "as long as republication should have been reasonably foreseeable by the originator." But much depends on the specific details of the statements. And in any event, no defamation claim is currently included in the Complaint.