The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

Free Speech

Should Court Order OpenAI to Cut off ChatGPT Access by Mentally Ill and Dangerous User?



In her temporary restraining order application in Doe v. OpenAI (see also the complaint), plaintiff asks, among other things, that OpenAI cut off ChatGPT access by a user; ensure that he not create new accounts; and notify plaintiff if the user does try to access ChatGPT. Here are the factual allegations:

Plaintiff Jane Doe is in immediate danger. Driven by a ChatGPT-fueled delusional spiral, her ex-boyfriend (the "User") stalked and harassed her for months—generating dozens of fake psychological reports about her via ChatGPT and distributing them to her family, friends, and colleagues, which escalated to leaving her voicemails threatening her physical safety.

His campaign culminated in encoding a death threat through ChatGPT and sending it to her family, just before he was arrested on four felony counts, including communicating a bomb threat and assault with a deadly weapon in January 2026. The criminal court deemed him incompetent and ordered him committed to a mental health facility, but—just two days ago—ordered his release due to a procedural failure by the state (a delay in transferring him from jail to the facility)….

Before he was arrested, the User was in constant communication with ChatGPT, which affirmed his delusions that he had cured sleep apnea, that the medical industry was out to get him, and that his ex-girlfriend was the problem. As he became more unhinged, it also began consulting on violent plans against third parties: in addition to helping him harass and threaten Plaintiff, his account contains conversations titled "Violence list expansion" and "Fetal suffocation calculation." [My read of the exhibits to the TRO application suggests that "fetal suffocation calculation" likely refers to the user's theories that maternal sleep apnea causes fetal asphyxiation, not to plans by the user to violently suffocate fetuses, though I appreciate that is guesswork on my part. -EV]

With the User now ordered to be freed for procedural reasons, he will be further emboldened in his belief that his worldview was exactly right. It is a certainty that he will immediately attempt to turn back to ChatGPT—again spinning out his delusions and planning violence on the platform….

[So far], OpenAI [has] agreed only to "suspend" his accounts—the same action the company took and dangerously reversed with respect to the User already.

OpenAI's conduct is unacceptable: it has known for months the User was dangerous. Well before he was arrested for calling in a bomb threat, Defendants' own safety systems flagged his account for "Mass Casualty Weapons" activity and banned it. OpenAI initially upheld that determination on appeal after a careful review. The next day, it reversed itself, restored the User's access, and apologized to him for the inconvenience. That reinstatement had the effect of validating his delusions that he was right and everyone else was wrong.

After that, Plaintiff herself had to beg OpenAI for help: she submitted a detailed Notice of Abuse identifying the User as her stalker and describing exactly how ChatGPT was encouraging and assisting his harassment; OpenAI acknowledged the report was "extremely serious and troubling," promised "appropriate action," and did nothing….

Plaintiff sued OpenAI for negligent entrustment, negligence, product design defect, failure to warn, and unlicensed psychological counseling. In her TRO motion, she focuses on her negligence claim:

[OpenAI] breached its duty in at least three ways. First, it designed GPT-4o to validate user delusions, sustain dangerous conversations, and remove safeguards that previously required the system to reject false premises, producing the harassing material the User weaponized against Plaintiff. Second, it failed to warn Plaintiff or anyone else that the User had been flagged for dangerous conduct, even though his chat logs named specific targets. Third, it reinstated the User's access after its own systems determined he was dangerous, then ignored Plaintiff's Notice of Abuse. The User's subsequent arrest on four felony counts and his finding of incompetence confirm that OpenAI's original deactivation was not only justified but necessary. OpenAI "caused [Plaintiff] to be put in a position of peril of a kind from which the injuries occurred," and it cannot disclaim its duty here.

And she argues that she is entitled to a TRO:

The harm to Plaintiff if the Court does not act is severe and ongoing. The User subjected Plaintiff to months of AI-assisted stalking and harassment, generating dozens of defamatory psychological reports about her through ChatGPT and distributing them to her family, friends, colleagues, and clients. He spoofed her company email, contacted former employers, threatened to damage her reputation and finances, disclosed private medical information, and attempted to isolate her from her support network. He left her voicemails threatening her physical safety, used ChatGPT to encode and transmit a death threat to her family, and texted her: "Who is going to kill you?" Plaintiff was forced to alter every aspect of her daily routine, suffered panic attacks and ongoing psychological distress, obtained an Emergency Protective Order, and twice considered taking her own life. In addition to the four felony counts on which the User was ultimately arrested, a separate arrest warrant was issued for the User for misdemeanor electronic harassment and stalking….

Plaintiff's lawyers argue that OpenAI won't suffer much of a hardship if a TRO is issued. But they don't discuss at all whether such an injunction would unconstitutionally interfere with the user's ability to use ChatGPT to create speech.

Of course, there wouldn't be a First Amendment problem with OpenAI itself choosing to cut off the user's access. But I take it that a court order requiring OpenAI to do so would implicate the First Amendment (see NRA v. Vullo; Bantam Books v. Sullivan), just as the federal government's recent demands that private universities limit students' pro-Palestinian and allegedly anti-Semitic speech implicate the First Amendment.

Of course, the matter is complicated by the user's allegedly illegal conduct, which has led to an arrest and an order of mental health commitment: When someone is jailed or committed, his speech can indeed be restricted incident to the other restrictions on his liberty. But it's not clear to me that such restrictions can be imposed via a TRO in a separate proceeding, at which the person whose access to communications technology would be restricted isn't even heard.

UPDATE: I now have OpenAI's opposition; to summarize quickly, it argues that OpenAI has already done what it can to block the user's ChatGPT access (though "Because a limited version of ChatGPT can be accessed without an account, OpenAI cannot prevent John Roe from accessing any form of the ChatGPT services"), and a TRO is thus unnecessary. It also argues that a TRO requires a showing of "a likelihood of success on the merits" on plaintiff's underlying substantive claims (and not just that this particular order is needed to avoid certain harms), and that such a showing hasn't been made and can't be made in this abbreviated proceeding:

As Plaintiff's counsel knows from their other cases against OpenAI, … these claims pose multiple difficult and novel questions so far unsettled, especially around causation, application of the First Amendment and Section 230 of the Communications Decency Act. The Application's two-page analysis of that claim does not even grapple with those complex legal questions, let alone provide a reasonable probability that Plaintiff will prevail on them.

It also briefly mentions the user's First Amendment rights:

It is important to note, beyond just this case, that the government's ability to order OpenAI to block a user's access to general-purpose services raises significant questions under the First Amendment and Section 230 of the Communications Decency Act. See Packingham v. North Carolina (2017).

It also discusses the user privacy questions related to a separate request made in the TRO application (which I hadn't focused on in the initial post):

[T]he Application demands that OpenAI provide all the information in its possession about absent third party John Roe [that's the opposition's label for the person whom the TRO application just calls the user]—including his ChatGPT transcripts—to Plaintiff's counsel. The Application's claim of irreparable harm absent a TRO does not even mention this request….

Instead of establishing exigency, Plaintiff's counsel argues that they will need these discovery materials "to show that [John Roe's] ChatGPT account must be permanently shut down for her own safety and that OpenAI was negligent in its handling of that account." But the Application identifies no harm, much less irreparable harm, from making that showing after going through the ordinary discovery process—given the suspension of the accounts.

The Application asserts that the chatlogs are needed to "engage the police and prosecutors," but the record shows otherwise. Despite not having these materials, Plaintiff has successfully complained to the police about John Roe, as evidenced by the outstanding warrant against him "for misdemeanor electronic harassment and stalking of" Plaintiff. Plaintiff has even been able to obtain an Emergency Protective Order against John Roe. Both the warrant and protective order were issued without law enforcement even contacting OpenAI. In sum, Plaintiff's counsel has not shown that these ordinary-course discovery materials, which contain stale information that is at least three months old if not more, are necessary for her to obtain law enforcement or court protection. Thus, there is no emergency reason for Plaintiff's counsel to access materials that they can seek in ordinary-course discovery.

Moreover, granting the requests for these discovery materials now would cause irreparable harm to an absent third party, John Roe. Plaintiff's counsel seeks private materials related to John Roe, but has chosen not to add him as a party or (as far as OpenAI is aware) give him notice and an opportunity to be heard before those private materials are released to his former romantic partner.

The Court is being asked to override any legally cognizable interest or statutory protection he may have in those materials {see, e.g., [the Stored Communications Act]}, which would ordinarily be considered in the JCCP [Judicial Council Coordinated Proceeding that deals with various other cases raising similar claims] after coordination. Instead, this question should be addressed by Judge Schulman in the ongoing JCCP, which was expressly created to provide consistent answers across cases to these difficult, novel questions. Deciding these questions now risks preempting that coordinated process.