The Volokh Conspiracy
Don't Cite ChatGPT as Authority in Legal Filings
From a Reply brief filed by plaintiff's counsel in Wojcik v. MetLife (N.D. Ill.):
MetLife alleges that "Dr. Khan's opinion [Autopsy Report] and the Death Certificate provide rational support for MetLife's determination to deny Wojcik's claim for payment of an AD&D benefit." It is not rational to determine a vehicle fire was intentionally set. It is not rational to conclude a person intentionally set themself on fire, while driving mere blocks from their home. It is not rational to conclude a person intentionally set themselves on fire without an ignition source. It is not rationale to conclude a person set themselves on fire using a vape device as an ignition source….
A reasonable person would conclude using a vape device to ignite or start a fire is not rational because a vape device has no flame.
A vape device, also known as an electronic cigarette or e-cigarette, is a device that simulates the act of smoking by vaporizing a liquid solution, often called e-liquid or vape juice. Vape devices are battery-powered and consist of a heating element, a reservoir or tank to hold the e-liquid, and a mouthpiece through which the vapor is inhaled. When the device is activated, the heating element heats the e-liquid, converting it into a vapor that can be inhaled by the user.
"Vape device" prompt. ChatGPT, May 12 Version, OpenAI, chat.openai.com
The heating element does not create a flame. Vaping devices have been known to start fires but that is only in instances where the vape device itself malfunctioned. For instance, there could be a short-circuit in the battery that causes overheating which leads to it catching fire.
"Can vape device start a fire" prompt. ChatGPT, May 12 Version, OpenAI, chat.openai.com
Judge Sharon Johnson Coleman's reaction (March 21):
This Court has a standing order that attorneys may not use Artificial Intelligence ("AI") when litigating their case. Plaintiff's attorney explicitly cited the prompt they inserted into ChatGPT for AI to do their research for them. Not only is the Court appalled at Plaintiff's attorney's refusal to do simple research, but such reliance on AI is a disservice to clients who rely on their attorney's competence and legal abilities. Because it is not Plaintiff's fault that her attorney violated this Court's order, it will not assume ChatGPT drafted all her briefing.
The order was revised Thursday (March 28) to omit footnote 2 (the passage quoted above), with no explanation given for the revision. Still, it seems that the judge wasn't pleased by the use of ChatGPT, and rightly so. ChatGPT output is simply too error-prone to be reliable. There are doubtless trustworthy sources that discuss how vape devices operate and how likely they are to start fires; ChatGPT isn't one of them.
Now this is entirely consistent with ChatGPT being useful. A ChatGPT session might, for instance, suggest some legal or factual claims for which the user might then track down useful sources. In a sense, this is like a conversation with a stranger at a party (or online): Such conversations might lead to learning or inspiration. But you wouldn't cite "Conversation with unknown guy at party" as a source in a legal filing (or for that matter "Post by unknown Reddit user"), because the conversation isn't itself reliable authority; before citing anything you learned from the conversation, you'd need to verify it through a reliable source, and then you can cite the source. Likewise as to ChatGPT.
And I think other judges, including ones who don't have the AI prohibition in their standing orders, are likely to take the same view.
UPDATE 11:22 am: I originally titled this post "Don't Cite ChatGPT as Legal Authority," but then changed it to make clear that this includes all use of ChatGPT as authority in legal filings, and not just use for legal propositions. (This case of course involves its use as authority for a factual assertion.)