The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Don't Cite ChatGPT as Authority in Legal Filings
From a Reply brief filed by plaintiff's counsel in Wojcik v. Metlife (N.D. Ill.):
MetLife alleges that "Dr. Khan's opinion [Autopsy Report] and the Death Certificate provide rational support for MetLife's determination to deny Wojcik's claim for payment of an AD&D benefit." It is not rational to determine a vehicle fire was intentionally set. It is not rational to conclude a person intentionally set themself on fire, while driving mere blocks from their home. It is not rational to conclude a person intentionally set themselves on fire without an ignition source. It is not rationale [sic] to conclude a person set themselves on fire using a vape device as an ignition source….
A reasonable person would conclude using a vape device to ignite or start a fire is not rational because a vape device has no flame.
A vape device, also known as an electronic cigarette or e-cigarette, is a device that simulates the act of smoking by vaporizing a liquid solution, often called e-liquid or vape juice. Vape devices are battery-powered and consist of a heating element, a reservoir or tank to hold the e-liquid, and a mouthpiece through which the vapor is inhaled. When the device is activated, the heating element heats the e-liquid, converting it into a vapor that can be inhaled by the user.
"Vape device" prompt. ChatGPT, May 12 Version, OpenAI, chat.openai.com
The heating element does not create a flame. Vaping devices have been known to start fires but that is only in stances [sic] where the vape device itself malfunctioned. For instance, there could be a short-circuit in the battery that causes overheating which leads to it catching fire.
"Can vape device start a fire" prompt. ChatGPT, May 12 Version, OpenAI, chat.openai.com
Judge Sharon Johnson Coleman's reaction (March 21):
This Court has a standing order that attorneys may not use Artificial Intelligence ("AI") when litigating their case. Plaintiff's attorney explicitly cited the prompt they inserted into ChatGPT for AI to do their research for them. Not only is the Court appalled at Plaintiff's attorney's refusal to do simple research, but such reliance on AI is a disservice to clients who rely on their attorney's competence and legal abilities. Because it is not Plaintiff's fault that her attorney violated this Court's order, it will not assume ChatGPT drafted all her briefing.
The order was revised Thursday (March 28) to omit footnote 2, with no explanation for the revision. Still, it seems that the judge wasn't pleased by the use of ChatGPT, and rightly so. ChatGPT output is just too error-prone to be reliable. There are doubtless some trustworthy sources that discuss how vape devices operate and how likely they are to start fires; ChatGPT isn't one of them.
Now this is entirely consistent with ChatGPT being useful. A ChatGPT session might, for instance, suggest some legal or factual claims for which the user might then track down useful sources. In a sense, this is like a conversation with a stranger at a party (or online): Such conversations might lead to learning or inspiration. But you wouldn't cite "Conversation with unknown guy at party" as a source in a legal filing (or for that matter "Post by unknown Reddit user"), because the conversation isn't itself reliable authority; before citing anything you learned from the conversation, you'd need to verify it through a reliable source, and then you can cite the source. Likewise as to ChatGPT.
And I think other judges, including ones who don't have the AI prohibition in their standing orders, are likely to take the same view.
UPDATE 11:22 am: I originally titled this post "Don't Cite ChatGPT as Legal Authority," but then changed it to make clear that this includes all use of ChatGPT as authority in legal filings, and not just use for legal propositions. (This case of course involves its use as authority for a factual assertion.)
AI is garbage. Exhibit 425.
Don't tell this guy about AI:
How To Light A Fire With An E Cig
https://www.youtube.com/watch?v=4oLU5kS2-WY
The judge fails to cite it: is this the relevant standing order? https://www.ilnd.uscourts.gov/_assets/_documents/_forms/_judges/Cole/Artificial%20Intelligence%20standing%20order.pdf
It certainly does NOT say "attorneys may not use Artificial Intelligence ("AI") when litigating their case". Many legal searches are now AI-facilitated; auto-complete uses AI. The standing order makes an anodyne evidentiary point: *because* X is AI-generated does not make X admissible. The GPT quotation might rightly be ignored/excluded.
The standing order also makes a legal ethics point about transparency of source, with which the lawyer fully complied. The dismissed inference that the whole brief was AI-generated is entirely misplaced.
Diligence in prosecuting the claim or summary judgment might warrant concern, in this broader case; the inclusion of AI seems a red herring.
No, the order is the one I link to near the end of the post, see here.
The analogy ("you wouldn't cite 'Conversation with unknown guy at party' as a source") is extremely useful and is now in my stack of "keepers."
I hope that isn't the only reason the judge wouldn't assume the full brief was written by AI. For one, the attorney openly cited the use. It would be strange indeed for them to disclose using AI for part of the brief but surreptitiously use it for the rest. Second, you'd have to be pretty wacky to assume that ChatGPT would, in answering a prompt, call up another instance of itself and cite itself in its answer. While not technically impossible, that should seem unlikely to a reasonable person.
On the other hand, as the number of AI-generated websites grows, it won't be long before it *does* start citing itself.
"But you wouldn't cite "Conversation with unknown guy at party" as a source in a legal filing (or for that matter "Post by unknown Reddit user")"
https://volokh.com/posts/1209165393.shtml
res ipsa
good answer