The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Colorado Lawyer "Says ChatGPT Created Fake Cases He Cited in Court Documents"
"I felt ... my efficiency ... could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting."
KRDO (Quinn Ritzdorf) reports the lawyer "thought he was filing a motion with cited cases that would favor his client's argument, only to find out many of the cases were made up by Artificial Intelligence software ChatGPT." The lawyer said that "it was the first motion to set aside a summary judgment he had ever researched, drafted, and filed by himself":
"I felt my lack of experience in legal research and writing, and consequently, my efficiency in this regard could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting," Crabill said in a court document….
At first, he asked ChatGPT about existing Colorado laws. He said the responses were accurate and he trusted the technology….
"Based on the accuracy of prior validated responses, and the apparent accuracy of the case law citations, it never even dawned on me that this technology could be deceptive," Crabill [the lawyer] said in court documents.
Thanks to Jake Karr for the pointer.
Did that lawyer mistakenly think he was writing for, say, a VC comment thread instead of a submission to the court? Making shit up is de rigueur only for the former.
Question from a non-lawyer.
If your brand-new paralegal handed you case law you asked for, would you trust it to be relevant? Real? Or would you run quick spot checks?
Not a great analogy. Paralegals don't do legal research. At most they get a list of case citations that they pull from a database, like Westlaw or Lexis.
It depends on the firm. At smaller firms it is not uncommon for paralegals to do initial research.
As an attorney, I would check each citation because I am a bit of a stickler for whether it has been overruled in whole or in part, and also for whether quotes are accurate. Whenever I get a memo of law from opposing counsel, the first thing I do is look up every citation. You would not believe how many times I have found cases where the attorney excerpted the first part of a quote from a case but did not look at the rest, which undermines their entire point. Also, if I don't see any source listed on a printout, I am definitely checking, because I want to verify it is real.
If your paralegal hands you made-up caselaw, they know they're getting fired once it's discovered. ChatGPT doesn't have any worries about that (or anything else).
Validating software is quite different from evaluating a person's professional competence, but even if this lawyer didn't understand the difference, it's still hard to understand a lawyer trusting someone's, or something's, legal abilities based on nothing more than a few correct answers to questions about Colorado law. Most states require a law degree from an accredited law school before they will give you a license to practice law. Merely passing the bar exam is not enough.