Lawyer's Affidavit in the Colorado AI-Hallucinated Precedent Case
"Overwhelmingly impressed by the technology, I excitedly used it to find case law that supports my client's position, or so I thought."
Thanks to the invaluable UCLA Law Library, I got a copy of the affidavit in which the lawyer apologizes and explains why he used ChatGPT to draft a motion:
I have been practicing Civil Litigation for less than three (3) months and the MSA was the first motion I have ever researched, drafted, and filed myself….
As of today, May 5, 2023, I have spent 6.5 hours researching this case, conferring with paralegals and senior attorneys about our client's options, and drafting the MSA. With respect to drafting the MSA specifically, I have spent approximately 4 hours researching, drafting, and revising that motion. I detail this in hopes to demonstrate to the Court my dutiful time spent drafting a motion that I hoped would relieve my client from an exceptional judgment against him.
Now I will explain the fictitious case cite issue to assuage the Court's concern of willful misconduct. The issue surrounds the emerging technological advancement and use of Artificial Intelligence, commonly referred to as "AI". AI for the legal industry is emerging, and coincidentally on 5/5 I received an email from Lexis Nexis introducing an AI search engine for their platform – Meet Lexis+ AI, the most powerful generative AI solution for legal professionals—YouTube (see Exhibit 1 – Email from Lexis+ AI). In this instance, a search engine/software from OpenAI, commonly known as "ChatGPT" was used. This software was brought to my attention as a potentially useful research tool for our firm on April 26, 2023, just three (3) days before the MSA was finalized and filed. Overwhelmingly impressed by the technology, I excitedly used it to find case law that supports my client's position, or so I thought.
As a new attorney practicing in the civil litigation field with which I was unfamiliar, ChatGPT was very impressive and excited me for several reasons. As a prosecutor, I rarely conducted legal research and writing and to the extent I did, I used templates from other prosecutors with case law and statutory authority built in. Ergo, the primary reason I explored ChatGPT and decidedly utilized it for the MSA was that I felt my lack of experience in legal research and writing, and consequently, my efficiency in this regard could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting.
There were several inquiries/prompts given to ChatGPT that proved accurate based on my existing knowledge of the law and what I confirmed through research, such that I made the imprudent leap in assuming that the tool would be generally accurate. (See Exhibit 2 – Export of ChatGPT Dialog_1) As you can see from the Dialog, the AI model generated a number of responses, for all intents and purposes, which appear very thorough and accurate. (See Exhibit 3 – Export of ChatGPT Dialog_2) Unfortunately, by the time I actually started using ChatGPT for case law research on the MSA, I was already convinced of its apparent trustworthiness. As you can see from Dialog_2, ChatGPT cites a number of cases as requested, but if you look for them, they do not exist. Based on the accuracy of prior validated responses, and the apparent accuracy of the case law citations, it never even dawned on me that this technology could be deceptive. In short, the initial confirmatory searches emboldened my confidence in the technology and I imprudently accepted the case law research that followed without investigation into each case citation's accuracy.
It wasn't until the morning of the Show Cause Hearing on 5/5 that I, in an effort to prepare to argue the case law cited, dug deeper to realize the inaccuracies of the citations. (See Exhibit 4 – Screenshot of Teams Message with Paralegal) As you can see, I was unaware of what to do in that situation and I was unaware of my ability and obligation to withdraw the motion due to the inaccuracies. In hindsight, the first thing I should have done when your Honor took the bench was move to withdraw the motion and request leave to refile after curing the inaccuracies. Rule 3.3 of the Colorado Rules of Professional Conduct requires a mental state of "knowingly" which denotes actual knowledge. Prior to filing the MSA, I did not have actual knowledge of the inaccuracies, proven by Exhibit 4, otherwise, I would have never filed it.
This has been a tremendously humbling, yet growing experience for me as a budding civil litigation attorney. I have learned the importance and absolute necessity of thoroughly vetting each pleading before signing my name to it and filing it with the Court. I sincerely and wholeheartedly regret having wasted the Court's time in this instance and humbly ask for your Honor's grace moving forward. I did not and I never will intentionally mislead a court of law, or anyone for that matter, as I hold myself to a Higher Standard than even the Colorado Bar Association and the Colorado Model Rules of Professional Conduct.
I respectfully request that the Court excuse the inaccuracies found in the MSA, permit Defendant to file an Amended MSA, and accept Exhibits 1, 2, 3, and 4 as evidence of good faith and not willful misconduct….
I don't know whether the judge was satisfied with this; a June 13 KRDO article by Quinn Ritzdorf reports:
The judge overseeing the hearing … [had] threatened to file a complaint against the attorney. The Office of Attorney Regulations couldn't confirm if a complaint had been filed against Crabill.