The Volokh Conspiracy
AI Hallucinations in a Self-Represented Litigant's Brief in the Colorado Court of Appeals
Even lawyers have at times filed briefs containing AI-hallucinated citations; but the danger is likely especially great for the many self-represented litigants. Here, for instance, is a passage from the Dec. 26 decision of the Colorado Court of Appeals in Al-Hamim v. Star Hearthstone, LLC, written by Judge Lino Lipinsky and joined by Judges Jerry Jones and Grant Sullivan; the underlying case was a landlord-tenant dispute, but I'm focusing here on the discussion of the AI hallucinations:
Al-Hamim's opening brief in this appeal contained hallucinations, as well as bona fide legal citations. This case provides the first opportunity for a Colorado appellate court to address the appropriate sanction when a self-represented litigant files a brief peppered with GAI-produced hallucinations…. We affirm the court's judgment against Al-Hamim and put him, the bar, and self-represented litigants on notice that we may impose sanctions if a future filing in this court cites "non-existent judicial opinions with fake quotes and citations." …
Al-Hamim's opening brief contains citations to [eight] fake cases …. After we attempted, without success, to locate these cases, we ordered Al-Hamim to provide complete and unedited copies of the cases, or if the citations were GAI hallucinations, to show cause why he should not be sanctioned for citing fake cases. In his response to our show cause order, Al-Hamim admitted that he relied on AI "to assist his preparation" of his opening brief, confirmed that the citations were hallucinations, and that he "failed to inspect the brief." He did not address why he should not be sanctioned….
A GAI system "can generate citations to totally fabricated court decisions bearing seemingly real party names, with seemingly real reporter, volume, and page references, and seemingly real dates of decision[ ]." These hallucinations "can relate, in whole or in part, to the case name, case citation, and/or the content or holding of a fake case or a real judicial decision." …
Accordingly, using a GAI tool to draft a legal document can pose serious risks if the user does not thoroughly review the tool's output. Reliance on a GAI tool not trained with legal authorities can "lead both unwitting lawyers and nonlawyers astray." A self-represented litigant may not understand that a GAI tool may confidently respond to a query regarding a legal topic "even if the answer contains errors, hallucinations, falsehoods, or biases." (In 2023 and 2024, various companies introduced GAI tools trained using legal authorities. Those legal GAI tools are not implicated in this appeal, and we offer no opinion on their ability to provide accurate responses to queries concerning legal issues.) …
Even if Al-Hamim lacked actual knowledge that GAI tools can produce fake citations, "[a] pro se litigant who chooses to rely upon his own understanding of legal principles and procedures is required to follow the same procedural rules as those who are qualified to practice law and must be prepared to accept the consequences of his mistakes and errors." (We note that Al-Hamim filed his opening brief on June 24, 2024 — more than one year after media outlets throughout the country [citing the N.Y. Times and the AP] reported on the attorneys' submission of a brief filled with ChatGPT-generated hallucinations in Mata v. Avianca, Inc. (S.D.N.Y. 2023)….)
The court, however, declined to impose sanctions on this particular litigant:
While we conclude that Al-Hamim's submission of a brief containing hallucinations violated C.A.R. 28(a)(7)(B), this deviation from the Appellate Rules was not as serious as the self-represented appellant's misconduct in [a previous case where sanctions were imposed in part because] {[t]he appellant's violations included his failure "to file an Appendix," to provide "an [ ]adequate Statement of Facts," and to include a "Points Relied On" section in her brief}. Further, in his response to our show cause order, Al-Hamim acknowledged his use of AI, apologized for his mistake, and accepted responsibility for including hallucinations in his opening brief. (We rejected his request to submit an amended opening brief that only cited real cases, however. While we do not impose sanctions against Al-Hamim, his inclusion of hallucinations in his original brief does not entitle him to a second opportunity to file an opening brief.)
Because until now, no Colorado appellate court has considered appropriate sanctions for a self-represented litigant's submission of a brief containing GAI-derived hallucinations, and because the record does not show that Al-Hamim previously filed court documents containing fake citations, we conclude that imposing monetary sanctions or dismissing this appeal would be disproportionate to Al-Hamim's violation of the Appellate Rules. Further, in their answer brief, the landlords failed to alert this court to the hallucinations in Al-Hamim's opening brief and did not request an award of attorney fees against Al-Hamim. Under the circumstances, we exercise our discretion not to order Al-Hamim to pay the landlords' attorney fees or to impose another form of sanction against him.
However, we warn Al-Hamim, as well as lawyers and self-represented parties who appear in this court, that we will not "look kindly on similar infractions in the future." A lawyer's or a self-represented party's future filing in this court containing GAI-generated hallucinations may result in sanctions….