Hallucinated Citations Created When Using Generative AI "to Improve the Writing in [a] Brief"
From a declaration in Green Building Initiative, Inc. v. Green Globe Int'l, Inc., a case I wrote about last month (Apparent AI Hallucinations in Filing from Two >500-Lawyer Firms):
In preparing the Reply brief, I performed legal research on Westlaw for authorities supporting arguments set forth in the brief. I included some of those authorities I found on Westlaw into the brief.
In generating the Reply brief, I also used Microsoft's Copilot for its editing functions in an effort to review and improve the draft document by fixing grammar, spelling, and improving badly phrased sentences. To be clear: I did not use Copilot for research nor would I use generative artificial intelligence for legal research since I am aware of generative AI's potential for "hallucination." Because I am concerned about client privacy, I cut and paste only the portions that did not contain any client information from the Word document into Copilot, and then I pasted Copilot's revisions back into the document.
Not by way of excuse, but rather explanation of context, unfortunately, I was in a rush to complete the initial draft of the Reply brief because I was traveling to the east coast related to a terminal illness in my family, and I failed to pay close enough attention to the details of what I was doing when I was drafting the brief. I entered a prompt into Copilot to instruct it to improve the writing in the brief, and merely expected Copilot to refine my writing; I never expected Copilot to insert any case citations, much less hallucinated ones. As such, I did not carefully review the Reply as revised by Copilot, and therefore, I did not recognize that Copilot inserted two hallucinated citations, especially since Page v. Parsons is an Oregon Court of Appeals decision frequently cited in anti-SLAPP cases. I made a terrible error in not doing so before filing the document….
The lawyer also said that "Other than experimenting with Westlaw's generative artificial intelligence research tool, I have never intentionally used generative artificial intelligence to perform legal research or drafting," because he was "aware of the potential for 'hallucination'" and of his firm's "strict policy against using generative artificial intelligence for this purpose." Indeed, as he notes, "[t]he risks of using AI in the legal profession" "is, perhaps, the most commonly written about and reported issue in the legal field today." As I read his declaration, he just didn't draw the connection between that and what he mentally characterized as AI editing rather than AI drafting.
I of course can't vouch for the accuracy of this, but it seems quite plausible: from all I've seen of AI hallucinations, they can appear whether one is using AI to generate text from a prompt or to revise existing text. So if you're going to use AI to edit, make sure you do so before the final cite-check and the final substantive proofread, both of which have to be extra thorough and skeptical whenever AI is used as part of the writing or editing process.
UPDATE: I scheduled this post in advance, but now I see that the court issued an order Wednesday saying, "The Court is satisfied with the remedial actions already taken and those proposed to be taken by Plaintiff's counsel and thus will not be imposing any formal sanctions." Those remedial actions included reimbursing the client for fees in connection with the filing that contained the hallucinations, "reimburs[ing] Defendant for attorney's fees reasonably incurred in connection with the citation of hallucinated cases (if any – this was a reply brief and there was no hearing)," continuing to educate attorneys and staff as to the risks of AI, having the lawyer involved take additional continuing legal education on those risks, and donating $5,000 to legal aid for the poor in civil cases.
Thanks to commenter Life of Brian for alerting me to the court's order, and for always looking on the bright side of life.