Hallucinated Citations Created When Using Generative AI "to Improve the Writing in [a] Brief"
From a declaration in Green Building Initiative, Inc. v. Green Globe Int'l, Inc., a case I wrote about last month (Apparent AI Hallucinations in Filing from Two >500-Lawyer Firms):
In preparing the Reply brief, I performed legal research on Westlaw for authorities supporting arguments set forth in the brief. I included some of those authorities I found on Westlaw into the brief.
In generating the Reply brief, I also used Microsoft's Copilot for its editing functions in an effort to review and improve the draft document by fixing grammar, spelling, and improving badly phrased sentences. To be clear: I did not use Copilot for research nor would I use generative artificial intelligence for legal research since I am aware of generative AI's potential for "hallucination." Because I am concerned about client privacy, I cut and paste only the portions that did not contain any client information from the Word document into Copilot, and then I pasted Copilot's revisions back into the document.
Not by way of excuse, but rather explanation of context, unfortunately, I was in a rush to complete the initial draft of the Reply brief because I was traveling to the east coast related to a terminal illness in my family, and I failed to pay close enough attention to the details of what I was doing when I was drafting the brief. I entered a prompt into Copilot to instruct it to improve the writing in the brief, and merely expected Copilot to refine my writing; I never expected Copilot to insert any case citations, much less hallucinated ones. As such, I did not carefully review the Reply as revised by Copilot, and therefore, I did not recognize that Copilot inserted two hallucinated citations, especially since Page v. Parsons is an Oregon Court of Appeals decision frequently cited in anti-SLAPP cases. I made a terrible error in not doing so before filing the document….
The lawyer also said that "Other than experimenting with Westlaw's generative artificial intelligence research tool, I have never intentionally used generative artificial intelligence to perform legal research or drafting," because he was "aware of the potential for 'hallucination'" and of his firm's "strict policy against using generative artificial intelligence for this purpose." Indeed, as he notes, "[t]he risks of using AI in the legal profession" "is, perhaps, the most commonly written about and reported issue in the legal field today." As I read his declaration, he just didn't draw the connection between that and what he mentally characterized as AI editing rather than AI drafting.
I of course can't vouch for the accuracy of this, but it seems quite plausible: From all I've seen of AI hallucinations, it seems that hallucinations can appear whether one is using AI to generate text from a prompt or to revise text. So if you're going to use AI to edit, make sure that you do that before the final cite-check and the final substantive proofread—both of which have to be extra thorough and skeptical whenever AI is used as part of the writing or editing process.
UPDATE: I scheduled this post in advance, but now I see that the court issued an order Wednesday saying, "The Court is satisfied with the remedial actions already taken and those proposed to be taken by Plaintiff's counsel and thus will not be imposing any formal sanctions." Those remedial actions included reimbursing the client for fees in connection with the filing that contained the hallucinations, "reimburs[ing] Defendant for attorney's fees reasonably incurred in connection with the citation of hallucinated cases (if any – this was a reply brief and there was no hearing)," continuing to educate attorneys and staff about the risks of AI, having the lawyer involved take additional continuing legal education on those risks, and donating $5,000 to legal aid for the poor in civil cases.
Thanks to commenter Life of Brian for alerting me to the court's order, and for always looking on the bright side of life.
The docket also has two other declarations from the supervising partner and the chair of the firm's professional responsibility group, as well as a 12-page brief outlining the firm's internal remedial plan and arguments why further sanctions weren't warranted.
The judge bought all that, issuing a 1-pager two days later saying there would be no sanctions. The offending attorney doesn't appear to show up in the firm directory any more, though. What an unfortunate mess.
" . . . by fixing grammar, spelling, and improving badly phrased sentences."
You charge how much an hour, and can't write proper English?
That's a fair point in general. Here, taking him at his word, someone in his family was dying and he was scrambling to get this off his plate before getting on a plane to see them.
Though it's sometimes tempting to think otherwise, lawyers are humans too.
It's worth saying that one should not do what this attorney did, even if there were no possibility of hallucinatory citations.
Microsoft's tools have a host of grammatical and vocabulary preferences that are not geared toward legal writing. In terms of grammar, they tend to make legal writing less forceful by removing nuances and intensifiers. In terms of vocabulary, the tools do not understand legal terms, so they tend to obscure legal points by using incorrect terminology. They also don't understand the limitations of the record, so they tend to overstate tricky facts in an effort to make phrasing simpler or more direct.
It is not that the grammar tools can never help. But if you use AI to revise a document, you are playing with fire unless you run the results through "track changes" and individually accept or reject each change.
In other words, he never should have given it to Microsoft.