The Volokh Conspiracy
$2000 Sanction in Another AI Hallucinated Citation Case
From Massachusetts judge Brian Davis's opinion Monday in Smith v. Farwell:
[T]his Court is the unhappy recipient of several legal memoranda filed by counsel for plaintiff Darlene Smith ("Plaintiff's Counsel") that cite and rely, in part, upon wholly-fictitious case law (the "Fictitious Case Citations") in opposing the motions to dismiss filed by defendants …. When questioned about the Fictitious Case Citations, Plaintiff's Counsel disclaimed any intention to mislead the Court and eventually pointed to an unidentified AI system as the culprit behind the Fictitious Case Citations. He has, at the same time, openly and honestly acknowledged his personal lack of diligence in failing to thoroughly review the offending memoranda before they were filed with the Court…. Having considered all of the facts and circumstances, and hoping to deter similar transgressions by Plaintiff's Counsel and other attorneys in the future, the Court will require Plaintiff's Counsel to pay a monetary sanction in the amount of $2,000.00….
On November 1, 2023, all counsel appeared in person before the Court for oral argument on the Motions to Dismiss filed by Defendants W. Farwell, Devine, Heal and the Town. Before turning to the substance of the parties' motions, the Court informed Plaintiff's Counsel of its discovery of the three Fictitious Case Citations and inquired how they had come to be included in Plaintiff's Oppositions. Plaintiff's Counsel stated that he was unfamiliar with the Fictitious Case Citations and that he had no idea where or how they were obtained. When asked who had drafted the Oppositions, Plaintiff's Counsel responded that they had been prepared by "interns" at his law office. The Court thereupon directed Plaintiff's Counsel to file a written explanation of the origin of the Fictitious Case Citations on or before November 8, 2023.
On November 6, 2023, Plaintiff's Counsel submitted a letter to the Court in which he acknowledged that the Oppositions "inadvertently" included citations to multiple cases that "do not exist in reality." He attributed the bogus citations to an unidentified "AI system" that someone in his law office had used to "locat[e] relevant legal authorities to support our argument[s]." At the same time, Plaintiff's Counsel apologized to the Court for the fake citations and expressed his regret for failing to "exercise due diligence in verifying the authenticity of all caselaw references provided by the [AI] system." He represented that he recently had subscribed to LEXIS, which he now uses exclusively "to obtain cases to support our arguments." He also filed amended versions of the Oppositions that removed the Fictitious Case Citations….
[At a later hearing, Plaintiff's Counsel] explained that the Oppositions had been drafted by three legal personnel at his office: two recent law school graduates who had not yet passed the bar, and one associate attorney. The associate attorney admitted, when asked, that she had utilized an AI system (Plaintiff's Counsel still did not know which one) in preparing the Oppositions.
Plaintiff's Counsel is unfamiliar with AI systems and was unaware, before the Oppositions were filed, that AI systems can generate false or misleading information. He also was unaware that his associate had used an AI system in drafting court papers in this case until after the Fictitious Case Citations came to light. Plaintiff's Counsel said that he had reviewed the Oppositions, before they were filed, for style, grammar and flow, but not for accuracy of the case citations. He also did not know whether anyone else in his office had reviewed the case citations in the Oppositions for accuracy before the Oppositions were filed. Plaintiff's Counsel attributed his own failure to review the case citations to the trust that he placed in the work product of his associate, which (to his knowledge, at least) had not shown any problems in the past.
The Court finds Plaintiff's Counsel's factual recitation concerning the origin of the Fictitious Case Citations to be truthful and accurate. The Court also accepts as true Plaintiff's Counsel's representation that the Fictitious Case Citations were not submitted knowingly with the intention of misleading the Court. Finally, the Court credits the sincerity of the contrition expressed by Plaintiff's Counsel…. [But] notwithstanding Plaintiff's Counsel's candor and admission of fault, the imposition of sanctions is warranted in the present circumstances because Plaintiff's Counsel failed to take basic, necessary precautions that likely would have averted the submission of the Fictitious Case Citations. His failure in this regard is categorically unacceptable….
For the legal profession, Generative AI technology offers the promise of increased efficiency through the performance of time-consuming tasks using just a few keystrokes. For example, Generative AI can draft simple legal documents such as contracts, motions, and e-mails in a matter of seconds; it can provide feedback on already drafted documents; it can check citations to authority; it can respond to complex legal research questions; it can analyze thousands of pages of documents to identify trends, calculate estimated settlement amounts, and even determine the likelihood of success at trial. Given its myriad of potential uses, Generative AI technology seems like a superhuman legal support tool.
The use of AI technology, however, also poses serious ethical risks for the legal practitioner. {While this case centrally involves violations of Mass. R. Prof. C. 1.1, as amended, 490 Mass. 1302 (2022), Competence, AI presents numerous other potential ethical pitfalls for attorneys including, but not limited to, potential violations of Mass. R. Prof. C. 1.3, 471 Mass. 1318 (2015), Diligence; Mass. R. Prof. C. 1.6, 490 Mass. 1302 (2022), Confidentiality of Information; Mass. R. Prof. C. 2.1, 471 Mass. 1408 (2015), Advisor; Mass. R. Prof. C. 3.3, as amended, 490 Mass. 1308 (2022), Candor Toward the Tribunal; Mass. R. Prof. C. 5.1, as amended, 490 Mass. 1310 (2022), Responsibilities of Partners, Managers and Supervisory Lawyers; Mass. R. Prof. C. 5.5, as amended, 474 Mass. 1302 (2016), Unauthorized Practice of Law; and Mass. R. Prof. C. 8.4, 471 Mass. 1483 (2015), Misconduct.} For example, entering confidential client information into an AI system potentially violates an attorney's obligation to maintain client confidences because the information can become part of the AI system's database, then disclosed by the AI system when it responds to other users' inquiries. Additionally, as demonstrated in this case, AI possesses an unfortunate and unpredictable proclivity to "hallucinate." The terms "hallucinate" or "hallucination," as used in the AI context, are polite references to AI's habit of simply "making stuff up." AI hallucinations are false or completely imaginary information generated by an AI system in response to user inquiries. AI researchers are unsure how often these technological hallucinations occur, but current estimates are that they happen anywhere from three to twenty-seven percent of the time depending on the particular AI system.
Generative AI hallucinations can be highly deceptive and difficult to discern. The fictitious information often has all the hallmarks of truthful data and only can be discovered as false through careful scrutiny. For example, as demonstrated in this case, AI can generate citations to totally fabricated court decisions bearing seemingly real party names, with seemingly real reporter, volume, and page references, and seemingly real dates of decision. In some instances, AI even has falsely identified real individuals as accused parties in lawsuits or fictitious scandals. For these reasons, any information supplied by a Generative AI system must be verified before it can be trusted….
[T]he Court considers the sanction imposed upon Plaintiff's Counsel in this instance to be mild given the seriousness of the violations that occurred. Making false statements to a court can, in appropriate circumstances, be grounds for disbarment or worse. See, e.g., In re Driscoll, 447 Mass. 678, 689-690 (2006) (one-year suspension appropriate where attorney pleaded guilty to one count of making false statement); Matter of Budnitz, 425 Mass. 1018, 1019 (1997) (disbarment appropriate where attorney knowingly lied under oath and perpetrated lies through making false statements in disciplinary proceeding). The restrained sanction imposed here reflects the Court's acceptance, as previously noted, of Plaintiff's Counsel's representations that he generally is unfamiliar with AI technology, that he had no knowledge that an AI system had been used in the preparation of the Oppositions, and that the Fictitious Case Citations were included in the Oppositions in error and not with the intention of deceiving the Court….
It is imperative that all attorneys practicing in the courts of this Commonwealth understand that they are obligated under Mass. Rule Civ. P. 11 and 7 to know whether AI technology is being used in the preparation of court papers that they plan to file in their cases and, if it is, to ensure that appropriate steps are being taken to verify the truthfulness and accuracy of any AI-generated content before the papers are submitted…. "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings." … The blind acceptance of AI-generated content by attorneys undoubtedly will lead to other sanction hearings in the future, but a defense based on ignorance will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known.
Thanks to Scott DeMello for the pointer.
Financial sanctions ought to weed out careless lawyers. Repeat offenders should lose their license.
If you love something set it free. If it comes back it's yours. If not, it was never meant to be.
I finally retired from law practice in 2016, after almost 50 years, so I didn't have to deal with AI. But throughout my practice you still had to make sure that the authorities you cited were valid. If, as an associate, I had cited a case without "Shepardizing" it before a brief was filed, my future with the firm would have been in serious doubt. Some of the first stories I saw on the web about AI in law practice highlighted phony case citations. There's NO EXCUSE for this, and courts are wrong to be so easy on violators. Suspend a few violators from practice for 6-12 months, and they'll start to take it seriously.
These instances of fictitious citations are so bizarre and lazy. With a modern word processor, it should be fairly easy to set up automatic checking of citations for reality.
A cursory online search indicates that there is a plethora of tools that can extract citations from a PDF and then analyze them. I have to admit that I have not checked any of them, but I would certainly do so to avoid the risk of a fine for imaginary citations. Even a crude script can handle the extraction step, as the sketch below shows.
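To make the commenters' point concrete, here is a minimal Python sketch of the extraction step. It is an illustration only: the regex is a simplified stand-in that assumes citations follow the common "volume Reporter page" format (e.g., "447 Mass. 678"), and it would miss many real formats (such as "F.3d") that purpose-built open-source libraries like eyecite are designed to handle. Verification against a real reporter database or citator remains the manual step the court says cannot be skipped.

```python
import re

# Simplified pattern for "volume Reporter page" citations,
# e.g. "447 Mass. 678" or "425 Mass. 1018". Real citation
# formats are far more varied; this is an illustration only.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+"              # volume number
    r"((?:[A-Z][A-Za-z.]*\s?)+?)"  # reporter abbreviation, e.g. "Mass."
    r"\s*(\d{1,5})\b"              # first page
)

def extract_citations(text: str) -> list[str]:
    """Return the raw citation strings found in a brief."""
    return [match.group(0) for match in CITATION_RE.finditer(text)]

if __name__ == "__main__":
    brief = (
        "See In re Driscoll, 447 Mass. 678, 689-690 (2006); "
        "Matter of Budnitz, 425 Mass. 1018, 1019 (1997)."
    )
    # The script only builds the to-verify list; each extracted
    # citation still must be checked against a real reporter or
    # citator before filing.
    for cite in extract_citations(brief):
        print("Verify before filing:", cite)
```

In practice, one would feed each extracted citation to a citator or case-law database lookup and flag anything that returns no match.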
If the filings are created by some sort of legal AI system, shouldn't the AI system be validating the citations?
Probably the AI software was a “large language model” such as ChatGPT, rather than AI software designed to perform legal reasoning. Large language models can produce text that looks like a legal brief written by a lawyer. Generally speaking, a brief containing fictional citations looks less like a brief written by a lawyer than a brief containing valid citations does. What this means is that if there is case law to support a proposition, a large language model will likely produce valid citations. If you ask a large language model to generate a brief where no supportive case law exists, it will do the best it can, which likely means generating plausible-sounding but fictitious citations.