The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Hallucinated Citations in Brief Filed by Assistant County Attorney
From a Minnesota Tax Court decision Thursday, Delano Crossing 2016 v. County of Wright (Chief Judge Jane N. Bowman and Judges Bradford S. Delapena and Beverly J. Luther Quast):
In support of a motion for summary judgment, which the court denies in a concurrent order, the County submitted a brief that included five case citations generated by Artificial Intelligence (AI); none of the five citations referred to an actual judicial decision. Indeed, much of the County's brief appeared to be written by AI. We subsequently ordered Wright County to show cause why it should not be sanctioned and why Ms. Pence, who signed and filed the brief, should not be reported to the Minnesota Lawyers Professional Responsibility Board. For the reasons below, although we believe Ms. Pence's conduct violated Rule 11, we decline to order sanctions. Additionally, we refer this matter to the Minnesota Lawyers Professional Responsibility Board for further review….
[When asked to explain the situation,] Ms. Pence generally characterized her filing of a brief containing AI-hallucinated case citations as a mistake, stating: "I believe I inadvertently filed a draft Motion that was never intended to be the final product. I did not intend to file an AI-generated pleading; however, I have been unable to locate any other documents containing my research."
Ms. Pence further averred that she did not realize her brief contained fake case citations until approximately 6:48 p.m. the night before the motion hearing, prompting Ms. Pence to conclude that the best time to deal with the situation was at the impending hearing. Ms. Pence further attested that although the case citations in her brief were fake, "the legal contentions in the motion are warranted by existing law as cited [orally during the hearing]." Ms. Pence generally attested, in other words, that although she "erroneously filed a document with hallucinated cases," the County's arguments were otherwise legally sound. Ms. Pence added that she has taken several remedial measures to ensure this does not happen again, and she understands "the seriousness of [her] mistake."
Rule 11.02 states that, "by presenting to the court … a pleading, written motion, or other document," an attorney certifies "to the best of [their] knowledge, information, and belief, formed after an inquiry reasonable under the circumstances" that "the claims, defenses, and other legal contentions therein are warranted by existing law…." "Thus, rule 11 prescribes an affirmative duty on counsel to investigate the factual and legal underpinnings of a pleading." …
We conclude that the inclusion of citations to non-existing cases (or other legal authorities) is a violation of Rule 11.02(b), as fake case citations cannot support any legal claim. Further, using fake case citations is inherently misleading, as the signing attorney induces readers to believe that their legal contentions are supported by existing law….
We find no merit in Ms. Pence's defense that all of the County's legal claims can be supported by genuine controlling precedent (that Ms. Pence orally offered to the court during the hearing). As an initial matter, we conclude that using fake case citations, particularly while knowing that AI can generate fictitious citations, is a Rule 11 violation, because the rule imposes an affirmative duty to investigate the "legal underpinnings of a pleading." In any event, the County's substitute cases do not support the legal contentions asserted in its brief….
Calling the inclusion of fake case citations a "mistake" in this matter is not objectively reasonable….
We believe one appropriate sanction here would be to summarily deny the County's motion. If an attorney submits to this court legal arguments using fake legal authority, we generally will deny the motion without expending the time and resources necessary to analyze those arguments. As described in our concurrent Order denying the County's motion, however, we concluded that the County's arguments were so clearly incorrect that it was preferable to deny them on the merits. In lieu of summarily denying the County's motion as a sanction, we believe that this Order … sufficiently deters Ms. Pence from relying solely on AI for case citations or legal conclusions in the future. Thus, we decline to order any further sanctions in this matter….
"Competent representation requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation." Minn. Rules of Pro. Conduct r. 1.1. In addition, a lawyer "shall not knowingly … make a false statement of fact or law to a tribunal …" Wright County's brief in this matter raises questions of truthfulness and candor to the court.
We are mindful of our responsibility as judges. "A judge having knowledge that a lawyer has committed a violation of the Rules of Professional Conduct that raises a substantial question regarding the lawyer's honesty, trustworthiness, or fitness as a lawyer in other respects shall inform the appropriate authority."
We believe the submission of an AI-generated brief, apparently unreviewed, as evidenced by inclusion of entirely fake case citations, reasonably raises questions as to a lawyer's honesty, trustworthiness, and/or fitness as a lawyer. Thus, as we are obligated to inform the authority under our governing rule, we will send to the Minnesota Lawyer's Professional Responsibility Board [the relevant documents in the case -EV] …. The Board will respond as it sees fit.
Does anybody Shepardize anymore?
Shepardize means to check that a case remains good law to ensure that you are not citing to a case that has been overruled. It was never meant as a term to determine whether a case existed in the first place. The failure in these cases is that no initial research is done at the start. It is literally the equivalent of just making up a case name like Scooby Doo v Superman and turning it in to the court. An actual case needs to exist before it can be Shepardized.
I think the point is that every case cited in a brief should be Shepardized. That is lawyering 101, but it obviously wasn't done, because the fake cites were not caught.
As they say in logic class, Shepardizing is necessary but not sufficient. It will tell you whether your case does indeed exist and whether it is still good law. It won't tell you whether it stands for the proposition that the AI claims it does.
In the case above, it explicitly says, "none of the five citations referred to an actual judicial decision," so Shepardizing would have told you there is no such case. It was not being used to determine whether a case stood for what the AI claimed, only whether it existed at all.
It would've uncovered that Scooby Doo v Superman isn't a real case.
The obvious quick fix would be for AI-generated citations to be labeled "AI-generated citation" so that humans could quickly identify and confirm them.
Are lawyers stupid?
The obvious fix is to disbar lawyers who use false citations regardless of the source of the citation.
There is no sense 'blaming the computer'; these are lazy bums who have no business in their "profession".
Ordinary old-fashioned crimes don't get shrugged off with "I realized at 6:48 pm that my plan to rob the bank the next morning was a bad idea, but decided the best time to tell the gang was once we were inside the bank."
ETA I wouldn't disbar them; I don't like any occupational licensing. I'd simply call their fake citations perjury and make them liable for the punishment they had intended to inflict on the other party.
A lazy bum should be clever enough to have a computer highlight the things they need to check.
Why would you disbar lawyers for filing briefs with formatting issues?
There is no sense 'blaming the computer'; these are lazy bums who have no business in their "profession".
Now, I can agree that people shouldn't be trusted in a profession when they don't even check the output of an AI.
That would have been a much sicker burn if your link didn't say in so many words "minor citation and formatting errors"....
And fairly ironic when you cite it in the process of castigating people for not checking their sources.
The "sick burn" is in how obviously the administration was trying to minimize the sloppiness of that report by saying that it was "minor citation and formatting errors".
Nice try, but you explicitly presented it as only "formatting issues." Which is exactly the sort of mischaracterization everyone is jumping on to try to milk this for something beyond a handful of mis-cites.
Nice try, but you explicitly presented it as only "formatting issues."
I was implicitly satirizing the problems with that report. I don't believe that you actually misunderstood what I was doing there.
Which is exactly the sort of mischaracterization everyone is jumping on to try to milk this for something beyond a handful of mis-cites.
Read that NOTUS article. The hallucinated citations are just the most obvious problems with the report. Also look at who the 14 commissioners are that are responsible for that report. Exactly 2 of them have any medical training or other expertise relevant to childhood health.
'I didn't bother to click through and don't follow the news enough to get your joke. This is your fault.'
Man, you sure are a grumpus these days, LoB.
This is the article that Axios is itself citing that goes into detail about the errors in that report:
https://www.notus.org/health-science/make-america-healthy-again-report-citation-errors
They aren't "minor citation" errors to cite studies that don't exist, by authors that do exist but that never wrote a paper with that title and/or never worked with the listed co-authors, or to cite a study by an author that doesn't even seem to exist on Google Scholar at all. They obviously let a chatbot write that report. When a commission led by the HHS secretary puts out an AI report with those kinds of mistakes, that matters. It is a sign that the people leading this administration's public health agencies aren't making sure that what they are saying is reliably true.
Well, let's see: scores of humans with hugely vested interests have now crawled over the report with a fine-toothed comb, and at most have identified 4 citations that may not exist at all (I can't help but note that some of the "couldn't find it" language is pretty cagey).
But even if that's correct, I bet the investment world would very much like to know what chatbot could produce a document like that and make up only 4 out of 522 citations.
Hey, if you want to trust a report that has any citations that don't exist without digging any deeper, be my guest.
(I can't help but note that some of the "couldn't find it" language is pretty cagey)
They spoke directly with people cited and quoted them. If you can find the author or the paper from this citation, feel free to let us know:
Shah, M. B., et al. (2008). Direct-to-consumer advertising and the rise in ADHD medication use among children. Pediatrics, 122(5), e1055- e1060.
As I said, others with a hugely vested interest have done that digging, and apparently have come up with, at most, 4 out of 522.
But then I guess you yourself only feel rock-solid about 1 out of 522.... ouch.
And sorry -- if you mentioned what core proposition in the paper this cite was supporting, I must have missed it.
"At most" is a bizarre phrase in this context. One fictitious citation is significant.
Um, why do you think RFK Jr. is above fabricating citations?
Um, his link didn't say that. The administration did. The link correctly noted that the citations were fabricated.
AI-generated content shouldn't be directly copied into a brief. It should be read separately, then the cases read, then the attorney's actual argument typed into the working document of a brief. There's no reason to ever worry that a draft was accidentally submitted with AI in it.
I'm curious. Has any AI been trained on information scraped from legal libraries?
It wouldn't matter. Large language models aren't true intelligence that finds sources for legal points and knows how to cite legal authority for those points. All they do is predict the next words in a sequence. Any lawyer who has any clue about how current AI tools like ChatGPT operate would know not to try to use them to generate legal writing.
By extension, is it fair to say that there are too many who have passed the bar examination based on memorization, and lack the aptitude to practice law?
Ms. Pence: "I believe I inadvertently filed a draft Motion that was never intended to be the final product. ..."
Is the future one of even more strained courts, with procedures needed by all involved to verify citations?
And what about citations presented in court rulings and opinions? Will judges become more or less conscientious and deliberate in rationalizing their politicized decisions?
When I do legal research using AI, I assume that any citations it gives me will be a result of hallucinations. I have a hard time getting accurate citations unless I add them myself.
Apart from the fake citations, aren't legal briefs written by AI fairly obviously written by AI?
I just cannot comprehend how this sort of thing happens. Every single brief I submit, no matter how busy I am, gets a quick proofread and all citations checked in Lexis/Westlaw for accuracy. Even if you're some big and important partner who can't do that personally, you should have an assistant whose job it is to check. If you take that one very, very simple and basic step, you will never submit a hallucinated case. So why does this keep happening??