The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Second Circuit Refers Lawyer for Disciplinary Proceedings Based on AI-Hallucinated Case in Brief
From Park v. Kim, decided today by the Second Circuit (Judges Barrington Parker, Allison Nathan, and Sarah Merriam); this is the 13th case I've seen in the last year in which AI-hallucinated citations were spotted:
We separately address the conduct of Park's counsel, Attorney Jae S. Lee. Lee's reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT. Because citation in a brief to a non-existent case suggests conduct that falls below the basic obligations of counsel, we refer Attorney Lee to the Court's Grievance Panel, and further direct Attorney Lee to furnish a copy of this decision to her client, Plaintiff-Appellant Park….
Park's reply brief in this appeal was initially due May 26, 2023. After seeking and receiving two extensions of time, Attorney Lee filed a defective reply brief on July 25, 2023, more than a week after the extended due date. On August 1, 2023, this Court notified Attorney Lee that the late-filed brief was defective, and set a deadline of August 9, 2023, by which to cure the defect and resubmit the brief. Attorney Lee did not file a compliant brief, and on August 14, 2023, this Court ordered the defective reply brief stricken from the docket. Attorney Lee finally filed the reply brief on September 9, 2023.
The reply brief cited only two court decisions. We were unable to locate the one cited as "Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep't 2014)." Appellant's Reply Br. at 6. Accordingly, on November 20, 2023, we ordered Park to submit a copy of that decision to the Court by November 27, 2023. On November 29, 2023, Attorney Lee filed a Response with the Court explaining that she was "unable to furnish a copy of the decision." Although Attorney Lee did not expressly indicate as much in her Response, the reason she could not provide a copy of the case is that it does not exist—and indeed, Attorney Lee refers to the case at one point as "this non-existent case."
Attorney Lee's Response states:
I encountered difficulties in locating a relevant case to establish a minimum wage for an injured worker lacking prior year income records for compensation determination …. Believing that applying the minimum wage to in injured worker in such circumstances under workers' compensation law was uncontroversial, I invested considerable time searching for a case to support this position but was unsuccessful….
Consequently, I utilized the ChatGPT service, to which I am a subscribed and paying member, for assistance in case identification. ChatGPT was previously provided reliable information, such as locating sources for finding an antic furniture key. The case mentioned above was suggested by ChatGPT, I wish to clarify that I did not cite any specific reasoning or decision from this case.
All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure. Among other obligations, Rule 11 provides that by presenting a submission to the court, an attorney "certifies that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances … the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law." "Rule 11 imposes a duty on attorneys to certify that they have conducted a reasonable inquiry and have determined that any papers filed with the court are well grounded in fact, [and] legally tenable." "Under Rule 11, a court may sanction an attorney for, among other things, misrepresenting facts or making frivolous legal arguments."
At the very least, the duties imposed by Rule 11 require that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely. Indeed, we can think of no other way to ensure that the arguments made based on those authorities are "warranted by existing law," Fed. R. Civ. P. 11(b)(2), or otherwise "legally tenable." As a District Judge of this Circuit recently held when presented with non-existent precedent generated by ChatGPT: "A fake opinion is not 'existing law' and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law. An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system." Mata v. Avianca, Inc. (S.D.N.Y. 2023).
Attorney Lee states that "it is important to recognize that ChatGPT represents a significant technological advancement," and argues that "[i]t would be prudent for the court to advise legal professionals to exercise caution when utilizing this new technology." Indeed, several courts have recently proposed or enacted local rules or orders specifically addressing the use of artificial intelligence tools before the court. {See, e.g., Notice of Proposed Amendment to 5th Cir. R. 32.3, U.S. Ct. of Appeals for the Fifth Cir., https://www.ca5.uscourts.gov/docs/default-source/default-document-library/public-comment-local-rule-32-3-and-form-6 [https://perma.cc/TD4F-WLV2] (Proposed addition to local rule: "[C]ounsel and unrepresented filers must further certify that no generative artificial intelligence program was used in drafting the document presented for filing, or to the extent such a program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human."); E.D. Tex. Loc. R. AT-3(m) ("If the lawyer, in the exercise of his or her professional legal judgment, believes that the client is best served by the use of technology (e.g., ChatGPT, Google Bard, Bing AI Chat, or generative artificial intelligence services), then the lawyer is cautioned that certain technologies may produce factually or legally inaccurate content and should never replace the lawyer's most important asset—the exercise of independent legal judgment. If a lawyer chooses to employ technology in representing a client, the lawyer continues to be bound by the requirements of Federal Rule of Civil Procedure 11, Local Rule AT-3, and all other applicable standards of practice and must review and verify any computer-generated content to ensure that it complies with all such standards."); Self-Represented Litigants (SRL), U.S. Dist. Ct. for the E. Dist. of Mo., https://www.moed.uscourts.gov/self-represented-litigants-srl [https://perma.cc/Y7QG-VVEF] ("No portion of any pleading, written motion, or other paper may be drafted by any form of generative artificial intelligence. By presenting to the Court … a pleading, written motion, or other paper, self-represented parties and attorneys acknowledge they will be held responsible for its contents. See Fed. R. Civ. P. 11(b).").} But such a rule is not necessary to inform a licensed attorney, who is a member of the bar of this Court, that she must ensure that her submissions to the Court are accurate.
Attorney Lee's submission of a brief relying on non-existent authority reveals that she failed to determine that the argument she made was "legally tenable." The brief presents a false statement of law to this Court, and it appears that Attorney Lee made no inquiry, much less the reasonable inquiry required by Rule 11 and long-standing precedent, into the validity of the arguments she presented. We therefore REFER Attorney Lee to the Court's Grievance Panel pursuant to Local Rule 46.2 for further investigation, and for consideration of a referral to the Committee on Admissions and Grievances….
Thanks to Andy Patterson for the pointer.
"Park's reply brief in this appeal was initially due May 26, 2023. After seeking and receiving two extensions of time, Attorney Lee filed a defective reply brief on July 25, 2023, more than a week after the extended due date. . . . The reply brief cited only two court decisions."
How does it take that long to write a brief that cites to two opinions, one of which is fake? It sounds like Lee is taking on more work than she can actually handle, hence cutting corners and being late. Part of the job is either hiring help or turning away work when things get too busy, which in the grand scheme of things is a nice problem to have.
"one of which is fake"
Could not find the time to do a cite check on 2 cases? No one is that busy.
This one was.
Counsel in the Second Circuit are bound to comply with the Federal Rules of Civil Procedure? I'd thought they are bound to comply with the Federal Rules of Appellate Procedure.
Here’s an idea.
From now on, for take-home papers in any subject, make students come up with and use a list of sources and cite them thoroughly throughout, integrated naturally into the paper's writing. When they turn it in, make sure the paper matches what the cited sources actually say. It's a straightforward way to detect AI generation, at least with current technology, and you get to keep take-home papers. The work it would take to thwart this seems like so much that you might as well write the paper honestly.
It's not clear to me how what you are suggesting is different from having a bibliography and footnotes.
We were required to do that starting at some point in high school. No more?
Most take-home essays at lower levels do not require a bibliography. In those that do, the references just have to relate to the essay's information in a general, loose sense, i.e., the referenced article 'Seed eating habits of birds' mentions somewhere that birds eat seeds. I'm proposing not only bibliographies but tighter references to some standard, i.e., tying 'birds eat seeds' to 'Martin et al.'s 2017 study showed in figure 27 that birds ate pecan seeds at a rate of 30g/month', integrating specific information (bonus points if it comes from visual analysis of a figure). The AI now not only has to generate a believable essay but somehow grok the structure so that the linked references' internal details are also correct.
I'm not sure what Davy is talking about when he says AIs can already do this very well. What I read showed GPT-4, the most advanced AI currently widely available, failing 25% of the time at a very superficial check that just looked at whether the cited articles existed and basic stats like date and page number. That is impressive, but such a check is still obviously much easier for AIs to handle than the tighter referencing I'm proposing.
Some AIs do cite sources. The citations are sometimes, but not usually, wrong.
When you've got 90 papers to grade, checking every source is going to take you a while.
Have students print out their citations and highlight what they cited.
Then spot check.
You ask the AI to print out the fake paper it's citing, and it will oblige. You do know that, right?
Given that AI generates such garbage, why does anyone rely on it for anything?
There is talk now about AI replacing computer programmers and even doctors. Who'd want to use them? They would just diagnose you with a fake disease.
(Oh yes, Mr. Jones. You have the dreaded Hawaiian Cat Flu. Transmitted by cats who eat pineapple. Unfortunately, there is no cure, so go home and write your will.)
I think that the answer is to “Trust But Verify”. Check the case cites. Check the medical diagnosis with additional testing. Etc.
My wife's GP made a recent diagnosis that seemed out in left field. Probably fewer than 1 in 10 GPs would have made it. He referred her out to a specialist, who confirmed it. That is why I see more positive than negative in this trend. Medicine is now an extremely broad field, and no one can understand more than a small part of it. The average IQ of an MD is probably about 125 (similar to JDs), with specialists probably scoring a bit higher and GPs a bit lower. Yet they are expected to have encyclopedic knowledge of the entire field. Helping the average GP make the diagnosis that my wife's expensive concierge doctor did has to be useful, as would keeping them from missing something that only specialists would see.
As for lawyers, the first cut for briefs like this should include automatically checking cites, as well as producing copies of cited cases. The problem of bogus cites may be new, but the problem of bad cites is long standing. Whether you found the cite in a digest, or from a ChatAI (etc) generated brief, you, as a lawyer, are ethically required to read it and make sure that it says what you are citing it for saying. And, yes, a good lawyer checks the cases cited by opposing counsel too.
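The automated "first cut" cite-check described above can be sketched mechanically. This is a hypothetical illustration, not any court's or vendor's actual tooling: the function names and the tiny stand-in "database" are invented for the example, the regex covers only a few common reporter formats, and a real checker would query an actual citation service rather than a hard-coded set.

```python
import re

# Matches a handful of common reporter citation formats only,
# e.g. "114 A.D.3d 947" or "42 F.3d 100". A real cite-checker would
# need the full universe of reporters.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:A\.D\.3d|N\.Y\.3d|F\.3d|F\. Supp\. 2d|U\.S\.)\s+\d{1,4}\b"
)

def extract_citations(text):
    """Pull reporter citations out of a brief's text."""
    return CITATION_RE.findall(text)

def check_brief(text, known_citations):
    """Map each citation found in the brief to True (verified) or
    False (unverifiable -- possibly hallucinated)."""
    return {cite: cite in known_citations for cite in extract_citations(text)}

brief = ("See Matter of Bourguignon v. Coordinated Behavioral Health "
         "Servs., Inc., 114 A.D.3d 947 (3d Dep't 2014).")

# Stand-in for a real citation database; the non-existent Bourguignon
# cite is simply absent from it, so it gets flagged.
known = {"42 F.3d 100"}
flags = check_brief(brief, known)  # {"114 A.D.3d 947": False}
```

A flagged citation would still need a human to pull and read the case, as Rule 11 requires; the automation only surfaces candidates for that review.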
"Trust But Verify"
More aptly: verify, then trust.
One reason is if they are better :-). In some narrow cases (reading mammograms, IIRC), AI shows promise of doing better than radiologists. Looking at x-rays all day is not a task that plays to human cognitive strengths.
Is that sort of thing the same as what ChatGPT, etc. do?
It looks like an awfully narrow, well-defined task, and it's easy to imagine that a computer would be better at it than humans, or would at least be awfully helpful to a human radiologist.
"Is that sort of thing the same as what ChatGPT, etc. do?"
No, I don't think so - it was from several years ago.
AI, but not particularly like ChatGPT.
(disclaimer: I was a computer nerd, but not an AI type)
What recourse does her client have?
None for the ChatGPT issue per se. A malpractice claim for the overall mishandling of the case, if (a) the lawyer was the one who did it,¹ and (b) the case was winnable anyway.
¹Obviously the lawyer did the ChatGPT thing, but the case was screwed up long before that, based on failure to cooperate with discovery.
Whose penalty should be more severe -- Lee (who, I hope, is a relatively inexperienced lawyer lacking institutional resources), or John Eastman (who was a law professor and candidate for attorney general before turning into an un-American law-talking guy)?
Better yet, let's punish every lawyer who makes a losing argument in a brief. Dis-bar them all. Eventually we will have just one lawyer left.
Do you customarily find that people (outside militia gatherings, QAnon rallies, and Republican committee meetings) regard you as persuasive, competent, relevant, and/or helpful?
Kirkland, do you have any idea how many sleazy lawyers the left has?
That's an awful lot of brittle bluster, hollow aggression, and counterproductive pomposity!
How do these morons ever manage to get law degrees in the first place? Unless you've been living in a cave off the grid, you know that reliance on AI is dangerous. This is not a case of being lazy or over-worked, it is a case of stupidity. No lawyer who does this should ever be given a chance to do it a second time.
The issue isn't getting a law degree, in my experience -- it is developing as a lawyer after graduation.
The route many lawyers followed to success 40 to 60 years ago -- years of training by senior lawyers at a firm -- seems less common these days, for a number of reasons.
For years, I was the beneficiary of guidance from scores of highly skilled, generous, attentive lawyers who (1) invested plenty of time and effort in my development and (2) provided appropriate work and enjoyable opportunities to succeed. I sense that this occurs far too infrequently today.
Many recent (within the past 10 years) law graduates seem to have been raised by wolves, learning brittle bluster, hollow aggression, and counterproductive pomposity by watching television lawyers. I hope our profession devises and implements a better system.
(I remember hearing about the glory of preceptorships from some senior lawyers decades ago.)
I'm not going to bother looking it up, but I think Delaware might still require a preceptorship.
You mean that Godzilla v. Mothra isn't actually a legal precedent?
Sure it is. From the District of Hollywood.
I think Godzilla movies are by a Japanese company, Toho, and unless Hollywood has some extension campus in Japan, I wouldn't call them a Hollywood company.
I also like the precedent of Godzilla v. Ghidorah.
I also like the precedent of Godzilla v. Ghidorah.
Distinguished from Godzilla v Hedorah.
However, The Incredible Hulk vs. Man Thing is inarguably a US case, though IIRC there was a dispute over whether it should be tried in California or Delaware.
Nice to have confirmation that some lawyers aren’t too bright.
SOME???
Some of them have delusions of janitor.
"ChatGPT was previously provided reliable information, such as locating sources for finding an antic furniture key. The case mentioned above was suggested by ChatGPT, I wish to clarify that I did not cite any specific reasoning or decision from this case."
Incredibly bad excuse, both in writing and content.
an antic furniture key.
I can see that it would be hard to locate a furniture key that was jumping around and running all over the place, rather than sitting still.
English might not be that lawyer's first language, which would be another reason to hope that lawyer benefits from some institutional (law firm, most likely) resources.