The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"Any Lawyer Unaware That [Generative AI Research] Is Playing with Fire Is Living in a Cloud"
From In re Martin, decided yesterday by Bankruptcy Judge Michael Slade (N.D. Ill.):
While I appreciate Mr. Nield's and Semrad's remorse and candor [in their response to my order to show cause], I find that they both violated Federal Rule of Bankruptcy Procedure 9011 [by] {filing a brief containing fake quotations and nonexistent authority manufactured by artificial intelligence}. I further find that a modest, joint-and-several sanction of $5,500, paid to the Clerk of the Bankruptcy Court, along with a requirement that Mr. Nield and another senior Semrad attorney attend an upcoming course on the dangers of AI scheduled for the National Conference of Bankruptcy Judges (NCBJ) annual meeting in September, is the least harsh sanction that will appropriately address counsel's conduct and deter future, similar misconduct from them and others….
The first reason I issue sanctions stems from Mr. Nield's claim of ignorance—he asserts he didn't know the use of AI in general and ChatGPT in particular could result in citations to fake cases. Mr. Nield disputes the court's statement in Wadsworth v. Walmart Inc. (D. Wyo. 2025) that it is "well-known in the legal community that AI resources generate fake cases." Indeed, Mr. Nield aggressively chides that assertion, positing that "in making that statement, the Wadsworth court cited no study, law school journal article, survey of attorneys, or any source to support this blanket conclusion."
I find Mr. Nield's position troubling. At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud. This has been a hot topic in the legal profession since at least 2023, exemplified by the fact that Chief Justice John G. Roberts, Jr. devoted his 2023 annual Year-End Report on the Federal Judiciary (in which he "speak[s] to a major issue relevant to the whole federal court system," Report at 2) to the risks of using AI in the legal profession, including hallucinated case citations. To put it mildly, "[t]he use of non-existent case citations and fake legal authority generated by artificial intelligence programs has been the topic of many published legal opinions and scholarly articles as of late." At this point there are many published cases on the issue—while only a sampling are cited in this opinion, all but one were issued before June 2, 2025, when Mr. Nield filed the offending reply. See, e.g., Jaclyn Diaz, A Recent High-Profile Case of AI Hallucination Serves as a Stark Warning, NPR Illinois (July 10, 2025, 12:49 PM) ("There have been a host of high-profile cases where the use of generative AI has gone wrong for lawyers and others filing legal cases …. It has become a familiar trend in courtrooms across the U.S."). The Sedona Conference wrote on the topic in 2023. Newspapers, magazines, and other well-known online sources have been publicizing the problem for at least two years. And on January 1, 2025, the Illinois Supreme Court issued a "Supreme Court Policy on Artificial Intelligence" requiring practitioners in this state to "thoroughly review" any content generated by AI.
Counsel's professed ignorance of the dangers of using ChatGPT for legal research without checking the results is in some sense irrelevant. Lawyers have ethical obligations not only to review whatever cases they cite (regardless of where they pulled them from), but to understand developments in technology germane to their practice. And there are plenty of opportunities to learn—indeed, the Illinois State Bar Association chose "Generative Artificial Intelligence—Fact or Fiction" as the theme of its biennial two-day Allerton Conference earlier this year, calling the topic "one that every legal professional should have on their radar." Similar CLE opportunities have been offered across the nation for at least the past two years.
The bottom line is this: at this point, no lawyer should be using ChatGPT or any other generative AI product to perform research without verifying the results. Period. See, e.g., Lacey v. State Farm Gen. Ins. Co. (C.D. Cal. May 5, 2025) ("Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology—particularly without any attempt to verify the accuracy of that material."); Mid Cent. Operating Eng'rs Health & Welfare Fund v. HoosierVac LLC (S.D. Ind. 2025) ("It is one thing to use AI to assist with initial research, and even non-legal AI programs may provide a helpful 30,000-foot view. It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented."). In fact, given the nature of generative AI tools, I seriously doubt their utility to assist in performing accurate research (for now). "Generative" AI, unlike the older "predictive" AI, is "a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on." Platforms like ChatGPT are powered by "large language models" that teach the platform to create realistic-looking output. They can write a story that reads like it was written by Stephen King (but wasn't) or pen a song that sounds like it was written by Taylor Swift (but wasn't). But they can't do your legal research for you. ChatGPT does not access legal databases like Westlaw or Lexis, draft and input a query, review and analyze each of the results, determine which results are on point, and then compose an accurate, Bluebook-conforming citation to the right cases—all of which it would have to do to be a useful research assistant. Instead, these AI platforms look at legal briefs in their training model and then create output that looks like a legal brief by "placing one most-likely word after another" consistent with the prompt it received.
If anything, Mr. Nield's alleged lack of knowledge of ChatGPT's shortcomings leads me to do what courts have been doing with increasing frequency: announce loudly and clearly (so that everyone hears and understands) that lawyers blindly relying on generative AI and citing fake cases are violating Bankruptcy Rule 9011 and will be sanctioned. Mr. Nield's "professed ignorance of the propensity of the AI tools he was using to 'hallucinate' citations is evidence that [the] lesser sanctions [imposed in prior cases] have been insufficient to deter the conduct."
The second reason I issue sanctions is that, as described above, I also have concerns about the way this particular case was handled. I understand that Debtor's counsel has a massive docket of cases. But every debtor deserves care and attention. Chapter 13 cases can be challenging to file and manage—especially when they involve complexities like those in this case. If a law firm does not have the resources to devote the time and energy necessary to shepherd hundreds of Chapter 13 cases at the same time, it should refer matters it cannot handle to other attorneys who can—lest a search for time-saving devices lead to these kinds of missteps. What I mean to convey here is that while everyone makes mistakes, I expect—as I think all judges do—attorneys to be more diligent and careful than has been shown here….
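The court's description of generative AI "placing one most-likely word after another" can be made concrete. What follows is a minimal, purely illustrative sketch of that loop, not anything from the opinion: it uses the small open GPT-2 model via the Hugging Face transformers library (ChatGPT's internals are not public), and the prompt is invented. The loop just appends the single most likely next token, over and over; no step in it ever consults a legal database.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative stand-in: a small open model, not ChatGPT itself.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The leading case on this pleading standard is"  # invented prompt
ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding: repeatedly append the single most likely next token.
# The model continues the text with whatever *looks* plausible; nothing
# here checks that any case it names actually exists.
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits      # scores for every possible next token
    next_id = logits[0, -1].argmax()    # pick the most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))

Whatever this prints may well read like a citation, but the loop contains no lookup step, which is exactly the judge's point.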
Should state regulatory authorities mandate CLE classes in computer-aided lawyering?
I understand the rule against trying to fool the court. However, lawyers make bogus arguments. Judges impose their sicko biases, and make shit up in lawyer gibberish to justify them afterwards. Enforce the rule for everyone. Most real case decisions are hallucinations of bias, stupidity, mood swings, hanger, whether the prick got yelled at by the wife that AM. It's not like real cases have the slightest external validity. They are just as crazy as hallucinated ones.
I wrote a well-crafted intervention lawsuit on ChatGPT. I provided the arguments. I asked for real case citations and for real Rules of Bankruptcy Procedure. It took 15 seconds to get a well-crafted motion requesting permission to submit the lawsuit. It knew to do that. I verified its references. It added its own valid headings and points I had not known about or considered.
Bye, bye lawyers.
Next, replace judges with algorithms. If the definition of justice requires repeatability of treatment, then an algorithm is mandatory, not just nice or efficient. An algorithm is mandated by the Fifth Amendment Procedural Due Process Clause, and by the one in the Fourteenth Amendment.
Driving a car is 100 times better than riding a horse, especially on a snow day. ChatGPT and judge algorithms will be 100 times better than the toxic lawyer profession.
I would like to subscribe to your substack.
I may start one.
Well, non-lawyers who got law degrees from unaccredited mail-order diploma mills, anyway.
David. All my lawsuits have been dismissed, mooted by the end of the bad conduct. They saved thousands of lives. Lawfare works, I hate to say it.
You forget the ones that were tossed because you couldn't manage to properly follow the rules to effectuate service.
A fox may steal your hens, Sir,
A Whore your health and Pence, Sir,
Your daughter rob your Chest, Sir,
Your Wife may steal your Rest, Sir,
A Thief your Goods and Plate,
A Thief your Goods and Plate.
But this is all but picking,
With Rest, Pence, Chest, and Chicken,
It ever was decreed, Sir,
If Lawyer's Hand is fee'd, Sir,
He steals your whole Estate,
He steals your whole Estate.
The lawyer profession is a rent-seeking criminal cult enterprise. It takes our $1.5 tril and returns nothing of value. It allows a billion crimes. It destroyed manufacturing. It stymied our growth to 2% instead of its natural 10%. All PC, all woke, is case law. It destroyed the black family, after the worst stresses could not. It prevents the ending of wars killing millions and destroying $trils. It needs to be cancelled.
Yes.
I would've thought this completely unnecessary. Yet the pervasiveness and frequency of these cases suggest otherwise. Simply put, these people already know better. For business reasons at the individual or firm level, they are choosing to pretend otherwise. I'm not sure this is really an issue of instructing them in how to use it responsibly so much as ensuring they are fully aware of the harsh consequences that attend this kind of misuse.
"Hey, you, get off of my cloud." -- In re Glimmer Twins, 392 US 19 (1968).
Yes, the attorney should have read the Chief Justice's annual report, and should be sanctioned for not reading it.
Nice rebuke by this judge.
Back in the Dark Ages, before computers took over our lives, I began my legal career by clerking for two judges -- one a trial court judge and the other a justice on the highest court in my state. I was such a nerd that I actually enjoyed research. A good day for me was sitting at a big table in the library with a Thermos of coffee and case books, statute books, digests, volumes of Shepard's, and yellow pads (legal-size, of course) spread out in front of me. Even though I eventually learned how to use Lexis and Westlaw, I never felt that my research was as thorough as it was when I used real books.
I'm retired now, but out of curiosity I have played around a little with ChatGPT. Nothing about it makes me think that it would be useful for legal research. And even if I did, I cannot imagine anyone submitting an AI-generated memo or brief without checking the cites.
I agree. I asked the following to ChatGPT and received flawed answers that would not have been useful in arguing a Title 47 issue.
"Explain why common carriage is more than an implied-in-fact contract. Then explain how a telegraph service differs from common carriage of a message electrically by wires or by wireless means or from transmission of intelligence electrically."
Section 153 is confusing, and ChatGPT did not help. Because Chevron deference no longer exists, a precise understanding of a statute is important when agency interpretation or misinterpretation is at issue in litigation.
The FCC seems to have been arbitrary and capricious in determining that telegraph services and pager service were telecommunications service while SMS is an information service. All three are store-and-forward services without change of the message, which is transmitted.
EV, I have asked this before and no one seemed willing to answer. Westlaw (hard to find a lawyer who is not aware of Westlaw) is integrating AI and singing its praises. So I asked: is anyone using it, and what are the results? There have been plenty of posts about what I will call cheapskates using bargain-basement approaches to legal research with AI and getting bad results. What about it: is anyone using the Westlaw stuff, and what are the results? Asking for a friend.
Ah, so he's going with the Costanza defense.
Couldn't very well say, "I knew it was risky, but I was in a hurry and took a chance. I got away with it the last three times, after all."
"Was that wrong? Should I have not done that? I tell ya, I gotta plead ignorance on this thing because if anyone had said anything to me at all when I first started here that that sort of thing was frowned upon... you know, cause I've worked in a lot of offices and I tell you people do that all the time."
Don't bite off more than you can chew.
And if you do, don't puke it up in my court or you will owe $5500.
This is emerging as a significant issue. After researching it, I decided to develop a tool to identify fabricated citations. I've built a web tool that performs lookups on citations and estimates the probability of AI hallucinations (it should go live this week, hopefully).
There's also a website by Damien Charlotin that tracks about 223 cases of this occurring worldwide.
https://legalcitationverifier.com (in testing for the next few days)
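(Not how that tool actually works; the comment doesn't say. But a minimal sketch of the general approach such a checker might take: extract reporter-style citation strings with a regex, then check each one against a case-law database. The lookup endpoint below is a made-up placeholder, not the commenter's API or any real service.)

import re
import requests

# Rough pattern for reporter citations like "550 U.S. 544" or "123 F.3d 456".
# Real citation grammars are far messier; this is illustration only.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\. ?(?:2d|3d|4th)|B\.R\.)\s+\d{1,4}\b"
)

def extract_citations(text):
    """Pull candidate reporter citations out of a brief's text."""
    return CITATION_RE.findall(text)

def citation_exists(citation):
    """Ask a case-law database whether the citation resolves to a real case.

    The endpoint below is a hypothetical placeholder.
    """
    resp = requests.get(
        "https://example-caselaw-api.invalid/lookup",  # placeholder endpoint
        params={"cite": citation},
        timeout=10,
    )
    return resp.ok and resp.json().get("found", False)

def flag_suspect_citations(brief_text):
    """Return citations that could not be verified (possible hallucinations)."""
    return [c for c in extract_citations(brief_text) if not citation_exists(c)]

(A real verifier would also need fuzzy matching of party names against the cited volume and page, since hallucinated citations sometimes reuse real volume and page numbers under invented case names.)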
In med screw-el to signal that a particular patient was a malingerer/drug seeker/hypochondriac, the Attending would order us to “check a Serum Porcelain level!”
Or say that the patient had a severe case of “MPH”
The females with the Fibromyalgia and Chronic Fatigue needed a “BBD”
Frank
So when will courts finally treat entirely fabricated submissions as the willful acts they in fact are? For any professional in any field to believe that AI output can be relied upon as factual is malpractice, whether in law, medicine, engineering, or anywhere else.
Sanction the lawyer for what they have willfully done, for which $5,500 is a mere slap on the wrist.
How about a punishment similar to what an undergrad would suffer: the paper is tossed, the grade is zero, the grade for the semester is calculated including that zero (which will almost surely then be an F), and further, the case is submitted to the honesty/ethics committee to determine whether the student should remain in school. If this lawyer's submission is tossed and he misses a deadline, causing his client to suffer, then his client can obtain a competent attorney to sue the first for malpractice. The judge pretends that this is a serious sanction, but it is not by any stretch. A fine amounting to a couple of car payments is not going to make the profession take notice. Real malpractice claims and loss of livelihood just might get lawyers to read the briefs they plan to submit.
These are not typos or a single honest mistake of swapping one legitimate citation for another. This is an ethical breach at its highest.
But then, IANAL. So what do I know.
My purely anecdotal experience of using AI has shown me how often it makes mistakes which I must then correct. I am fairly certain it would be an unreliable source for anything other than basic forms in court.
What it might allow is for the author of whatever is to be submitted to have their AI review it and possibly offer suggestions for refinement.
It has been helpful to me in finding my own errors as well as ways to better express the information I am attempting to convey.
As for citing fake cases, I have no idea, but I have had AI show me cases that support or refute my position. Even then, everything needs to be verified.
(Not a lawyer)
Note that the judge did not forbid use of AI. He only said that the attorney must review and verify all submissions. That's entirely reasonable.
BTW, when any non-licensed intern, aide, or newly hired grad writes a brief, don't the licensed attorneys have the same obligation to review and verify? If yes, then the emphasis should be on enforcing that obligation, rather than making AI a special case.
Yes, this. Honestly don't understand how anyone could think or understand otherwise. If lawyers think otherwise, then there is no reason for there even to be law licensing. No AI has been licensed to date. Asking an AI to write something is no different than asking an intern or junior associate. The buck doesn't stop with them.
Unless this indicates something more pervasive: that senior partners do not actually closely review the work of their underlings, but because so few underlings commit this level of fraud, it goes unnoticed.
Yes. In fact, if your name is on a brief, you should be reviewing it even if a licensed attorney wrote it.