The Volokh Conspiracy
"Any Lawyer Unaware That [Generative AI Research] Is Playing with Fire Is Living in a Cloud"
From In re Martin, decided yesterday by Bankruptcy Judge Michael Slade (N.D. Ill.):
While I appreciate Mr. Nield's and Semrad's remorse and candor [in their response to my order to show cause], I find that they both violated Federal Rule of Bankruptcy Procedure 9011 [by] {filing a brief containing fake quotations and nonexistent authority manufactured by artificial intelligence}. I further find that a modest, joint-and-several sanction of $5,500, paid to the Clerk of the Bankruptcy Court, along with a requirement that Mr. Nield and another senior Semrad attorney attend an upcoming course on the dangers of AI scheduled for the National Conference of Bankruptcy Judges (NCBJ) annual meeting in September, is the least harsh sanction that will appropriately address counsel's conduct and deter future, similar misconduct from them and others….
The first reason I issue sanctions stems from Mr. Nield's claim of ignorance—he asserts he didn't know the use of AI in general and ChatGPT in particular could result in citations to fake cases. Mr. Nield disputes the court's statement in Wadsworth v. Walmart Inc. (D. Wyo. 2025) that it is "well-known in the legal community that AI resources generate fake cases." Indeed, Mr. Nield aggressively chides that assertion, positing that "in making that statement, the Wadsworth court cited no study, law school journal article, survey of attorneys, or any source to support this blanket conclusion."
I find Mr. Nield's position troubling. At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud. This has been a hot topic in the legal profession since at least 2023, exemplified by the fact that Chief Justice John G. Roberts, Jr. devoted his 2023 annual Year-End Report on the Federal Judiciary (in which he "speak[s] to a major issue relevant to the whole federal court system," Report at 2) to the risks of using AI in the legal profession, including hallucinated case citations. To put it mildly, "[t]he use of non-existent case citations and fake legal authority generated by artificial intelligence programs has been the topic of many published legal opinions and scholarly articles as of late." At this point there are many published cases on the issue—while only a sampling are cited in this opinion, all but one were issued before June 2, 2025, when Mr. Nield filed the offending reply. See, e.g., Jaclyn Diaz, A Recent High-Profile Case of AI Hallucination Serves as a Stark Warning, NPR Illinois (July 10, 2025, 12:49 PM) ("There have been a host of high-profile cases where the use of generative AI has gone wrong for lawyers and others filing legal cases …. It has become a familiar trend in courtrooms across the U.S."). The Sedona Conference wrote on the topic in 2023. Newspapers, magazines, and other well-known online sources have been publicizing the problem for at least two years. And on January 1, 2025, the Illinois Supreme Court issued a "Supreme Court Policy on Artificial Intelligence" requiring practitioners in this state to "thoroughly review" any content generated by AI.
Counsel's professed ignorance of the dangers of using ChatGPT for legal research without checking the results is in some sense irrelevant. Lawyers have ethical obligations not only to review whatever cases they cite (regardless of where they pulled them from), but to understand developments in technology germane to their practice. And there are plenty of opportunities to learn—indeed, the Illinois State Bar Association chose "Generative Artificial Intelligence—Fact or Fiction" as the theme of its biennial two-day Allerton Conference earlier this year, calling the topic "one that every legal professional should have on their radar." Similar CLE opportunities have been offered across the nation for at least the past two years.
The bottom line is this: at this point, no lawyer should be using ChatGPT or any other generative AI product to perform research without verifying the results. Period. See, e.g., Lacey v. State Farm Gen. Ins. Co. (C.D. Cal. May 5, 2025) ("Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology—particularly without any attempt to verify the accuracy of that material."); Mid Cent. Operating Eng'rs Health & Welfare Fund v. HoosierVac LLC (S.D. Ind. 2025) ("It is one thing to use AI to assist with initial research, and even non-legal AI programs may provide a helpful 30,000-foot view. It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented."). In fact, given the nature of generative AI tools, I seriously doubt their utility to assist in performing accurate research (for now). "Generative" AI, unlike the older "predictive" AI, is "a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on." Platforms like ChatGPT are powered by "large language models" that teach the platform to create realistic-looking output. They can write a story that reads like it was written by Stephen King (but wasn't) or pen a song that sounds like it was written by Taylor Swift (but wasn't). But they can't do your legal research for you. ChatGPT does not access legal databases like Westlaw or Lexis, draft and input a query, review and analyze each of the results, determine which results are on point, and then compose an accurate, Bluebook-conforming citation to the right cases—all of which it would have to do to be a useful research assistant. Instead, these AI platforms look at legal briefs in their training model and then create output that looks like a legal brief by "placing one most-likely word after another" consistent with the prompt it received.
If anything, Mr. Nield's alleged lack of knowledge of ChatGPT's shortcomings leads me to do what courts have been doing with increasing frequency: announce loudly and clearly (so that everyone hears and understands) that lawyers blindly relying on generative AI and citing fake cases are violating Bankruptcy Rule 9011 and will be sanctioned. Mr. Nield's "professed ignorance of the propensity of the AI tools he was using to 'hallucinate' citations is evidence that [the] lesser sanctions [imposed in prior cases] have been insufficient to deter the conduct."
The second reason I issue sanctions is that, as described above, I also have concerns about the way this particular case was handled. I understand that Debtor's counsel has a massive docket of cases. But every debtor deserves care and attention. Chapter 13 cases can be challenging to file and manage—especially when they involve complexities like those in this case. If a law firm does not have the resources to devote the time and energy necessary to shepherd hundreds of Chapter 13 cases at the same time, it should refer matters it cannot handle to other attorneys who can—lest a search for time-saving devices lead to these kinds of missteps. What I mean to convey here is that while everyone makes mistakes, I expect—as I think all judges do—attorneys to be more diligent and careful than has been shown here….