The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

AI in Court

Two Federal Judges Apologize For Issuing Opinions With AI Hallucinations

An intern and a law clerk used generative AI, and the judges didn't catch the hallucinations.


In July, I wrote about Judge Julien Xavier Neals of the U.S. District Court for the District of New Jersey, who withdrew an opinion that used generative AI. Judge Henry T. Wingate of the Southern District of Mississippi likewise withdrew an opinion that used generative AI. Both opinions included made-up citations, which were obvious hallucinations.

Senator Chuck Grassley, the Chairman of the Senate Judiciary Committee, wrote to both Neals and Wingate.

Both judges wrote to Judge Robert Conrad, the Director of the Administrative Office of the U.S. Courts.

Judge Neals explained that a law school intern used generative AI, in violation of chambers policy, as well as the student's law school's policy:

As referenced in the Senator's letter, a "temporary assistant," specifically, a law school intern, used CHATGPT to perform legal research in connection with the CorMedix decision. In doing so, the intern acted without authorization, without disclosure, and contrary to not only chambers policy but also the relevant law school policy. My chambers policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders. . . .

I would be remiss if I did not point out as well that the law school where the intern is a student contacted me after the incident to, among other things, inform me that the student had violated the school's strict policy against the use of GenAI in their internships.

Judge Neals has his chambers in Newark. We can guess which law school the student attends.

Judge Wingate explained that his law clerk used generative AI, and that an early draft was docketed before it had been checked:

In the case of the Court's Order issued July 20, 2025, a law clerk utilized a generative artificial intelligence ("GenAI") tool known as Perplexity strictly as a foundational drafting assistant to synthesize publicly available information on the docket. . . .

The standard practice in my chambers is for every draft opinion to undergo several levels of review before becoming final and being docketed, including the use of cite checking tools. In this case, however, the opinion that was docketed on July 20, 2025, was an early draft that had not gone through the standard review process. It was a draft that should have never been docketed. This was a mistake. I have taken steps in my chambers to ensure this mistake will not happen again, as described below.

Judge Conrad also sent a letter to Senator Grassley. The Administrative Office (AO) does not keep statistics on judges who have withdrawn opinions containing hallucinations:

We are aware anecdotally of incidents in which judges have taken official action (such as those described above) relating to the integrity of court filings in which the use of AI tools was in question, although we currently do not systematically track such activity at the national level.

We learn that the AO convened a task force on generative AI.

The interim guidance cautions against delegating core judicial functions to AI, including decision-making or case adjudication, and it recommends that users exercise extreme caution especially if using AI to aid in addressing novel legal questions. It recommends that users review and independently verify all AI-generated content or output, and it reminds judges and Judiciary users and those who approve the use of AI that they are accountable for all work performed with the assistance of AI.

I suspect some district court judges will impose the requisite layers of review to detect hallucinations. Other district court judges, who delegate much of their work to law clerks, will not perform these checks.

Litigants should check any adverse decision for hallucinations. This simple step will be cheaper than filing an appeal.