The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Expert Report Admitted Despite AI Hallucinations in Citations
From retired Third Circuit Judge Thomas Vanaskie (who had also served on the Middle District of Pennsylvania), who was serving as a court-appointed Special Master in In re: Valsartan, Losartan, and Irbesartan Products Liability Litigation; the decision was handed down Sept. 3, but just came up on one of my searches:
Dr. Sawyer's citation to non-existent sources due to his use of an artificial intelligence tool without adequate verification of the sources generated by the artificial intelligence tool, while perhaps warranting an award of costs in favor of the defense and permitting cross examination of Dr. Sawyer during the trial on his failure to verify the sources cited in his report, does not warrant exclusion of his opinions as they are otherwise the product of reliable scientific methodology and are supported by "good grounds," especially given "the liberal thrust of the Federal Rules of Evidence, the flexible nature of the Daubert inquiry, and the proper roles of the judge and jury in evaluating the ultimate credibility of an expert's opinion" ….
This was appealed to District Judge Renée Marie Bumb, who decided that the appeal was moot in light of her opinion granting summary judgment to defendant in this case, but "Defendants have preserved their position should Dr. Sawyer's testimony be presented in another action in this [Multi-District Litigation]." Here's an excerpt of the plaintiff's argument in favor of not excluding the expert opinion:
Defendants devote a substantial portion of their brief to mistaken citations in Dr. Sawyer's report, insinuating that these mistakes render Dr. Sawyer's entire report invalid. The record shows otherwise. During his May 2, 2025 deposition, Dr. Sawyer forthrightly explained that he used a software tool employing artificial intelligence to assist him in locating scientific articles and toxicology studies for his report. This tool was meant to expedite literature searches for well-established background information on NDMA. While drafting, a handful of references (ten, to be exact) were inadvertently cited incorrectly in Dr. Sawyer's report. These errors were largely confined to a two-page section of the report summarizing general background facts about NDMA (such as its carcinogenic classification, its genotoxic potential, and common exposure pathways).
Many of the citations at issue, such as those in footnotes 3, 4, 5, 6, 7, and 14, are used only for introductory or background context and are not central to Dr. Sawyer's core analysis of NDMA or his application of toxicological principles. These references merely provide general scientific context regarding mechanisms of NDMA metabolism, oxidative stress, or DNA repair and have no bearing on the methodologies Dr. Sawyer applied in forming his case-specific opinion. Footnote 8 contains a broken FDA hyperlink, but the referenced announcement clearly exists and is readily accessible. In footnote 66 the correct studies were cited, but minor formatting or author-order errors scrambled the reference. Finally, footnote 109 relates to background information on Bradford Hill criteria and was not used in the causation analysis itself. In sum, none of these minor citation discrepancies affect the substance or reliability of Dr. Sawyer's opinion.
And here's an excerpt from the defendants' reply:
First, far from "forthrightly explain[ing]" that he used AI in writing his report, Dr. Sawyer falsely testified on multiple occasions that the phantom articles he cited exist and that he had reviewed them. (See Sawyer 5/2/2025 Dep. 63:14-25 (Mem. Ex. 5) (asserting that Yuan 2027 "is a real article that [he] reviewed"); see also id. 70:12-71:7 ("I recall reviewing [the Sokolow paper,] and I included the link … which was functional.").) Only after repeated questioning by defense counsel on the topic did Dr. Sawyer finally admit the citations were false and that they resulted from his use of either Google or AI. (Id. 72:13-17.) To this day, Dr. Sawyer and Plaintiff's counsel have not identified the particular tool that Dr. Sawyer used to "create" the fake citations, rendering Dr. Sawyer's dishonesty even more egregious than that in Kohls v. Ellison, No. 24-CV-3754 (LMP/DLM), 2025 WL 66514, at *3 (D. Minn. Jan. 10, 2025). Plaintiff does not address this squarely on-point authority, effectively conceding its applicability.
Second, Plaintiff's attempt to paint Dr. Sawyer's citation to 10 fake sources as insubstantial and "peripheral" (Opp'n at 35), is foreclosed by Dr. Sawyer's testimony. At his deposition, Dr. Sawyer made clear that he used "wording [taken] directly" from the fake sources in drafting sections of his report. (Sawyer 5/2/2025 Dep. 67:22-24, 73:17-20; see also id. 60:10-23 (agreeing some of the language "is a quote from Yuan").) Moreover, those non-existent sources comprise the bases of Dr. Sawyer's causation opinions, which is presumably why he originally falsely claimed to have read them when questioned about them at his deposition. As the Kohls court explained, an expert's "citation to fake, AI-generated sources … shatters his credibility with th[e] Court" and "undermine[s] [the expert's] competence and credibility[.]" Kohls, 2025 WL 66514, at *4-5. That is precisely what happened here, which should be dispositive.
I think that is the right opinion. The test for admissibility is whether he has expertise beyond that of a layperson in a particular discipline. It is not whether he is a particularly good expert or checks his citations. Those go to the weight, not admissibility.
He faked the report.
Does admissibility not require a minimum 'weight'?
If you can't check your citations, how can I take your testimony seriously? If I can't take your testimony seriously, why would the judge waste court time?
Yes. The minimum is whether this witness has specialized knowledge in a recognized field. That's it.
What weight to give it is up to the jury. Just because you would disregard it because he didn't check his cites doesn't mean that the rules of evidence require exclusion.
No, that's lawyer-speak for "I like to quibble".
If his citations were necessary to back up his report, then their falseness does not back up his report and his report should be thrown out.
If his citations were not necessary to back up his report, then he should not have included the unnecessary garbage.
"If his citations were necessary to back up his report, then their falseness does not back up his report and his report should be thrown out.
If his citations were not necessary to back up his report, then he should not have included the unnecessary garbage."
1) If the expert no longer stands by his opinion after being shown the error of the cites, then he doesn't testify. Problem solved. If he still stands by it, then he testifies and can get hammered for his sloppiness.
2) Nobody would disagree, including the expert, that he screwed up and should not have included the fake cite. But, and I know this is hard for posters here, HE MADE A MISTAKE. He is human. People make mistakes. Doesn't mean he gets tossed out of court.
The jury can use his mistake however it likes. You want to disbelieve every word out of his mouth? You have that right. What we don't do is have a judge-made rule that throws what could be a very good expert opinion out of a case because he didn't know how to use AI.
Aren't judges supposed to certify experts as actual experts?
Not knowing how to use his tools ought to disqualify him as an expert. Not proofreading his report ought to disqualify him.
His "mistakes" disqualify him as an expert.
Well, yes and no. If a guy is testifying as to ballistics, I'm not sure that his using made-up citations regarding (let's say) how quickly or slowly a 3-ton SUV can accelerate or brake in rainy conditions is a good reason to disqualify him from being an expert on the penetrating power of ammo X vs. ammo Y.
As others have noted, a jury is certainly free to discount or completely disregard his expert testimony re bullets due to his sloppiness on the unrelated issue. But I don't think--if I were the judge--that I'd block this nationally-known expert on ballistics from testifying on this ballistics issue.
(Yes, I've seen expert witnesses include lots of things outside their expertise. That's a repeating battle between lawyers on both sides in lots of cases...one side wants to let the expert also testify as an expert on this thing that's *sort of* related to his field. The other side wants to block him from such extraneous testimony...or, at least, to make it clear to the jury where he is testifying as a layperson and where he is testifying as an expert.)
Stupid,
On the other hand (regarding my earlier post), I did have a case where the prosecution's expert was a well-known professor and doctor at UCLA. I was able to impeach her entire testimony by showing that she could not possibly have worked on the case on the dates and hours she had billed (and been paid by) the county.
It was a bench trial, so no jury. But once I did this, and once she started taking the Fifth, the judge--on his own motion--did throw out her entire testimony as incredible. (I took a great deal of pleasure from getting this 730 expert bounced from the list of experts that the county could/would use in the future.)
Good for you!
ETA: As far as I'm concerned, that was perjury on her part, she was basically trying to frame the defendant (or convict, I don't know which), and she should have received whatever punishment her fraudulent testimony was in support of. And yes, I do know that's not the legal definition of perjury.
He would be quite a target for cross examination.
This is wrong. The report was faked. Thus everything in it is unreliable as is the person who presented it as legitimate.
Interesting. Does the party really want that? If an expert has totally discredited himself, I would think the best thing is to erase him from the face of the earth and get another expert, or have no expert if that is procedurally necessary.
What ever happened to the legal maxim of "Falsus in uno, falsus in omnibus?"
Falsus in uno is about credibility, not admissibility. It does not tell the judge to keep the witness off the stand. It tells the jury what they’re allowed to infer if they catch someone lying.
Also, falsus in uno doesn’t get triggered by every mistake or every loose paraphrase. It’s about intentional or reckless falsity on a material point.
Falsus in omnibus is also about credibility. Application of the doctrine is to instruct the jury that they (may/must depending on jurisdiction) presume the witness's further testimony to be incredible.
It's disfavored because it was abused too much, but it's still around in some jurisdictions. Too many judges sustained a "falsus in omnibus" motion for a witness who misspoke slightly or failed to have perfect recall of irrelevant details. Coincidentally this tends to happen when local lawyers who know the judge well are protecting local moneyed interests: The Good Ole Boy system at work. One by one, US jurisdictions have been watering down or fully repudiating it. Looks good on paper, doesn't work well in practice.
Dr. Sawyer could probably have avoided his deposition and trial distress had he begun his review of the literature with NLM/PubMed searches instead of whatever sources he actually used. PubMed is user-friendly, and it includes with each abstract or article numerous links to related literature and other sources' citations. It can be something of a rabbit hole. If one doesn't watch out for personal presumptions when choosing search terms and following links, one can easily be led astray by confirmation or selection bias and end up with an inadequate review. But that's true with any other search engine or archive website. Part of what scientists and scholars do in their work is to understand this problem and do their best to maintain a neutral and skeptical approach to the material or issue.
This looks like slop to me. Taking a chatbot out for a drive, are we?