The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Misinformation Expert's "Citation to Fake, AI-Generated Sources in His Declaration … Shatters His Credibility with This Court"
"[A] credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."
From today's order by Laura Provinzino (D. Minn.) in Kohls v. Ellison (see this Nov. 19 post for more):
Minnesota law prohibits, under certain circumstances, the dissemination of "deepfakes" with the intent to injure a political candidate or influence the result of an election. Plaintiffs challenge the statute on First Amendment grounds and seek preliminary injunctive relief prohibiting its enforcement.
With his responsive memorandum in opposition to Plaintiffs' preliminary-injunction motion, Attorney General Ellison submitted two expert declarations … [including one] from Jeff Hancock, Professor of Communication at Stanford University and Director of the Stanford Social Media Lab. The declarations generally offer background about artificial intelligence ("AI"), deepfakes, and the dangers of deepfakes to free speech and democracy….
Attorney General Ellison concedes that Professor Hancock included citations to two non-existent academic articles and incorrectly cited the authors of a third article. Professor Hancock admits that he used GPT-4o to assist him in drafting his declaration but, in reviewing the declaration, failed to discern that GPT-4o generated fake citations to academic articles.
The irony. Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less. Professor Hancock offers a detailed explanation of his drafting process to explain precisely how and why these AI-hallucinated citations in his declaration came to be. And he assures the Court that he stands by the substantive propositions in his declaration, even those that are supported by fake citations. But, at the end of the day, even if the errors were an innocent mistake, and even if the propositions are substantively accurate, the fact remains that Professor Hancock submitted a declaration made under penalty of perjury with fake citations.
It is particularly troubling to the Court that Professor Hancock typically validates citations with a reference software when he writes academic articles but did not do so when submitting the Hancock Declaration as part of Minnesota's legal filing. One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles. Indeed, the Court would expect greater diligence from attorneys, let alone an expert in AI misinformation at one of the country's most renowned academic institutions.
To be clear, the Court does not fault Professor Hancock for using AI for research purposes. AI, in many ways, has the potential to revolutionize legal practice for the better. See Damien Riehl, AI + MSBA: Building Minnesota's Legal Future, 81-Oct. Bench & Bar of Minn. 26, 30–31 (2024) (describing the Minnesota State Bar Association's efforts to explore how AI can improve access to justice and the quality of legal representation). But when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers, the quality of our legal profession and the Court's decisional process suffer.
The Court thus adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions! See Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023) (sanctioning attorney for including fake, AI-generated legal citations in a filing); Park v. Kim, 91 F.4th 610, 614–16 (2d Cir. 2023) (referring attorney for potential discipline for including fake, AI-generated legal citations in a filing); Kruse v. Karlan, 692 S.W.3d 43, 53 (Mo. Ct. App. 2024) (dismissing appeal because litigant filed a brief with multiple fake, AI-generated legal citations).
To be sure, Attorney General Ellison maintains that his office had no idea that Professor Hancock's declaration included fake citations, and counsel for the Attorney General sincerely apologized at oral argument for the unintentional fake citations in the Hancock Declaration. The Court takes Attorney General Ellison at his word and appreciates his candor in rectifying the issue. But Attorney General Ellison's attorneys are reminded that Federal Rule of Civil Procedure 11 imposes a "personal, nondelegable responsibility" to "validate the truth and legal reasonableness of the papers filed" in an action. The Court suggests that an "inquiry reasonable under the circumstances," Fed. R. Civ. P. 11(b), may now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.
The question, then, is what to do about the Hancock Declaration. Attorney General Ellison moves for leave to file an amended version of the Hancock Declaration, and argues that the Court may still rely on the amended Hancock Declaration in ruling on Plaintiffs' preliminary-injunction motion. Plaintiffs seem to accept that Professor Hancock is qualified to render an expert opinion on AI and deepfakes, and the Court does not dispute that conclusion. Nevertheless, Plaintiffs argue that the Hancock Declaration should be excluded in its entirety and that the Court should not consider an amended declaration. The Court agrees.
Professor Hancock's citation to fake, AI-generated sources in his declaration—even with his helpful, thorough, and plausible explanation—shatters his credibility with this Court. At a minimum, expert testimony is supposed to be reliable. More fundamentally, signing a declaration under penalty of perjury is not a mere formality; rather, it "alert[s] declarants to the gravity of their undertaking and thereby have a meaningful effect on truth-telling and reliability." The Court should be able to trust the "indicia of truthfulness" that declarations made under penalty of perjury carry, but that trust was broken here.
Moreover, citing to fake sources imposes many harms, including "wasting the opposing party's time and money, the Court's time and resources, and reputational harms to the legal system (to name a few)." Morgan v. Cmty. Against Violence, 2023 WL 6976510, at *8 (D.N.M. Oct. 23, 2023). Courts therefore do not, and should not, "make allowances for a [party] who cites to fake, nonexistent, misleading authorities"—particularly in a document submitted under penalty of perjury. Dukuray v. Experian Info. Sols., 2024 WL 3812259, at *11 (S.D.N.Y. July 26, 2024). The consequences of citing fake, AI-generated sources for attorneys and litigants are steep. See Mata; Park; Kruse. Those consequences should be no different for an expert offering testimony to assist the Court under penalty of perjury.
To be sure, the Court does not believe that Professor Hancock intentionally cited to fake sources, and the Court commends Professor Hancock and Attorney General Ellison for promptly conceding and addressing the errors in the Hancock Declaration. But the Court cannot accept false statements—innocent or not—in an expert's declaration submitted under penalty of perjury. Accordingly, given that the Hancock Declaration's errors undermine its competence and credibility, the Court will exclude consideration of Professor Hancock's expert testimony in deciding Plaintiffs' preliminary-injunction motion.
Civilized discourse, and customary practical activity, are replete with instances where reliability must be near-perfect, or the work product judged useless, or worse. An impulse to separate out and make use of accuracy arrived at by happenstance ought not be encouraged in instances where reliability is inherently necessary. The Court gets it right.
This seems like a harsh penalty for a minor error. A report could be defective for a lot of reasons. What if the author did a last minute search-and-replace, and did not check all the substitutions? Yes, it is embarrassing, and it should have been checked, but mistakes happen.
You think getting his testimony excluded is harsh?
If he'd done this in an academic paper here, he'd be under investigation by the university, and if found to be merely negligent, the penalty at the very minimum would be a prohibition on claiming any credit whatsoever for the paper toward annual performance, tenure, or periodic post-tenure review. The journal involved would put him on its banned list for at least several years.
If there were anything smacking of intention or deliberate disregard, the starting range for the penalty would be a letter of reprimand, which sounds minor but in reality means probation without raises for five years (because state policy prohibits merit increases for anyone who has been disciplined).
Considering that raises are cumulative, that could add up to a $4K–5K penalty.
But as the court noted, this is *more* serious than doing it in an academic paper.
I fully agree with you. The witness destroyed his credibility—his fault and his alone.
No, I do not believe a university would be so harsh. I have heard of many examples of a professor publishing a paper that turned out to be entirely wrong because of negligence in handling the data, and the university did not punish the professor.
Professors do sometimes get punished if they get caught deliberately faking data. But this expert did not deliberately fake data. He was merely sloppy in checking references.
This expert did waste a couple of hours of the opposing party's time in checking the references, but that's all. Not a big deal.
It's not like it was a typo.
Deliberately fabricating a citation is obviously a significant wrongdoing. Delegating the fabrication to a piece of software doesn't make it better.
Now he'd probably respond that it wasn't intentional, it was just he didn't know AI is sometimes deceptive. I'd be more willing to accept that, except that the whole effing point of his testimony was AI can be deceptive.
Yes, I do think that there is a difference between an accidental data error, and deliberately fabricating data. And those university ethics committees also make the distinction.
It is worse than a typo. A better analogy might be writing a report on the reliability of Wikipedia, and getting caught with a faulty statement extracted from Wikipedia. Yes, embarrassing. Food for the irony police. But not really a big deal.
Harvard looked into the plagiarism by its president and was perfectly fine with it until it became a PR black eye, so, no, a university would be as likely to ignore it as anything.
It's not a minor error. Expert witnesses are supposed to be impartial sources of true facts. For example, in suits over injuries from some industrial machinery, an expert witness is often used to determine what the OSHA rules are, whether the machine is in compliance, if it's in working order, was being used correctly, and perhaps even whether it was designed and built according to industry standards.
An expert witness who is wrong even once, even through an understandable mistake, is no longer useful as an expert. Just a typo can flip the outcome: if the expert says the machine shouldn't be run at more than 100 RPM when the standard is 1,000 RPM, that can change the case completely.
The facts regarding developing technology such as AI are perhaps more difficult to determine, but that should call for even more attention to the details, not less.
A report can be defective, but when you're submitting it under penalty of perjury, you should be taking an extra step to review it. And while it's "just" a citation, I know that before I file a single brief, I do a quick search on every case in my index of authorities to make sure the citation is correct. The expert isn't held to a lesser standard.
All this reminds me of the long history of software-based "intelligent" assistants available to PC and Mac users. Spell-checkers came into widespread use in all the desktop document creation/editing programs developed since the rollout of the first Mac in 1984 (if not earlier). It was a great idea on paper, but many users ignored it, while others relied on it to excess, to their chagrin.
Another fun development was the animated MS Office assistant, Clippit aka Clippy. Some users liked it enough to find out how it worked and how to use it successfully. Others couldn't figure it out and ignored or disabled it. It was erratic and unhelpful too often, and MS did away with it in the early 2000s (I think).
The problem with all digital automation software is that most of it has ignored the human factor. Artificial intelligence will always bump up against human needs, human ignorance, and human stupidity. I was the first graduate student in my department to compose a dissertation on a PC, in 1989. I loved WordStar, but I failed to perform a final proofreading, and so there were some errors of form (not of fact, thankfully) in the bound copy. I'm still hoping that no one ever noticed them.
My university had a "computer lab" years earlier than 1989, complete with a dozen or so PCs and HP LaserJet printers for student use. (The LaserJet II came out in 1987.) Those students would have been using WordStar or Word Perfect, however, as Microsoft Word for Windows wasn't released until 1989 (according to Wikipedia).
Both my parents worked in the computer industry, so I grew up with (last year's) computer in my bedroom (initially taking up most of the desk, such as during the DOS and CP/M era), unlike nearly everyone else I knew. Fun times. I mostly hate computers now...
Let’s see if I get this. An expert in Misinformation submitted misinformation in the form of an AI generated report containing bogus article cites.
I thought that the expert might claim that the bad cites were deliberate, in order to prove a point.
The irony is incredible.
The implications (e.g. AI) are not.
The judge noted the irony:
"The irony. Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."
Doesn't this remind anybody of the "human AI"-like testimony of Martha Nussbaum? This professor testified about homosexuality and Plato's GORGIAS:
"Nussbaum insists with great force that this very passage [Gorgias 494-95] shows how “the interlocutors” (Callicles and Socrates) share the “social prejudices” of “a Greek of Callicles’ class and background” in regarding the sexual enjoyment and activity of “the passive homosexual” as ridiculous, loathsome, disgraceful, shameful, and wretched."
Many scholars say that is nonsense, and indeed most scholars of Greek or philosophy never even consider that silly route of interpretation. Why write a dialogue to show you share societal views of homosexuality when you call that conduct immoral elsewhere?
"in Laws 636c. Here Plato, speaking through the character of the Athenian stranger, rejects homosexual behavior as “unnatural” (para physin), describes it as an “enormity” or “crime” (tolmema), and explains that it derives from being enslaved to pleasure. "
I'll stop here. Look this case up. Over 10 years old, but it brilliantly shows how "intelligence," artificial or not, is often used as a bludgeon, stupid though it be.
https://www.firstthings.com/article/1994/06/in-the-case-of-martha-nussbaum
PS for Greek students, start with this
"Nussbaum’s contention about tolmema is insupportable. The very authorities whom she proposed to the court as trustworthy in these matters, Kenneth Dover and A. N. Price, translate it respectively as “a crime” and “crime of the first rank.”"
Plato’s Laws 636c
ton proton to tolmema
Plato strongly disapproved of gay sex. Martha Nussbaum disagreed, claiming that the first witness was mistranslating Plato's use of tolmema.
YOU DECIDE... For myself, I find her testimony despicable in the extreme.
Does it say anywhere who caught the error?
In any case, if this isn't disqualifying for an expert, nothing is. It doesn't matter whether what he says otherwise is correct or not. He has proven that HE can't be trusted. 'Experts' aren't just some guy asked to be a witness. Experts receive a halo over their works - that's the whole point of experts. If you're not careful enough to copyedit your own writing, you aren't careful enough to serve as an expert. When submitting papers in my graduate genetics lab, all of us read every manuscript, searching for so much as a misplaced comma or an extra space after a period. And that was for papers no one would ever read.
1970s: Garbage In, Garbage Out
2020s: AI, the font of wisdom and knowledge.
2025: He who controls AI controls the world.