Misinformation Expert's "Citation to Fake, AI-Generated Sources in His Declaration … Shatters His Credibility with This Court"
"[A] credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."
From today's order by Judge Laura Provinzino (D. Minn.) in Kohls v. Ellison (see this Nov. 19 post for more):
Minnesota law prohibits, under certain circumstances, the dissemination of "deepfakes" with the intent to injure a political candidate or influence the result of an election. Plaintiffs challenge the statute on First Amendment grounds and seek preliminary injunctive relief prohibiting its enforcement.
With his responsive memorandum in opposition to Plaintiffs' preliminary-injunction motion, Attorney General Ellison submitted two expert declarations … [including one] from Jeff Hancock, Professor of Communication at Stanford University and Director of the Stanford Social Media Lab. The declarations generally offer background about artificial intelligence ("AI"), deepfakes, and the dangers of deepfakes to free speech and democracy….
Attorney General Ellison concedes that Professor Hancock included citations to two non-existent academic articles and incorrectly cited the authors of a third article. Professor Hancock admits that he used GPT-4o to assist him in drafting his declaration but, in reviewing the declaration, failed to discern that GPT-4o generated fake citations to academic articles.
The irony. Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less. Professor Hancock offers a detailed explanation of his drafting process to explain precisely how and why these AI-hallucinated citations in his declaration came to be. And he assures the Court that he stands by the substantive propositions in his declaration, even those that are supported by fake citations. But, at the end of the day, even if the errors were an innocent mistake, and even if the propositions are substantively accurate, the fact remains that Professor Hancock submitted a declaration made under penalty of perjury with fake citations.
It is particularly troubling to the Court that Professor Hancock typically validates citations with a reference software when he writes academic articles but did not do so when submitting the Hancock Declaration as part of Minnesota's legal filing. One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles. Indeed, the Court would expect greater diligence from attorneys, let alone an expert in AI misinformation at one of the country's most renowned academic institutions.
To be clear, the Court does not fault Professor Hancock for using AI for research purposes. AI, in many ways, has the potential to revolutionize legal practice for the better. See Damien Riehl, AI + MSBA: Building Minnesota's Legal Future, 81-Oct. Bench & Bar of Minn. 26, 30–31 (2024) (describing the Minnesota State Bar Association's efforts to explore how AI can improve access to justice and the quality of legal representation). But when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers, the quality of our legal profession and the Court's decisional process suffer.
The Court thus adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions! See Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023) (sanctioning attorney for including fake, AI-generated legal citations in a filing); Park v. Kim, 91 F.4th 610, 614–16 (2d Cir. 2024) (referring attorney for potential discipline for including fake, AI-generated legal citations in a filing); Kruse v. Karlan, 692 S.W.3d 43, 53 (Mo. Ct. App. 2024) (dismissing appeal because litigant filed a brief with multiple fake, AI-generated legal citations).
To be sure, Attorney General Ellison maintains that his office had no idea that Professor Hancock's declaration included fake citations, and counsel for the Attorney General sincerely apologized at oral argument for the unintentional fake citations in the Hancock Declaration. The Court takes Attorney General Ellison at his word and appreciates his candor in rectifying the issue. But Attorney General Ellison's attorneys are reminded that Federal Rule of Civil Procedure 11 imposes a "personal, nondelegable responsibility" to "validate the truth and legal reasonableness of the papers filed" in an action. The Court suggests that an "inquiry reasonable under the circumstances," Fed. R. Civ. P. 11(b), may now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.
The question, then, is what to do about the Hancock Declaration. Attorney General Ellison moves for leave to file an amended version of the Hancock Declaration, and argues that the Court may still rely on the amended Hancock Declaration in ruling on Plaintiffs' preliminary-injunction motion. Plaintiffs seem to accept that Professor Hancock is qualified to render an expert opinion on AI and deepfakes, and the Court does not dispute that conclusion. Nevertheless, Plaintiffs argue that the Hancock Declaration should be excluded in its entirety and that the Court should not consider an amended declaration. The Court agrees.
Professor Hancock's citation to fake, AI-generated sources in his declaration—even with his helpful, thorough, and plausible explanation—shatters his credibility with this Court. At a minimum, expert testimony is supposed to be reliable. More fundamentally, signing a declaration under penalty of perjury is not a mere formality; rather, it "alert[s] declarants to the gravity of their undertaking and thereby have a meaningful effect on truth-telling and reliability." The Court should be able to trust the "indicia of truthfulness" that declarations made under penalty of perjury carry, but that trust was broken here.
Moreover, citing to fake sources imposes many harms, including "wasting the opposing party's time and money, the Court's time and resources, and reputational harms to the legal system (to name a few)." Morgan v. Cmty. Against Violence, 2023 WL 6976510, at *8 (D.N.M. Oct. 23, 2023). Courts therefore do not, and should not, "make allowances for a [party] who cites to fake, nonexistent, misleading authorities"—particularly in a document submitted under penalty of perjury. Dukuray v. Experian Info. Sols., 2024 WL 3812259, at *11 (S.D.N.Y. July 26, 2024). The consequences of citing fake, AI-generated sources for attorneys and litigants are steep. See Mata; Park; Kruse. Those consequences should be no different for an expert offering testimony to assist the Court under penalty of perjury.
To be sure, the Court does not believe that Professor Hancock intentionally cited to fake sources, and the Court commends Professor Hancock and Attorney General Ellison for promptly conceding and addressing the errors in the Hancock Declaration. But the Court cannot accept false statements—innocent or not—in an expert's declaration submitted under penalty of perjury. Accordingly, given that the Hancock Declaration's errors undermine its competence and credibility, the Court will exclude consideration of Professor Hancock's expert testimony in deciding Plaintiffs' preliminary-injunction motion.