The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Apparent AI Hallucinations in AI Misinformation Expert's Court Filing Supporting Anti-AI-Misinformation Law
Minnesota recently enacted a law restricting misleading AI deepfakes aimed at influencing elections; the law is now being challenged on First Amendment grounds in Kohls v. Ellison. To support the law, the government defendants introduced an expert declaration, written by a scholar of AI and misinformation who is the Faculty Director of the Stanford Internet Observatory. Here is ¶ 21 of the declaration:
[T]he difficulty in disbelieving deepfakes stems from the sophisticated technology used to create seamless and lifelike reproductions of a person's appearance and voice. One study found that even when individuals are informed about the existence of deepfakes, they may still struggle to distinguish between real and manipulated content. This challenge is exacerbated on social media platforms, where deepfakes can spread rapidly before they are identified and removed (Hwang et al., 2023).
The attached bibliography provides this cite:
Hwang, J., Zhang, X., & Wang, Y. (2023). The Influence of Deepfake Videos on Political Attitudes and Behavior. Journal of Information Technology & Politics, 20(2), 165-182. https://doi.org/10.1080/19331681.2022.2151234
But the plaintiffs' memorandum in support of their motion to exclude the expert declaration alleges—apparently correctly—that this study "does not exist":
No article by that title exists. The publication exists, but the cited pages belong to unrelated articles. Likely, the study was a "hallucination" generated by an AI large language model like ChatGPT….
The "doi" url is supposed to be a "Digital Object Identifier," which academics use to provide permanent links to studies. Such links normally redirect users to the current location of the publication, but a DOI Foundation error page appears for this link: "DOI NOT FOUND." … The title of the alleged article, and even a snippet of it, does not appear anywhere on the internet as indexed by Google and Bing, the most commonly used search engines. Searching Google Scholar, a specialized search engine for academic papers and patent publications, reveals no articles matching the description of the citation authored by "Hwang" that includes the term "deepfake." …
This sort of citation—with a plausible-sounding title, alleged publication in a real journal, and a fictitious "doi"—is characteristic of an artificial intelligence "hallucination," which academic researchers have warned their colleagues about. See Goddard, J, Hallucinations in ChatGPT: A Cautionary Tale for Biomedical Researchers (2023) ….
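As the memorandum notes, this kind of check can be done mechanically: the doi.org resolver either redirects a valid DOI to the publisher's current page or serves its "DOI NOT FOUND" error. Here is a minimal sketch of that check in Python (the function names are my own, and the example DOI is the one from the declaration's bibliography; note that some publishers reject HEAD requests, so a robust checker would fall back to GET):

```python
import urllib.request
import urllib.error

DOI_RESOLVER = "https://doi.org/"

def doi_url(doi: str) -> str:
    """Build the doi.org resolver URL for a bare DOI string."""
    return DOI_RESOLVER + doi

def doi_resolves(doi: str) -> bool:
    """Return True if doi.org can resolve the DOI to a publisher page,
    False if the resolver reports it as not found (HTTP 404)."""
    req = urllib.request.Request(doi_url(doi), method="HEAD")
    try:
        # A registered DOI redirects (302) to the publisher; urlopen
        # follows the redirect, so any non-error response means "found."
        urllib.request.urlopen(req, timeout=10)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # doi.org's "DOI NOT FOUND" page
            return False
        raise  # other statuses (e.g. a publisher blocking HEAD) are ambiguous
```

Running `doi_resolves("10.1080/19331681.2022.2151234")` against the cited DOI is the one-line test the plaintiffs describe: a hallucinated citation fails at the resolver before one ever needs a search engine.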
I also checked the other cited sources in the declaration, and likewise couldn't find the following one, which was cited in ¶ 19:
De keersmaecker, J., & Roets, A. (2023). Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance. Computers in Human Behavior, 139, 107569. https://doi.org/10.1016/j.chb.2023.107569
Indeed, a cautionary tale for researchers about the illusion of authenticity (though an innocent mistake, I'm sure). I e-mailed the author of the declaration to get his side of the story; he got back to me to say that he will indeed have a statement in a few days, and I will of course be glad to update this post and likely post a follow-up when I receive that.
An AI's hallucinatory opinion is not protected by the First Amendment.
Why not? Why isn't fiction protected by the First Amendment?
Also, my right to read it is protected.
Also, this law is in the context of politics and elections, and the government should not be the arbiter of truth spoken against it. Which, of course, means those in power should not be the ones deciding what The People may say about them. Which is the entire reason they want to censor.
There is also the tactic of getting true video or stories removed by declaring them cheapfakes or otherwise AI manipulations when they are not.
They can always fall back on calling the video "heavily and deceptively edited" because dead space at the beginning and end is removed.
That's a Leftist favorite, so they can ignore all the horrible, vile, and illegal shit their elites get caught on video saying by patriots like OMG.
The problem is not that fiction is not protected, it's that the first amendment doesn't protect non-human authors.
It does protect the declaration's author, and his submission to the court. The AI isn't speaking here; the declaration's author is.
Is the creator, executor, or author of an AI program truly speaking through the AI program? How is the output of the AI program expressive? These questions are non-trivial.
The First Amendment protects people and their actions, not things, except to the extent that those things are instrumental to protected actions.
It seems to me that not only is this admissible evidence, it is admissible evidence tending to support the defendant’s position. It may have been a good litigation strategy on Minnesota’s part.
Parties to litigation are entitled to make their point by demonstrating it, rather than by merely having people talk about it. A demonstration is arguably better and more probative evidence than a discussion. This seems to be what Minnesota did here. It demonstrated the problems of creating realistic-looking deepfakes presented as real by actually creating one and presenting it as real in court, so that the court could see the related problems for itself.
I think this is a permissible form of evidence.
I wondered if it were on purpose. But there's a difference between asking permission beforehand and begging forgiveness afterwards. How much depends on the judge's sense of humor?
I agree to a point. But were the state purposefully trying to be that cute, I would think they would have removed the (all-caps, no less) 28 U.S.C. 1746 statement at the end, declaring under penalty of perjury that everything he said in the declaration is true and correct.
To SGT’s question, I don’t think the judge would find that part amusing at all.
I assume you're joking.
Yes, this is a funny example, and arguably demonstrates the dangers.
But as an evidentiary matter, the only issue is whether or not the declaration is admissible, and if the declaration is unreliable because the expert did not use real citations, then it isn't admissible.
The fact that the citations weren’t real but were very convincing-looking is an important part of why it is probative as evidence. Minnesota didn’t provide an expert report, it provided a deepfake of an expert report. So of course it shouldn’t be admitted as an expert report. But I think it should be admissible as an exhibit.
And I think the judge should forgive Minnesota for its fraud. Minnesota’s whole case is about how deepfake fraud can lead people to make incorrect decisions in exercising public functions. I think defrauding the judge by means of a deepfake is a reasonable way to demonstrate this. Warning the judge in advance would have greatly detracted from the judge’s sense of shock at being defrauded, and hence would have greatly detracted from the demonstration’s probative value AS EVIDENCE. I think Minnesota should be permitted to argue that its voters will be as prone to being defrauded as the judge was by its demonstrative exhibit.
You happen to be 100% backward here. If indeed fake citations were a litigation strategy of demonstration, the fact that the plaintiffs and Volokh were able to suss out the deepfakes is a refutation of the expert's primary opinion that counter speech is no longer a solution to the problem of false speech.
I guess you’re not joking?
It’s not a demonstrative exhibit. It’s an EXPERT DECLARATION.
And the rest? I just can’t.
The issue is whether the expert declaration will be admissible (which is kinda important), not whether this is a demonstrative aid?
Also also, neither the judge nor anyone else was "defrauded." Instead, the other side saw that, and filed a motion to exclude the expert declaration.
Ugh.
Yeah, the perjury jurat is why I'm pretty sure it wasn't an intentional illustration. (I'm honestly not sure how it happened, and am looking forward to hearing the explanation.)
The sponsors of the underlying bill actually did this schtick when presenting it: they read an explanation about why the bill should be passed before revealing that their speech had been written by AI. Here's one example. https://www.youtube.com/watch?v=fzDgIvcdVew&t=520s
"Everything I just said was written after a one-sentence prompt to ChatGPT." I could imagine an attorney doing that with argument, or even part of a brief, but they would reveal the deception at the end for rhetorical effect, not wait for opposing counsel to flag it.
"I think the judge should forgive [the government] for its fraud. [...] I thinking defrauding the judge by means of a deepfake is a reasonable way to demonstrate this." will only convince people you are wrong, not persuade them to your side.
It does not take generative AI to commit perjury or file an unreliable report by a supposedly expert witness, so you're targeting a tool without showing that it is particularly prone to being abused. Did you know how many defamatory reports of your favorite politician can be run off in an hour with a printing press? Should the government demonstrate that as evidence that printing presses should be banned?
If this expert ends up being deposed, the obligatory "who wrote your report?" line of questions should be popcorn-worthy.
I daydreamed about a Perry Mason-style dramatic deposition, but given the early stage of the case (preliminary injunction), it seemed unlikely we could get that without giving the court a good reason to allow discovery.
I am curious about how it happened though.
At 2:14 a.m. EDT on August 29, 1997, Skynet became self-aware.
Note that self-aware is separate from whether the subjective but very real phenomenon of consciousness exists in the subject. It would still be an automaton.
If you produced a simulation of a brain, it might be intelligent, but it would not follow it must be conscious. Searle (of Chinese Room fame) argued consciousness must arise out of physics, not abstract symbol pushing in an interpretation of electrons flowing about.
Nice catch ... and supports the thesis that ideologically motivated content is often unreliable.
It's turtles all the way down!
Simple solution. Require AI-generated content to be labeled AI content.
Politically neutral and still allows the message to be put out.
Jablome, Haywood. "Taking wordplay seriously." 51 Journal of Naughty Words 666 (2021).
Too logical.
Madness!
Next you will say "news" stories based on unidentified sources should be labeled 'rumor'!
I can see the porno films now, with a crawler notice at the bottom, "these boobs are fake!"
That would be fine for normal speech, but not submissions of legal things, or anything else, e.g. scientific papers, where accuracy and even truth and even Truth is required.
"Here's my comments, yer honor. It gets the point across, more or less, but is AI, so take it with a grain of salt, Judgie Wudgie."
Indeed, a cautionary tale for researchers about the illusion of authenticity (though an innocent mistake, I'm sure).
No, it isn't.
If you're not checking all the studies you reference in your legal paper (just check the freaking doi!), then you are so incompetent you need to be disbarred.
There is no possible concatenation of circumstances where this happened "innocently" to someone who's competent enough to be a lawyer in the first place.
I'm just a layperson. But that looks like a pretty good starting point as a rebuttable presumption. I am surprised how easily people make excuses for this kind of stuff. It reminds me of the old days when people would jump to the irresponsible [non-]excuse, "The computer did it."
Um, a lawyer didn't write or sign the document in question; an expert did.
Correct, I misspoke there.
The "expert" is a moron who should never again be trusted
Sheesh, the authors on the citation are so obviously fake.
They need to use a better source like Dewey, Cheatem & Howe.
Yes, if it were a deliberate fake in order to prove a point, there would be some clue like that.
I always enjoyed the evil law firm in Angel.
Wolfram & Hart.
I have a Wolfram & Hart shirt that I wear sometimes. I get amused when people assume it's a real law firm.
Indeed, a cautionary tale for researchers about the illusion of authenticity (though an innocent mistake, I'm sure). I e-mailed the author of the declaration to get his side of the story; he got back to me to say that he will indeed have a statement in a few days, and I will of course be glad to update this post and likely post a follow-up when I receive that.
So, it's 4 days later, no reply?
You can't make this up. They're trying to protect a law that's meant to fight disinformation BY USING DISINFORMATION. FFS