The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Androids Dreaming of Electric Sheep, or It's Hallucinated Turtles All the Way Down
My UCLA colleague John Villasenor pointed out that Googling De Keersmaecker, J., & Roets, A. (2023) Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance yields:
Readers of the blog may recall that De Keersmaecker, J., & Roets, A. (2023) Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance is one of the hallucinated—i.e., nonexistent—articles cited in the AI misinformation expert's Kohls v. Ellison declaration I blogged about Tuesday. But Google's AI Overview seems to think there's a there there. Indeed, perhaps that's in part because I had included the citation in my post; human readers would realize that I gave the citation as an example of something hallucinated, but the AI software might not.
UPDATE 11/25/2024, 11:07 am: The AI Overview doesn't come up for me any more, though it consistently did Saturday, when I captured the screenshot I included above.
For some reason AI turns up first these days in any Google search. It's pretty amusing to read. The readouts are laughably wrong sometimes. At first I thought it was like an E.T. trying to imitate a human, working from a clumsy set of instructions, but then I realized, that's what it actually is.
FWIW, the two browsers I use (firefox and brave) have an option to turn off the AI search.
In a few years lawyers may have to go back to bound volumes of cases as a trustworthy source. I hope law libraries remember where they carted all the old books off to.
How long before AI hopelessly muddles both history and legal documents? Isn't one of the communist goals to erase inconvenient history?
So has Jeff Hancock, the man who produced the hallucinatory "expert declaration" that you blogged about last Tuesday (https://reason.com/volokh/2024/11/19/apparent-ai-hallucinations-in-misinformation-experts-court-filing-supporting-anti-ai-misinformation-law/), gotten back to you the way he promised he would?
I'm certain that there'll be a filing in Kohls v. Ellison setting forth the defendants' and the expert's explanation of the situation; I had hoped to see it by now, but it will definitely need to be filed within the next couple of weeks. I'm fine waiting on that, and I'll post it as soon as I see it (or as soon as I get a statement through some other channel).
You wrote this in your first post on the subject:
Indeed, a cautionary tale for researchers about the illusion of authenticity (though an innocent mistake, I'm sure). I e-mailed the author of the declaration to get his side of the story; he got back to me to say that he will indeed have a statement in a few days, and I will of course be glad to update this post and likely post a follow-up when I receive that.
So was that "I'll get back to you in a few days" as much of a "hallucination" as the rest of his "expert" opinion?
Sorry, but people who are pro-censorship are never entitled to the benefit of the doubt.
I did a Duck Duck Go query, no quotation marks around the string.
First two hits were your two Volokh posts on this. Third hit was this:
https://dl.acm.org/doi/10.1016/j.tele.2023.102047
Adjusting news accuracy perceptions after deepfakes exposure: Evidence from a non-Western context
So one key here is "never use Google"