AI Isn't Destabilizing Elections
Researchers analyzed political content made with artificial intelligence and found much of it was not deceptive at all.

Artificial intelligence pessimists, take note: New research suggests that fears about AI tools destabilizing elections through political misinformation may be overblown.
The research was conducted by computer scientist Arvind Narayanan, director of the Princeton Center for Information Technology Policy, and Sayash Kapoor, a computer science Ph.D. candidate at Princeton. The pair are writing a book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.
Using information compiled by the WIRED AI Elections Project, Narayanan and Kapoor analyzed 78 instances of AI-created political content that appeared last year during elections around the world. "AI does make it possible to fabricate false content. But that has not fundamentally changed the landscape of political misinformation," they write in an essay about their research.
Their analysis found that much of the AI-generated content was not intended to be deceptive. "To our surprise, there was no deceptive intent in 39 of the 78 cases in the database," they write. In more than a dozen instances, campaigns used AI tools to improve campaign materials.
There were also more novel uses, such as in Venezuela, where "journalists used AI avatars to avoid government retribution when covering news adversarial to the government," or in California, where "a candidate with laryngitis lost his voice, so he transparently used AI voice cloning to read out typed messages in his voice during meet-and-greets."
Moreover, deceptive content was not necessarily dependent on AI for its production. "For each of the 39 examples of deceptive intent, where AI use was intended to make viewers believe outright false information, we estimated the cost of creating similar content without AI—for example, by hiring Photoshop experts, video editors, or voice actors," write Narayanan and Kapoor. "In each case, the cost of creating similar content without AI was modest—no more than a few hundred dollars."
In one instance, they even found a video involving a hired actor misclassified by Wired's database as AI-generated content. This snafu, they say, highlights how "it has long been possible to create media with outright false information without using AI or other fancy tools."
Their takeaway: We should be focusing on the demand side of this equation, not the supply side. Election-related misinformation has long been an issue. And while AI might change how such content is created, it doesn't fundamentally change how it spreads or its impacts.
"Successful misinformation operations target in-group members—people who already agree with the broad intent of the message," point out Narayanan and Kapoor. "Sophisticated tools aren't needed for misinformation to be effective in this context."
Meanwhile, outgroups are unlikely to be fooled or influenced, whether such operations are AI-aided or not. "Seen in this light, AI misinformation plays a very different role from its popular depiction of swaying voters in elections," the researchers suggest.
So, no need to worry--only HALF of it is bullshit.
Interesting how "researchers" are incapable of logic.
AI doesn't change the fact that people will lie and misrepresent things for politics. What it does is give those liars a more convincing tool for manufacturing false evidence of anything. People get angry over being duped and start to distrust everything they are shown. Eventually that leads to a situation where the postmodernist left gets its wish: effectively, "there is no truth but power."
AI exacerbates a low-trust society. It is the height of stupidity to pretend that the media and politicians aren't using it, and won't use it, to lie more effectively and accumulate power.
Having trouble reading the article? The study started with and tested exactly your premise. What they found is that AI is not actually a "more convincing tool" at convincing out-group members of anything (and you don't need any fancy tool, much less AI, to convince in-group members).
That isn't even what they claim to have found...
They say that half of what they looked at was intentionally deceptive. Their argument was that the deception could have been done without AI, or that AI isn't necessarily used deceptively. Saying that partisans aren't going to be influenced one way or the other by AI doesn't say the same thing about independents, who are passively informed by social media and news outlets likely to platform this deception.
Their takeaway: We should be focusing on the demand side of this equation, not the supply side.
Oh, so people are the problem, not the AI that errantly produces misinformation about half of the time.
Only in the mind of a progressive does this make any logical sense whatsoever.
https://www.youtube.com/watch?v=XQr4Xklqzw8
That never gets old.
"Seen in this light, AI misinformation plays a very different role from its popular depiction of swaying voters in elections"
Sorry, but I could not find any information in this article that would justify such a conclusion. The research study did not appear to be designed to detect changes in "opinion" amongst voters, or differences in such opinion changes between one group of voters and another, based on the source or technology behind the "misinformation." Therefore any opinions about the "effectiveness" of AI are based on the researchers' impressions of the content itself, not on its actual effect on real people, whether in-group or in general.
I'm not sure why REASON is one of AI's biggest fanboys. The potential to control information, police speech, and even report thought crimes committed while chatting to family on the telephone is incredible.
You would think that REASON would be a bit more circumspect about its enthusiastic support of AI.
I'm not sure that's what their viewpoint is. My take on it is that AI is coming with or without our circumspection. We all know that the worst possible way for it to roll out would be under massive regulation by central government authority. For one thing, that trick NEVER works. And for another, regulation is likely to ensure that the result will be more dangerous than it would have been without authoritarian intervention.