AI Isn't Destabilizing Elections
Researchers analyzed political content made with artificial intelligence and found much of it was not deceptive at all.
Artificial intelligence pessimists, take note: New research suggests that fears about AI tools destabilizing elections through political misinformation may be overblown.
The research was conducted by computer scientist Arvind Narayanan, director of the Princeton Center for Information Technology Policy, and Sayash Kapoor, a computer science Ph.D. candidate at Princeton. The pair are writing a book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.
Using information compiled by the WIRED AI Elections Project, Narayanan and Kapoor analyzed 78 instances of AI-created political content that appeared last year during elections around the world. "AI does make it possible to fabricate false content. But that has not fundamentally changed the landscape of political misinformation," they write in an essay about their research.
Their analysis found that much of the AI-generated content was not intended to be deceptive. "To our surprise, there was no deceptive intent in 39 of the 78 cases in the database," they write. In more than a dozen instances, campaigns used AI tools simply to improve their materials.
There were also more novel uses, such as in Venezuela, where "journalists used AI avatars to avoid government retribution when covering news adversarial to the government," or in California, where "a candidate with laryngitis lost his voice, so he transparently used AI voice cloning to read out typed messages in his voice during meet-and-greets."
Moreover, deceptive content was not necessarily dependent on AI for its production. "For each of the 39 examples of deceptive intent, where AI use was intended to make viewers believe outright false information, we estimated the cost of creating similar content without AI—for example, by hiring Photoshop experts, video editors, or voice actors," write Narayanan and Kapoor. "In each case, the cost of creating similar content without AI was modest—no more than a few hundred dollars."
In one instance, they even found a video involving a hired actor misclassified by WIRED's database as AI-generated content. This snafu, they say, highlights how "it has long been possible to create media with outright false information without using AI or other fancy tools."
Their takeaway: We should be focusing on the demand side of this equation, not the supply side. Election-related misinformation has long been an issue. And while AI might change how such content is created, it doesn't fundamentally change how that content spreads or what impact it has.
"Successful misinformation operations target in-group members—people who already agree with the broad intent of the message," point out Narayanan and Kapoor. "Sophisticated tools aren't needed for misinformation to be effective in this context."
Meanwhile, out-groups are unlikely to be fooled or influenced, whether such operations are AI-aided or not. "Seen in this light, AI misinformation plays a very different role from its popular depiction of swaying voters in elections," the researchers suggest.