Artificial Intelligence

What the Past Can Teach Us About Our AI Fears

Regulating AI could threaten free speech, just as earlier proposals to regulate other media once did.


Public discourse around the effect of artificial intelligence (AI) on misinformation and disinformation has grown since the World Economic Forum's 2024 Global Risks Report named AI-driven misinformation and disinformation the greatest short-term global risk. But the sudden rise of AI is not the first time we've seen concerns about the potential impact of manipulated media.

This election season has seen both policymakers and the press express concerns about how AI-driven misinformation and disinformation might affect elections. Yet history has shown us that when people fear a new technology will leave the public unable to separate truth from fiction, the monsters often turn out to be just trees.

AI is not the first technology to raise fears about manipulated media in the political context. As the American Action Forum's Jeffrey Westling noted in 2020 regarding potential concerns around deepfakes, "history is littered with deceptive practices—from Hannibal's fake war camp to Will Rogers' too-real impersonation of President [Harry] Truman to [Joseph] Stalin's disappearing of enemies from photographs."

In fact, in the 1910s, concerns about "misinformation" in faked photographs led to calls to ban the (literal) "photoshops" of the day, calls that sound remarkably similar to today's calls to regulate or ban AI tools. Fortunately, Congress did not ban those earlier technologies simply because they could be used or abused in misleading ways, and the same tools went on to provide significant benefits for expression.

While some manipulated media may create uncertainty or discomfort, just as it has in the past, regulation of AI in the political context could create free speech problems. Vague definitions of artificial intelligence, or of what content is covered, could outlaw ordinary tools of political discourse whenever the resulting content references political figures or the election, sweeping in funny memes or Saturday Night Live skits that use AI in benign or beneficial ways, such as audio or visual editing or auto-translation.

As a result, legitimate, protected forms of speech, including political commentary such as parody, could be silenced by burdensome regulation. Rules around AI use in elections could also deter AI services from providing factual information about a candidate's positions or policies if platforms, facing potential liability, choose to suppress such content.

Mandatory AI labeling is often seen as a less invasive alternative to banning the use of AI in election or political contexts. However, a government-mandated label would differ from the labels platforms have voluntarily adopted and could fail to achieve the goal of improved consumer awareness. Though well-intentioned, such a mandate could require even ordinary filters or standard editing practices to be labeled as AI-generated, and without any distinction between a deliberately manipulative use of AI and a neutral one, it could breed an unhealthy degree of public mistrust.

One positive lesson from past episodes of manipulated media is that early attempts to mislead the public are often easily identified and debunked. As more sophisticated fraudulent images and videos emerge, the public becomes better equipped to consume media critically.

We are already seeing this to some degree with AI and other concerns about media manipulation. Consider two recent viral examples. When a robocall impersonating President Joe Biden told voters to save their vote for the general election, it was quickly recognized as a fake, and the person behind it was identified and fined. Earlier this year, when a photo of the Princess of Wales, Kate Middleton, and her children was found to have been manipulated, possibly using AI, wire services around the world quickly reported on it and withdrew the image.

The market is also proving responsive to consumers' concerns. Various platforms are establishing norms that allow creative commentary using these tools while also helping users understand how to approach such content. Users can now report suspicious activity they notice, including content from malign foreign actors.

Additionally, both government sources and traditional media outlets like The Washington Post have shared information about potential foreign malign influence and other concerns, helping users understand how manipulated media may be deployed and how to respond to what they encounter online. Rather than rushing toward regulations that might also burden legitimate speech, society can learn new skills for discerning the truth; if anything, education, not regulation, is often the best response.

In the final days before the election and in its immediate aftermath, we are likely to hear renewed concerns about AI and misinformation. Hopefully, as in the past, it will be the development of societal norms, not government regulations, that brings us out of the AI woods.