Should the Government Regulate Deepfakes? We Asked People on Venice Beach
Deepfakes don't pose a novel threat, and they have many exciting applications that would be stymied by legal restrictions.
The Chinese app Zao can replace a celebrity in a movie scene with anyone holding a smartphone. Stanford researchers have developed an algorithm that can edit talking-head interviews as easily as text, deleting words the subject said or adding words the subject never actually uttered. And the short-lived app DeepNude allowed users to generate a fake (though believable) naked picture of just about anyone.
A recent Pew poll found that 77 percent of Americans think the government should step in and restrict altered or fabricated videos and images. The House is considering legislation called the DEEP FAKES Accountability Act, or the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act, which would make it a crime to post "synthetic media" unless it's labeled with "irremovable digital watermarks."
Some fear that deepfakes could bring an end to generally accepted truth. In this dystopian future, a 4chan-using basement dweller could alter the course of international politics.
What these breathless predictions overlook is that deepfakes are just the latest extension of a tradition of manipulation that goes back to the earliest days of captured and recorded media. Not only do they pose no novel threat to society; they also have many exciting applications that would be stymied by legal restrictions.
To test the effectiveness of today's cutting-edge software, we challenged people on the Venice Beach boardwalk to spot the deepfake among a series of real video clips featuring Barack Obama.
Hosted and edited by Justin Monticello. Produced by Monticello and Zach Weissmueller. Shot by Monticello, Weissmueller, and John Osterhoudt. Additional graphics by Joshua Swain. Music by Silent Partner, Jingle Punks, RW Smith, and Riot.