Taking AI Existential Risk Seriously
Episode 499 of the soon-to-be-suspended Cyberlaw Podcast
This episode is notable not just for cyberlaw commentary, but for its imminent disappearance from these pages and from podcast playlists everywhere. Having promised to take stock of the podcast when it reached episode 500, I've decided that I, the podcast, and the listeners all deserve a break.
So, I'll be taking one after the next episode. No final decisions have been made, so don't delete your subscription, but don't expect a new episode any time soon. It's been a great run, from the dawn of the podcast age in 2014, through the ad-fueled podcast boom, which I manfully resisted, to the podcast market correction that's still under way. It was a pleasure to engage with listeners from all over the world. (Yes, even the EU!)
As they say, in the podcast age, everyone is famous for fifteen people. That's certainly been true for me, and I'll always be grateful for listeners' support – not to mention for all the great contributors who've joined the podcast over the years.
Turning back to cyberlaw, there are a surprising number of people arguing that there's no reason to worry about existential and catastrophic risks from proliferating or runaway AI. Some of that is people seeking clever takes; a lot of it is ideological, driven by fear that talking about the end of the world will distract attention from the dire danger of face recognition. One useful antidote to this view is the Gladstone Report, written for the State Department's export control agency. David Kris gives an overview of the report for this episode of the Cyberlaw Podcast. The report explains the dynamic, and some of the evidence, behind all the doom-saying, a discussion that is more persuasive than the report's prescriptions for avoiding disaster through regulation.
Speaking of the moral panic over face recognition, Paul Stephan and I unpack a New York Times piece saying that Israel is using face recognition in its Gaza conflict. Actually, we don't so much unpack it as turn it over and shake it, only to discover it's largely empty. Apparently, the editors of the NYT thought that tying face recognition to Israel and Gaza was all their readers needed to understand that the technology is evil, evil, evil.
More interesting is this story arguing that the National Security Agency, traditionally at the forefront of computers and national security, may have to sit out the AI revolution. The reason, David tells us, is that NSA's access to mass quantities of data for training is complicated by rules and traditions against intelligence agencies accessing data about Americans. And there are few training databases not contaminated with data about and by Americans.
While we're feeling sorry for the intelligence community's struggles with new technology, Paul notes that Yahoo News has assembled a long analysis of all the ways that personalized technology is making undercover operations impossible for CIA and FBI alike.
Michael Ellis weighs in with a review of a report by the Foundation for Defense of Democracies on the need for a U.S. Cyber Force to man, train, and equip warfighting nerds for Cyber Command. It's a bit of an inside-baseball solution, heavy on organizational boxology, but we're both persuaded that the current system for attracting and retaining cyberwarriors is not working. As "Yes, Minister" would tell us, we must do something, and this is something.
In contrast, it's fair to say that the latest Senate Judiciary proposal for a "compromise" 702 renewal bill is nothing, or at least nothing much – a largely phony compromise that substitutes ideological baggage for real-world solutions. David and I are unimpressed – and surprised at how muted the Biden administration has been in trying to wrangle the Democratic Senate toward a workable bill.
Paul and Michael review the latest trouble for TikTok – a likely FTC lawsuit over privacy. And Michael and I puzzle over the stories claiming that Meta may have "wiretapped" Snapchat analytic data. They come from trial lawyers suing Meta, and they raise a lot of unanswered questions, such as whether users consented to the collection of the data. In the end, we can't help thinking that if Meta had 41 of its lawyers reviewing the project, they probably found a way to avoid wiretapping liability.
The most intriguing story of the week is the complex and surprising three- or four-cornered fight in northern Myanmar over hundreds of thousands of women trapped in call centers and forced to run romance and pig-butchering scams. Angry that many of the women and many of the victims are Chinese, China persuaded a warlord to attack the call centers and free many of the women, deeply embarrassing the current Myanmar ruling junta and its warlord allies, who'd been running the scams. And we thought our southern border was a mess!
And in quick hits:
- Elon Musk's X Corp. has lost its lawsuit against the left-wing smear artists at CCDH.
- AT&T has lost millions of customer records in a data breach.
- Utah has passed an AI regulation bill.
- The U.S. is still in the cyber sanctions business, tagging several Russian fintech firms and a collection of Chinese state hackers.
- The SEC isn't done milking the SolarWinds hack; now it's investigating companies harmed by the supply chain attack.
- Apple's reluctant compliance with EU law has attracted the expected EU investigation of its app store policies.
- And in a story that will send chills through large parts of the financial and tech elite, it turns out that Jeffrey Epstein's visitor records didn't die with him. Thanks to geolocation adtech, they can be reconstructed. WIRED may need 41 lawyers to do it, but don't be surprised to see future stories naming names.
Direct Download: https://traffic.libsyn.com/steptoecyber/The_Cyberlaw_Podcast_499_.mp3
You can subscribe to The Cyberlaw Podcast using iTunes, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.