The new AI executive order
Plus Scott Shapiro on "Fancy Bear Goes Phishing" in episode 479 of the Cyberlaw Podcast
In this episode of the Cyberlaw Podcast, I take advantage of Scott Shapiro's participation to interview him about his book, Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks. It's a remarkable tutorial on cybersecurity, told through stories that you may think you've already heard until you see what Scott has turned up by digging into historical and legal records. We cover the Morris worm, the Paris Hilton hack, and the earliest Bulgarian virus writer's nemesis. Along the way, we share views about the refreshing emergence of a well-paid profession largely free of the credentialism that infects so much of the American economy. In keeping with the rest of the episode, I ask Bing Image Creator to generate alternative artwork for the book.
In the news roundup, Michael Ellis walks us through the "sweeping"™ White House executive order on artificial intelligence. The tl;dr: the order may or may not have real impact on the field. The same can probably be said of the advice now being dispensed by AI's "godfathers"™ -- the keepers of the flame for AI existential risk, who have urged that AI companies devote a third of their R&D budgets to AI safety and security and accept liability for serious harm. Scott and I puzzle over how dangerous AI can be when even the most advanced engines can do multiplication successfully only 85% of the time. Along the way, we evaluate methods for poisoning training data and their utility for helping starving artists get paid when their work is repurposed by AI.
Speaking of AI regulation, Nick Weaver offers a real-life example: the California DMV's immediate suspension of Cruise's robotaxi permit after a serious accident that the company handled poorly.
Michael tells us what's been happening in the Google antitrust trial, to the extent that anyone can tell, thanks to the heavy confidentiality restrictions imposed by Judge Mehta. One number that escaped -- $26 billion in payments to maintain Google as everyone's default search engine -- draws plenty of commentary.
Scott and I try to make sense of CISA's claim that its vulnerability list has produced cybersecurity dividends. We are inclined to agree that there's a pony in there somewhere.
Nick explains why it's dangerous to try to spy on Kaspersky. The rewards may be big, but so is the risk that your intelligence service will be pantsed. Nick also notes that using Let's Encrypt as part of your man-in-the-middle attack has risks as well -- advice he probably should deliver in German.
Scott and I cover a great Andy Greenberg story about a team of hackers who discovered how to unlock a vast store of bitcoin on an IronKey but may not see a payoff soon. I reveal my connection to the story.
Michael and I share thoughts about the effort to renew section 702 of FISA, which lost momentum during the long battle over choosing a Speaker of the House. I note that USTR has surrendered to reality in global digital trade and point out that last week's story about judicial interest in tort cases against social media turned out to be the first robin in what now looks like a remake of The Birds.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Leftists: Trump is a danger to democracy because he will use sweeping executive authority to destroy our cherished institutions. Oh, wait, never mind, that's our guy.
I'm sure you think you're making sense, but I literally have no idea what you're talking about.
I *think* you probably don't like the AI Executive Order, or think it's hypocritical that "leftists" like that Biden is doing it, but of course there's nothing in this post or your comment that provides obvious support for either of those arguments.
ChatGPT-4, please write a rebuttal to this paragraph about the dangers of AI:
Rebuttal to the Paragraph:
While the White House executive order on artificial intelligence may leave some uncertain about its tangible impacts, it's worth noting that any acknowledgment from the highest echelons of power signals a growing recognition of AI's prominence in the modern landscape. Referring to the prominent figures in AI as its "godfathers"™ may come off as belittling, but their advice is steeped in years of understanding the implications of the technology. The recommendation for AI companies to allocate a significant portion of their R&D budgets to AI safety and security isn't merely alarmist; it's a prudent call to prioritize the long-term well-being of society over short-term technological gains.
The assertion that the most advanced AI engines can only do multiplication 85% of the time is a gross misrepresentation. This statistic, devoid of context, does not depict the multifaceted achievements of AI in fields such as medical diagnosis, language translation, and even artistic endeavors. The topic of poisoning training data and its potential use for compensating artists, while intriguing, needs a more nuanced discussion than a mere passing mention. This method has broader implications that include not just benefiting creators but also understanding the potential harm of misusing such techniques.
Lastly, the case of the California DMV's suspension of Cruise's robotaxi permit is a testament to the existing regulatory measures in place. Instead of seeing it as a weakness in AI technology, it should be recognized as a sign that our systems are designed to prioritize public safety. Proper handling of incidents by AI companies is crucial, but a single misstep should not overshadow the countless successful trials and deployments that pave the way for a future where AI and humans coexist harmoniously.