The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

Who's the bigger cybersecurity risk – Microsoft or open source?

Episode 500 of the Cyberlaw Podcast

There's a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this the podcast will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovitch about his forthcoming book, but the news commentary is done for now.) Perhaps it's appropriate, then, for our two lead stories to revive a theme from the 90s – who's better, Microsoft or Linux? Sadly for both, the current debate is over who's worse, at least for cybersecurity.

Microsoft's sins against cybersecurity are laid bare in a report of the Cyber Safety Review Board, Paul Rosenzweig reports. The Board digs into the compromise of a Microsoft signing key that gave China access to U.S. government email. The language of the report is sober, and all the more devastating because of its restraint. Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago. Getting it back will require a renewed commitment to security – at a time when the company feels compelled to put all its effort into building AI into its offerings. The only people who come out of the report looking good are the State Department security team, whose mad cyber skillz deserve to be celebrated – not least because they've been questioned by the rest of government for decades.

With Microsoft down, you might think open source would be up. Think again, Nick Weaver tells us. The strategic vulnerability of open source, as well as its appeal, is that anybody can contribute code to a project they like. And in the case of the XZ backdoor, anybody did just that. A well-organized, well-financed, and knowledgeable group of hackers cajoled and bullied their way into a contributing role on XZ Utils, an open source project that supplies widely used compression tools. Once in, they contributed a backdoored feature that used public key encryption to ensure that only the feature's authors could make use of the access it provided. It was weeks away from shipping in every major Linux distro when a Microsoft employee discovered the implant. But the people who almost pulled this off were well-practiced and well-resourced. They've likely done this before, and will likely do it again, making them and others like them open source's long-term strategic vulnerability.

It wouldn't be the Cyberlaw Podcast without at least one Baker rant about political correctness. The much-touted bipartisan privacy bill threatening to sweep to enactment in this Congress turns out to be a disaster for anyone who opposes identity politics. To get liberals on board with a modest amount of privacy preemption, I charge, the bill would effectively overturn the Supreme Court's Harvard admissions decision and impose race, gender, and other quotas on a host of other activities that have avoided them so far. Adam Hickey and I debate the language of the bill. Why, you might ask, would the Republicans who control the House go along with this bill? I offer two reasons: first, business lobbyists want both preemption and a way to avoid lawsuits over discrimination, even if it means relying on quotas; second, maybe former Wyoming Senator Alan Simpson (R) was right, and the Republican Party really is the Stupid Party.

Nick and I turn to a difficult AI story, about how Israel is using algorithms to identify and kill even low-level Hamas operatives in their homes. Far more than killer robots, this use of AI in war is likely to sweep the world. Nick is critical of Israel's approach; I am less so. But there's no doubt that the story forces a sober assessment of just how personal and how ugly war will soon be.

Paul takes the next story, in which Microsoft serves up leftover "AI gonna steal yer election" tales that are not much different than all the others we've heard since 2016. The bottom line: China is using AI to advance its interests in American social media and to probe U.S. weaknesses, but so far the effort doesn't seem to be having much effect.

Nick answers the question, "Will AI companies run out of training data?" He thinks they already have. He invokes the Hapsburgs to explain what's going wrong. We also touch on the likelihood that demand for training data will lead to copyright liability, or that hallucinations will lead to defamation liability. Color me skeptical about both legal risks.

Paul comments on two U.S. quasi-agreements, with the UK and the EU, on AI cooperation.

Adam breaks down the FCC's burst of initiatives, a belated celebration of the Commission's first Democratic majority since President Biden's inauguration. The Commission is now ready to move out on net neutrality, on regulating cars as oddly shaped phones with benefits, and on SS7 security.

Adam covers the security researcher who responded to a North Korean hacking attack by taking down that country's internet. Along the way, Adam acknowledges that maybe my advocacy of hacking back wasn't quite as crazy as he thought when he was in government.

In Cyberlaw Podcast alumni news, I note that Paul Rosenzweig has been appointed an advocate at the Data Protection Review Court, where he'll be expected to channel Max Schrems.

And Paul closes with a tribute to what has made the last 500 episodes so much fun for me, our guests, and our audience. Thanks to you all for the gift of your time and your tolerance!

Direct Download is here.

You can subscribe to The Cyberlaw Podcast using iTunes, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.