Ratchet to Disaster

Episode 301 of the Cyberlaw Podcast

The Volokh Conspiracy

We interview Ben Buchanan about his new book, The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics. This is Ben's second book and second interview on the podcast about international conflict and cyber weapons. It's safe to say that America's strategic posture hasn't improved since the first one.

We face more adversaries with more tools and a considerably greater appetite for cyber adventurism. Ben recaps some of the stories that were under-covered in the US press when they occurred. The second large attack on Ukraine's grid, for example, was little noticed during the US election of 2016, but it looks much more ominous after a recent analysis of the tools used, and perhaps most importantly, those that were available to the GRU but not used. Meanwhile, the US is not making much progress in cyberspace on the basic requirement of a great power, which is making sure our enemies fear us.

In the news, Nick Weaver, Gus Hurwitz, and I take a quick pass at the Internet content regulation problem and Section 230 of the Communications Decency Act. I've written that Section 230 needs to be reconsidered, and I predict that the Justice Department, which held a workshop on Section 230 last week, will propose reforms. Gus and I offer two different takes on Facebook's recent white paper about content moderation. Gus is more a fan of Twitter's approach. And Nick reminds us that there are some communities on the Internet whose content causes real harm, including to innocent children.

The debate in the US is taking a distinctly European turn, I suggest, which makes Europe's determination to regulate its way to digital innovation a little less implausible than usual. Maury Shenk outlines the very tentative (and almost certainly out of date before it's launched) European plan for building a European data lake to foster a European AI and digital economy.

Speaking of AI regulation, Elon Musk hasn't given up on his concerns about the technology's risks. But the real action in media circles is attacking fairly simple machine learning tools as used by law enforcement and the justice system. I argue that the attack is wrongheaded and will either result in abandoning tools that could have disciplined true outliers or in imposing dangerous racial quotas on things like incarceration. Nick thinks there's enough institutionalization of bias in AI as it now exists that giving up such tools may be the better course.

In quick hits, Nick explains how Google's effort to stamp out ad click fraud can generate a secondary form of criminal extortion. Maury explains the latest flap over Australia's encryption law; the tl;dr is that nothing is likely to change soon. Gus makes a down payment on an emerging issue: Whether ISPs can defeat Internet privacy laws that affect them by pleading their First Amendment rights. Nick calls BS on the simplest forms of "anonymization" for credit card data. I highlight a ransomware attack on a US natural gas operator that actually affected operations and is thus a forerunner of future attacks. Nick reminds us that Julian Assange is still in court to stop a US extradition bid. And Europe's data protection advisor is questioning Google's acquisition of Fitbit.

Download the 301st Episode (mp3).

Take our listener poll at steptoe.com/podcastpoll!

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed!

As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of participants' institutions, clients, family, or friends.




  1. The user censorship question is easy:

    If you censor, you must do so in accordance with 1st Amendment jurisprudence; otherwise, your inaction subjects you to tort liability should an "endorsed" user defame a party (or commit some other tort).

    1. Well, that's an easy standard for a private entity to pass, considering the 1st Amendment only applies to the government…

      Also, holding private entities to a 1st Amendment standard is obviously and patently ridiculous. If you're running a kid-friendly environment, you absolutely want to censor language that would otherwise be 1st Amendment protected.

