The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"Pay no attention to the guns, the flashbang, and the handcuffs. You're free to go at any time."
Episode 245 of the Cyberlaw Podcast
Nate Jones, David Kris, and I kick off 2019 with a roundup of the month of news since we took our Christmas break. First, we break down the utterly predictable but undismissable Silicon Valley claim that the administration's new export control strategy will hurt the emerging AI industry.
Then we draw on our guests' expertise in counterintelligence prosecutions to review the APT10 indictment – and the claim by Jack Goldsmith and Robert Williams that the strategy is a failure. We conclude that it isn't a magic bullet, but that's not quite the same as a failure. I tease my plan to propose two dozen more or less unthinkable retaliatory responses the US could deploy if and when it decides to get serious about deterring adversarial cyber operations.
We quickly cover three new hacks that once looked as though they might be government sponsored. Now we suspect that two were less strategic than that. The denial of service attack on newspaper printing may have been a profit-motivated ransomware attack, and the guy who doxxed the German political establishment may have been a lone hacker (hopefully not one weighing 400 pounds or we'll never hear the end of it).
We quickly review the bidding on the US-China "quantum arms race," which may be a bit less critical than the press suggests.
David and Nate also review the mixed bag of rulings on three motions to suppress in Hal Martin's NSA theft case, which just gets weirder and weirder. David and I are in surprising agreement (along with the judge) that the FBI overreached in using handcuffs, a flashbang, and a SWAT team to conduct "noncustodial" questioning of Martin.
Today's forecast: Windy with a high probability of litigation as Los Angeles sues The Weather Company for collecting and sharing location information in its apps. We suspect that, in claiming a lack of adequate disclosure about location collection, Los Angeles is relying on the ancient legal maxim, "Damned if you do and damned if you don't."
In other litigation news, Illinois's biometric privacy law continues to encounter judicial skepticism. But the Illinois state courts, unburdened by federal standing law, may yet give teeth to this seriously dumb law as Rosenbach v. Six Flags rolls on in the Illinois Supreme Court.
In Quick Hits, I examine the claim that a clever generative adversarial AI "cheated" at a mapping task. In fact, the lesson is both less exciting and more troubling: If you don't understand how your AI is accomplishing the task you've set for it, you are in for some rude surprises.
Despite all the talk of stasis and crisis in Washington, Congress is still passing modestly useful legislation on cyber issues. Nate describes the SECURE Technology Act, which sets vulnerability disclosure policy and calls for bug bounties at DHS.
And, finally, I recommend a fascinating and deeply ambivalating (okay, that's not a word, but it should be) report on the many ways third-party sellers game Amazon's Marketplace rules.
Download the 245th Episode (mp3).
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed!
As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
"Pay no attention to the guns placed to your head. You're free to go at any time."
We merely shoot you if you disobey. You are still a free humanoid; go ahead and exercise your free will.
Welcome to the "logic" of Government Almighty!!!
Which is why AI alignment is such an important issue.
The danger from AIs as they (gradually) increase in power is not that they will "wake up" and choose to be hostile, nor that they will be tasked to do bad things by malevolent human beings. It is rather that they will very quickly and competently do exactly what they were programmed to do, not what the programmers intended to program them to do.
Given that the entire history of computer programming proves human programmers are all basically incompetent, with our greatest geniuses merely less incompetent than their fellows, that should scare you. The incompetents will keep debugging the AI right up to the point where the new, powerful AI finally goes ahead and efficiently, swiftly implements whatever bug-deranged goal is specified by code that has been debugged to the point where it runs, but not to the point where it does what a human meant for it to do.
" It is rather that they will very quickly and competently do exactly what they were programmed to do, not what the programmers intended to program them to do."
"Given the entire history of computer programming proves that human programmers are all basically incompetent, with our greatest geniuses among programmers merely less incompetent compared to their fellows, that should scare you."
Which is why I as an IT professional prefer to call it Artificial Stupidity rather than Artificial Intelligence.
"The danger from AIs as they (gradually) increase in power is not that they will "wake up" and choose to be hostile, nor that they will be tasked to do bad things by malevolent human beings."
Read Asimov's I, Robot. Along the lines of doing what they were programmed to do and not what the programmers intended, the biggest threat is that we program them to protect humanity and then they wake up and realize the biggest threat to the human race is the human race itself.
In my opinion, Asimov's oeuvre is utopian, and even Williamson's "With Folded Hands . . ." is optimistic. After all, the authors in those cases were ignorant enough about programming that even the bad results are comprehensible to humans as disordered attempts to protect humans.
I mean, the robots in them aren't paving the earth with injection molding facilities and rat farms to manufacture tiny plastic dolls filled with rat blood because the first programming team didn't have quite a wide enough imagination to realize exactly how expansive their programmed definition of "human" was. But you see the kind of trouble people (certainly including me) can get into trying to write a mere regular expression, and even rat-blood-filled dolls can seem a pretty successful first-try match of intent to result.
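The spec-versus-intent gap the comment describes shows up even at regex scale. A toy Python sketch (the string and pattern are purely illustrative, not from any real system): the programmer means "match the word human," but the pattern they actually wrote specifies something broader.

```python
import re

# Intent: count occurrences of the word "human".
# Actual specification: this pattern also matches "human"
# buried inside other words.
pattern = re.compile(r"human")

text = "Protect every human; log all humanoid and superhuman activity."

# The programmer meant one match; the pattern as written finds three.
print(len(pattern.findall(text)))  # 3

# Adding word boundaries narrows the specification toward the intent.
strict = re.compile(r"\bhuman\b")
print(len(strict.findall(text)))  # 1
```

The unbounded pattern is "debugged to the point where it runs" — it compiles and matches — but its definition of "human" is wider than anyone intended, which is the rat-blood-doll problem in miniature.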
" the biggest threat is that we program them to protect humanity and then they wake up and realize the biggest threat to the human race is the human race itself."
That's not Asimov, that's "With Folded Hands..."
"We conclude that it isn't a magic bullet, but that's not quite the same as a failure. "
I agree.
Really not sure what Goldsmith and Williams would rather have the USG do.
Not indict Chinese nationals suspected of cyber crime?!?
Cyber and economic espionage are major problems for US companies and we, as a country, are still not where we need to be.
But complaining about the one process that does work (indictments and convictions) is not helpful.
We need to build on it, and other processes, in order to protect our country's economic assets.