Who needs cyberlaw when we can have unicorns and fairy dust?
Episode 406 of the Cyberlaw Podcast
Nick Weaver kicks off this wide-ranging episode by celebrating Treasury's imposition of sanctions on a mixer that laundered stolen cryptocurrency. David Kris calls on Justice to step up its game in the face of this competition, while Nick urges Treasury to sanction Tornado Cash as well, explaining why that would incentivize better behavior more generally. Scott Shapiro weighs in to describe North Carolina's effort to prohibit government entities from paying ransomware gangs; he doubts it will work.
David and Scott also further our malware education by summarizing two chilling reports about successful long-term intrusion campaigns, one courtesy of Chinese state hackers and the other likely launched by Russian government agents. I can't help wondering whether the Russian agencies have prioritized flashy hacks over effective ones, to Russia's cost in the war with Ukraine.
Nick provides a tutorial on why quantum cryptanalysis is worrying the Biden Administration and what it thinks we ought to do about it. I note how good U.S. physicists have gotten at selling expensive dreams to their government – and express considerable relief that Chinese physicists are apparently at least as good at extracting funding from their government.
I find a story the mainstream media is already burying because it doesn't fit the "AI bias" narrative. In a study of face recognition systems by the Department of Homeland Security, most errors (75%) were introduced at the photo capture stage, not by the matching algorithms. What's more, the bias we keep hearing about has disappeared in the best products. For the most accurate systems, error rates were reported by gender and skin color: errors in matching women, light-skinned subjects, and dark-skinned subjects were all as low as it's possible to be, namely zero, while the error rate for men was a nearly-zero 0.8%. These tests measured authentication, that is, 1:1 verification of a claimed identity, which is easier than a 1:n "search" for a matching face, but the results mean that we can expect the whole bias issue to disappear as soon as the public wises up to the ideologically driven journalism now on offer.
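Why is 1:1 verification the easier problem? A back-of-the-envelope sketch makes the point (hypothetical numbers, not DHS figures): if each comparison carries a small false-match probability p, a search against a gallery of n faces compounds it to 1 - (1 - p)^n, assuming independent comparisons.

```python
# Hypothetical illustration: with a per-comparison false-match rate p,
# a 1:n search over a gallery of n faces produces at least one false
# match with probability 1 - (1 - p)**n, assuming independent comparisons.
# The value of p is made up for illustration; it is not a DHS figure.
p = 1e-5
for n in (1, 1_000, 100_000, 1_000_000):
    print(f"gallery of {n:>9,}: P(false match) = {1 - (1 - p)**n:.4f}")
```

At p = 1e-5, a single 1:1 check false-matches about 0.001% of the time, while a million-face search false-matches almost every time, which is why identification remains the harder problem.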
Nick and I spar over location data sales by software providers. I pour cold water on the notion that evil prosecutors will use location data to track women to abortion clinics in other states. Nick thinks I'm wrong and we put some money on the outcome, though it may take five years for one of us to collect.
Scott unpacks the flap over the Department of Homeland Security's (DHS) Disinformation Governance Board, headed by Cyberlaw Podcast alumna Nina Jankowicz, who revealed on TikTok that I should have asked her to sing the interview. Scott and I agree that DHS is retreating quickly as negative reviews pile up for the board's name, leader, and mission.
This Week in Schadenfreude is covered by Nick, who dwells on the irony of the Spanish prime minister's phone being targeted with Pegasus spyware not long after the Spanish government was widely blamed for using Pegasus against Catalan separatists.
In quick hits,
- Scott explains why British Internet Service Providers (ISPs) are complaining about a government order that they not give British citizens access to sanctioned websites.
- Scott and I take turns mocking the fashion for phony international law agreements. These now include Silicon Valley's astroturfed Paris Call and the Biden Administration's Declaration for the Future of the Internet, better known as the "Convention for International Unicorns and Fairy Dust."
- David celebrates the one-year term extension for Gen. Nakasone, and we share views on the unwisdom of dividing the leadership of Cyber Command and NSA.
- Squeaking under the wire, I manage to bring Elon Musk into the podcast as the exit music mounts, noting that the Committee on Foreign Investment in the United States (CFIUS) is likely to complicate but not stall his acquisition of Twitter.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
"as soon as the public wises up to the ideologically driven journalism now on offer."
Well, based on the last couple of decades, don't hold your breath.
It's easy to get to 400 episodes when you're superficial and glib.
Where is the download link?
Setting aside the obvious snark here, I'm less sanguine. I've worked for several years in AI as a researcher, so this isn't “I read it online somewhere” nonsense. Keep in mind that:
1. Algorithms aren't biased (unless someone makes them biased); the bias comes from the training data. You can have an algorithm that works perfectly, yet if it is given bad data it will deliver results we perceive as biased. So capability and reality are different things. Just because this test worked well does not mean that real-world implementations will not show significant discrepancies along axes understood to evince bias (the toy sketch at the end of this comment illustrates the point).
2. From what I can tell from the webinar on the process, the tests were conducted under near-optimal conditions, but real-world usage happens in noisier environments. In particular, low and inconsistent lighting creates problems for recognizing individuals with darker skin. This is separate from the issue of training data or any putative bias; it's inherent in issues of contrast and skin albedo.
3. These are (presumably) best-of-breed systems; not all systems are at this level. Reports of anti-cheating software that fails at much simpler tasks (detecting attention) for people with darker skin are a good example of where the bulk of the industry is today.
In practical terms, this means that evidence of "bias" will continue to accumulate and provide grist for articles, and the problems with AI-driven features will continue to disproportionately affect certain groups. So don't expect journalism to change its story any time soon, even though in many cases the real story is not AI bias but plain old technical challenges and poor data practices on the part of implementers.
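To make point 1 concrete, here is a minimal toy sketch using synthetic data and scikit-learn; nothing in it comes from the DHS tests, and the groups, sample sizes, and boundaries are invented. The same, perfectly ordinary learning algorithm is trained on data that over-represents one group, and the under-represented group pays for it in error rate.

```python
# Toy sketch: an unbiased algorithm plus skewed training data yields
# group-disparate error rates. Synthetic data only; the groups,
# sample sizes, and boundaries are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two features; the group's true decision boundary depends on `shift`."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Training set: group A is vastly over-represented (the "bad data").
Xa, ya = make_group(5000, shift=0.0)  # group A: boundary is x0 > 0
Xb, yb = make_group(100, shift=2.0)   # group B: boundary is x0 + 2*x1 > 0
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# A balanced test shows the identical algorithm erring far more on B.
for name, shift in (("A", 0.0), ("B", 2.0)):
    Xt, yt = make_group(2000, shift)
    print(f"group {name} error rate: {1 - model.score(Xt, yt):.1%}")
```

The algorithm is identical in both runs; only the training mix differs. That's the gap between capability and deployment.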
Cranky old clingers muttering bitterly and impotently about the 'mainstream media' and 'all of this damned progress' are among my favorite culture war casualties.
Unicorns, Fairy Dust, And Jesus would be a great band name, though.
The Band.
That AI-driven facial recognition software is getting better is not something to cheer; it's cause to make mask-wearing in public as common and uncontroversial as can be. It might just be a delaying tactic, but I think we need all the time we can get for the culture to flip on the acceptability of ubiquitous police spying.