The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
This episode features a lively (and – fair warning – long) interview with Daphne Keller, Director of the Program on Platform Regulation at Stanford University's Cyber Policy Center. We explore themes from her recent paper on regulation of online speech. It turns out that more or less everyone has the ability to restrict users' speech online, and pretty much no one has both the authority and an interest in fostering free-speech values. The ironies abound: Conservatives may be discriminated against, but so are Black Lives Matter activists. In fact, it looks to me as though any group that doesn't think it's the victim of biased content moderation would be well advised to scream as loudly as possible about censorship anyway, for fear of losing the victimization sweepstakes.
Feeling a little like a carny at the sideshow, I serve up one solution for biased moderation after another, and Daphne methodically shoots them down. Transparency? None of the companies is willing to allow real transparency, and the government may have a First Amendment problem in forcing companies to disclose how they make their moderation decisions. Competition law as a way to encourage multiple curators? It might require a "magic" API, and besides, most users like a moderated Internet experience. Regulation? Only if we want to take First Amendment law back to the heyday of broadcast regulation (which is frankly starting to sound pretty good to me).
As a particularly egregious example of foreign governments and platforms ganging up to censor Americans, we touch on the CJEU's insufferable decision encouraging the export of European defamation law to the US – with an extra margin of algorithmic censorship to keep the platform from any risk of liability. It turns out that this speech suppression regime is not just an end run around the First Amendment; it's protected by the First Amendment. I offer to risk my Facebook account to see if that's already happening.
In the news, FISA follies take center stage, as the March 15 deadline for reauthorizing important counterterrorism authorities draws near. No one has a good solution. Matthew Heiman explains that another kick-the-can scenario remains a live option. And Nick Weaver summarizes the problems that the PCLOB found with the FISA call detail record program. My take: The program failed because it was imposed on NSA by libertarian ideologues who had no idea how it would work in practice and who now want to blame NSA for their own shortsightedness.
Another week, another couple of artificial intelligence ethics codes: The two most recent ones come from DOD and … the Pope? Mark MacCarthy sees a lot to like. I offer my quick and dirty CTRL-F test for whether the codes are serious or flaky, and both fail.
In China news, Matthew covers China's ever-spreading censorship regime – which now reaches Twitter users whose accounts are blocked by the Great Firewall. We also ask whether and how much the US "name and shame" campaign has actually reduced Chinese cyberespionage. And whether China is stealing tech from universities for the same reason Willie Sutton robbed banks – that's where the IP is.
Nick recounts with undisguised glee the latest tribulations suffered by Clearview AI's facial recognition system: Its app has been banned from the Android and Apple app stores, and both its customers and its data collection methods have been doxed.
Mark notes the success of threats by Facebook, Google, and Twitter to boycott Pakistan. I wonder if that will simply incentivize Pakistan to drive its social media ecosystem toward the Chinese giants.
Nick gives drug dealers a lesson in how not to store the codes for €53.6 million in Bitcoin – or is it a lesson in what to say to the police if you want that €53.6 million waiting for you when you get out of the clink?
Finally, in a few quick hits, we cover new developments in past stories: It turns out, to the surprise of no one, that removing a police tracking device from your car isn't theft. West Virginia has apparently recovered from a fit of insanity and now does not plan to allow voting by insecure app. And the FCC is doing a slow striptease in its investigation of mobile carriers for selling customer location data; now we know who'll be charged (pretty much everyone) and how much it will cost them ($200 million), but we still don't know the theory or whether the inquiry is going to kill off legitimate uses of location data.
Take our listener poll at steptoe.com/podcastpoll!
As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, families or pets.