The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Triangulating Apple
Episode 486 of the Cyberlaw Podcast
Returning from winter break, this episode of the Cyberlaw Podcast covers a lot of ground. The story I think we'll hear the most about in 2024 is the remarkable exploit used to compromise several generations of the Apple iPhone. The question we'll be asking is simple: How could an attack like this be introduced without Apple's knowledge and support? We don't get to this question until near the end of the episode, and I don't claim great expertise in exploit design, but it's very hard to see how such an elaborate compromise could be slipped past Apple's security team. The second question is which government created the exploit. It might be a scandal if it were done by the U.S. But it would be far more of a scandal if done by any other nation.
Jeffery Atik and I lead off the episode by covering recent AI legal developments that simply underscore the obvious: AI engines can't get patents as "inventors." What's more interesting is the possibility that they'll make a whole lot of technology "obvious" and thus unpatentable. Speaking of obvious, claiming that companies violate copyright when they train AI models on New York Times content requires a combination of arrogance and cluelessness that can only be found at, well, the New York Times.
Paul Stephan joins us to note that the National Institute of Standards and Technology (NIST) has come up with some good questions about standards for AI safety.
Jeffery notes that U.S. lawmakers have finally woken up to the EU's misuse of tech regulation to protect the continent's failing tech sector. Even the continent's tech sector seems unhappy with the EU's AI Act, which was rushed to market in order to beat the competition and is therefore flawed and likely to yield unintended and disastrous consequences, a problem that inspires this week's Cybertoon.
Paul covers a lawsuit blaming AI for the wrongful denial of medical insurance claims. As he points out, insurers have been able to wrongfully deny claims for decades without needing AI. Justin Sherman and I dig deep into a New York Times article claiming to have found a privacy problem in AI. We conclude that AI may have a privacy problem, but extracting a few email addresses from ChatGPT doesn't prove the case.
Finally, Jeffery notes an SEC "sweep" examining the industry's AI use.
Paul explains the competition law issues raised by app stores – and the inconsistent outcome of app store litigation against Apple and Google. Apple's app store skated free in a case tried before a judge, but Google lost before a jury and has now entered into an expensive settlement with other app makers. Yet it's hard to say that Google's handling of its app store monopoly is more egregiously anticompetitive than Apple's.
We do our own research in real time to address an FTC complaint against Rite Aid for using facial recognition to identify repeat shoplifters. The FTC has clearly adopted Paul's dictum, "The best time to kick someone is when they're down." And its complaint shows a lack of care consistent with that posture. I criticize the FTC for claiming without citation that Rite Aid ignored "false positive" racial bias in its facial recognition software. Digging into the research, I conclude that, if the FTC itself were subject to penalties for unfair and deceptive marketing, this filing would lead to sanctions.
The FTC fares a little better in our review of its effort to toughen the internet rules on child privacy, though Paul isn't on board with the whole package.
We move from stories about the government regulating Silicon Valley to stories about Silicon Valley regulating the government. Apple has decided that it will now require a judicial order before giving governments access to customers' "push notifications." And, giving the back of its hand to crime victims, Google decides to make geofence warrants impossible by blinding itself to the necessary location data. Finally, Apple decides to regulate India's hacking of opposition politicians and runs into a Bharatiya Janata Party (BJP) buzzsaw.
Paul and Jeffery decode the EU's decision to open a DSA content moderation investigation into X. We also celebrate the welcome failure of X's lawsuit to block California's content moderation law.
Justin takes us through the latest developments in Cold War 2.0. China is hacking our ports and utilities with intent to disrupt (as opposed to spy on) them. And the U.S. is discovering that derisking our semiconductor supply chain is going to take hard, grinding work. Justin looks at a recent report presenting actual evidence on the question of TikTok's standards for boosting content of interest to the Chinese government.
And in quick takes,
- I celebrate the end of the Reign of Mickey Mouse in copyright law
- Paul explains why Madison Square Garden is still able to ban lawyers who have sued the Garden
- I note the new short-term FISA 702 extension
- Paul predicts that the Supreme Court will soon decide whether police can require suspects to provide their phone passcodes
- And Paul and I quickly debate Daphne Keller's amicus brief for Francis Fukuyama in the Supreme Court's content moderation cases
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
To get the Volokh Conspiracy Daily e-mail, please sign up here.