The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Triangulating Apple
Episode 486 of the Cyberlaw Podcast
Returning from winter break, this episode of the Cyberlaw Podcast covers a lot of ground. The story I think we'll hear the most about in 2024 is the remarkable exploit used to compromise several generations of Apple iPhone. The question we'll be asking is simple: How could an attack like this be introduced without Apple's knowledge and support? We don't get to this question until near the end of the episode, and I don't claim great expertise in exploit design, but it's very hard to see how such an elaborate compromise could be slipped past Apple's security team. The second question is which government created the exploit. It might be a scandal if it were done by the U.S. But it would be far more of a scandal if done by any other nation.
Jeffery Atik and I lead off the episode by covering recent AI legal developments that simply underscore the obvious: AI engines can't get patents as "inventors." What's more interesting is the possibility that they'll make a whole lot of technology "obvious" and thus unpatentable. Speaking of obvious, claiming that companies violate copyright when they train AI models on New York Times content requires a combination of arrogance and cluelessness that can only be found at, well, the New York Times.
Paul Stephan joins us to note that the National Institute of Standards and Technology (NIST) has come up with some good questions about standards for AI safety.
Jeffery notes that U.S. lawmakers have finally woken up to the EU's misuse of tech regulation to protect the continent's failing tech sector. Even the continent's tech sector seems unhappy with the EU's AI Act, which was rushed to market in order to beat the competition and is therefore flawed and likely to yield unintended and disastrous consequences, a problem that inspires this week's Cybertoon.
Paul covers a lawsuit blaming AI for the wrongful denial of medical insurance claims. As he points out, insurers have been able to wrongfully deny claims for decades without needing AI. Justin Sherman and I dig deep into a New York Times article claiming to have found a privacy problem in AI. We conclude that AI may have a privacy problem, but extracting a few email addresses from ChatGPT doesn't prove the case.
Finally, Jeffery notes an SEC "sweep" examining the industry's AI use.
Paul explains the competition law issues raised by app stores – and the inconsistent outcome of app store litigation against Apple and Google. Apple's app store skated free in a case tried before a judge, but Google lost before a jury and has now entered into an expensive settlement with other app makers. Yet it's hard to say that Google's handling of its app store monopoly is more egregiously anticompetitive than Apple's.
We do our own research in real time to address an FTC complaint against Rite Aid for using facial recognition to identify repeat shoplifters. The FTC has clearly adopted Paul's dictum, "The best time to kick someone is when they're down." And its complaint shows a lack of care consistent with that posture. I criticize the FTC for claiming without citation that Rite Aid ignored "false positive" racial bias in its facial recognition software. Digging into the research, I conclude that, if the FTC itself was subject to penalties for unfair and deceptive marketing, this filing would lead to sanctions.
The FTC fares a little better in our review of its effort to toughen the internet rules on child privacy, though Paul isn't on board with the whole package.
We move from stories about the government regulating Silicon Valley to stories about Silicon Valley regulating the government. Apple has decided that it will now require a judicial order before giving governments access to customers' "push notifications." And, giving the back of its hand to crime victims, Google decides to make geofence warrants impossible by blinding itself to the necessary location data. Finally, Apple decides to regulate India's hacking of opposition politicians and runs into a Bharatiya Janata Party (BJP) buzzsaw.
Paul and Jeffery decode the EU's decision to open a DSA content moderation investigation into X. We also celebrate the welcome failure of X's lawsuit to block California's content moderation law.
Justin takes us through the latest developments in Cold War 2.0. China is hacking our ports and utilities with intent to disrupt (as opposed to spy on) them. And the U.S. is discovering that derisking our semiconductor supply chain is going to take hard, grinding work. Justin looks at a recent report presenting actual evidence on the question of TikTok's standards for boosting content of interest to the Chinese government.
And in quick takes,
- I celebrate the end of the Reign of Mickey Mouse in copyright law
- Paul explains why Madison Square Garden is still able to ban lawyers who have sued the Garden
- I note the new short-term FISA 702 extension
- Paul predicts that the Supreme Court will soon decide whether police can require suspects to provide police with phone passcodes
- And Paul and I quickly debate Daphne Keller's amicus brief for Francis Fukuyama in the Supreme Court's content moderation cases
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
"Paul predicts that the Supreme Court will soon decide whether police can require suspects to provide police with phone passcodes"
The solution, of course, is to have TWO passcodes -- one which unlocks the phone, and the other which incinerates it.
It can't be that difficult to arrange for something to intentionally create a short circuit that sends a (relatively) massive amount of electricity through everything that could possibly retain data -- and if you get a runaway battery fire in the process, so much the better...
So Paul the Perp gives the cops the wrong password and the phone is destroyed. Sure they'll be pissed, but legally, what can they do?
And prove that he KNEW it was the self-destruct password.
It's not technically hard to do that. It's even easier to bring criminal charges of tampering with evidence.
It was the POLICE who tampered with the evidence -- I warned them not to use that code because I wasn't sure it was the right one, I told them what would happen if it wasn't, I even ASKED them not to do it, but they did so anyway.
I want a new phone...
You can charge a ham sandwich, but what's the chance of getting a conviction under these facts?
"Ladies and Gentlemen of the Jury, have YOU ever entered the wrong password by mistake? If you have, you must find my client innocent because he warned the officers that he wasn't sure it was the right password and of the consequences if it wasn't.
Notwithstanding all of this -- IANAA -- what is the basis of a charge of "tampering with evidence"? It was the COPS who were tampering with the evidence; they were warned that the number might not be the correct one, and of the consequences if it wasn't.
Likewise, it is possible to intentionally forget a password -- memorize a different one, or a few other ones. So then what -- "I intentionally forgot the password" -- it wasn't evidence when I did it.
And my personal favorite -- copyright law. "That's my intellectual property and you have to buy a license to view it." Forgetting the odiousness of not offering to sell one at all, what if the person does? "I'll give you a one-time license to view for $1000 and a one-time license to copy for $100,000 -- I will give you the password when your check clears my bank."
One of the things that I have never seen litigated is the takings clause relative to seized evidence, particularly when the owner isn't even charged with a crime. For example, someone steals a store's sign and the police recover the sign, but keep it in evidence for 3 years until the court proceedings are over. The store owner had to buy a NEW sign and the returned one is essentially useless -- he lost the $1000 value of it because he had to buy the new one.
How is that not a taking?
Or how about the seizure of a depreciating asset such as a motor vehicle? Even if the state returns it a decade later, it isn't worth what it was when it was seized.
How's that not a taking? (I believe IL is litigating that.)
So I have my thoughts on who is going to win the 2026 election, which I believe (right or wrong) I can sell to a major media outlet for big bucks. But the police seize them -- and read them.
Or take Turtleboy, who is investigating what looks like police misconduct in SE Massachusetts. The police seized all his computers and phones -- he had to buy new ones. How's that not a taking?
You would be right, if we had a better jury system. As it is, federal prosecutors get convictions 99% of the time.
Dr. Ed is one of those stupid potential clients whom I turn away from time to time: they tell me they want to do something, I explain why it's illegal, and they say, "Well, I'm going to do it anyway because how could they prove it?"
Phones sold in America have to meet standards that probably include not exploding too often.
You have multiple passcodes. Some unlock the innocent content. Some unlock all the content. The number has to be uncertain so you can plausibly claim to have provided them all. The passcodes should be longer than usual because there are more correct guesses.
Or some of the passcodes erase the encryption key to the private data before unlocking, leaving it unreadable. This works better on a real phone than on a hacking device that uses a copy of the phone's data.
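The scheme described above is easier to see in code than in prose. The sketch below is a toy Python mock-up with invented names (ToyVault, unlock, the sample passcodes), not a description of how any real phone or forensic tool actually works; it only shows the control flow the comment describes: one passcode returns the data key, while the duress passcode silently destroys it.

# Toy sketch only -- invented design, not how any shipping phone behaves.
import hashlib
import os
import secrets

def _hash(code: str, salt: bytes) -> bytes:
    # Slow, salted hash so the stored digests don't reveal the passcodes.
    return hashlib.pbkdf2_hmac("sha256", code.encode(), salt, 100_000)

class ToyVault:
    """Holds one data key; the duress passcode erases it instead of unlocking."""

    def __init__(self, real_code: str, duress_code: str):
        self._salt = os.urandom(16)
        self._real = _hash(real_code, self._salt)
        self._duress = _hash(duress_code, self._salt)
        self._data_key = secrets.token_bytes(32)  # would encrypt the private data

    def unlock(self, code: str):
        h = _hash(code, self._salt)
        if h == self._duress:
            self._data_key = None   # key destroyed; the ciphertext is now unrecoverable
            return None
        if h == self._real and self._data_key is not None:
            return self._data_key   # caller can now decrypt the private data
        return None                 # wrong code: nothing happens

vault = ToyVault(real_code="482916", duress_code="482917")
print(vault.unlock("482917"))  # duress code -> None, and the key is gone
print(vault.unlock("482916"))  # even the real code can no longer recover anything

As the comment notes, this only helps on the live device: an examiner working from a bit-for-bit copy can simply retry passcodes against a fresh copy of the data.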
Hey Baker, how many innocent people have you helped murder with your lobbying for NSO Group so far?
Be careful . . . Prof. Volokh has banned commenters and vanished comments for less, Jason Cavanaugh.
Elon Musk seems to be applying Volokh-style moderation at Twitter, claiming to be a free expression champion; whining about perceived censorship; and enthusiastically providing a platform for right-wing bigots and antisocial liars, then censoring liberal and libertarian commenters who make fun of or criticize conservatives.
Carry on, clingers. So far as disaffected, on-the-spectrum, faux libertarian conservatives can carry anything in modern America, that is.
The liberals were quickly reinstated. Meanwhile there are still a lot of right-wingers who remain banned on Twitter/X.
If Musk is quickly reversing course on his viewpoint-driven conservative censorship he is a step ahead of Prof. Volokh in that context.
I too celebrate at least the beginning of the end of the mouse's reign of terror over our copyright laws.
It IS the end -- and I'm waiting for Steamboat Willie the Mass Murderer to come out. What will be the impact of trademark though?
For example, Blood and Honey...
https://www.theguardian.com/film/2023/feb/16/winnie-the-pooh-blood-and-honey-movie-review
I don't know why someone would do this to my childhood heroes...