The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

Some for-the-moment final thoughts on Apple, encryption and the FBI

As everyone knows by now, the FBI has withdrawn its action to compel Apple to devise a means to bypass the password-protect feature on San Bernardino terrorist Syed Rizwan Farook's iPhone so that the FBI could access the information on the phone. In a terse two-page filing with the court (C.D. California), the FBI noted simply:

"The government has now successfully accessed the data stored on Farook's iPhone and therefore no longer requires the assistance from Apple Inc. mandated by Court's "Order Compelling Apple Inc. to Assist Agents in Search" dated February 16, 2016.

Accordingly, the government hereby requests that [that] Order be vacated."

So the FBI got in, somehow. No details have been released about exactly how it got in, and rumors, needless to say, are flying around: It was some Israeli cryptographic firm, it was Apple itself, it was the National Security Agency, etc. [Whether the FBI will, should or must release the information about how it got in—to Apple, or to anybody else—is an interesting and, at present, open question.]

The case—In The Matter Of The Search Of An Apple iPhone Seized During The Execution Of A Search Warrant On A Black Lexus IS300, California License Plate #5KGD203—is (was?) a pretty fascinating one, and a great deal has already been said (by Orin Kerr here, here and here on the VC, and by many others) about the legal and technical underpinnings of the suit.

It's fascinating because it so well crystallized one important corner of a Big Problem—the problem of the "government-mandated backdoor": In what circumstances—if ever—must providers of encryption and encryption-related technologies make available to the government the tools to defeat those technologies, to "crack the code"?

And, relatedly: Who decides that? Should those circumstances (if any) be defined by the executive? The legislature? The judiciary?

There has been a pretty lively debate about these questions since the so-called "Crypto Wars" of the early 1990s, when the Clinton administration proposed various mandatory "key escrow" schemes (including the infamous "Clipper Chip" proposal), under which it would have been unlawful to distribute encryption technology without a system for providing "master keys," capable of decoding the encrypted messages, to a so-called "trusted third party," who would, in turn, be permitted to release the keys to the government in specified circumstances.
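
To make the mechanics concrete, here is a minimal illustrative sketch of the key-escrow idea, not the actual Clipper Chip design (which relied on tamper-resistant hardware and split key material): each message key is also wrapped under a key held by the escrow agent, so the agent can decrypt anything protected by the scheme. It uses the Fernet primitive from the third-party Python `cryptography` package as a stand-in for real escrowed key material; the names (`escrow_agent`, `encrypt_with_escrow`, `government_decrypt`) are hypothetical, invented for illustration.

```python
# Illustrative sketch of "key escrow" in general, not any real escrow system.
from cryptography.fernet import Fernet

# The "trusted third party" holds this key and releases it (or uses it)
# only in specified legal circumstances, at least in principle.
escrow_key = Fernet.generate_key()
escrow_agent = Fernet(escrow_key)

def encrypt_with_escrow(plaintext: bytes) -> dict:
    """Encrypt a message and also wrap its session key for the escrow agent."""
    session_key = Fernet.generate_key()                  # per-message key
    ciphertext = Fernet(session_key).encrypt(plaintext)
    escrowed_key = escrow_agent.encrypt(session_key)     # the escrowed copy
    return {"ciphertext": ciphertext, "escrowed_key": escrowed_key}

def government_decrypt(envelope: dict) -> bytes:
    """What anyone holding escrow_key can do, warrant or no warrant."""
    session_key = escrow_agent.decrypt(envelope["escrowed_key"])
    return Fernet(session_key).decrypt(envelope["ciphertext"])

envelope = encrypt_with_escrow(b"attack at dawn")
print(government_decrypt(envelope))  # b'attack at dawn'
```

The sketch also shows where the criticism described below comes from: the escrow key is a single point of failure, and anyone who obtains it, lawfully or otherwise, can read every message the scheme protects.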

Those proposals generated a small, but quite intense, firestorm of criticism, from within and (to a lesser extent) without the cryptographic community. It was a small firestorm, because that community was (and is) a relatively small one, and because in 1993 very few people had even the slightest interest in this new "Internet," let alone in the cryptography policy that would apply to Internet communications.

The Clipper Chip proposals became something of an embarrassment to the Clinton administration and were withdrawn after a couple of years. But the issues they raised keep coming back, and they will continue to keep coming back, because they reflect some strong but irreconcilable forces in the world that will not be changing anytime soon. (1) We want to protect ourselves from Bad Guys (who are never in short supply). (2) We want (and, sometimes, we need) the government to help protect us from the Bad Guys. And (3) We want to protect ourselves from the government.

We'll never "solve" that problem; that would be like trying to maximize three competing variables at once, and that can't be done: you can't design a system that gives maximum protection against the Bad Guys, maximum power to governments to fight the Bad Guys and maximum protection against the government. Something has to give—there are trade-offs.

What's interesting about the case is how well it illustrates just what some of those trade-offs are.

On the one hand, it presents the government's strongest case for a mandated backdoor. This case isn't—on its face, at least—a case about government snooping or government surveillance; it falls right in the bull's-eye of the government protecting us from the Bad Guys. The FBI has a lawfully obtained, particularized, judicially reviewed warrant for the information and data on this particular device. There's constitutionally protected information on that phone, to be sure—but the FBI has already done everything we require it to do in order to seize it and to search it. As a citizen, I want the FBI to have access to the information on this phone; it seems to me that one almost has to want the FBI to have access to the information on this phone, unless one has some fundamental problem with the warrant requirement, or thinks that for some reason it was improperly applied here.

But it's a very strong case for the other side, too—which is precisely what makes it so interesting. Yes, the information on the phone will help with an ongoing criminal investigation; but since when can the government simply commandeer private resources—the Apple programmers who were ordered to find a way to defeat the password-protect scheme and extract the information—to help it in that investigation? Mandating the delivery of a backdoor raises a very delicate question of encryption policy, and if it is to be done at all, it should be done by legislative act, not by a one-off judicial order under the All Writs Act.

And in any event, government-mandated backdoors, whether they're obtained by legislative fiat (as in the Clipper Chip proposals) or by judicial order (as here, pursuant to the All Writs Act), are always a bad idea, because they will end up decreasing our security against both the Bad Guys and illicit government action. It is extremely difficult, and may even be impossible, to design a backdoor scheme that does not compromise the overall security of the encryption scheme itself. There's no such thing as a "This backdoor only functions in the right hands for the right reasons" button; the existence of the backdoor puts the entire encryption scheme at risk.

This case proves the point. iPhone owners (3 million or so of whom lost their phones, with all of the private and sensitive information on them, last year) today are a little less secure—against Bad Guys, and against illicit and unlawful government snooping—than they were a couple of weeks ago, because the password-protect scheme is a little less secure today than it was a couple of weeks ago. Maybe the FBI will be willing and able to keep it out of the hands of the Bad Guys—but maybe it won't. Maybe it will secure it against those within the law enforcement community who want to use it for other, illicit purposes—but maybe it won't.

So where does that leave us? My sense is that the actual outcome of this dispute is about the best we can hope for. The government, having figured out how to unlock the phone, gets the information to which it is entitled—making us, in the aggregate, (a little) better off. The system has been compromised, making us, in the aggregate, (a little) worse off.

But because there was no "mandate" involved (i.e., the government got what it wanted without resolving the question of whether it is entitled to demand that Apple provide the backdoor), Apple (and anyone else) is free to revise and redesign the system to make it more secure going forward (which is what I assume it's doing right now).

Yes, it's just an endless game of cat and mouse. But given the choices, what's so terrible about cat and mouse, anyway? It has a bad reputation, but that is undeserved. After all, the original cat-and-mouse game, the one involving real cats and real mice, has produced very well-designed cats and very well-designed mice. Given strongly competing interests, is there a better process to keep them in balance?