The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Congratulations to Orin Kerr, Cited 4 Times in Today's Supreme Court Majority Opinion
It's Van Buren v. U.S., which is all about the Computer Fraud and Abuse Act, on which Orin is the leading expert.
Congratulations, Orin.
It's against my religion to agree with Justice Thomas, but I actually think he had the better argument.
I am torn.
It's in my religion to always start by agreeing with Thomas, but giving the Federal Government the power to prosecute you for violating Twitter's terms of service just doesn't seem like a reasonable idea
You all have odd religions....
But Greg's is heretical.
It’s in my religion to always start by agreeing with Thomas, but giving the Federal Government the power to prosecute you for violating Twitter’s terms of service just doesn’t seem like a reasonable idea
And I think that sums up the difference between the majority and the dissent. Thomas's dissent is a fairly straightforward, and convincing, analysis of the text, whereas the majority opinion is about the consequences.
Do we want the courts to discern a meaning that is kinda text-adjacent and "reasonable" or do we want the courts to stick to text-right-slap-bang-in-the-middle and leave the judgement of what is reasonable to the legislature?
Barrett is only pretending to be a textualist. Not a good sign.
And I think if you have two reasonably plausible interpretations, it's legitimate to look at the consequences, especially if the consequences are severe. In this case, I'm not sure the two interpretations are equally plausible, and I think Thomas has the better statutory interpretation argument.
I agree. If the text is genuinely ambiguous then you have to have recourse to some other kind of tiebreaker. But in this case - as in many other cases - the text isn't really ambiguous. There's a clearly much better answer, and the majority "interpretation" is highly strained. But as with most texts, if you desperately want to find ambiguity, so as to get you into the happy land of policy, you can probably come up with some sort of argument for ambiguity. The question is - what sort of judge thinks straining for ambiguity is part of the job description?
As for "severe" consequences being a good excuse for sneaking past the text, I challenge the alleged severity of the consequences. Convicts can be pardoned, and laws can be changed by the legislature.
As often happens in statutory interpretation, the court may have put more work into interpreting a law than the drafters spent thinking about the exact language. The end of the majority opinion makes a good policy argument for the narrow interpretation, but officially that is dicta.
" the court may have put more work into interpreting a law than the drafters spent thinking about the exact language."
A necessary consequence of the drafters putting too little thought into the exact wording and/or deliberate attempts at making the language more ambiguous.
I dunno. Was the drafting really any more careless or obfuscated than usual? I don't know that you can blame Congress so much here. Keep in mind they were going off the state of the art in the mid 80s. Like Wargames. The web hadn't even been invented yet, let alone widely adopted. I think they did a decent enough job of capturing what hacking consisted of then, as well as what it entails now. It seems a little unfair to take them to task for not envisioning how things would end up years later with internet terms of service, corporate technology policies, and modern data privacy concerns, and how those might interact with the statute. A lot of the mischief there is on courts and the executive. I suppose a later Congress could have stepped in to make amends, but you can't fault the original one that much.
Good show, Prof. Kerr. Carry on!
First, congrats to Orin.
Second, I tend to think the opinion of the court was correct here. The officer had authorization to obtain the information he did. He then proceeded to misuse the information, by selling it. Nevertheless, he had authorization to access it in the first place.
Can't wait for the first IRS employee to offer Trump's tax returns for sale.
While amusing, there are separate laws to address that particular issue. Typically release of confidential information laws.
The IRS employees may be entitled to ACCESS it. But separately, releasing it may be illegal.
That would violate a different law, 26 USC 6103. I don't know what the punishment is for disclosure of returns in violation of that law. I remember the government employees who queried Senator Obama's passport records got fired. I don't think they went to jail.
The same thing happened with a variety of folks in Ohio who ran "Joe the Plumber" and disseminated what they found.
They were fired for violating workplace policies. They didn't get an 18-month jail sentence.
I'm mostly happy with this decision, at least on first thought
But I was entirely happy with the corrupt cop getting an 18 month jail sentence for what he did.
Clearly this is a case where the State / local jurisdiction should have laws against police officers looking up data for personal purposes, and he should have been prosecuted under that law.
It's a classic example of "you don't have to make a Federal case out of it."
I think the jail sentence is appropriate. What he did could have major, dangerous implications. Looking up license plate information to check if the person was an undercover cop, then selling that information? That can get undercover cops killed...
There has got to be a different law to prosecute him under though.
Agreed
I confess that I haven't read the opinion in full, and frankly wouldn't likely understand it if I did. That won't stop me from commenting though. 🙂
I once worked in teletype for a law enforcement agency. We were authorized to run plates, people, etc. for law enforcement purposes and certain other limited purposes. We weren't authorized to run them because someone paid us to. How is the officer's running people authorized if he is doing so for an illegitimate reason?
"That won’t stop me from commenting though. ????"
Ehh...what else are comments for? 🙂
The CFAA statute reads
"(2)intentionally accesses a computer without authorization or exceeds authorized access"
"Exceeds authorized access" is further defined as
"(6) the term “exceeds authorized access” means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter;"
So, the officer is entitled to access the information. He has authorization to access the computer. Just like you were authorized to run plates when you worked teletype.
Now, there are a series of workplace laws often put in place to prevent misuse of the information, which is entirely reasonable and appropriate. But that's different from felony level laws. And when you begin changing what counts as a felony level law for the "reason" somebody is doing something (that they are otherwise perfectly well authorized to do)...a whole bunch of issues come up.
The simplest one is this. At a workplace, you are authorized to use your computer for work, as well as to access the internet. Instead, you check your personal e-mail. You have now used your computer for "unauthorized" access, and are subject to up to 18 months in jail. Or perhaps you check a news site that is related to your job. But a jury finds that you really did it for other reasons. 18 months in jail. "Unauthorized access."
The issue is whether he was "unauthorized" in the very specific way defined in this particular law. The larger issue is that the generalized "unauthorized" that applied in your teletype example carried a maximum penalty of getting fired, while "unauthorized" as the Government tried to define it in this case would have carried jail time. That's a pretty big step up and, in my opinion, very clearly not what Congress intended when it passed this law.
CFAA was intended to criminalize hacking - the electronic equivalent of breaking and entering. Prosecutors have stretched this law to try to include far more minor violations - the electronic equivalent of filing criminal charges against an employee who's allowed in the building but who walked past a 'closed for cleaning' sign. I'm not saying that the employee who blew past the sign and made more work for the cleaning staff is a good guy - but charging him with breaking and entering would be stupidly excessive.
Before the Driver's Privacy Protection Act, a student I went to college with talked about having a contact who could run plates for her. And there were lots of similar favors being done in crime fiction. Now police are more reluctant to do it. In Massachusetts we have a law prohibiting ticket fixing. It didn't stop ticket fixing, but ticket fixing is a bigger favor than it used to be. In theory the officer could get in trouble. (It's not illegal to withdraw the charges in the traffic court hearing, but it's illegal to make the ticket disappear from the system.)
I just looked it up. DPPA violations are punishable by fine only, not prison.
" Now police are more reluctant to do it. "
Some, maybe.
It just makes more sense to ban the release of the information to parties for unauthorized reasons. It's not too hard to come up with a legitimate reason for accessing information if you want to, but it's almost impossible to come up with a legitimate reason for selling the information to a third party.
Congratulations Orin!
Based on my brief skim and prior familiarity with CFAA this seems like the correct result too.
I haven't looked into it in full, but my understanding was that the text was way overbroad, Thomas probably had a clearer textual argument, and Barrett used the ... if I stare long enough at this text I can probably find some limiting principle somewhere.
And, if I recall correctly, Orin Kerr was making mostly a policy argument that had a textual basis, but not necessarily what an ordinary person reading it might assume it to mean.
Obligatory congratulations to him.
I'm ok with this; I'm a developer, and if there weren't a limiting principle a substantial portion of my work would be illegal. But I wish there were a better way of doing this. Maybe say something like: Thomas's interpretation is constitutionally overbroad, so this is what we are doing?
Because there is a line of cases, Bond, Yates, where the court does this and ... they ought to be a little more honest about what's happening here.
I think the key question is: Was the defendant doing the kind of action, and accessing the kind of data, that he was allowed to? I think the answer is yes -- he was allowed to look up these records.
He looked them up for an unauthorized and improper purpose, and that can be punished as a violation of workplace policies or perhaps other laws, but it's not what the CFAA was intended to outlaw.
Separate from the merits, the lineup is interesting. It undercuts the arguments that Trump's appointees to the Court are far-right, or even moving the court to the right in any kind of consistent way.
I hope you'll feel differently after taking more time with the opinion, but I don't see how Barrett did much hand waving. Her argument was firmly rooted in the technical language of the statute dealing with a technical subject matter. It could have been fortified even more, as I mentioned elsewhere, but that was the only shortcoming if you want to call it that.
Likewise, I don't see how Prof. Kerr was making a policy argument. You have to look at the technical context and purpose of Congress in trying to address certain conduct using the statutory language, but that's a far cry from a naked argument that something's good or bad policy.
"Thomas probably had a more clearer textual argument, and Barrett used the … if I stare long enough at this text I can probably find some limiting principle somewhere."
Spot on.
I'll admit I found both arguments convincing, but there are a couple of reasons Barrett's is slightly more convincing: both the rule of lenity and vagueness would cut against Thomas's reading. Thomas says: "I would not give so much weight to the hypothetical concern that the Government might start charging innocuous conduct and that courts might interpret the statute to cover that conduct."
It's one thing to charge innocuous conduct when the law clearly prohibits it, such as his example of a horse grazing on federal property, but it's another when the statute is not clear and you find yourself charged with a crime, even just a misdemeanor.
I'll point out a real-world example: say you are a programmer who is authorized to access a computer system, and you access a certain area trying to diagnose a bug, but you continue reading when you see something you find interesting, for several more pages, or even alter your query to bring back more information for that purpose, even though it would have been perfectly legitimate to write the query that way before you became interested in seeing more. Is that criminal or not?
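To make that concrete, here is a purely illustrative sketch in Python; the table, columns, and values are invented and have nothing to do with the case. The first query is scoped to the bug being diagnosed; the second is the same authorized programmer broadening it out of curiosity, and the system would happily run either one.

# Hypothetical example only: invented table and column names.
# Query scoped to diagnosing the reported bug:
diagnostic_query = "SELECT status, error_code FROM orders WHERE order_id = 12345"
# Same credentials, same system, query broadened out of curiosity:
curious_query = "SELECT * FROM orders WHERE customer_name = 'Someone Interesting'"

Nothing in the computer's own authorization check distinguishes the two; only the programmer's purpose does, which is exactly the question the statute leaves murky.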
Happy for Prof. Kerr for sure. It must be very rewarding to get multiple citations, including right off the bat in the third paragraph of the majority opinion.
However, it was quite disappointing to see the "code-based" approach dismissed so tersely in a footnote. That's an extremely important consideration for interpreting the statute. It wasn't that surprising though, because I'm sure the Justices (and/or clerks) don't have much appetite for that kind of technical discussion.
Frankly, the rule of the case is a bit obscured. The footnote says that they didn't decide whether contract/policy violations rather than code violations implicate criminality, but the only way to reject the government's theory is to reject contract/policy culpability. So it's something short of "code as law," but pure violations of contract/policy somehow don't implicate the rule as a matter of law.
I'm not sure how to reconcile this non-fish/non-flesh with these scenarios:
1. Brute force attack that reaches a correct credential, logs out, and then gives it to someone else to access the protected area of the drive.
2. Guessing the password (Trump wasn't an outlier -- in my experience, simple passwords are a NYC power thing.)
3. A faulty gatekeeping system that simply allows any credential in.
4. A cached credential on a shared workplace machine.
5. Intercepting information from an unsecured authorized user's session.
6. Buffer overflow attack altering a system to which you had permission to post data.
7. Stealing the password from someone else's monitor Post-It array.
8. Removing the hardware from the machine and seeing which unencrypted parts of the memory (or RAM cards) you can reach.
9. Phishing the PW by targeted emails to office admins.
10. Legacy password that tech support hasn't removed from the system.
In all of these scenarios, you're not doing anything that the computer code would recognize as prohibited. And they're not outliers -- this is how people generally get into computer systems that they're not supposed to get into. On the other hand, a property-based approach that ascribes value to the protected data based on another private party's superior claim is also clearly untenable for a system of public law.
At root, CFAA has always seemed to me like the analogue to a law increasing penalties for disabling a burglar alarm system. Perhaps the clearest answer to the question: "Who has the right to possess this information?" is: "Anyone for whom it isn't illegal to do so." If one broke a law by defrauding somebody of something (or something similarly criminal), and in the commission of the crime defeated security devices on a computer network (identified in the same fundamentally equitable manner), then gaining possession of the relevant information (which is simply a rephrasing of "accessing") was, in terms of the public law of the jurisdiction, unauthorized.
My $0.02.
Mr. D.
I feel like this set of examples is exactly the reason the court didn't take the code-based approach. The analysis that Barrett did, I think, leads to essentially the same results as a code-based approach intuitively would, without all the mess.
Barrett adopted -- practically if not explicitly -- the technical meanings of "authorization" and "access" that Orin promoted. (I believe explicitly in the case of "access".) I'm sure that the contours of "authorization" will continue to be explored in future cases. For example, is pure policy sufficient, as in, if a computer system allows everyone access from a code perspective, is it even possible to access it without authorization? (My guess is yes, policy is sufficient, but I don't think the opinion answers the question definitively.)
I do think this opinion's "gates up or down" approach is the right starting place. Barrett is saying that authorization is a function of the individual seeking the data and the data being sought, and no more. That's aligned with the technical definition of "authorization", even if technical _implementations_ of authorization don't always have 100% fidelity (as your examples show... e.g. I can guess a password and achieve code-based authorization that way, but that doesn't make me, as an individual, authorized).
The dissent thinks of authorization as a function of the whole of the circumstances, which is closer to an everyman definition of "authorization." But for all the reasons Barrett (and Orin) give, that would be an awkward way to read this computer-hacking-oriented statute.
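Here's a minimal sketch of that mismatch, with invented names (nobody's real system), just to show why passing a code-based check and being an authorized individual aren't the same thing:

# Hypothetical sketch: invented names and credentials, for illustration only.
AUTHORIZED_PEOPLE = {"Officer Smith"}   # who policy actually authorizes
CREDENTIALS = {"osmith": "hunter2"}     # what the code actually checks

def gate_is_up(username, password):
    # The machine's gate: purely credential-based.
    return CREDENTIALS.get(username) == password

def person_is_authorized(person):
    # The human-level question: is this individual authorized at all?
    return person in AUTHORIZED_PEOPLE

print(gate_is_up("osmith", "hunter2"))        # True, even if typed by a lucky guesser
print(person_is_authorized("lucky guesser"))  # False

The guesser gets past the first function without ever satisfying the second, which is all I mean by implementations not having perfect fidelity to authorization.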
Turning a machine's ports to a single portcullis is an important signal, yes, but in the end, I think this opinion was a peculiar use of cert. Basically, it says that a contract theory alone can't support a prosecution under prong two as a matter of law. This implies that it's a mixed question of fact and law; a gates totality, perhaps. There's no new standard here, it just prunes one branch -- the prosecution under a pure contract/administrative rule theory under prong two. Perhaps the most logical rule is that a technological restriction (or lack thereof) creates a rebuttable presumption of proof of authorized access, vel non. I suppose we'll have to see what grows in the space.
Mr. D.
So, with regard to points 1-10, in each case it is non-authorized access.*
*Unless you actually were authorized to access the data that you did, using unorthodox means to get it. i.e. Whitehat hacking.
It's pretty simple, for my interpretation. Are you authorized to access the data under normal circumstances? If so, then you are authorized to access the data. If you misuse the data that you acquire, that falls under different laws.
Well, that's a bit circular, though. Much like saying that the law is what a reasonable person (aka: the judge's imaginary friend) would do under the circumstances. Kerr's amicus says that there's consensus in the lower courts that circumventing technological restrictions suffices for prong one, and urges the same for prong two. Circumventing doesn't mean evading, it means beguiling -- 'get round,' not 'get around.' I don't circumvent a traffic jam, I circumvent a feature on my GPS that would lead me into one. It seems to require some affirmative act to defeat a positive denial of access at the point of access.
There's daylight between a contract theory and a lock-pick theory, but if you say 1-10 violates prong two with reference to internal limits on the machine, you would seem to explicitly adopt the contract theory, rather than the mechanical test.
Mr. D.
I think that's right, but I think this opinion leaves the question open.
That is, the two prongs have the same standard, but it's not clear yet whether it's the "affirmative act" standard or the "no access under any circumstance" standard.
Although this opinion very nearly shuts the door on "affirmative act." I think we're headed towards "any circumstance."
But note that a contract would suffice under "any circumstance" as long as it denied authorization to access a piece of data totally, for any reason.
Changing the language from objective technological restriction to the user's subjective perspective (affirmative act -- perhaps something more than a visible act in furtherance, but less than a knowing act in violation of law) is interesting, and perhaps more appropriate to Barrett's view, since gates up/gates down seems to refer to a particular user's encounter, not the system's general configuration. It might also help with the clickwrap overreach problem without making the restriction too vague.
Mr. D.
Personally (as my above reply suggests), I'm a fan of Barrett's opinion. But for those of you who prefer Thomas's... I don't think you're asking the hard questions. These are the questions you (and Thomas) should be asking, in my opinion:
1. Barrett says the circumstances don't matter. But some circumstances must matter.
Time matters. Authorization is often time-dependent, especially on long scales (you can lose authorization to data if you're fired) but even on shorter scales (you are only authorized to access data while working on a specific project, you're only authorized to access data during working hours).
Place matters. Authorization is often place-dependent (you're only authorized to access data from within a secure area / the office, you're only authorized to access data from within the country). I think Thomas did mention the latter one, but not with much oomph.
Both time and place rules are often aspects of computer authorization and access systems, and it should be considered illegal hacking to circumvent them, right? (There may be others as well, such as being authorized to access data only from specific devices, or only with multiple people present, or with restrictions like no-printing, or with limits like no more than 3 requests per month. All of these can be implemented technically; a sketch after point 3 below pulls a few of them together.)
2. Barrett focuses on whether an individual is generally authorized to access the data. But authorization is often role-dependent.
In other words, it's common for one individual to wear multiple hats, usually represented as multiple "accounts" or "credentials," to use the technical terms. For example, a police officer might be authorized to access data using his officer credentials but not his personal credentials.
Given the fact that computer systems lean heavily on these roles to define authorization rules, why wouldn't the concept apply in the context of an anti-hacking statute? Shouldn't misuse of a credential count as unauthorized access?
3. Barrett explicitly bars an individual's purpose in accessing data from being considered as part of the authorization determination. But there's nothing preventing a computer system from imposing a purpose-based authorization rule.
Imagine in this case if, as part of logging in to the license plate database, after entering your password, there was another step to indicate your purpose: A) searching for outstanding warrants, B) finding the owner of a stolen vehicle, or C) stalking an ex, and if you answer C you don't get access. If the officer answers A when stalking his ex, shouldn't that count as unauthorized access?
If that example seems too contrived, there are more realistic ways in which purpose can be part of the authorization phase. A great example is video. It's possible to access video for the purpose of viewing it or for the purpose of copying it, and there are technical barriers in place against copying video that was accessed for the purported purpose of viewing it (e.g. HDCP). Shouldn't it be considered unauthorized access to stream a video for the purported purpose of viewing it, hack the technical barrier against copying, and then copy the video?
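To pull points 1-3 together, here is a minimal, purely hypothetical sketch; none of the names or rules come from the case or the opinion, they just show how time, place, role, and purpose can all sit inside one authorization check:

# Hypothetical rules, for illustration only.
def authorize(role, hour, terminal, purpose):
    if role != "officer":                                   # role-based rule (point 2)
        return False
    if hour not in range(6, 22):                            # time-based rule (point 1)
        return False
    if terminal != "patrol_car_terminal":                   # place-based rule (point 1)
        return False
    if purpose not in {"warrant_check", "stolen_vehicle"}:  # purpose-based rule (point 3)
        return False
    return True

# Same person, same database; the circumstances decide:
print(authorize("officer", 10, "patrol_car_terminal", "warrant_check"))   # True
print(authorize("officer", 10, "patrol_car_terminal", "stalking_an_ex"))  # False
print(authorize("officer", 3, "patrol_car_terminal", "warrant_check"))    # False

If circumventing any one of these checks is hacking, it's hard to see why the purpose rule is categorically different from the time or place rules.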