The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Cybersecurity, European privacy, Snowden, and more
An interview with Cyber Insecurity News
The second half of my interview with Cyber Insecurity News has been posted, here. It deals with a lot of stuff from my career, including DHS, European privacy negotiations, Snowden, and cybersecurity. (The first half is here.) Here's an excerpt that captures some of my thinking on cybersecurity:
CIN: Is the government providing enough intelligence information—threats they're aware of—to companies? I know that's sometimes a complaint that companies have.
SB: Yeah, I do hear that. And I take that with a grain of salt. One thing I've learned from watching the intelligence community for 25 years is that intelligence in the abstract is almost never useful. For the intelligence to get good, you have to have a customer who understands the intelligence and how it's being collected, and can tell the intelligence officers exactly what he wants and what's wrong with the intelligence that has been collected. You don't get good intelligence if you just try to go out and steal the best secrets you can. Because you usually don't know what secrets really matter. You need a very sophisticated consumer who can say, "OK, I see what you've brought me. There are some interesting things here. But it isn't exactly what I need. Go look for X." The spies on their own would never think to look for X. They're just not as deep into the technology or the situation as the consumer. So for an exchange of intelligence to be really useful, you have to make the companies your customer. I don't think that's likely to happen. Part of the private sector's unhappiness with what they're being given is their assumption that since what they got wasn't that useful, they must not have gotten the good stuff. That's not how intelligence works, in my experience. There isn't "good stuff" just hiding out there. The good stuff—you have to dig it out as a customer. That isn't likely ever to be something that the intelligence community does for individual companies.
CIN: What can companies be doing better?
SB: I think companies need to ask themselves—and they're starting to do this—not so much what's on my checklist of technologies to deploy, but who wants my stuff? And what tactics and tools are they using this week to get it? Those are things that are knowable. Those are things that you can get from intelligence. You can get it from the private sector forensics people as well. So knowing that there's a North Korean or Chinese campaign to get information on, let's say, windpower means that, if you're a windpower company, you suddenly have a whole new set of adversaries, and you're going to have to spend a whole hell of a lot more money on cybersecurity or lose your technology. You constantly have to do a cost-benefit analysis there. But the key is knowing who's breaking into your system, and why: Are they motivated by money, are they motivated by the desire to steal your technology? Are they criminals or governments? All of those are things that should be part of your cybersecurity planning.
CIN: You just said the magic word: "money." When I talk to CISOs at companies, if we talk long enough and they feel comfortable enough, they will almost always say something about budgets, and how the management of companies so often underestimates how much is needed. And they routinely underfund cybersecurity budgets. What do you think?
SB: I agree. But I would also like to speak up for management and the board. Their most common complaint is: "My CISO comes in and says, 'Bogeyman, bogeyman, bogeyman! Give me more money!' He tells me that the North Koreans could do these things to me. Should I be worried about that or not? I feel as though I'm chasing goal posts that will always recede." So there is a culture clash there. That's one of the reasons that I like a model that asks, "Who wants my stuff, and what are they doing to get it?" Once you have that answer, you can also ask the question, "What would it cost to make sure they don't get it? What do I have to do to defeat every one of the tools and tactics they currently are using?" Then, when I know how much that will cost, I can compare the cost of letting them win to the cost of making sure they don't. So I think you can reduce this to a cost-benefit analysis, if you're disciplined about identifying who's likely to attack companies like yours, and if you have up-to-date intelligence on how they're getting into the systems they attack.
CIN: If you were the country's cybersecurity czar, what would be the first things you would do?
SB: I would say, "If you are in a business that the lives of a large number of Americans depend upon—pipelines, refineries, electrical power grids, water, sewage systems—you're going to have to meet certain cybersecurity standards. We're going to continue to use regulatory agencies that you are familiar with, if you are regulated already at the federal level, but we're going to find a way to make you meet these standards. And when we're done getting you to meet those standards, the people that you do business with are going to have to meet those standards. We can't afford to have weak links in our security process." I hate to say that, because I see all the downsides to regulation. It slows everything down. It raises the cost of everything. It locks in a lack of competition. But there's no obvious alternative, other than saying, "If you're hacked, we're going to expropriate your company, because you don't deserve to be in business." That would motivate people—sort of like the death penalty for companies. I don't think we really want to do that. So, better to come up with a constantly shifting set of best practices, and impose them on people who currently don't care enough to adopt them. They're not evil people; they just don't have enough skin in the game to worry about security.
CIN: What advice would you like to dispense before we conclude this conversation? Anything special you have to say to general counsel?
SB: Yes. It's from a theme that we've been talking about—the need to think about who your adversary is. And it's one of the hidden advantages in having your general counsel involved in cybersecurity. Engineers are used to protecting against inanimate nature. They're really comfortable asking whether gravity is going to bring a bridge down. They're much less comfortable asking, "What if the Russians wanted to bring it down, and they had all the explosives they needed?" They don't do adversarial thinking as a routine part of their job description. And neither do the risk managers, who are most comfortable asking questions like, "Is this a risk we can self-insure, or do we have to get insurance for it?" In contrast, if you're a good general counsel, you wake up every day and you say, "I know there are people—maybe competitors, maybe the plaintiffs bar, maybe governments—people out there who want to do my company harm using legal tools. I need to know who they are. I need to know what tools they will use to attack, and I need an early warning as they develop new ones. When the plaintiffs bar develops new lawsuits, I want to watch that. I hope that they try them out on somebody else first. But if they don't, my job is to figure out what we will do in response." In other words, the way of thinking about cybersecurity that I'm advocating is a way of thinking about corporate interests that lawyers are already familiar with. It should give general counsel a little bit of confidence that they have a perspective to bring to cybersecurity that isn't just, "What will the regulators think if we don't do this?" They can insist on more deeply adversary-based cybersecurity planning.
CIN: Based on your experience as an outside lawyer who advises general counsel, do you think that many GCs have a sense that cybersecurity is a really important part of their jobs, and have a seat at the table at their companies to help tackle it?
SB: Yes and yes. First, you'd be crazy as a CEO not to give your general counsel a seat at the table, because liability is a significant part of the calculation here. In practically every tabletop exercise we run, people start turning to the general counsel almost routinely. It's odd how often questions that easily could be answered by somebody else—there's no necessary connection to the law—end up getting lateraled to the general counsel for one reason or another. I'm a little less confident that general counsel always bring to the task a breadth of strategic thinking that maximizes their value to the company. They will always say, "I can tell you what our regulatory obligations are. I can tell you what our privacy statement says. I can tell you what the liability decisions of the court might be, or what the FTC would say about this." Those are all things that general counsel should be doing. I feel a little bit like a proselytizer when I say, "No, that isn't enough; what you need to bring to this, in addition to all that, is the sense that your company is engaged in an eternal battle with institutional opponents, and that your job is to make sure your company is as fully armed for that battle as possible."
The fundamental barrier to security is that security and convenience are mutually exclusive. The more security you have, the less convenient it is, and the more convenient it is, the less security you have. (It's also possible to botch it so that you have neither one, but it is NOT possible to have both.) So every question of security involves asking, "What convenience am I going to lose, and do my stakeholders think the gain in security is worth the loss of convenience?"
In some organizations, buy-in for security is high. Banks accept that security procedures are necessary, and more importantly, bank employees accept that security procedures are necessary. But retail establishments have less buy-in. The notion of protecting every point-of-sale terminal from tampering, for example, doesn't automatically occur to them. Workers and managers are reasonably diligent when there's cash in the drawer, but often lackadaisical when there isn't... even though most of the money that flows through the terminals isn't in cash form, and malware in the POS terminals represents a potential multi-billion-dollar threat to the company.
Keeping trained professional staff to maintain those terminals is expensive. Hiring out maintenance to a service company or even a jobber is cheaper... but again, exposes the company to a potential multi-billion-dollar risk.
I've often wondered why we don't have white hat hackers recruiting computers into bot-nets to install security software, breaking into corporate computers to install "Update your virus software, moron" chyrons on the screens, that sort of thing.
I understand why it doesn't happen spontaneously, but you'd think it would be encouraged by law.
NO WAY.
ABSOLUTELY NO WAY.
There's absolutely no way any US company can - unilaterally - protect themselves from Chinese, North Korean, etc. cyber attacks.
They simply do not have the assets (tools, manpower, info [i.e., intelligence]) to predict and counter foreign intelligence cyber efforts which - BTW - are constantly changing (both targets and tactics).
And I mean even the largest defense companies (SAIC, Lockheed, etc.), can't do this.
Hell, the US Government can't even protect its databases (e.g. OPM hack).
The adversaries are foreign governments so our defense has to be our government and its immense resources (including our political, economic, intelligence, military if needed, assets etc.).
Yes, industry and the govt have to work closely together but countering cyber attacks has to be led by the US Govt.
And yes, we have to get better.
"Hell, the US Government can't even protect its databases (e.g. OPM hack)."
I think you seriously misunderstand the nature of the OPM "hack". The OPM databases weren't hacked into. IT professionals working out of mainland China, on the payroll of Chinese intelligence services, were handed passwords and top level permissions to the databases, having been hired to do database maintenance.
Literally, we hired Chinese programmers working out of China to service secure government databases.
This was done over the vehement, "Are you mad???" opposition of numerous people within the OPM, which opposition was overridden by people further up the chain.
The OPM databases didn't fall to hacking, they fell to treason. You don't need mad hacking skills when you can buy the cooperation of high level political appointees.
You're absolutely correct and I was using 'hack' in the general sense that someone, somehow, gained access to information they weren't authorized to have (not the technical sense where an unauthorized person alters or bypasses computer code/software in order to gain unauthorized access).
It's a very important distinction, though. You can have all the computer security in the world, multi-factor verification for every transaction, and if the people at the top have been compromised by a foreign intelligence service, it's all for naught.
But the OPM was compromised in a way that could have been detected in plenty of time, before the database was penetrated. You'd need a system in place to deal with THAT, so that when somebody down in the trenches gets ordered to do something stupidly, blatantly insecure, like handing the keys to the system to a foreign power, they have somebody to report it to besides the guy ordering them to do it.
You need a system set up to recognize that the real threat is upper management being compromised, quite possibly from the very top. You need a hotline, really, and the lower level employees encouraged to use it.
FYI, the govt in the past few years has developed the Insider Threat program which allows all/any employees to notify Security if another employee is displaying indicators of questionable activity. It's all the rage now.
From Wiki: The Insider Threat Program is the United States government's response to the massive data leaks of the early twenty-first century, notably the diplomatic cables leaked by Chelsea Manning but before the NSA leaks by Edward Snowden. The program was established under the mandate of Executive Order 13587 issued by Barack Obama.
"There's absolutely no way any US company can - unilaterally - protect themselves from Chinese, North Korean, etc. cyber attacks."
Unplug the computer.
"if you're a good general counsel, you wake up every day and you say, "I know there are people?maybe competitors, maybe the plaintiffs bar, maybe governments?people out there who want to do my company harm using legal tools."
"maybe the plaintiffs bar." Yaaas. Straight into my veins.