The Cybersecurity-Industrial Complex

The feds erect a bureaucracy to combat a questionable threat.

In the last two years, approximately 50 cybersecurity-related bills have been introduced in Congress. In May the White House released its own cybersecurity legislative proposal. The Federal Communications Commission and the Commerce Department have each proposed cybersecurity regulations of their own. Last year, Senate Armed Services Committee Chairman Carl Levin (D-Mich.) even declared that cyberattacks might approach “weapons of mass destruction in their effects.” A rough Beltway consensus has emerged that the United States is facing a grave and immediate threat that can only be addressed by more public spending and tighter controls on private network security practices.

But there is little clear, publicly verified evidence that cyberattacks are a serious threat. What we are witnessing may be a different sort of danger: the rise of a cybersecurity-industrial complex, much like the military-industrial complex of the Cold War, that not only produces expensive weapons to combat the alleged menace but whips up demand for its services by wildly exaggerating our vulnerability.

The Regulatory Urge

The proposals on the table run the gamut from simple requests for more research funding to serious interventions in the business practices of online infrastructure providers. The advocates of these plans rarely consider their costs or consequences.

At one end of the spectrum, there have been calls to scrap the Internet as we know it. In a 2010 Washington Post op-ed, Mike McConnell, former National Security Agency chief and current Booz Allen Hamilton vice president, suggested that “we need to reengineer the Internet to make attribution, geolocation, intelligence analysis and impact assessment—who did it, from where, why and what was the result—more manageable.” Former presidential cybersecurity adviser Richard Clarke has recommended the same. “Instead of spending money on security solutions,” he said at a London security conference last year, “maybe we need to seriously think of redesigning network architecture, giving money for research into the next protocols, maybe even think about another, more secure Internet.”

A re-engineered, more secure Internet is likely to be a very different Internet than the open, innovative network we know today. A government that controls information flows is a government that will attack anonymity and constrict free speech. After all, the ability to attribute malicious behavior to individuals would require users to identify themselves (or be identifiable to authorities) when logging on. And a capability to track and attribute malicious activities could just as easily be employed to track and control any other type of activity.

Many current and former officials, from Clarke to FBI Director Robert Mueller, have proposed requiring private networks to engage in deep packet inspection of Internet traffic, the online equivalent of screening passengers’ luggage, to filter out malicious data and flag suspicious activity. The federal government already engages in deep packet inspection on its own networks through the Department of Homeland Security’s “Einstein” program. Mandating the same type of monitoring by the Internet’s private backbone operators—essentially giving them not just a license but a directive to eavesdrop—would jeopardize user privacy. 

There have also been proposals at the FCC and in Congress for the certification or licensing of network security professionals, as well as calls for mandating security standards. While certification may seem harmless, occupational licensing mandates should never be taken lightly; they routinely restrict entry, reduce competition, and hamper innovation. Politicians have also called for substantial new government subsidies, including the creation of regional cybersecurity centers across the country to help medium-sized businesses protect their networks.

Many of the bills would mandate a new cybersecurity bureaucracy within either the Department of Homeland Security or the Defense Department. Many would also create new reporting requirements. For example, the administration’s proposed legislation requires that private firms deemed by the head of Homeland Security to be “critical infrastructure” must develop cybersecurity plans and have those plans audited by federally accredited third parties.

With proposals as intrusive and expensive as these, you might think the case for federal intervention is overwhelming. But it isn’t. Again and again, the regulators’ argument boils down to “trust us.”

The CSIS Commission

One of the most widely cited arguments for more federal involvement in online security was made by the Commission on Cybersecurity for the 44th Presidency, which unveiled its report in December 2008. The commission, assembled by the Center for Strategic and International Studies (CSIS), a foreign policy think tank, in February 2008, served as a sort of cybersecurity transition team for whoever the new president turned out to be. It was chaired by two members of Congress and composed of security consultants, academics, former government officials, and representatives of the information technology industry. Their report concluded that “cybersecurity is now a major national security problem for the United States” and urged the feds to “regulate cyberspace” by enforcing security standards for private networks.

Yet the commission offers little evidence to support those conclusions. There is a brief discussion of cyberespionage attacks on government computer systems, but the report does not explain how these particular breaches demonstrate a national security crisis, let alone one that “we are losing.” 

The report notes, for example, that Defense Department computers are “probed hundreds of thousands of times each day.” Yet it fails to mention that probing and scanning networks are the digital equivalent of trying doorknobs to see if they are unlocked—a maneuver available to even the most unsophisticated would-be hackers. The number of times a computer network is probed is not evidence of a breach, an attack, or even a problem.
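To see just how unsophisticated that doorknob-trying is, here is a minimal sketch of a TCP port probe in Python; the host and ports are placeholders for illustration, not any real target:

```python
import socket

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Try the 'doorknob': report whether a TCP port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Probe a few common service ports on the local machine.
for port in (22, 80, 443):
    print(port, "open" if probe("127.0.0.1", port) else "closed")
```

A loop like this, pointed at a range of addresses, is all a "probe" amounts to; counting such attempts says nothing about whether any door was actually opened.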


  • ||

One of the biggest proponents of this is Stewart Baker, the old undersecretary for policy under Bush. He swears it is the next form of warfare, like blitzkrieg was in the 1930s. I asked him once: if that is true, then where is this decade's Spanish Civil War where the concept is proven? He didn't have an answer other than "it will happen." Yeah, sure Stewart.

  • jj||

As a former internet security software engineer, I agree with the author's assessment that the threat is excessively overblown. From the way these bureaucracies and laws are being pushed by the government and media, I can only assume that there is a nefarious effort to undermine our freedoms in yet another fashion.

  • ||

    A lot of it is just careerism. Baker looks at cyber war as a way to become an important person. I am sure he honestly believes what he says. But it is a serious case of confirmation bias since he has everything to gain from the threat being real.


  • Stewart||

    The Chinese have already taken control of your "submit" button. That's all the proof I need.

  • Ah, So||

    The Chinese gave him ADD.

  • Old Mexican||

    "How about Global Thermonuclear War?"
    "Wouldn't you prefer to play a nice game of chess?"
    "Later. I would like Global Thermonuclear War."
    "Fine."
    "All right!!!!"

  • Old Mexican||

    The Regulatory Urge


    A mental disease, similar to autism, in which the patient shows no regards for the natural rights of others.

  • ||

    "A mental disease, similar to autism, in which the patient shows no regards for the natural rights of others."

...and the futile, pretentious endorsement of the patient's personal “expertise” to correct some ill-perceived imbalance of nature itself.

  • Scruffy Nerfherder||

    The funny part is that as DC goes broke, all these security contractors are going to be fighting for a decreasing pool of wealth.

The concern I have about "cyber" security initiatives is the capabilities they will give the gov't when it goes looking for money, which it inevitably must do. The kiddie porn witch hunt will pale in comparison to what they will do when looking for something to fine or seize.

  • jj||

It has been my experience that the private sector is a far better protector of software and networks than the government can ever be. A regulatory agency that lumbers at the speed of a glacier cannot function adequately in a world where technology changes literally overnight.

  • strat||

    Bingo, jj. Remember the "Computer Security Act of 1987?" There were deadlines for USG Departments and Agencies to just come up with a plan, not to actually implement anything, and still some ridiculous majority never complied with the law in time.

    The interesting question is whether the "cloud" makes this better or worse.

  • Old Mexican||

There have also been proposals at the FCC and in Congress for the certification or licensing of network security professionals, as well as calls for mandating security standards.


    And certainly, calls from the industry to limit their competition, which is the REAL reason behind licensing laws and certifications.

  • strat||

    I'd be careful who we consider "the industry." Let's just say that people in the training business are working very hard on these policy proposals, to the point of having quotes in the preambles of the draft bills.

    A lot of us who actually do/have done the work find that distasteful in the extreme.

  • Old Mexican||

    Re: Strat,

    I'd be careful who we consider "the industry."


    Cui bono? That would be them.

  • Tim||

At last! Something new to fear.

  • anon||

    Phew! I was starting to get tired of fearing muslim extremists, debt defaults, etc.

  • Mr. Obvious||

    Bring back the Cold War!

  • kinnath||

    I miss Tricky Dick

  • The Taliban||

    us too!


  • ||

    There's a really simple way to make your network secure from "cyber-attacks". Unplug it from the internet.

    Meanwhile, let the rest of us take our chances.

  • anon||

    That's not true, remember the Iran nuke plant worm?

    Don't misconstrue this to be an argument in favor of the fed's power grab. It's not. I'm just saying, security is not simply a matter of having a closed circuit network.

  • rather||

PS, "Unplug it from the internet."

    How will government workers view porn?

  • fish||

    What we are witnessing may be a different sort of danger: the rise of a cybersecurity-industrial complex, much like the military-industrial complex of the Cold War, that not only produces expensive weapons to combat the alleged menace but whips up demand for its services by wildly exaggerating our vulnerability.

I don't know. I like the notion of a guy in a $350-million-a-copy F-22 squaring off against a barefoot Afghan carrying a pre-WW1 Lee Enfield rifle. There's a certain symmetry to it.

  • Paul||

    The ultimate in Net Neutrality.

  • strat||

    http://www.youtube.com/watch?v=jbmc7roekSM

    http://www.wired.com/wired/archive/8.07/haven.html

    Relevant, or it would have been if some German guy with more guns hadn't taken it over.

  • ||

    It's simply amazing how anything the government does not control now potentially poses a threat, and therefore they must control it. Convenient, no?

  • ||

    it's the essence of statism. people need to be protected from both themselves and from outside threats, and govt. is the only entity that can do that.

  • strat||

    Pay no attention to that armed citizen behind the curtain.

  • tarran||

    I think dunphy was describing how statists think, not actually describing how it ought to be.

  • ||

    pip and coeus (sp?) are to law enforcement analysis what bull connor and david duke were to race analysis.

    reflexive, tiresome, illogical bigotry.

    you can just feel the hate dripping from the keyboard when they type.

  • Apostate Jew||

    So sensitive.

    Must be what makes you such a great policeman.

  • ||

    i'm sensitive and fragile. like fine china

  • Pip||

    Fuck off you lying cunt. You lied. Plain and simple.

  • ||

    smooches! pip!

  • William Rinehart||

    Exactly, and it is not just a US thing. Sarkozy made the same kind of argument a couple of weeks back

    http://blog.williamrinehart.co.....t-anarchy/

  • strat||

    As someone who has worked in that area for the better part of 25 years, and used to break into things for a living, I'll just say that it really is ugly out there, both in terms of vulnerabilities we allow to fester for years and in terms of players looking to score hits, be they militaries, organized criminals, or people with an axe to grind.

    All that having been said, I completely agree with the exhortation by the authors to apply some considered analysis and nuanced thought to policy proposals before trotting them out.

    We have legislators floating bills that even their staffers admit are over the top because they wanted to foster discussion. Surely there is some instrument less blunt than DRAFT LAW that the USG can use to start discussions.

    Oh wait, there might not be. The model used by the U.S. Government, even when well-meaning and considerate of the variety of stakeholders, is inadequate to allow them to actually identify and contact those stakeholders in a timely fashion.

    It's possible to get a lot done by contacting senior executives of leading technology firms, but that hardly represents all of the people with an interest in the security of their information.

    Here's one example: I would never call Facebook "critical infrastructure" (despite the fact that some municipalities are beginning to experiment with it for disaster notifications), but you can't deny that a service with 500 million users is relevant. At the very least it's a huge attack surface or potential vector for attacks.

    Then you have the hundreds of little startups who might, at any moment, displace some entrenched player, or cause them to be disused or marginalized. At least there's one part of our economy where there's a little creative destruction. It does make things difficult for policymakers though when they have no idea who is relevant this month, or how to contact them.

    I used to be hired to break into banks, hospitals, energy companies (yes, even the ones with nuclear power plants), and I can assure folks that we always found ways in. The recent alarm over SCADA (supervisory control and data acquisition systems) is not unwarranted. Some of those things are still running decades-old operating system software, without patches for fear that it's too "brittle." I won't even get into the fact that the software license usually forbids deployment into exactly that sort of high-risk operation.

There are good reasons why some specific information on actors and tools or tradecraft is restricted from public disclosure, but it is a given that if governments learn to better share information with their own populations about threats, we'd all be better off.

  • ||

    If what you say is true and I don't doubt it is, then isn't the proper solution limiting the potential damage that can result from an intrusion rather than going crazy trying to limit intrusions, since you can't stop them anyway?

  • strat||

    John, that's an astute question. The truth is we should do our best to mitigate both. I think this distinction is why we've seen dramatic growth in tools classed under "data leakage (or loss) prevention."

    Someone finally realized that if the _information_ is what we're trying to protect, it might make sense to have a component actually designed to do that. People used to scoff at whether "dirty word searches" would actually be practical at preventing data from fleeing systems when it shouldn't. Now it's a growth industry.

    My personal hot button, of which I was reminded this week, is that there is still really no secondary market for IT security risk. Sure, insurance companies will sell you "hacker insurance" but the reality is that the limits are low enough to be mere gestures, and they don't have anything resembling real actuarial data. They offer policies just so that they can claim to be doing it and competitive, but it's not real from any significant business perspective.

    If I could buy options on IT risk, I'd be retired on a private island somewhere.
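The "dirty word search" strat describes above can be sketched in a few lines; the patterns below (SSN-like and card-number-like strings) are illustrative stand-ins, not any DLP product's actual rule set:

```python
import re

# Illustrative patterns for sensitive data in outbound text.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # 123-45-6789
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),          # 13-16 digit runs
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of any flagged patterns found in outbound text."""
    return [name for name, pat in DLP_PATTERNS.items() if pat.search(text)]

print(scan_outbound("Invoice attached."))        # nothing flagged
print(scan_outbound("My SSN is 123-45-6789."))   # SSN pattern flagged
```

Real data-loss-prevention tools layer on validation (checksums, context, file fingerprints) to cut false positives, but the core idea is exactly this kind of pattern matching at the egress point.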

  • anon||

    You assume it's in the government's interest to share information. I don't think this is a correct premise to assume.

  • strat||

    Actually, in the case of the USG, it's on record as an accepted premise. The problem is that they are faced with overhauling quite a few processes (what isn't automated, after all?) and not discriminating too much or "picking winners" in terms of the entities with whom they'll share.

    I have to grudgingly admit that I have seen what I consider to be positive movement along those lines. Unfortunately, the effort moves at a pretty glacial pace between legal reviews, security level decisions, and the like. There are folks trying to make it better, at least with some appropriate initial communities, but nowhere near what the article's authors (I'm inferring) or I would prefer on a day-to-day basis.

  • Mr. Obvious||

    I want your job.

  • rather||

    Memo to self:
    Be nice to strat

  • strat||

    As policymaking goes, I can get on board with that.

  • ChicagoSucks||

    You should be careful who you piss off on the Internet

  • ||

    Emmanuel Goldstein, is that you?

  • strat||

    Ha! Hardly. He and I have had a few spirited discussions along the anarcho-capitalist vs anarcho-communist axis in the past though.

  • T||

    Some of those things are still running decades-old operating system software, without patches for fear that it's too "brittle."

    Extremely bitter knowing laugh, and one of many reasons why I don't work at my last job anymore.

  • strat||

    I feel your pain, T. I got into a debate with one of the luminaries in SCADA security where I asserted that a refusal to patch those systems might be one of the few cases where I'd consider it contributory negligence.

    He went non-linear on me and practically shouted that it wasn't practical to expect those people to patch their software and that I just didn't understand. I pretty much knew what I was facing after that.

  • ChicagoSucks||

    Nobody patches SCADA. Most implementations I've seen are on physically segregated networks. Good luck getting those past 3.1

  • strat||

    I am not a proponent of government regulation, but I sure as heck would like to see some more private sector standards that hold those people to account.

    Companies can complain all they want about the Payment Card Industry (PCI) standards, but there's no denying that effort is probably one of the single biggest improvements to the security of normal people's lives ever - whether they know it or not.

  • ChicagoSucks||

I love and hate the PCI DSS. It's prescriptive enough to be useful, but I don't like the enforcement mechanism. The problem I have with it is that the issuing banks that levy the rate hikes (which can be huge for an ML1 or ML2) are in many cases the perpetrators of non-compliance. I'm sure you've heard the horror stories: "our bank will only accept FTP-based transactions and is threatening to raise our interchange rate because our QSA says we're not compliant." That's bullshit, but nothing compared to arbitrary controls required by GLBA, HIPAA, or even SOX.

  • strat||

    Heh. I remember back when Fidelity would show up at their customers' sites with a router and say "Plug this in. No you can't see how it's configured, but if you want to transact with us you'll have to use it."

    The one thing that utterly confuses me with the PCI DSS is the "You either have to run audits or use a web application firewall." It's sort of like saying "You either have to take a bath, or get an annual physical." Comparing apples and giraffes. What's up with that?

  • ChicagoSucks||

    LOL, I hear ya man. I didn't figure I'd get into a discussion like this on H&R.

  • strat||

    On that note if any H&R people are going to be at Black Hat, drinks are in order.

  • ChicagoSucks||

    Unfortunately I won't be in attendance this year. Kick a few back for me.

  • Paul||

Heh, we just ran into the consequences of not patching in favor of platform stability. Got hit with the Conficker virus. Good times.

  • strat||

    As a chief scientist and friend once remarked: "You pays your money and you takes your chance."

    I'm really sorry to hear that. At least it happened now, where people have some handle on how to manage that. I hope you get a nice long break after the cleanup.

  • Paul||

    My mom died on the Friday of the outbreak and consequently I took the next week off. The rest of the team got it cleaned up that week. So I missed it.

    Now we're just in the mop-up stage.

  • strat||

    Sorry for your loss, Paul. It's not much of an upside, but better them than you.

  • Paul||

    Thanks. My phone was blowing up all that week with all the cleanup chatter on email.

  • ChicagoSucks||

    You beat me to it strat. I also have been in the information security field for many years - 10 of which were spent doing penetration testing (ethical hacking).

    I concur that the vulnerabilities we see today are pervasive and constantly changing. I think that if we removed a lot of the consumer protections that are in place today, more retail businesses would have a bigger incentive to increase security spend.

Think about the recent Sony hacks. Do you think Sony is going to lose a significant amount of business because a number of their customers had to check their credit card statements and possibly dispute a charge or two? Probably not. Take those protections away and Sony would be open to tremendous civil liability, at which point the security of their customers' information would be an acceptable cost of doing business.

    I'm a partner in an IS consulting firm, and it's always irritated me to have to "push the regulatory compliance button" to sell our services. It's very difficult to sell security in terms of cost savings (it's expensive) or competitive differentiation (customers are federally insured). So the only really effective way to sell security is to tie it to a gov't fine for noncompliance or breach notification costs.

I think that part of our problem is caused by this mechanism. If we really want to improve security (and, professionally speaking, we really should), we need to deregulate not just the information security industry, but also the banking, healthcare, credit, and other industries.

    Make security valuable to the public and they will demand it. Make customer data valuable to businesses and they will secure it.

  • strat||

    ChicagoSucks, I'm with you. I think the fundamental challenge with the business of security is that to actually demonstrate ROI in a slam dunk way, you'd have to prove a negative.

    "See, nothing happened!" is not an inspired, rousing path for an IT manager who wants budget to do what he needs.

  • ChicagoSucks||

Exactly. I had a client about 6 years ago who made wax. After my team completely "pantsed" them in front of the board, the CIO stood up and said "We make wax," implying that they had nothing to lose from a breach. My response was something along the lines of: "Sure, you're not a bank, or an insurance company, a three-letter agency, or a nuclear power plant... but what happens when you stop making wax?"

  • strat||

    It can be a thankless job. We sold our firm a while back, but I remember having to give bad news to a CEO who was building a new $X billion telecommunications system to the effect that their whole design was full of holes and that their big 4 consulting firm had completely neglected to think about security.

    I admit I briefly wondered whether I'd be seeing a black sedan following me after that briefing.

    I do NOT believe in security for security's sake, and I always hated and tried to avoid "selling the threat." I do however think that the whole industry has dropped the ball - there are cases where better security can also improve the user experience. Look at single sign on, for example, but don't get me started about the idiocy of passwords in 2011.

  • ChicagoSucks||

    I agree. Security for security's sake is a waste (and possibly reckless). While I do think security can improve the user experience, I don't see many instances where the improvement justifies the cost.

    Single sign on is very convenient, but it's also very expensive to implement. Federated identity projects alone can cost in the millions, but for what? So an actuary only needs to remember 1 password instead of 2? We could probably debate the pricing of security services and technology, but in our current market security is a cost and when a company's CISO reports to the CIO instead of the board, we have a cost within a cost center. When times get tough the budget axe tends to fall on IT and security usually takes a big hit.

    Passwords are pretty much pointless, but don't let that get out because most people think passwords == security.

  • Paul||

    Look at single sign on, for example, but don't get me started about the idiocy of passwords in 2011.

Group policy can mitigate a lot of that. Hell, our policy is so tough on passwords, I almost have to 'suggest' a password to the user because they can't understand CAPs plus numbers plus symbols, and eight characters minimum.

  • ChicagoSucks||

Group policy can mitigate a lot of that. Hell, our policy is so tough on passwords, I almost have to 'suggest' a password to the user because they can't understand CAPs plus numbers plus symbols, and eight characters minimum.

    It's not password complexity that's the issue. It's the concept of password technology that's outdated. Common tools like "pass the hash" and "rainbow tables" really make the case that single factor authentication is a dying concept.

  • strat||

    If you haven't seen the latest figures for how fast you can brute force passwords with graphics processing units, don't go look. You'll just lose sleep and feel bad.

    The Department of Defense said around 1985 (IIRC) that passwords were obsolete. Now they truly are, unless they're long and way more complicated than people actually use.
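The arithmetic behind that warning is easy to check; the guesses-per-second figure below is only an assumed order of magnitude for a GPU rig attacking fast unsalted hashes, not a measured benchmark:

```python
# Keyspace size vs. worst-case cracking time at an assumed guess rate.
GUESSES_PER_SEC = 10**9  # assumption: rough order of magnitude for GPU cracking

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SEC

# 8 characters drawn from the 95 printable ASCII symbols: a matter of weeks.
print(seconds_to_exhaust(95, 8) / 86400, "days")

# 16 characters from the same alphabet: astronomically long.
print(seconds_to_exhaust(95, 16) / (86400 * 365.25), "years")
```

Doubling the length, not piling on complexity rules, is what moves a password out of brute-force range, which is why short "complex" passwords give a false sense of security.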

  • kinnath||

    I used to tell people that the desk locks at work were just good enough to keep your friends out.

    Same for passwords.

  • strat||

    The act of learning how to pick locks reaffirmed my faith in humanity. When you realize just how badly most locks work, it becomes obvious that most people are basically honest.

  • Matrix||

I really love our passwords... change every so often... lower and upper case, numbers, and symbols, 2 of each. Not supposed to write it down, but should remember it. And we have a new one for every system we have to log in to.

  • Paul||

    I'm a partner in an IS consulting firm, and it's always irritated me to have to "push the regulatory compliance button" to sell our services. It's very difficult to sell security in terms of cost savings (it's expensive) or competitive differentiation (customers are federally insured). So the only really effective way to sell security is to tie it to a gov't fine for noncompliance or breach notification costs

    I've never been directly on the sales side of things, Chicago, but in general, when you've been in the business long enough, you should have a list of first-hand horror stories of lax security to use as a sales tool to sell it to the customer.

  • strat||

    I have tried to use the "due diligence" angle more than the horror story or "management by magazine" strategy. I don't mean due diligence in terms of investigation, but rather the basic set of things you have to do to demonstrate you're competent and care about fiduciary responsibilities.

    Security people have been their own worst enemies for pushing an "all or nothing" strategy. Reality hasn't worked that way and probably doesn't in ANY sphere of endeavour.

I think the overregulation of the past 2 decades (SOX, GLB, etc.) is awful, but it did make a nice check box about which to worry.

  • Paul||

    I have tried to use the "due diligence" angle more than the horror story or "management by magazine" strategy.

    This is probably why I'm not in sales.

    Security people have been their own worst enemies for pushing an "all or nothing" strategy.

This. I think that too many IS departments go overboard, spending millions on security trying to mitigate the most unlikely of threats, and it turns CIOs and CFOs off.

    I work for a hospital and HIPAA really twisted a lot of IT minds. A lot of guys in this business are now convinced that the Russians are circling the hospital in Black Helicopters trying to steal the contents of Mrs. Kravitz's urinalysis. Meanwhile they give out the Admin password like candy, their antivirus isn't up to date, and their servers aren't patched because they're afraid of stability issues.

    There's no contextual threat analysis.

    Virus infections: High
    Angry former employees: Medium to high.
    Chinese hackers stealing pregnancy reports from patients using a man-in-the-middle attack: Low

  • strat||

    You made my quote file.

I think people spend on security until they "feel" secure. Then they're done, unless something bad happens.

    Bruce Schneier has a TED talk about how people inaccurately perceive risk in pretty much every way possible. It's worth watching.

    I will say this though, because I'll lose membership in the secret-fellowship-of-hackers-who-run-the-world-and-stuff if I don't...

    The pregnancy report database is the steppingstone that the hacker will use to get to his REAL target, the wife of the defense contractor whom they want to blackmail.

  • ChicagoSucks||

    I do, and in my experience it's still a tough sell. The worst horror stories, ironically enough, involve breach cleanup costs as a result of non-compliance. But each industry is different to be honest. You really don't need to sell security to a financial services company for example.

  • Paul||

    You really don't need to sell security to a financial services company for example.

    If I were dumber, I'd have snapped out a response before thinking. Age and wisdom, while not having done a lot for me did make me smile at this.

  • Scruffy Nerfherder||

    Selling security is roughly equivalent to selling insurance. The same sales techniques apply.

  • ChicagoSucks||

Kind of. Insurance is a control that is used to transfer risk. Security typically involves itself with other controls that mitigate either the probability of a risk occurring or the impact of the threat. You're correct though; insurance is one way to address risk and can be sold in a similar way.

  • Sploithunter||

    I am with you guys on this. There are some aspects of cyber security that are overblown, but it is more real than most threats (much more real). The analysis of the threat in the article is certainly lacking; however, we shouldn't, as critical thinkers, dismiss the threat simply because the government believes it is real. If upon analysis we find a critical threat, we should analyze why the threat exists. Here I think you would find that the government has created a problem it now needs power, authority, and money to fix. It is the regulation and standardization of loss that is the biggest reason that cyber security sucks. The limited liability of companies for loss of data is why they over-collect and under-secure it. Combined with the all too frequently mentioned issue of having to justify your success through negative results ("no security incident happened this week, therefore we are doing our job and you need us"), there is a high likelihood of security being undervalued.

    PCI is good only because the playing field is so poor. However, it is the best example of business being more effective than government. PCI is also a monopoly for all intents and purposes, as VISA has taken the role of government in regulating auditors, with a high price of entry into the market. There is no competition, the service is poor, and there is an inherent conflict of interest between the auditor and their responsibilities to both VISA and the client. Real competition in this market would help, but will not work until the government quits limiting liability.

    *I recognize my conflict working in the industry for 10 years; however, I do what I do because I know there is a problem and the work does not depend on government spending (or even a good economy).

  • ||

    "I used to be hired to break into banks, hospitals, energy companies (yes, even the ones with nuclear power plants)"

    I worked for a guy who was a COO of a large cap ($10B+) combined utility and a member of the DOE's ESSG (Electricity Security Steering Group).

    So, he's at a meeting and some guy was brought in who claimed, as you did, that they had been able to breach security at nuke plants. So he asked how, because if it was true, they needed to address it. And they said, "We can't tell you. It's classified."

    His thought? Total. Bullshit.

  • ChicagoSucks||

    It's possible that he was full of shit. However, most of our clients who contract us for this type of work are really paranoid about it. It looks really bad on an audit report for starters, not to mention that you're asking somebody to demonstrate how to do something that could be disastrous to your company. Most organizations make us sign a confidentiality agreement. And the reports we write typically are only seen by a privileged few.

    I used to do a lot of work in utilities and energy. These are considered "critical infrastructure" by the USG and I would agree (though the USG tends to consider pretty much anything critical infrastructure these days). So without knowing if your guy was reputable or not it's hard to say. My bet would be that he's telling the truth.

    Most of my clients struggle to fix the vulnerabilities that we identify, let alone the root causes behind them. I would venture to guess that the reason your guy didn't describe how to breach security at nuke plants (whatever that means) is that the method he used probably still works, and that sharing that information could expose him to civil liability.

  • strat||

    If a private contractor said that, I'd be suspicious too, or think he was a bonehead who didn't know the difference between "confidential" and "classified." If a DOE red team person said that, I'd probably believe that it was classified.

  • Virginia||

    Will using the prefix cyber- make me look like an idiot?

    http://willusingtheprefixcyber.....idiot.com/

  • ||

    WOOHOO! I scored a 'Dangerous Moron'. Beat that, loserdopians!

  • ChicagoSucks||

    Yes

  • strat||

    They need to add "Am I writing a document for consumption within the U.S. Government?"

  • tarran||

    They do. You have to say yes to the "nonexistent reason" question to see it.

  • Warty||

    Two suffer life-threatening injuries on extreme-survival course.

    You didn't expect something you might not survive, did you, kids?

  • ||

    It's EXTREME! Did they have Mountain Dew?

  • ||

    No, but the bear did.

  • SIV||

    The bear could use a cold beverage if they were "spicy" campers:

    The hikers carried three canisters of bear spray, but there was no initial indication that the hikers used the repellent, Palmer said.

  • Amakudari||

    I'm pretty sure that yellow stuff in a cup that Bear has isn't Mountain Dew.

  • BigBob||

    Well played.

  • Brett L||

    We obviously need to ban either summer camps or both summer and caps.

  • Brett L||

    *camps

    Okay, squirrels, I fucking get it. Preview is my fucking friend. I'm taking a hammer to my fingers.

  • WTF||


    The hikers carried three canisters of bear spray

    I wouldn't go wandering around bear country with anything less than a .44 magnum strapped to my hip.

  • kinnath||

    I knew an experienced camper who would wear bells when out hiking in the backwoods. The bears are less interested in running into us than we are in them.

  • WTF||

    The bears are less interested in running into us than we are in them.

    This is usually true - the firepower is for the exceptions to the rule. Better to have it and not need it than need it and not have it.

  • kinnath||

    Few organizations will give loaded firearms to minors and then let them wander off in the wilderness.

    I assume that "bear repellent" is pretty much useless even if you have time to get it out.

  • Amakudari||

    Bear spray is pepper spray; it just creates a larger mist with more range. It can definitely repel bears, and it's probably a much better idea than a handgun.

    I don't know why the kids didn't use it, although I assume they had it in their bags or something.

  • kinnath||

    Don't fuck with the bears

  • Ditka||

    Da Bearssss

  • Dushku||

    The dolls

  • Nick||

    "If the buildup to the Iraq war teaches us anything, it is that we cannot accept the word of government officials with access to classified information as the sole evidence for the existence or scope of a threat."

    That is an excellent point.

  • strat||

    Isn't that why God made the media and cameras? Oh wait, never mind.

  • Sploithunter||

    AGREED; however, neither can you simply dismiss it on the same basis.

  • ||

    Question for strat:

    Some time ago, I was talking to some cyber-security folks, and they said that the biggest weakness most supposedly secure systems had was "social engineering" (that is, conning someone into giving you access).

  • ChicagoSucks||

    There's a common saying in the field: the best hackers hack people, not computers. Human beings are the biggest weakness in a system. Social engineering in this context refers to the act of "hacking people." I used to do a fair amount of it back in my pentesting days, and it was almost always successful.

    The thought process is this: You can implement millions of dollars' worth of technical security controls, but if I can call your security and "ask" for the information I'm after, then what's the point? I don't necessarily agree with this thought process, because I still think there's merit in implementing a technically secure system. I interpret it to mean that it's very easy to neglect human error and the very real fact that people can circumvent your security measures just as easily as a virus can. To the layperson, most social engineering would appear at face value to be "con artist-ish," though most IS people would consider conning a form of social engineering.

    If you want to know more about social engineering you should look into Kevin Mitnick's book "The Art of Deception". It's a good read and doesn't use much technical jargon. Mitnick is *the* poster child for social engineering. You may have seen the movie (or read the book) "Catch Me if You Can" by Frank Abagnale. He was also a gifted social engineer.

  • ChicagoSucks||

    Edit: that should be "call your secretary." Sorry, I have security on the brain.

  • Sploithunter||

    Human beings are the biggest weakness in any system. That is the reason I will NEVER, NEVER, did I mention NEVER, endorse any REAL ID or other "secure" ID system. Any such system can always be compromised via the Great Wall method, i.e., just bribe someone. Any such system will just become the playing ground for the well connected and wealthy, even if they could technically secure the damn thing. The result would be that your average teenager would have a hard time getting a fake ID, but your connected terrorist should have no problem. Thanks Mr....Smith, have a nice flight.

    (The TSA rhetorical example was just that, an example only...No need for the near obligatory anti-TSA or flight safety posts)

  • ||

    My ball-point pen of choice uses the Frank Abagnale-approved "207" ink. Wonderful pen!

  • Amakudari||

    There was a good article on this in Bloomberg recently:

    The U.S. Department of Homeland Security ran a test this year to see how hard it was for hackers to corrupt workers and gain access to computer systems. Not very, it turned out.

    Staff secretly dropped computer discs and USB thumb drives in the parking lots of government buildings and private contractors. Of those who picked them up, 60 percent plugged the devices into office computers, curious to see what they contained. If the drive or CD case had an official logo, 90 percent were installed.

  • strat||

    That's certainly a huge issue, especially in firms that don't train their people well. People try to be helpful, so you can't exactly fault them sometimes. Unless they were trained first.

    It's hard to categorically say "this is the majority" of attacks, as there is a host of things that can lay an enterprise wide open.

    Remember that there are already-infected zombie machines out there by the millions.

    It may only take one subverted box to compromise an entire enterprise.
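[Editor's note: the "one subverted box" point combines with the DHS plug-in rates quoted above in a simple way. A back-of-envelope sketch: the 0.60 and 0.90 plug-in rates are from the quoted test, while the pickup rate and drop counts below are made-up assumptions, purely for illustration.]

```python
# Probability that at least one of n dropped drives ends up plugged in,
# given a plug-in rate among picked-up drives. The 0.60 (plain) and
# 0.90 (official-logo) rates come from the DHS test quoted above; the
# pickup rate is an assumed value, NOT from the test.
PICKUP_RATE = 0.5  # assumed fraction of dropped drives that get picked up

def p_at_least_one(n_drops, plugin_rate, pickup_rate=PICKUP_RATE):
    per_drive = pickup_rate * plugin_rate     # chance a single drive gets plugged in
    return 1 - (1 - per_drive) ** n_drops     # chance at least one of n succeeds

print(round(p_at_least_one(10, 0.60), 3))  # 10 plain drives
print(round(p_at_least_one(10, 0.90), 3))  # 10 logo-branded drives
```

Even with these conservative guesses, a handful of dropped drives makes at least one plug-in nearly certain, and as noted above, one subverted box may be all it takes.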

  • ||

    Is it just me, or is that super-duper high-tech operations center above positively littered with blue screens of death?

  • Brian R||

    Glad to see some people (strat & ChicagoSucks) already called out the author on his "this isn't really a problem" subtext. It clearly is a problem, even if government is not the solution.

    But if you really think it's not a problem, go have a look at what a bunch of kids who were just in it for the lulz pulled off, and consider what actual motivated professionals could do. Also, check out the Stuxnet virus and consider what it could have done had it been written to cause a nuclear accident instead of just slowing the operation down, and what something similar could do at any number of other industrial facilities.

    Sometimes people say scary things to whip up hysteria, but sometimes they say scary things because there are actual problems out there.

  • ||

    The devil's greatest achievement is convincing the world he does not exist.

  • ||

    Man, those old IMSAI 8080s were workhorses. With the blinking LED panel and toggle switches, and the screechy 8" Calcomp floppy drives ... yeah! Good times.

  • strat||

    Yeah, but they don't heat the basement as well as either my PDP-10 or my PDP-11s.

  • ||

    Disclaimer: I work in IT, specifically in network administration and security, so I am a bit biased here. I have worked in IT for 10 years, starting as a member of the US Navy and continuing since then through various private firms and government contractors.
    One of the largest problems with IT security is that virtually all the people who keep dragging it up have no idea what the fuck they are actually talking about. And I see this constantly. Very few people pitching doom-and-gloom scenarios, or saying from the other side that everything is just fine, have any actual experience as a network administrator, IT security officer, penetration tester (ethical hacker), or anything like that. They are strictly coming at it from an academic point of view and most often do not understand what the hell it is they are rambling on about.
    In the world of IT security you generally run into two kinds of people. The first assumes anything is secure until it's demonstrated otherwise, and often doesn't invest in proper security until caught with their pants down, despite the pleading of their IT staff. The second is under the impression that you can hack into the Pentagon and nuke Washington, DC from a Starbucks.
    Is our security a joke? Oh yeah, it sure as hell is. It's extremely easy to pilfer any sort of data you want from any major company. Though, oddly enough, the vast majority of attacks like this are done through simple low-tech hacking (social engineering) rather than over-the-top stunts. Can you blow up a pipeline? Sure, easy. The CIA did it, Stuxnet proves it's possible, and rogue systems administrators have brought down city traffic grids (in California) out of spite. But in all of these cases someone had to have physical access to the network. The control functions of a nuclear power plant are not on a network that touches the internet in any shape or form. Can you steal thousands of people's identities? Well… look at the recent debacle that happened to Sony, where some 80 million people's information was stolen.
    What often happens is that people who don't actually work in IT conflate all these sorts of things together. So when you're asked "could someone do xyz," the answer is pretty much always "yeah, they could," and the next thing you know, someone who doesn't even know what the OSI model, Kerberos, ACLs, CRLs, or any of the basic terminology in actual IT security is has stormed onto TV and is telling everybody that some kid in China is going to hack into hospitals and pull the plug on grandma. And since your average person is utterly computer illiterate, they believe it! Idiotic "news" companies like Fox (worst of the worst), claiming that LulzSec or Anonymous "hacked the CIA" when all they did was DDoS the website (the cyber equivalent of toilet-papering someone's house), are certainly not helping things either.
    The government is correct that the current state of things is ripe for disaster, but the solution of "monitor everybody, monitor them now" is hilarious from several angles. For one, it's not even possible against someone who actually knows what they are doing. Second, most real damage is done by actors gaining or having access to a private network (see Stuxnet, see the traffic light fiascos), and monitoring doesn't stop this at all. In other cases it's done by social engineering (see the HBGary hack), or by extremely poor security systems on the part of "it's all about the money" private corporations; see Sony.
    Finally, if you’re worried about privacy it’s not the government you should worry about here. Google wardrived the entire nation under the guise of street view and vacuumed up every one’s MAC address and a slew of other private data even though they claimed not to, so they can track a phone that was in LA as it moves across the country, they can do this with your laptop as well. And let’s not ignore apple’s insane policy of storing where your iphone goes and moves in nearly real time. Attacks on your cyber privacy by private industry are constant and intrusive. And since most of the public isn’t literate in the technology or too busy worrying about the government they get away with this nonsense constantly.

    /end rant.

  • ||

    The article is a very well-written piece on the ambiguities of the cyber-policies being promoted by various government entities.

    The problem, and the author alludes to this, is that practically all of them are completely without any foundation, and no sophisticated security implementation would actually work anyway.

    First off, the government's Einstein security program has not worked out too well, since it is incomplete and as a result has not been successfully implemented across all government network infrastructures.

    Second, by the time the government does generate a security solution it will be obsolete in the face of increasing hacker sophistication.

    The US is right to be concerned about concerted efforts on the part of hackers to overcome its security defenses. However, if hackers want to take down the Internet, there is a much easier way of doing it than attacking government installations: simply attack and take down the 13 worldwide root name servers that support Internet addressing. With 11 located in a single place in the US and the other two in Europe, this should not be a very hard thing to do for those with the skills.

    In any event, what the government fails to realize is that while they are planning, so too are their cyber opponents, and with software in general being highly fragile and very porous, there is little the government can do except use some common sense, which I doubt exists in such environs.

    The US government unleashed the Internet and it is simply too late to put the "genie back into the bottle"...

  • Bradley||

    Taking down some root nameservers is hardly destroying the Internet.
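[Editor's note: the "13 servers" figure refers to 13 root name server hostnames, not 13 machines; each name is anycast from many physical sites worldwide. A minimal sketch, where the hostnames are the real, well-known root server names and the snippet itself is just illustrative:]

```python
# The DNS root zone is served under 13 hostnames, a.root-servers.net
# through m.root-servers.net. Each hostname is anycast: many physical
# server instances around the world answer for the same address, so
# "13 servers" greatly understates the real redundancy.
ROOT_SERVERS = [f"{letter}.root-servers.net" for letter in "abcdefghijklm"]

for name in ROOT_SERVERS:
    print(name)
```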

  • ||

    "The control functions of a nuclear power plant are not on a network that touches the internet in any shape or form." There you have it. My network people have assured me that the "power grid" is not on the internet either, nor should the control systems for public water and sewer systems. Remember stuxnet? How did it infect Iranian computers controlling centrifuges? Thumb drives. Remember sneakernet? Same thing. An employee hand carries an infected device across the chasm that separates the internet from command and control systems that are separate and independent of the internet.
    All of these threats of "cyberwarfare" boil down to the equivalent of human error.
