The Cybersecurity-Industrial Complex
The feds erect a bureaucracy to combat a questionable threat.
In the last two years, approximately 50 cybersecurity-related bills have been introduced in Congress. In May the White House released its own cybersecurity legislative proposal. The Federal Communications Commission and the Commerce Department have each proposed cybersecurity regulations of their own. Last year, Senate Armed Services Committee Chairman Carl Levin (D-Mich.) even declared that cyberattacks might approach "weapons of mass destruction in their effects." A rough Beltway consensus has emerged that the United States is facing a grave and immediate threat that can only be addressed by more public spending and tighter controls on private network security practices.
But there is little clear, publicly verified evidence that cyber attacks are a serious threat. What we are witnessing may be a different sort of danger: the rise of a cybersecurity-industrial complex, much like the military-industrial complex of the Cold War, that not only produces expensive weapons to combat the alleged menace but whips up demand for its services by wildly exaggerating our vulnerability.
The Regulatory Urge
The proposals on the table run the gamut from simple requests for more research funding to serious interventions in the business practices of online infrastructure providers. The advocates of these plans rarely consider their costs or consequences.
At one end of the spectrum, there have been calls to scrap the Internet as we know it. In a 2010 Washington Post op-ed, Mike McConnell, former National Security Agency chief and current Booz Allen Hamilton vice president, suggested that "we need to reengineer the Internet to make attribution, geolocation, intelligence analysis and impact assessment—who did it, from where, why and what was the result—more manageable." Former presidential cybersecurity adviser Richard Clarke has recommended the same. "Instead of spending money on security solutions," he said at a London security conference last year, "maybe we need to seriously think of redesigning network architecture, giving money for research into the next protocols, maybe even think about another, more secure Internet."
A re-engineered, more secure Internet is likely to be a very different Internet than the open, innovative network we know today. A government that controls information flows is a government that will attack anonymity and constrict free speech. After all, the ability to attribute malicious behavior to individuals would require users to identify themselves (or be identifiable to authorities) when logging on. And a capability to track and attribute malicious activities could just as easily be employed to track and control any other type of activity.
Many current and former officials, from Clarke to FBI Director Robert Mueller, have proposed requiring private networks to engage in deep packet inspection of Internet traffic, the online equivalent of screening passengers' luggage, to filter out malicious data and flag suspicious activity. The federal government already engages in deep packet inspection on its own networks through the Department of Homeland Security's "Einstein" program. Mandating the same type of monitoring by the Internet's private backbone operators—essentially giving them not just a license but a directive to eavesdrop—would jeopardize user privacy.
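To make concrete what deep packet inspection involves, here is a minimal, hypothetical sketch of signature-based payload scanning; the signatures and the inspect_packet helper are invented for illustration and do not reflect how Einstein or any commercial product is built. The point is simply that DPI works by reading the contents of traffic, which is exactly why mandating it raises privacy concerns.

```python
# Minimal, hypothetical sketch of signature-based deep packet inspection.
# Real DPI systems run on live traffic at line rate; this toy version just
# scans raw payload bytes for known byte patterns ("signatures").
SIGNATURES = {
    b"cmd.exe /c": "suspicious command string",
    b"\x90\x90\x90\x90\x90\x90": "possible NOP sled (shellcode)",
}

def inspect_packet(payload: bytes) -> list[str]:
    """Return the names of any signatures found in this packet's payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

print(inspect_packet(b"GET / HTTP/1.1\r\nHost: example.com"))  # []
print(inspect_packet(b"...cmd.exe /c del C:\\Windows\\*..."))  # flagged
```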
There have also been proposals at the FCC and in Congress for the certification or licensing of network security professionals, as well as calls for mandating security standards. While certification may seem harmless, occupational licensing mandates should never be taken lightly; they routinely restrict entry, reduce competition, and hamper innovation. Politicians have also called for substantial new government subsidies, including the creation of regional cybersecurity centers across the country to help medium-sized businesses protect their networks.
Many of the bills would mandate a new cybersecurity bureaucracy within either the Department of Homeland Security or the Defense Department. Many would also create new reporting requirements. For example, the administration's proposed legislation requires that private firms deemed by the head of Homeland Security to be "critical infrastructure" develop cybersecurity plans and have those plans audited by federally accredited third parties.
With proposals as intrusive and expensive as these, you might think the case for federal intervention is overwhelming. But it isn't. Again and again, the regulators' argument boils down to "trust us."
The CSIS Commission
One of the most widely cited arguments for more federal involvement in online security was made by the Commission on Cybersecurity for the 44th Presidency, which unveiled its report in December 2008. The commission, assembled by the Center for Strategic and International Studies (CSIS), a foreign policy think tank, in February 2008, served as a sort of cybersecurity transition team for whoever the new president turned out to be. It was chaired by two members of Congress and composed of security consultants, academics, former government officials, and representatives of the information technology industry. Their report concluded that "cybersecurity is now a major national security problem for the United States" and urged the feds to "regulate cyberspace" by enforcing security standards for private networks.
Yet the commission offers little evidence to support those conclusions. There is a brief discussion of cyberespionage attacks on government computer systems, but the report does not explain how these particular breaches demonstrate a national security crisis, let alone one that "we are losing."
The report notes, for example, that Defense Department computers are "probed hundreds of thousands of times each day." Yet it fails to mention that probing and scanning networks are the digital equivalent of trying doorknobs to see if they are unlocked—a maneuver available to even the most unsophisticated would-be hackers. The number of times a computer network is probed is not evidence of a breach, an attack, or even a problem.
More ominously, the report warns: "Porous information systems have allowed opponents to map our vulnerabilities and plan their attacks. Depriving Americans of electricity, communications, and financial services may not be enough to provide the margin of victory in a conflict, but it could damage our ability to respond and our will to resist. We should expect that exploiting vulnerabilities in cyber infrastructure will be part of any future conflict."
An enemy able to take down our electric, communications, and financial networks at will would indeed be a serious threat. And it may well be the case that the state of security in government and private networks is deplorable. But the CSIS report cites no reviewable evidence to substantiate this supposed danger. There is no support for the claim that opponents have "mapped vulnerabilities" and "planned attacks." Neither the probing of Pentagon computers nor the cited cases of cyberespionage—for instance, the hacking of a secretary of defense's unclassified email—have any bearing on the probability of a successful attack on the electrical grid.
Nevertheless, the commission concludes that tighter regulation is the only way toward greater security. It is "undeniable," the report claims, that "market forces alone will never provide the level of security necessary to achieve national security objectives." But without any verifiable evidence of a threat, how are we to know what exactly the "appropriate level of cybersecurity" is and whether market forces are providing it? With at least some security threats, such as industrial espionage and sabotage, private industry has a strong incentive to protect itself. If there is a market failure here, the burden of proof is on those who favor regulation. So far they have not delivered.
Although they never explicitly say so, the report's authors imply that they are working from classified sources, which might explain the dearth of reviewable evidence. To its credit, the commission laments what it considers the "overclassification" of information related to cybersecurity. But this excessive secrecy should not serve as an excuse. If the buildup to the Iraq war teaches us anything, it is that we cannot accept the word of government officials with access to classified information as the sole evidence for the existence or scope of a threat.
Cyberwar: The Book
If the CSIS report is the document cyberhawks cite most, the most widely read brief for their perspective is the 2010 bestseller Cyber War, by Richard Clarke and Robert Knake, a cybersecurity specialist at the Council on Foreign Relations. This book makes the case that U.S. infrastructure is extremely vulnerable to cyber attack by enemy states. Recommendations include increased regulation of electrical utilities and Internet service providers.
"Obviously, we have not had a full-scale cyber war yet," Clarke and Knake write, "but we have a good idea what it would look like if we were on the receiving end." The picture they paint includes the collapse of the government's classified and unclassified networks, the release of "lethal clouds of chlorine gas" from chemical plants, refinery fires and explosions across the country, midair collision of 737s, train derailments, the destruction of major financial computer networks, suburban gas pipeline explosions, a nationwide power blackout, and satellites in space spinning out of control. In this world, they warn, "Several thousand Americans have already died, multiples of that number are injured and trying to get to hospitals.…In the days ahead, cities will run out of food because of the train-system failures and the jumbling of data at trucking and distribution centers. Power will not come back up because nuclear plants have gone into secure lockdown and many conventional plants have had their generators permanently damaged. High-tension transmission lines on several key routes have caught fire and melted. Unable to get cash from ATMs or bank branches, some Americans will begin to loot stores." All of which could be the result of an attack launched "in fifteen minutes, without a single terrorist or soldier appearing in this country."
Clarke and Knake assure us that "these are not hypotheticals." But the only verifiable evidence they present relates to several well-known distributed denial of service (DDOS) attacks. A DDOS attack works by flooding a server on the Internet with more requests than it can handle, thereby causing it to malfunction. A person carrying out a DDOS attack will almost certainly produce this flood of requests with a botnet—a network of computers that have been compromised without their users' knowledge, usually through a virus. Vint Cerf, one of the fathers of the Internet and Google's chief Net evangelist, has estimated that possibly a quarter of personal computers in use today are compromised and placed in unwilling service of a botnet.
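As a rough illustration of the mechanics, the sketch below shows the kind of per-source rate counting a server or its upstream provider might use to notice a request flood; the window and threshold values are arbitrary assumptions, not any vendor's actual defense.

```python
# A minimal sketch of flood detection: count recent requests per source and
# flag sources that exceed a threshold. Against a large botnet, where traffic
# comes from thousands of addresses, per-source counting alone is not enough,
# which is why real mitigation usually happens upstream at providers or CDNs.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100          # illustrative threshold only

recent = defaultdict(deque)            # source IP -> timestamps of recent hits

def looks_like_flood(source_ip, now=None):
    """Record one request and report whether this source exceeds the limit."""
    now = time.time() if now is None else now
    hits = recent[source_ip]
    hits.append(now)
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()                 # discard hits older than the window
    return len(hits) > MAX_REQUESTS_PER_WINDOW
```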
Clarke and Knake cite several well-known DDOS attacks, such as the attacks on Estonia in 2007 and Georgia in 2008, both widely suspected to have been coordinated by Russia. They also mention an attack on U.S. and NATO websites in 1999 after American bombs fell on the Chinese embassy in Belgrade. And they cite a July 4, 2009, attack on American and South Korean websites, widely attributed to North Korea. These reputedly state-sponsored operations, along with the hundreds of thousands of other DDOS attacks each year by private vandals, are certainly a sign of how vulnerable publicly accessible servers can be. They are not, however, evidence of the capability necessary to derail trains, release chlorine gas, or bring down the power grid.
The authors admit that a DDOS attack is often little more than a nuisance. The 1999 attack saw websites temporarily taken down or defaced, but it "did little damage to U.S. military or government operations." Similarly, the 2009 attacks against the United States and South Korea caused several government agency websites, as well as the websites of the NASDAQ Stock Market, the New York Stock Exchange, and The Washington Post, to be intermittently inaccessible for a few hours. But they did not threaten the integrity of those institutions. In fact, the White House's servers were able to deflect the attack easily thanks to the simple technique of "edge caching," which involves serving Web content from multiple sources, in many cases servers geographically close to users.
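The edge caching the White House's servers relied on is conceptually simple: replicate the same content on servers in several locations and steer each client to a nearby copy, so no single machine absorbs the whole load. The sketch below is a deliberately crude illustration using made-up coordinates and a naive distance measure, not how any particular content delivery network actually routes requests.

```python
# Toy sketch of edge caching: pick the replica "closest" to the client.
# Node locations and the squared-difference distance are illustrative only;
# real systems use DNS or anycast routing and proper geographic distance.
EDGE_NODES = {
    "us-east": (38.9, -77.0),
    "us-west": (37.4, -122.1),
    "eu-west": (53.3, -6.3),
}

def nearest_node(client_lat, client_lon):
    """Return the name of the edge node nearest to the client."""
    def score(name):
        lat, lon = EDGE_NODES[name]
        return (lat - client_lat) ** 2 + (lon - client_lon) ** 2
    return min(EDGE_NODES, key=score)

print(nearest_node(40.7, -74.0))   # a client near New York -> "us-east"
```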
Without any formal regulation mandating that it be done, the affected agencies and businesses worked with Internet service providers to filter out the attacks. Once the attackers realized they were no longer having an effect, the vandalism stopped. Georgia, hardly the world's richest or most technologically sophisticated country, similarly addressed attacks on its websites by moving them to more resilient servers hosted outside of its borders.
Clarke and Knake recognize that DDOS is a "primitive" form of attack that would not pose a major threat to national security. Yet DDOS attacks make up the bulk of the evidence for the dire threat they depict. If we have no verifiable evidence of the danger we're in, they write, it is merely because the "attackers did not want to reveal their more sophisticated capabilities, yet." With regard to the Georgian and Estonian episodes, they argue that the "Russians are probably saving their best cyber weapons for when they really need them, in a conflict in which NATO and the United States are involved."
When Clarke and Knake venture beyond DDOS attacks, their examples are easily debunked. To show that the electrical grid is vulnerable, for example, they suggest that the Northeast power blackout of 2003 was caused in part by the "Slammer" worm, which had been spreading across the Internet around that time. But the 2004 final report of the joint U.S.-Canadian task force that investigated the blackout explained clearly that no virus, worm, or other malicious software contributed to the power failure. Clarke and Knake also point to a 2007 blackout in Brazil, which they believe was the result of criminal hacking of the power system. Yet separate investigations by the utility company involved, Brazil's independent systems operator, and the energy regulator all concluded that the power failure was the result of soot and dust deposits on the high-voltage insulators on transmission lines.
Before we pursue the regulations that Clarke and Knake advocate, we should demand more precise evidence of the threat they portray and the probability that it will materialize. That will require declassification and a more candid, on-the-record discussion.
Media Panic
While Cyber War has been widely criticized in the security trade press, the popular media have tended to take the book at its word. Writing in The Wall Street Journal in December 2010, U.S. News & World Report Editor-in-Chief Mort Zuckerman warned that enemy hackers could easily "spill oil, vent gas, blow up generators, derail trains, crash airplanes, cause missiles to detonate, and wipe out reams of financial and supply-chain data." The sole source for his column, and for his recommendation that the federal government establish a cybersecurity agency to regulate private networks, was Clarke and Knake's "revealing" book. The New York Times also endorsed Cyber War, sweeping aside skepticism about the book's doomsday scenarios by noting that Clarke, who had warned the Bush and Clinton administrations about the threat from Al Qaeda before 9/11, has been right in the past.
Then there was the front-page article that The Wall Street Journal published in April 2009 announcing that the U.S. power grid had been penetrated by Chinese and Russian hackers and laced with "logic bombs"—computer programs that can be triggered remotely to cause damage. As with Judith Miller's notorious New York Times articles on Iraq's alleged weapons of mass destruction, the only sources for the story's claim that key infrastructure has been compromised were anonymous U.S. intelligence officials. With little specificity about the alleged infiltrations, readers were left with no way to verify the claims. The article did cite a public pronouncement by senior CIA official Tom Donahue that a cyber attack had caused a power blackout overseas. But Donahue's pronouncement is what Clarke and Knake cite to support their claim that cyber attacks caused a blackout in Brazil, which we now know is untrue.
The author of the Journal article, Siobhan Gorman, contributed to another front-page cybersecurity scoop claiming that spies had infiltrated Pentagon computers and stolen terabytes of data related to the F-35 Joint Strike Fighter. The only sources for that April 2009 report were "current and former government officials familiar with the attacks." Later reporting by the Associated Press, also citing anonymous officials, found that no classified information was compromised in the breach.
The Brazil blackout was also the subject of a 2009 60 Minutes exposé on cyberwar. To back up its claim that the blackouts were the result of cyber attacks, the show cited only anonymous "prominent intelligence sources." The segment also featured an interview with Mike McConnell, who claimed that a blackout was within the reach of foreign hackers and that the United States was not prepared.
In February 2010, The Washington Post granted McConnell 1,400 words to make his case. He told readers: "If an enemy disrupted our financial and accounting transactions, our equities and bond markets or our retail commerce—or created confusion about the legitimacy of those transactions—chaos would result. Our power grids, air and ground transportation, telecommunications, and water-filtration systems are in jeopardy as well." Rather than offering evidence to corroborate this fear, McConnell pointed to corporate espionage generally, and specifically to an incident in which Google's Gmail service had been compromised—another instance of espionage attributed to China.
The Cybersecurity-Industrial Complex
Washington is filled with people who have a vested interest in conflating and inflating the threats to our digital security. In his famous farewell address to the nation in 1961, President Dwight Eisenhower warned against the dangers of what he called the "military-industrial complex": an excessively close nexus between the Pentagon, defense contractors, and elected officials that could lead to unnecessary expansion of the armed forces, superfluous military spending, and a breakdown of checks and balances within the policymaking process. Eisenhower's speech proved prescient.
Cybersecurity is a big and booming industry. The U.S. government is expected to spend $10.5 billion a year on information security by 2015, and analysts have estimated the worldwide market to be as much as $140 billion a year. The Defense Department has said it is seeking more than $3.2 billion in cybersecurity funding for 2012.
Traditional defense contractors, both to hedge against hardware cutbacks and get in on the ground floor of a booming new sector, have been emphasizing cybersecurity in their competition for government business. Lockheed Martin, Boeing, L-3 Communications, SAIC, and BAE Systems have all launched cybersecurity divisions in recent years. Other defense contractors, such as Northrop Grumman, Raytheon, and ManTech International, have also invested in information security products and services.
Traditional I.T. firms such as McAfee and Symantec also see more opportunities to profit from cybersecurity business in both the public and private sectors. As one I.T. market analyst put it in a 2010 Bloomberg report: "It's a cyber war and we're fighting it. In order to fight it, you need to spend more money, and some of the core beneficiaries of that trend will be the security software companies." I.T. lobbyists, too, have pushed hard for cybersecurity budget increases. Nir Zuk, chief technology officer at Palo Alto Networks, complained to The Register last year that "money gets spent on the vendors who spend millions lobbying Congress."
Meanwhile, politicians have taken notice of the opportunity to bring more federal dollars to their states and districts. Recently, for example, the Air Force established Cyber Command, a new unit in charge of the military's offensive and defensive cyber capabilities. Cyber Command allows the military to protect its critical networks and coordinate its cyber capabilities, an important function. But the pork feeding frenzy that it touched off offers a useful example of what could happen if legislators or regulators mandate similar buildups for private networks.
Beginning in early 2008, towns across the country sought to lure Cyber Command's permanent headquarters. Authorities in Louisiana estimated that the facility would bring at least 10,000 direct and ancillary jobs, billions of dollars in contracts, and millions in local spending. Politicians naturally saw the command as an opportunity to boost local economies. Governors pitched their respective states to the secretary of the Air Force, a dozen congressional delegations lobbied for the command, and Louisiana Gov. Bobby Jindal even lobbied President George W. Bush during a meeting on Hurricane Katrina recovery. Many of the 18 states vying for the command offered gifts of land, infrastructure, and tax breaks.
Bossier City, Louisiana, proposed a $100 million "Cyber Innovation Center" office complex next to Barksdale Air Force Base and got things rolling by building an $11 million bomb-resistant "cyber fortress," complete with a moat. Yuba City, California, touted its proximity to Silicon Valley. Colorado Springs pointed to the hardened location of Cheyenne Mountain, headquarters for NORAD. In Nebraska the Omaha Development Foundation purchased 136 acres of land just south of Offutt Air Force Base and offered it as a site.
The Air Force ultimately established Cyber Command HQ at Fort Meade, Maryland, integrated with HQ for the National Security Agency. In the run-up to the announcement, Sen. Barbara Mikulski (D-Md.) proclaimed, "We are at war, we are being attacked, and we are being hacked."
Proposed cybersecurity legislation presents more opportunities for pork spending. The Cybersecurity Act of 2010, proposed by Sens. Jay Rockefeller (D-W. Va.) and Olympia Snowe (R-Maine), called for the creation of regional cybersecurity centers across the country, a cyber scholarship-for-service program, and myriad cybersecurity research and development grants.
Sensible Steps
Before enacting sweeping changes to stop cybersecurity threats, policy makers should clear the air with some simpler steps.
First: Stop the apocalyptic rhetoric. The alarmist scenarios dominating policy discourse may be good for the cybersecurity-industrial complex, but they aren't doing real security any favors.
Second: Declassify evidence relating to cyber threats. Overclassification is a widely acknowledged problem, one the CSIS report and Clarke and Knake's book both concede, and declassification would allow the public to verify the threats rather than blindly trusting self-interested officials.
Third: Disentangle the disparate dangers that have been lumped together under the "cybersecurity" label. This has to be done if anyone is to determine who is best suited to address which threats. In cases of cybercrime and cyberespionage, for instance, private network owners may be best suited and may have the best incentive to protect their own valuable data, information, and reputations.
Only after disentangling the alleged threats can policy makers assess whether a market failure or systemic problem exists in each case. They can then estimate the costs and benefits of regulation and other alternatives, and determine what if anything Washington must do to address the appropriate issues. Honestly sizing up the threat from cyberspace and crafting an appropriate response does not mean that we have to learn to stop worrying and love the cyber bomb.
Jerry Brito is a senior research fellow at the Mercatus Center at George Mason University and director of its Technology Policy Program. Tate Watkins is a research associate at the Mercatus Center.
One of the biggest proponents of this is Stewart Baker, the old undersecretary for policy under Bush. He swears it is the next form of warfare like blitzkrieg was in the 1930s. I asked him once if that is true, then where is this decade's Spanish Civil War where the concept is proven? He didn't have an answer other than "it will happen". Yeah sure Stewart.
As a former internet security software engineer I agree with the author's assessment that the threat is excessively overblown, and from the way in which these bureaucracies and laws are being pushed by the government and media, can only assume that there is a nefarious effort to undermine our freedoms in yet another fashion.
A lot of it is just careerism. Baker looks at cyber war as a way to become an important person. I am sure he honestly believes what he says. But it is a serious case of confirmation bias since he has everything to gain from the threat being real.
One of the biggest proponents of this is Stewart Baker, the old undersecretary for policy under Bush. He swears it is the next form of warfare like blitzkrieg was in the 1930s. I asked him once if that is true, then where is this decade's Spanish Civil War where the concept is proven? He didn't have an answer other than "it will happen". Yeah sure Stewart.
The Chinese have already taken control of your "submit" button. That's all the proof I need.
The Chinese gave him ADD.
"How about Global Thermonuclear War?"
"Wouldn't you prefer to play a nice game of chess?"
"Later. I would like Global Thermonuclear War."
"Fine."
"All right!!!!"
A mental disease, similar to autism, in which the patient shows no regard for the natural rights of others.
"A mental disease, similar to autism, in which the patient shows no regard for the natural rights of others."
...and the futile, pretentious endorsement of the patient's personal "expertise" to correct some ill-perceived imbalance of nature itself.
The funny part is that as DC goes broke, all these security contractors are going to be fighting for a decreasing pool of wealth.
The concern I have about "cyber" security initiatives is the capabilities they will give the gov't when it goes looking for money, which it inevitably must do. The kiddie porn witch hunt will pale in comparison to what they will do when looking for something to fine or seize.
It has been my experience that the private sector is a far better protector of software and networks than the government can ever be. A regulatory agency that lumbers at the speed of a glacier cannot function adequately in a world where technology changes literally overnight.
Bingo, jj. Remember the "Computer Security Act of 1987?" There were deadlines for USG Departments and Agencies to just come up with a plan, not to actually implement anything, and still some ridiculous majority never complied with the law in time.
The interesting question is whether the "cloud" makes this better or worse.
And certainly, calls from the industry to limit their competition, which is the REAL reason behind licensing laws and certifications.
I'd be careful who we consider "the industry." Let's just say that people in the training business are working very hard on these policy proposals, to the point of having quotes in the preambles of the draft bills.
A lot of us who actually do/have done the work find that distasteful in the extreme.
Re: Strat,
Cui bono? That would be them.
At last! Something new to fear.
Phew! I was starting to get tired of fearing muslim extremists, debt defaults, etc.
Bring back the Cold War!
I miss Tricky Dick
us too!
this is a great comment but i disagree
There's a really simple way to make your network secure from "cyber-attacks". Unplug it from the internet.
Meanwhile, let the rest of us take our chances.
That's not true, remember the Iran nuke plant worm?
Don't misconstrue this to be an argument in favor of the fed's power grab. It's not. I'm just saying, security is not simply a matter of having a closed circuit network.
PS, Unplug it from the internet.
How will government workers view porn?
What we are witnessing may be a different sort of danger: the rise of a cybersecurity-industrial complex, much like the military-industrial complex of the Cold War, that not only produces expensive weapons to combat the alleged menace but whips up demand for its services by wildly exaggerating our vulnerability.
I don't know. I like the notion of a guy in a $350-million-a-copy F-22 squaring off against a barefoot Afghan carrying a pre-WW1 Lee Enfield rifle. There's a certain symmetry to it.
The ultimate in Net Neutrality.
http://www.youtube.com/watch?v=jbmc7roekSM
http://www.wired.com/wired/archive/8.07/haven.html
Relevant, or it would have been if some German guy with more guns hadn't taken it over.
It's simply amazing how anything the government does not control now potentially poses a threat, and therefore they must control it. Convenient, no?
it's the essence of statism. people need to be protected from both themselves and from outside threats, and govt. is the only entity that can do that.
Pay no attention to that armed citizen behind the curtain.
STFU, Dunphy.
http://reason.com/blog/2011/01.....nt_2071673
I think dunphy was describing how statists think, not actually describing how it ought to be.
pip and coeus (sp?) are to law enforcement analysis what bull connor and david duke were to race analysis.
reflexive, tiresome, illogical bigotry.
you can just feel the hate dripping from the keyboard when they type.
So sensitive.
Must be what makes you such a great policeman.
i'm sensitive and fragile. like fine china
Fuck off you lying cunt. You lied. Plain and simple.
smooches! pip!
Exactly, and it is not just a US thing. Sarkozy made the same kind of argument a couple of weeks back
http://blog.williamrinehart.co.....t-anarchy/
As someone who has worked in that area for the better part of 25 years, and used to break into things for a living, I'll just say that it really is ugly out there, both in terms of vulnerabilities we allow to fester for years and in terms of players looking to score hits, be they militaries, organized criminals, or people with an axe to grind.
All that having been said, I completely agree with the exhortation by the authors to apply some considered analysis and nuanced thought to policy proposals before trotting them out.
We have legislators floating bills that even their staffers admit are over the top because they wanted to foster discussion. Surely there is some instrument less blunt than DRAFT LAW that the USG can use to start discussions.
Oh wait, there might not be. The model used by the U.S. Government, even when well-meaning and considerate of the variety of stakeholders, is inadequate to allow them to actually identify and contact those stakeholders in a timely fashion.
It's possible to get a lot done by contacting senior executives of leading technology firms, but that hardly represents all of the people with an interest in the security of their information.
Here's one example: I would never call Facebook "critical infrastructure" (despite the fact that some municipalities are beginning to experiment with it for disaster notifications), but you can't deny that a service with 500 million users is relevant. At the very least it's a huge attack surface or potential vector for attacks.
Then you have the hundreds of little startups who might, at any moment, displace some entrenched player, or cause them to be disused or marginalized. At least there's one part of our economy where there's a little creative destruction. It does make things difficult for policymakers though when they have no idea who is relevant this month, or how to contact them.
I used to be hired to break into banks, hospitals, energy companies (yes, even the ones with nuclear power plants), and I can assure folks that we always found ways in. The recent alarm over SCADA (supervisory control and data acquisition systems) is not unwarranted. Some of those things are still running decades-old operating system software, without patches for fear that it's too "brittle." I won't even get into the fact that the software license usually forbids deployment into exactly that sort of high-risk operation.
There are good reasons why some specific information on actors and tools or tradecraft is restricted from public disclosure, but it is a given that if governments learn better how to share information with their own populations about threats, we'd all be better off.
If what you say is true and I don't doubt it is, then isn't the proper solution limiting the potential damage that can result from an intrusion rather than going crazy trying to limit intrusions, since you can't stop them anyway?
John, that's an astute question. The truth is we should do our best to mitigate both. I think this distinction is why we've seen dramatic growth in tools classed under "data leakage (or loss) prevention."
Someone finally realized that if the _information_ is what we're trying to protect, it might make sense to have a component actually designed to do that. People used to scoff at whether "dirty word searches" would actually be practical at preventing data from fleeing systems when it shouldn't. Now it's a growth industry.
My personal hot button, of which I was reminded this week, is that there is still really no secondary market for IT security risk. Sure, insurance companies will sell you "hacker insurance" but the reality is that the limits are low enough to be mere gestures, and they don't have anything resembling real actuarial data. They offer policies just so that they can claim to be doing it and stay competitive, but it's not real from any significant business perspective.
If I could buy options on IT risk, I'd be retired on a private island somewhere.
You assume it's in the government's interest to share information. I don't think this is a correct premise to assume.
Actually, in the case of the USG, it's on record as an accepted premise. The problem is that they are faced with overhauling quite a few processes (what isn't automated, after all?) and not discriminating too much or "picking winners" in terms of the entities with whom they'll share.
I have to grudgingly admit that I have seen what I consider to be positive movement along those lines. Unfortunately, the effort moves at a pretty glacial pace between legal reviews, security level decisions, and the like. There are folks trying to make it better, at least with some appropriate initial communities, but nowhere near what the article's authors (I'm inferring) or I would prefer on a day-to-day basis.
I want your job.
Memo to self:
Be nice to strat
As policymaking goes, I can get on board with that.
You should be careful who you piss off on the Internet
Emmanuel Goldstein, is that you?
Ha! Hardly. He and I have had a few spirited discussions along the anarcho-capitalist vs anarcho-communist axis in the past though.
Some of those things are still running decades-old operating system software, without patches for fear that it's too "brittle."
Extremely bitter knowing laugh, and one of many reasons why I don't work at my last job anymore.
I feel your pain, T. I got into a debate with one of the luminaries in SCADA security where I asserted that a refusal to patch those systems might be one of the few cases where I'd consider it contributory negligence.
He went non-linear on me and practically shouted that it wasn't practical to expect those people to patch their software and that I just didn't understand. I pretty much knew what I was facing after that.
Nobody patches SCADA. Most implementations I've seen are on physically segregated networks. Good luck getting those past 3.1
I am not a proponent of government regulation, but I sure as heck would like to see some more private sector standards that hold those people to account.
Companies can complain all they want about the Payment Card Industry (PCI) standards, but there's no denying that effort is probably one of the single biggest improvements to the security of normal people's lives ever - whether they know it or not.
I love and hate the PCI DSS. It's prescriptive enough to be useful but I don't like the enforcement mechanism. The problem I have with it is the issuing banks that levy the rate hikes (which can be huge for an ML1 or ML2) are in many cases the perpetrators of non-compliance. I'm sure you've heard the horror stories, "our bank will only accept FTP-based transactions and is threatening to raise our interchange rate because our QSA says we're not compliant." That's bullshit, but nothing compared to arbitrary controls required by GLBA, HIPAA, or even SOX.
Heh. I remember back when Fidelity would show up at their customers' sites with a router and say "Plug this in. No you can't see how it's configured, but if you want to transact with us you'll have to use it."
The one thing that utterly confuses me with the PCI DSS is the "You either have to run audits or use a web application firewall." It's sort of like saying "You either have to take a bath, or get an annual physical." Comparing apples and giraffes. What's up with that?
LOL, I hear ya man. I didn't figure I'd get into a discussion like this on H&R.
On that note if any H&R people are going to be at Black Hat, drinks are in order.
Unfortunately I won't be in attendance this year. Kick a few back for me.
Heh, we just ran into the consequences of not patching in favor of platform stability. Got hit with the Conficker virus. Good times.
As a chief scientist and friend once remarked: "You pays your money and you takes your chance."
I'm really sorry to hear that. At least it happened now, where people have some handle on how to manage that. I hope you get a nice long break after the cleanup.
My mom died on the Friday of the outbreak and consequently I took the next week off. The rest of the team got it cleaned up that week. So I missed it.
Now we're just in the mop-up stage.
Sorry for your loss, Paul. It's not much of an upside, but better them than you.
Thanks. My phone was blowing up all that week with all the cleanup chatter on email.
You beat me to it strat. I also have been in the information security field for many years - 10 of which were spent doing penetration testing (ethical hacking).
I concur that the vulnerabilities we see today are pervasive and constantly changing. I think that if we removed a lot of the consumer protections that are in place today, more retail businesses would have a bigger incentive to increase security spend.
Think about the recent Sony hacks. Do you think Sony is going to lose a significant amount of business because a number of their customers had to check their credit card statements and possibly dispute a charge or two? Probably not. Take those protections away and Sony would be open to tremendous civil liability, at which point the security of their customers' information would be an acceptable cost of doing business.
I'm a partner in an IS consulting firm, and it's always irritated me to have to "push the regulatory compliance button" to sell our services. It's very difficult to sell security in terms of cost savings (it's expensive) or competitive differentiation (customers are federally insured). So the only really effective way to sell security is to tie it to a gov't fine for noncompliance or breach notification costs.
I think that part of our problem is caused by this mechanism. If we really want to improve security (and, professionally speaking, we really should), we need to deregulate not just the information security industry, but also the banking, healthcare, credit, etc. industries.
Make security valuable to the public and they will demand it. Make customer data valuable to businesses and they will secure it.
ChicagoSucks, I'm with you. I think the fundamental challenge with the business of security is that to actually demonstrate ROI in a slam dunk way, you'd have to prove a negative.
"See, nothing happened!" is not an inspired, rousing path for an IT manager who wants budget to do what he needs.
Exactly. I had a client about 6 years ago who made wax. After my team completely "pantsed" them in front of the board, the CIO stood up and said "We make wax," implying that they had nothing to lose from a breach. My response was something along the lines of: "Sure, you're not a bank, or an insurance company, a three-letter agency, or a nuclear power plant... But what happens when you stop making wax?"
It can be a thankless job. We sold our firm a while back, but I remember having to give bad news to a CEO who was building a new $X billion telecommunications system to the effect that their whole design was full of holes and that their big 4 consulting firm had completely neglected to think about security.
I admit I briefly wondered whether I'd be seeing a black sedan following me after that briefing.
I do NOT believe in security for security's sake, and I always hated and tried to avoid "selling the threat." I do however think that the whole industry has dropped the ball - there are cases where better security can also improve the user experience. Look at single sign on, for example, but don't get me started about the idiocy of passwords in 2011.
I agree. Security for security's sake is a waste (and possibly reckless). While I do think security can improve the user experience, I don't see many instances where the improvement justifies the cost.
Single sign on is very convenient, but it's also very expensive to implement. Federated identity projects alone can cost in the millions, but for what? So an actuary only needs to remember 1 password instead of 2? We could probably debate the pricing of security services and technology, but in our current market security is a cost and when a company's CISO reports to the CIO instead of the board, we have a cost within a cost center. When times get tough the budget axe tends to fall on IT and security usually takes a big hit.
Passwords are pretty much pointless, but don't let that get out because most people think passwords == security.
Look at single sign on, for example, but don't get me started about the idiocy of passwords in 2011.
Group policy can mitigate a lot of that. Hell, our policy is so tough on passwords, I almost have to 'suggest' a password to the user because they can't understand CAPs plus numbers plus symbols, and eight characters minimum.
"Group policy can mitigate a lot of that. Hell, our policy is so tough on passwords, I almost have to 'suggest' a password to the user because they can't understand CAPs plus numbers plus symbols, and eight characters minimum."
It's not password complexity that's the issue. It's the concept of password technology that's outdated. Common tools like "pass the hash" and "rainbow tables" really make the case that single factor authentication is a dying concept.
If you haven't seen the latest figures for how fast you can brute force passwords with graphics processing units, don't go look. You'll just lose sleep and feel bad.
The Department of Defense said around 1985 (IIRC) that passwords were obsolete. Now they truly are, unless they're long and way more complicated than people actually use.
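A quick back-of-the-envelope calculation shows why length matters more than complexity here. The guess rate below is an assumed, illustrative figure for a GPU attacking a fast, unsalted hash offline; it is not a benchmark of any specific hardware.

```python
# Worst-case time to exhaust the keyspace of a random password, assuming an
# offline attack on a fast, unsalted hash at an illustrative GPU guess rate.
GUESSES_PER_SECOND = 5e9            # assumed rate, for illustration only
ALPHABET = 26 + 26 + 10 + 32        # lower + upper + digits + symbols = 94

def worst_case_days(length):
    """Days needed to try every possible password of the given length."""
    return ALPHABET ** length / GUESSES_PER_SECOND / 86_400

for n in (8, 10, 14):
    print(f"{n:2d} chars: about {worst_case_days(n):,.0f} days worst case")

# At this assumed rate an 8-character password falls in roughly two weeks,
# while 14 characters is effectively out of reach, which is the argument for
# long passphrases and for not relying on single-factor passwords at all.
```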
I used to tell people that the desk locks at work were just good enough to keep your friends out.
Same for passwords.
The act of learning how to pick locks reaffirmed my faith in humanity. When you realize just how badly most locks work, it becomes obvious that most people are basically honest.
I really love our passwords... change every so often... lower and upper case, numbers, and symbols. 2 of each. Not supposed to write it down, but should remember it. And we have a new one for every system we have to log in to.
I'm a partner in an IS consulting firm, and it's always irritated me to have to "push the regulatory compliance button" to sell our services. It's very difficult to sell security in terms of cost savings (it's expensive) or competitive differentiation (customers are federally insured). So the only really effective way to sell security is to tie it to a gov't fine for noncompliance or breach notification costs
I've never been directly on the sales side of things, Chicago, but in general, when you've been in the business long enough, you should have a list of first-hand horror stories of lax security to use as a sales tool to sell it to the customer.
I have tried to use the "due diligence" angle more than the horror story or "management by magazine" strategy. I don't mean due diligence in terms of investigation, but rather the basic set of things you have to do to demonstrate you're competent and care about fiduciary responsibilities.
Security people have been their own worst enemies for pushing an "all or nothing" strategy. Reality hasn't worked that way and probably doesn't in ANY sphere of endeavour.
I think the overregulation of the past 2 decades (SOX, GLB, etc.) is awful, but it did make a nice check box about which to worry.
I have tried to use the "due diligence" angle more than the horror story or "management by magazine" strategy.
This is probably why I'm not in sales.
Security people have been their own worst enemies for pushing an "all or nothing" strategy.
This. I think that too many IS departements go overboard, spending millions on security trying to mitigate the most unlikely of threats and it turns CIOs and CFOs off.
I work for a hospital and HIPAA really twisted a lot of IT minds. A lot of guys in this business are now convinced that the Russians are circling the hospital in Black Helicopters trying to steal the contents of Mrs. Kravitz's urinalysis. Meanwhile they give out the Admin password like candy, their antivirus isn't up to date, and their servers aren't patched because they're afraid of stability issues.
There's no contextual threat analysis.
Virus infections: High
Angry former employees: Medium to high.
Chinese hackers stealing pregnancy reports from patients using a man-in-the-middle attack: Low
You made my quote file.
I think people spend on security until they "feel' secure. Then they're done, unless something bad happens.
Bruce Schneier has a TED talk about how people inaccurately perceive risk in pretty much every way possible. It's worth watching.
I will say this though, because I'll lose membership in the secret-fellowship-of-hackers-who-run-the-world-and-stuff if I don't...
The pregnancy report database is the steppingstone that the hacker will use to get to his REAL target, the wife of the defense contractor whom they want to blackmail.
I do, and in my experience it's still a tough sell. The worst horror stories, ironically enough, involve breach cleanup costs as a result of non-compliance. But each industry is different to be honest. You really don't need to sell security to a financial services company for example.
You really don't need to sell security to a financial services company for example.
If I were dumber, I'd have snapped out a response before thinking. Age and wisdom, while not having done a lot for me did make me smile at this.
Selling security is roughly equivalent to selling insurance. The same sales techniques apply.
Kind of. Insurance is a control that is used to avoid risk. Security typically involves itself with other controls that mitigate either the probability of a risk occurring, or the impact of the threat. You're correct though, insurance is one way to address risk and can be sold in a similar way.
I am with you guys on this. There are some aspects of cyber security that are overblown, but it is more real than most threats (much more real). The analysis of the threat in the article is certainly lacking; however, we shouldn't, as critical thinkers, dismiss the threat simply because the government believes it is real. If upon analysis we find a critical threat, we should analyze why the threat exists. Here I think you would find that the government has created a problem it now needs power, authority, and money to fix. It is the regulation and standardization of loss that is the biggest reason that cyber security sucks. The limited liability of companies for loss of data is why they over-collect and under-secure it. Combined with the all too frequently mentioned issue of having to justify your success through negative results ("no security incident happened this week, therefore we are doing our job and you need us"), there is a high likelihood of security being undervalued.
PCI is good only because the playing field is so poor. However, it is the best example of business being more effective than government. PCI is also a monopoly for all intents and purposes, as VISA has taken the role of government in regulating auditors, with a high price of entry into the market. There is no competition, the service is poor, and there is an inherent conflict of interest between the auditor and their responsibilities to both VISA and the client. Real competition in this market would help, but will not work until the government quits limiting liability.
*I recognize my conflict working in the industry for 10 years; however, I do what I do because I know there is a problem and the work does not depend on government spending (or even a good economy).
"I used to be hired to break into banks, hospitals, energy companies (yes, even the ones with nuclear power plants)"
I worked for a guy who was a COO of a large cap ($10B+) combined utility and a member of the DOE's ESSG (Electricity Security Steering Group).
So, he's at a meeting and some guy was brought in who claimed, as you did, that they had been able to breach security at nuke plants. So he asked how, because if this is true, we need to address it. And they said "We can't tell you. It's classified."
His thought? Total. Bullshit.
It's possible that he was full of shit. However, most of our clients who contract us for this type of work are really paranoid about it. It looks really bad on an audit report for starters, not to mention that you're asking somebody to demonstrate how to do something that could be disastrous to your company. Most organizations make us sign a confidentiality agreement. And the reports we write typically are only seen by a privileged few.
I used to do a lot of work in utilities and energy. These are considered "critical infrastructure" by the USG and I would agree (though the USG tends to consider pretty much anything critical infrastructure these days). So without knowing if your guy was reputable or not it's hard to say. My bet would be that he's telling the truth.
Most of my clients struggle to fix the vulnerabilities that we identify, let alone the root causes that allowed them to exist in the first place. I would venture to guess that the reason your guy didn't describe how to breach security at nuke plants (whatever that means) is that the method he used probably still works and that sharing that information could expose him to civil liability.
If a private contractor said that, I'd be suspicious too, or think he was a bonehead and didn't know the difference between "confidential" and "classified." If a DOE red team person said that, I'd probably believe that it was.
Will using the prefix cyber- make me look like an idiot?
http://willusingtheprefixcyber.....idiot.com/
WOOHOO! I scored a 'Dangerous Moron'. Beat that, loserdopians!
Yes
They need to add "Am I writing a document for consumption within the U.S. Government?"
They do. You have to say yes to the "nonexistent reason" question to see it.
Not a good Summer for teen leadership campers
Two suffer life-threatening injuries on extreme-survival course.
You didn't expect something you might not survive, did you, kids?
It's EXTREME! Did they have Mountain Dew?
No, but the bear did.
The bear could use a cold beverage if they were "spicy" campers:
The hikers carried three canisters of bear spray, but there was no initial indication that the hikers used the repellent, Palmer said.
I'm pretty sure that yellow stuff in a cup that Bear has isn't Mountain Dew.
Well played.
We obviously need to ban either summer camps or both summer and caps.
*camps
Okay, squirrels I fucking get it. Preview is my fucking friend. I'm taking a hammer to my fingers.
The hikers carried three canisters of bear spray
I wouldn't go wandering around bear country with anything less than a .44 magnum strapped to my hip.
I knew an experienced camper that would wear bells when out hiking in the backwoods. The bears are less interested in running into us than we are in them.
The bears are less interested in running into us than we are in them.
This is usually true - the firepower is for the exceptions to the rule. Better to have it and not need it than need it and not have it.
Few organizations will give loaded firearms to minors and then let them wander off in the wilderness.
I assume that "bear repellent" is pretty much useless even if you have time to get it out.
Bear spray is pepper spray, it just creates a larger mist with more range. It can definitely repel bears, and it's probably a much better idea than a handgun.
I don't know why the kids didn't use it, although I assume they had it in their bags or something.
Don't fuck with the bears
Da Bearssss
The dolls
"If the buildup to the Iraq war teaches us anything, it is that we cannot accept the word of government officials with access to classified information as the sole evidence for the existence or scope of a threat."
That is an excellent point.
Isn't that why god made the media and cameras? Oh wait, nevermind.
AGREED; however, neither can you simply dismiss it on the same basis.
Number 99
Question for strat:
Some time ago, I was talking to some cyber-security folks, and they said that the biggest weakness most supposedly secure systems had was "social engineering" (that is, conning someone into giving you access).
There's a common saying in the field - the best hackers hack people not computers. Human beings are the biggest weakness in a system. Social engineering in this context refers to the act "hacking people." I used to do a fair amount of it back in my pentesting days, and it was almost always successful.
The thought process is this: You can implement millions of dollars' worth of technical security controls, but if I can call your security and "ask" for the information I'm after, then what's the point? I don't necessarily agree with this thought process because I still think there's merit in implementing a technically secure system. I interpret it to mean that it's very easy to neglect human error and the very real fact that people can circumvent your security measures just as easily as a virus. To the lay person most social engineering would appear at face value to be "con artist-ish" though most IS people would consider conning a form of social engineering.
If you want to know more about social engineering you should look into Kevin Mitnick's book "The Art of Deception". It's a good read and doesn't use much technical jargon. Mitnick is *the* poster child for social engineering. You may have seen the movie (or read the book) "Catch Me If You Can" by Frank Abagnale. He was also a gifted social engineer.
Edit: "call your secretary" sorry, I have security on the brain.
Human beings are the biggest weakness in any system; that is the reason I will NEVER, NEVER, did I mention NEVER, endorse any REAL ID or other "secure" ID system. Any such system can always be compromised via the Great Wall method, i.e., just bribe someone. Any such system will just become the playing ground for the well connected and wealthy, even if they could technically secure the damn thing. The result would be that your average teenager would have a hard time getting a fake ID, but your connected terrorist should have no problem. Thanks, Mr....Smith, have a nice flight.
(The TSA rhetorical example was just that, an example only...No need for the near obligatory anti-TSA or flight safety posts)
My ball-point pen of choice uses the Frank Abagnale-approved "207" ink. Wonderful pen!
There was a good article on this in Bloomberg recently:
That's certainly a huge issue, especially in firms that don't train their people well. People try to be helpful, so you can't exactly fault them sometimes. Unless they were trained first.
It's hard to categorically say "this is the majority" of attacks, as there is a host of things that can lay an enterprise wide open.
Remember that there are already-infected zombie machines out there by the millions.
It may only take one subverted box to compromise an entire enterprise.
Is it just me, or is that super-duper high-tech operations center above positively littered with blue screens of death?
Glad to see some people (strat & ChicagoSucks) already called out the author on his "this isn't really a problem" subtext. It clearly is a problem, even if government is not the solution.
But if you really think it's not a problem, go have a look at what a bunch of kids who were just in it for the lulz pulled off, and consider what actual motivated professionals could do. Also, check out the Stuxnet virus and consider what it could've done if it had been written to cause a nuclear accident instead of just slowing down the operation, and what something similar could do at any number of other industrial facilities.
Sometimes people say scary things to whip up hysteria - but sometimes they say scary things because there are actual problems out there.
the devil's greatest achievement is convincing the world he does not exist
Man, those old IMSAI 8080s were workhorses. With the blinking LED panel and toggle switches, and the screechy 8" Calcomp floppy drives ... yeah! Good times.
Yeah, but they don't heat the basement as well as either my PDP-10 or my PDP-11s.
Disclaimer-
I work in IT, and specifically in network administration and security at that, so I am a bit biased here. I have worked in IT for 10 years, starting as a member of the US Navy and since then through various private firms and government contractors.
One of the largest problems with IT security is that virtually all the people who keep dragging it up have no idea what the fuck they are actually talking about. And I see this constantly. Very few people pitching doom and gloom scenarios, or from the other side saying everything is just fine, have any actual experience as a network administrator, IT security officer, penetration tester (ethical hacker) or anything like that. They are strictly coming at it from an academic point of view and most often do not understand what the hell it is that they are rambling on about.
In the world of IT security you generally run into two kinds of people. The first assumes everything is secure until it's demonstrated otherwise, and often doesn't invest in proper security until they're caught with their pants down, despite the pleading of their IT staff. The second is under the impression that you can hack into the Pentagon and nuke Washington, DC from a Starbucks.
Is our security a joke? Oh yeah, it sure as hell is. It's extremely easy to pilfer any sort of data you want from any major company. Though, oddly enough, the vast majority of attacks like this are done through simple low-tech hacking (social engineering) rather than over-the-top stunts. Can you blow up a pipeline? Sure, easily. The CIA did it, Stuxnet proves it's possible, and rogue systems administrators have brought down cities' traffic grids (in California) out of spite. But in all of these cases someone had to have physical access to the network. The control functions of a nuclear power plant are not on a network that touches the internet in any shape or form. Can you steal thousands of people's identities? Well, look at the recent debacle that happened to Sony; 80-odd million people's information was stolen.
What often happens is that people who don't actually work in IT conflate all these sorts of things together. So when you're asked "could someone do xyz" the answer is pretty much always "yeah, they could," and the next thing you know someone who doesn't even know what the OSI model, Kerberos, ACLs, CRLs, or any of the basic terminology in actual IT security is has stormed onto TV and is telling everybody that some kid in China is going to hack into hospitals and pull the plug on grandma, and since your average person is utterly computer illiterate, they believe it! Idiotic "news" companies like Fox (worst of the worst) claiming that LulzSec or Anonymous "hacked the CIA," when all they did was DDoS the website, the cyber equivalent of toilet-papering someone's house, are certainly not helping things either.
The government is correct that the current state of things is ripe for disaster, but the solution of "monitor everybody, monitor them now" is hilarious from several angles. For one, it's not even possible against someone who actually knows what they are doing. Second, most real damage is done by actors gaining or already having access to a private network (see Stuxnet, see the traffic-light fiascos), and monitoring doesn't stop this at all. In other cases it's done by social engineering (see the HBGary hack), or by extremely poor security on the part of "it's all about the money" private corporations; see Sony.
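On the "not even possible" point: once an attacker encrypts their traffic end to end, payload inspection sees nothing but noise. Here is a minimal Python sketch of the idea, assuming the third-party "cryptography" package is installed; the keyword list and "packet" contents are invented purely for illustration.

# Minimal sketch: a naive keyword-based "monitor" vs. an encrypted payload.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

SUSPICIOUS_KEYWORDS = [b"exfiltrate:", b"backdoor", b"password dump"]

def naive_monitor(packet: bytes) -> bool:
    # Flag a packet if it contains any known-bad keyword (signature matching).
    return any(keyword in packet for keyword in SUSPICIOUS_KEYWORDS)

plaintext = b"exfiltrate: customer database attached"
print(naive_monitor(plaintext))    # True  -- cleartext is easy to flag

key = Fernet.generate_key()        # the attacker controls both endpoints
ciphertext = Fernet(key).encrypt(plaintext)
print(naive_monitor(ciphertext))   # False -- the monitor sees only ciphertext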
Finally, if you're worried about privacy, it's not the government you should worry about here. Google wardrived the entire nation under the guise of Street View and vacuumed up everyone's MAC addresses and a slew of other private data even though they claimed not to, so they can track a phone that was in LA as it moves across the country; they can do this with your laptop as well. And let's not ignore Apple's insane policy of storing where your iPhone goes in nearly real time. Attacks on your cyber privacy by private industry are constant and intrusive. And since most of the public isn't literate in the technology, or is too busy worrying about the government, they get away with this nonsense constantly.
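To make the tracking point concrete, here is a toy Python sketch; every MAC address, city, and timestamp in it is invented for illustration, but it shows why a harvested MAC is effectively a persistent tracking key once sightings from different access points are pooled.

# Toy illustration: correlating invented Wi-Fi "sightings" by MAC address.
# None of this is real data; it only shows how a MAC ties locations together.
from collections import defaultdict
from datetime import datetime

# (device MAC address, city where the access point was mapped, time seen)
sightings = [
    ("a4:5e:60:11:22:33", "Los Angeles", datetime(2011, 6, 1, 9, 0)),
    ("00:16:3e:aa:bb:cc", "Chicago",     datetime(2011, 6, 1, 12, 30)),
    ("a4:5e:60:11:22:33", "Denver",      datetime(2011, 6, 3, 18, 15)),
    ("a4:5e:60:11:22:33", "Chicago",     datetime(2011, 6, 5, 8, 45)),
]

trails = defaultdict(list)
for mac, city, seen_at in sightings:
    trails[mac].append((seen_at, city))

for mac, trail in trails.items():
    path = " -> ".join(city for _, city in sorted(trail))
    print(f"{mac}: {path}")
# a4:5e:60:11:22:33: Los Angeles -> Denver -> Chicago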
/end rant.
The article is a very well-written piece on the ambiguities of the cyber policies being promoted by various government entities.
The problem, and the author alludes to this, is that practically all of them are completely without foundation... nor would any sophisticated security implementation actually work.
First off, the government's Einstein security program has not worked out too well: it is incomplete and, as a result, has not been successfully implemented across all government network infrastructure.
Second, by the time the government does generate a security solution it will be obsolete in the face of increasing hacker sophistication.
The US is right to be concerned about concerted efforts on the part of hackers to overcome its security defenses. However, if hackers want to take down the Internet, there is a much easier way of doing it than by attacking government installations: simply attack and take down the 13 worldwide root servers that support Internet addressing. With 11 located in a single place in the US and the other two in Europe, this should not be a very hard thing to do for those with the skills.
In any event, what the government fails to realize is that while they are planning, so too are their cyber opponents, and with software in general being highly fragile and very porous itself, there is little the government can do except use some common sense, which I doubt exists in such environs.
The US government unleashed the Internet and it is simply too late to put the "genie back into the bottle"...
Taking down some root nameservers is hardly destroying the Internet.
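Resolvers cache the root NS records for days, so they keep answering long after a root server drops off. If you happen to have the dnspython package installed (an assumption on my part, not something from the article), you can check the TTL yourself:

# Rough sketch, assuming dnspython 2.x (pip install dnspython).
# A caching resolver reports the remaining TTL, which may be lower than the
# authoritative value, but for the root NS set it is typically measured in days.
import dns.resolver

answer = dns.resolver.resolve(".", "NS")   # ask the local resolver for the root NS set
ttl = answer.rrset.ttl
print(f"TTL on the root NS records: {ttl} seconds (~{ttl / 86400:.1f} days)")
for record in sorted(answer, key=lambda r: str(r.target)):
    print("  ", record.target)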
"The control functions of a nuclear power plant are not on a network that touches the internet in any shape or form." There you have it. My network people have assured me that the "power grid" is not on the internet either, nor should the control systems for public water and sewer systems. Remember stuxnet? How did it infect Iranian computers controlling centrifuges? Thumb drives. Remember sneakernet? Same thing. An employee hand carries an infected device across the chasm that separates the internet from command and control systems that are separate and independent of the internet.
All of these threats of "cyberwarfare" boil down to the equivalent of human error.