Cyber War Will Not Take Place, Thomas Rid, Hurst & Company, London, 218 pp., £14.99
The U.S. Cyber Command, or USCYBERCOM, was launched with great fanfare in 2010 to conduct "full-spectrum military cyberspace operations…in order to ensure U.S. and allied freedom of action in cyberspace, while denying the same to our adversaries." In a recent speech, former director of national intelligence Michael McConnell warned that the U.S. "is fighting a cyberwar today, and we are losing." This week the media was atwitter over the "cyberwar" against various news services, apparently instigated by supporters of Syria's President Bashar al-Assad.
Since 2010, USCYBERCOM's budget and its ranks have both swelled. It is now housed in a $358 million headquarters at Fort Meade, which is also home to the National Security Agency. Reuters reports that it is "adding between 3,000 and 4,000 new cyber warriors under its wing by late 2015, more than quadrupling its size." In the 2014 budget, Cyber Command spending will grow by $800 million, to $4.7 billion.
Is McConnell right? Is the U.S. losing a cyberwar? In his intriguing new book, Cyber War Will Not Take Place, Thomas Rid, a War Studies scholar at King's College London, argues that we are not currently engaged in a cyberwar, and indeed that such a "war" is unlikely ever to take place. Rid is a careful thinker who believes that the public and policymakers are being misled about the magnitude of harm that cyberattacks can inflict.
War, Carl von Clausewitz wrote, "is an act of force to compel the enemy to do our will." Citing this definition, Rid argues: "All war, pretty simply, is violent. If an act is not potentially violent, it's not an act of war, and it's not an armed attack." Without violence, war becomes a metaphor, like the war on obesity or the war on cancer. Clausewitz also argued that in a war, a political goal must be attributable to one of the sides. With no goal and no attribution, the activity is something other than war.
Rid proceeds to examine how well recent cyberattacks fit these criteria. Consider, for example, the denial of service attacks against Estonia in 2007 and Georgia in 2008, which evidently emanated from Russia. While somewhat disruptive, these "attacks" involved no violence, had no clear political goals, and could not be firmly attributed to the Russian government. At worst, they fall into a middle ground of infotech-mediated aggression that does not amount to war, a zone occupied by acts of sabotage, espionage, and subversion.
The chief aim of war is to harm the bodies of the enemy. While some computer code might be able to manipulate some machinery into harming a person, the experience of violence is greatly attenuated compared to being shot at by a machine gun or bombed by a plane. Unlike conventional bombs and missiles, computer code does not carry its own explosive charge. To do physical damage it must be aimed at machinery that can damage itself when its operations are disrupted. The first reported casualty of a cyberattack, Rid predicts, will produce a lot of fear and a massive public outcry. But in the meantime, "Not a single human being has ever been killed or hurt as the result of a code triggered cyber attack."
The most famous case of weaponized code causing physical damage is the Stuxnet worm, which disrupted Iran's nuclear enrichment centrifuges at Natanz. Evidently, the U.S. and Israel developed and targeted this highly sophisticated software, which reportedly delayed Iran's nuclear program by as much as two years. Rid argues that Stuxnet is not an example of warfare, but of sabotage. "Cyber attacks which are designed to sabotage a system may be violent or, in the vast majority of cases, non-violent," argues Rid. Sabotage is aimed chiefly at things, not people. In addition, most saboteurs do not want to be identified. Rid cites several examples of cybersabotage, including the Shamoon attack (likely unleashed by coders located in Iran) against the oil company Saudi Aramco in 2012, which wiped the data from several thousand of the company's computers. Yet it harmed no critical oil production and control facilities.
Another form of cyberattack—exfiltrating data from computers—is best classified as espionage. The Shady RAT attacks, for example, downloaded data from 13 U.S. defense companies, six U.S. government agencies, five national Olympic Committees, three electronics companies, three energy companies, and two think tanks. The pilfering (apparently organized by Chinese hackers) was discovered in 2011; it's not clear what was taken. The Flame attack, discovered in 2012, was aimed at Iran's oil industry. That "bug on steroids" had remarkable capabilities: it could turn on an infected computer's microphone, take screen shots, log keystrokes, and listen in on Skype conversations, as well as exfiltrate documents. Given its intricacy, Kaspersky Lab, the Russian computer security firm, suspects that Flame was devised by the same group that created Stuxnet.
Cyberespionage, Rid notes, carries less risk of violence than traditional cloak-and-dagger spying. It also has a greater tendency to blur the lines between foreign and domestic surveillance. Rid suggests that the Foreign Intelligence Surveillance Act, adopted in 1978, "imposed severe limits on the use of intelligence agencies inside the U.S." Edward Snowden's recent revelations have shown that the limits weren't so severe after all.
Finally, infotech can be used to subvert. Rid argues that modern communications technologies have made it much easier to launch movements against the existing order but harder to maintain discipline among the would-be subverters. The anti-globalization movement at the turn of the century, for example, was savvy in its use of information technologies but petered out as its often contradictory goals proliferated.
"Subversion," Rid notes, has quite different meanings and effects in liberal and authoritarian regimes. Occupy Wall Street was a form of more or less legitimate subversion in the U.S.; Occupy Tahrir Square in Egypt has quite a different political valence. "In liberal democracies subversion has been successfully legalized and institutionalized," Rid writes. Meanwhile, authoritarian regimes such as China and Russia have pushed the United Nations to adopt an "International code of conduct for information security." The proposed code of conduct would have specifically obligated nations to "combat" the use of information and communication technologies that "undermines other countries' political, economic and social stability."
Rid frets, "The real risk for liberal democracies is not that these technologies empower individuals more than the state; the long-term risk is that they empower the state more than individuals." Given the NSA spying scandal, this observation seems disturbingly prescient. "Open systems," Rid argues, "no matter if we're talking about a computer's operating system or a society's political system, are more stable and run more securely." That much is absolutely right.
Ultimately, Rid makes a strong case that "cyber war has never happened in the past, it does not occur in the present, and it is highly unlikely that it will disturb our future."