The Government's Latest Attempt to Stop Hackers Will Only Make Cybersecurity Worse
"Cyberweapons" crackdown could be used to criminalize basic software-bug testing.
As the threat of cyberwar looms larger on the horizon, many countries have turned to controlling the sale of "cyberweapons." But the U.S. government's proposed cyberweapon crackdown, part of a multinational arms-export control agreement called the Wassenaar Arrangement, could be used to criminalize basic bug-testing of software and ultimately weaken Internet security.
The Wassenaar Arrangement emerged in 1996 as the successor to a Cold War-era agreement regulating the international arms trade. It is not a treaty bound by force of law but a voluntary pact among 41 nations, with the notable exceptions of China and Israel. Every few months, member nations gather to exchange information on each country's major arms deals in eight broad weapons categories, including missile systems, armored vehicles, military aircraft, warships, and small arms. By providing a platform for member nations to transparently report arms trades with non-Wassenaar nations, and to voluntarily prohibit trades with known human-rights violators, the Wassenaar Arrangement seeks to responsibly "contribute to international security and stability."
In 2013, the Wassenaar Arrangement added two new categories to its control lists: "IP network surveillance systems" (which includes the kind of technologies sold to state intelligence and law-enforcement agencies by cybersecurity mercenaries) and "intrusion software" (which includes both malware and "zero-day exploits" used to enter computer systems without detection). If implemented, the changes would require signatory nations to impose export controls limiting the sale or sharing of such technologies to licensed dealers and buyers in pre-approved countries. The changes are intended to stop firms from legally selling destructive technologies to authoritarian nations, which can use them to spy on and oppress citizens. Several professional security firms, including Gamma International and Hacking Team, have indeed sold such software to repressive regimes in, respectively, Turkmenistan and Sudan.
But while some privacy and human-rights activists cautiously welcomed Wassenaar's new targeting of this scurrilous trade, the move is pitting natural allies in the privacy and security communities against one another. Computer scientists worry that the proposal could dangerously limit the critical security research undertaken by responsible, above-ground firms—the kind of research necessary to keep the Internet secure.
Member nations have been drafting and promulgating their own national policy changes to bring domestic law into accord with the new international agreement. The EU published its updates last October. The U.S. Bureau of Industry and Security (BIS) opened its (relatively more restrictive) exploit-technologies export proposal for public comment from May through July 20. As currently written, however, the member-nation drafts are riddled with vague language and unfamiliarity with technical realities, and they stand to wreak havoc on routine vulnerability testing without doing much to prevent unsavory cyberweapon sales.
The BIS draft regulations are particularly troubling. The broad language describing "intrusion software"—software that is "specifically designed or modified to avoid detection by monitoring tools, or to defeat protective countermeasures of a computer or network-capable device" and extract or modify data or standard program execution path—threatens to capture and quash countless legitimate research projects, potentially resulting in "billions of users across the globe becoming persistently less secure," according to the Google Security Blog.
The new BIS rules would also impose costly, bureaucratic requirements on simple bug reporting and analysis duties of the sort undertaken each day by info-security professionals. Multinational corporations and independent security consultants alike would be required to seek, justify, and obtain thousands of export licenses from faraway bureaucrats to simply continue doing their jobs legally. And if a researcher were to find a toxic cyberbug requiring immediate attention, he or she would have to obtain government permission before reporting the issue.
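To see how ordinary the work swept up by such rules is, consider a toy mutation fuzzer of the kind security teams run every day against their own code. (This is an illustrative sketch, not anything drawn from the BIS text; the vulnerable parser and its bug are hypothetical stand-ins.) The fuzzer's entire output is a list of inputs that crash the target—simultaneously a routine bug report and, under a sufficiently broad reading, the raw material of an "exploit."

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical parser under test: trusts a length byte it shouldn't."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        # The planted bug: declared length can exceed the actual buffer.
        raise IndexError("declared length exceeds buffer")
    return sum(payload)

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one random byte -- the core move of any mutation fuzzer."""
    buf = bytearray(seed)
    buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, trials: int = 1000) -> list:
    """Collect every mutated input that crashes the parser."""
    rng = random.Random(42)  # seeded for reproducible test runs
    crashes = []
    for _ in range(trials):
        case = mutate(seed, rng)
        try:
            parse_record(case)
        except IndexError:
            crashes.append(case)
    return crashes
```

Every security consultancy and software vendor runs more sophisticated versions of this loop routinely; requiring an export license each time such crash-triggering inputs cross a border is the bureaucratic burden researchers object to.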
Government agents could also require substantial (and unnecessary) documentation of a range of basic bug-patching and collaboration activities before awarding licenses—info that may then be shared with agencies such as the National Security Agency (NSA). Such anti-collaboration rules are particularly strange given the recent Congressional push for increased "information sharing" of cybersecurity threats through measures like CISA.
The industry and academic response has been nothing short of apoplectic. A Passcode survey finds that 77 percent of the security influencers polled oppose the new export limits on software flaws. Scores of information-security experts have weighed in with dire public comments. Symantec notes that "virtually every other legitimate security company uses such tools to ensure the security of our networks and commercial products," while Cisco warns that the new rules will leave network operators "blind to the very weaponized software that the proposed rule intends to constrain."
Purely academic research, too, is under threat. The Electronic Frontier Foundation invokes the curious case of Grant Wilcox, a computer science student at the University of Northumbria whose dissertation on software exploits was censored due to Wassenaar rules.
Meanwhile, unscrupulous security firms could continue to sell exploits in violation of the spirit or letter of the arrangement. Hackers would still be free to sell to approved member nations, even though such technologies could be similarly destructive in the hands of their governments. The French security firm and NSA exploit-seller VUPEN, for instance, claims to have proactively modified its business practices for Wassenaar compliance; likewise, the CEO of the disgraced Hacking Team claims that his company "complied immediately" with new Wassenaar regulations. That such high-profile firms with questionable hacking ethics are confident of their compliance with regulations tailored to prevent the very activities in which they specialize does not bode well for the policy's ultimate efficacy.
The weakness of the new proposals lies in the false assumption that certain technologies can be preemptively categorized as good or evil. The same techniques that unscrupulous hackers sell to despots to crack down on dissidents are also used by ethical companies looking to improve their internal security, researchers looking to understand systemic fragility, and public-minded hackers looking to report innocuously discovered bugs. As security expert and early intrusion-system pioneer Robert Graham explains, "good and evil products are often indistinguishable from each other. The best way to secure your stuff is for you to attack yourself." The new Wassenaar rules, in contrast, would effectively prohibit responsible system maintenance and improvement by law-abiding parties while retroactively chasing the non-compliant baddies who attack in the meantime.
It is highly unsettling that the policymakers in charge appear so ignorant of something as crucial to our everyday lives as software-bug testing. America's unusual tardiness in supplying a public first draft, along with BIS's uncommon solicitation of public feedback, led independent security researcher Collin Anderson to conclude that BIS "had trouble understanding fully the scope and understanding potentially negative repercussions for overregulating."
If implemented, warns security analyst Runa Sandvik, the new U.S. Wassenaar rules would place the technologically uneasy export authorities in a position to directly control security research.
The problems that the new Wassenaar controls seek to address are indeed formidable and legitimate. Powerful groups have undeniably suppressed helpless innocents through the aid of underground exploit sales. However, it is the subjugation itself that is the crime, not the mere communication of software quirks. What's more, signatory nations adopt a hypocritical public stance when they continue to engage in zero-day exploit trades to promote their own brand of surveillance and ideological harassment.
Attempting to ban cyberweapons from getting into the hands of people you don't like by requiring extensive licensing for merely discussing problems is a bit like attempting to ban alcohol from people who are inclined to public drunkenness by requiring permission to discuss chemical reactions. You won't meaningfully cut down the number of cybercriminals or alcoholics, but you will severely limit the output of productive scientists and chill innovative discovery along the way.