The Economics of Software-Vulnerability Sales: Can the Feds Encourage 'Pro-Social' Hacking?
The question for cybersecurity-minded lawmakers should not be how to control the zero-day market but how to encourage more "white hat" trades and fewer destructive ones.
Last week, the information security industry temporarily dodged a bureaucratic blunder that could have inadvertently criminalized basic software bug testing. Heeding the near-unanimous dissent from cybersecurity professionals, the U.S. Commerce Department wisely rescinded its proposal to impose export controls limiting the selling or sharing of "zero-day exploits," software vulnerabilities that only the discoverer knows about. Now the agency will work on drafting a new proposal to bring U.S. law into compliance with the Wassenaar Arrangement, an international arms-control pact that now includes so-called "cyberweapons." But how can policymakers control a cryptic yet powerful software-vulnerability market that many of them won't or can't acknowledge exists?
The Obama administration claims to have made cybersecurity a key focus, but its support is mostly limited to legislation that would empower the National Security Agency (NSA) to extract more of our personal data and proposals to treat innocuous hackers like deadly mafiosos by beefing up the Computer Fraud and Abuse Act (CFAA). At the same time, it has all but declared war on the strong encryption technologies that computer scientists contend are critical for robust Internet security. And it's hypocritical, too, supporting security-weakening "backdoors" for the NSA while strongly opposing equivalent Chinese efforts to access encrypted communications (needless to say, Xi Jinping returns the "favor").
A similar dynamic permeates ongoing Wassenaar discussions about how best to regulate the transfer and weaponization of zero-day exploits. Reading between the lines, policymakers aren't interested in eliminating these potential cyberweapons so much as they are hoping to control their trade through export licensing, disarming enemies while stockpiling their own cyberarsenals. And although leaders pay lip service to cybersecurity, the trade-off is actually a less secure Internet. Web security is weakened any time bugs are intentionally left unfixed, whether by the United States, China, Israel, or a bored teenager trolling for n00dz.
Zero-days are a fact of life, a quirk of software development that is neither good nor evil in itself. How people choose to respond to a discovered zero-day bug, however, can be either "pro-social" and responsible or unethical and destructive.
In the ideal case, a public-minded, "white hat" hacker who discovers a zero-day bug will quickly and responsibly report it to the appropriate party for patching, hopefully in exchange for a monetary reward. In this scenario, everyone wins: Companies that develop or rely on certain software obviously benefit when weaknesses are brought to their attention and fixed; hackers who responsibly report bugs enjoy an enhanced reputation (Facebook launched a "White Hat debit card" so that researchers could literally stockpile good-hacking karma) and an improved skill set; and the general public gets to surf a that-much-more-secure Internet for another day. If such zero-day trades are supported by a robust enough market, the prices that emerge allow "unique qualitative comparisons for how difficult it is to exploit a given piece of software," as one analysis of the zero-day trade noted, and could encourage the development of the still-latent cyberinsurance market.
Unfortunately, the destructive case is usually more profitable: zero-days can be weaponized and sold as malware to deep-pocketed interests by "black hat," or unethical, hackers. So while you're innocently enjoying some entertaining animated content online, hackers could be hijacking your zero-day-ridden Flash software to pack key-logging programs, ransomware, or even NSA surveillance technologies onto your system without anyone catching wind for quite some time. One study estimates that the average zero-day attack lasts around 312 days before it is caught and patched.
Even generally ethical computer researchers moonlighting as "grey hat" hackers can find justifications to sell zero-days to poorly intentioned, malware-spreading organizations. The payments are more attractive, their clients may offer immunity or other perks, and spyware can be made to appear less devious if shellacked with a healthy veneer of patriotic duty. But no number of compelling national security rationales can change the fact that the major Heartbleed bug that the NSA reportedly exploited left roughly two-thirds of the world's web servers open to significant eavesdropping by the very parties that the NSA is supposed to be countering.
The question for security-minded policymakers should not be how to control zero-day trades but how to encourage more white hat trades and fewer destructive ones. Currently, information asymmetries and an uncertain legal environment limit the profit opportunities for well-intentioned zero-day trading. Pro-social hackers lack a robust space to solicit work, approach legitimate buyers, or even gauge market prices to evaluate which projects would be worth their time.
Technology firms do experiment with different zero-day reporting incentives to fill the gap. Some companies follow the model established by TippingPoint's "Zero Day Initiative" and VeriSign's iDefense Vulnerability Contributor Program in the early 2000s, regularly paying security researchers for the vulnerabilities they find. Then there's Google's Project Zero, which employs full-time security researchers to probe popular programs and report bugs to the relevant software developers; if developers do not patch the bug within 90 days, the Project Zero website will publicly name and shame the laggards to light a fire under their behinds. And other firms maintain "bug bounty" programs that welcome and reward outside researchers for discreetly reporting bugs to the company first. Yet as promising as these ventures are, they require a vendor to initiate such a program in the first place. If a hacker innocuously discovers a vulnerability in a product whose maker does not welcome such reporting, they could land in legal hot water.
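For readers curious how such a disclosure clock works in practice, here is a minimal sketch in Python of a simplified 90-day policy. It is illustrative only; the real Project Zero rules include grace periods and case-by-case extensions that this toy model ignores, and the function name and dates are invented for the example.

```python
from datetime import date, timedelta
from typing import Optional

# Simplified model of a 90-day coordinated-disclosure deadline
# (illustrative only; real policies add grace periods and extensions).
DISCLOSURE_WINDOW = timedelta(days=90)

def disclosure_date(reported: date, patched: Optional[date]) -> date:
    """Return the date a bug report would become public.

    If the vendor ships a fix before the deadline, details go public
    with the patch; otherwise they go public when the window lapses,
    patched or not.
    """
    deadline = reported + DISCLOSURE_WINDOW
    if patched is not None and patched <= deadline:
        return patched   # fixed in time: disclose alongside the patch
    return deadline      # deadline lapsed: name and shame

# Example: a bug reported on March 1, 2015 and still unpatched
# would be published on May 30, 2015.
print(disclosure_date(date(2015, 3, 1), None))  # 2015-05-30
```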
The market for zero-day exploits received little public scrutiny until recently, but these bugs have been traded for about as long as humans have tinkered with computers. For years, computer hobbyists routinely sought and exchanged zero-day bugs for prestige, a sense of duty, and modest rewards. But as the web scaled up, hackers could no longer afford to contribute to the security commons without real compensation or firmer legal footing. Eventually, some came to learn that governments with questionable intentions offered the biggest and most certain payoffs.
Today, the destructive zero-day market has filled out, with middlemen like Hacking Team and Gamma International peddling vulnerabilities to self-serving organizations, including most of the Wassenaar-signatory nations that are currently seeking more control over this trade. The U.S. is thought to be the largest buyer of zero-day exploits; the NSA alone spent at least $25 million on grey-market purchases in 2013.
Ironically, prevailing legal incentives encourage more grey- and black-hat sales than public-minded ones. Vendors may become defensive or even hostile toward external researchers who report software vulnerabilities to them. The threat of lawsuits and prosecution under the CFAA chills security researchers who would otherwise be happy to report software bugs. Meanwhile, governments and contractors are more than willing to pull legal strings and ensure prosecutorial immunity for the grey hat hackers who do their bidding (the CFAA explicitly exempts intelligence activities from its purview).
Ultimately, Wassenaar nations want to stop zero-day exploits from being used against their own systems without giving up the ability to exploit vulnerabilities for their own purposes. But proposals to bureaucratically separate "good" zero-day sales from "bad" ones through export controls or outright bans are doomed to failure because regulators lack the technical knowledge to effectively weigh the costs and benefits of each. And besides, governments can no better control the sale of zero-days than they can control the sale of alcohol, drugs, or raw milk; those predisposed to criminality will always find some way to sling their sordid wares. Meanwhile, the disproportionate (and self-defeating) focus on stockpiling "cyberweapons" fails to appreciate the vast security improvements that a robust zero-day market could provide.
There are few winners in the stealth cyberarms race that quietly permeates any zero-day policy discussion, but everyone would benefit from a safer Internet. One good way for the government to start promoting this goal is to cease purchasing grey-market zero-day bugs, report its existing vulnerability stockpiles to the proper vendors for security patches, and prune anti-hacking legislation like the CFAA to foster the development of a robust white-hat zero-day market.