Major Computer Chip Bugs Show the Need for Open Security Research
Have you heard about "Meltdown" and "Spectre"? Here's what you need to know.

2018 rang in with a bang in the computer security world, as two serious and extensive processor vulnerabilities were disclosed in early January. Last week, researchers with Google's Project Zero and various universities and private security shops announced their troubling finding that the majority of the world's computer chips manufactured over the past two decades are susceptible to two exploits, named "Meltdown" and "Spectre." (Yes, that means your computer, smartphone, and tablet are affected.)
The researchers who discovered the bugs have assembled a helpful website full of information about the vulnerabilities, with links to incident responses by various technology companies.
Computer programs are not supposed to be able to read certain data from other programs. Yet the Meltdown and Spectre hardware bugs could allow a malicious actor to bypass memory isolation and access "secrets stored in the memory of other running programs"—like passwords, photos, emails, communications, and personal documents.
While serious vulnerabilities affecting browsers and other software are unfortunately rather common, the Spectre and Meltdown bugs are noteworthy both for the extent of their reach and the fact that they affect the very chips that make all of our devices run.
Both exploits abuse processors in similar ways, but there are differences between them. The Meltdown vulnerability, which affects Intel processors along with some Apple- and ARM-designed chips, effectively "melts down" the memory protections that the hardware normally enforces; it mostly concerns desktop, laptop, and cloud computers. The Spectre vulnerability, on the other hand, tricks programs running on Intel, ARM, and AMD chips into leaking sensitive information to other applications, and is a problem for smartphone and tablet users as well. While patches for both vulnerabilities have been pushed out at the software level, the researchers note that the Spectre bug does not have an easy fix and will "haunt us for some time." (Also noteworthy: the Meltdown patch, based on a technique called KAISER, may slow processing speeds by up to 30 percent on some workloads, an annoying headache that will surely make users grumble.)
Intel's stock price took a mighty hit upon news of the exploits, although the characterization of Meltdown and Spectre as an "Intel bug" is not quite right. The vulnerabilities affect popular chipmakers ARM and AMD as well, not to mention Apple hardware.
Some have speculated that fully steeling systems against these bugs will require a recall and a redesign of how processors are made, perhaps especially so in the case of the tricky Spectre bug. A security alert from US-CERT, the Department of Homeland Security's primary cybersecurity coordination body, notes that "patching may not fully address these vulnerabilities" because they exist "in CPU architecture rather than software." This means that the software fixes pushed so far may be only an intermediate step toward a full solution. Yet Intel, perhaps not surprisingly, has sought to downplay the general threat, stating that a software patch will be sufficient to render its products "immune" from the exploits.
How to protect yourself
So what do the Spectre and Meltdown bugs mean for the average person? Don't panic: it's highly unlikely that hackers would first look to target average Joes like you and me. They would be much more likely to attack the big guys like Amazon Web Services and Microsoft Azure, because that's where the money is. Still, it's always good to be proactive.
If you haven't already, it's probably a good idea to cease sharing sensitive data on vulnerable devices (which includes basically all computing products, save for perhaps the trusty Nokia 3310) until you verify that the necessary fixes have been installed.
While no malicious uses of the vulnerabilities have been discovered in the wild yet, proof-of-concept attacks have been successfully deployed. Technology companies have been scrambling behind the scenes to issue patches before hackers have the chance to exploit the flaws. Some of these patches have been automatically pushed out by firms like Apple and Google, but it never hurts to check just to be sure.
Even if the necessary security patches are installed on your device, there is still a chance that issues will remain, or that a hardware recall and replacement is truly needed, or that the patches themselves could be a new vector for another exploit. As such, this should serve as yet another reminder of the importance of our digital hygiene. It is our responsibility to safeguard our own data and ensure that we are being as prudent as possible with our online sharing. You can even use this as an excuse to adopt a late New Year's resolution: take your data protection seriously in 2018!
Why we need open security research
This is not merely a story about our vulnerable digital reality. It is equally a case study of the importance of open security research, and a cautionary tale about what can happen if vulnerability disclosure is concealed or even weaponized.
Journalist Andy Greenberg of WIRED has published an excellent account of how disparate teams of security researchers across the globe independently discovered the chip bugs. He notes the curious coincidence that four separate groups of computer scientists should somehow all locate the world's worst CPU vulnerabilities within weeks of each other. After all, these bugs may have existed for decades. Why would there be a veritable tsunami of disclosures in this particular year?
Greenberg traces the genesis of this particular "bug collision" (or simultaneous vulnerability discovery) to a blog post by security researcher Anders Fogh last July, in which he brainstormed how an industry-standard microprocessor feature designed to improve speed, called "speculative execution," could be exploited to undermine security. This got the gears turning for other researchers, who started to puzzle out how else processors could be vulnerable. Months later, two separate groups of researchers, at Graz University of Technology in Austria and at the German security firm Cyberus Technology, applied Fogh's techniques and discovered the troubling bugs.
Meanwhile, a 22-year-old hacker named Jann Horn, who had recently started a stint at Google's bug-finding Project Zero, had independently discovered the vulnerabilities weeks before Fogh's clarion call while poring over Intel's chip documentation for an unrelated project. And Paul Kocher, founder of San Francisco's Cryptography Research, likewise channeled his own suspicions about speculative execution into identifying the bugs without input from any other team.
When these researchers went to responsibly report their findings to chipmakers like Intel, they were surprised to learn that others had preceded them. Industry developers from Intel, Amazon, and Google appear to have been aware of the issue by October, as their contributions to the popular Linux kernel mailing list suggested they were already interested in the KAISER patch.
On the one hand, this is a story about what is possible when bright and public-minded security researchers are free to tinker with systems and responsibly report the vulnerabilities they find. But on the other hand, it raises uncomfortable questions about the extent of such deeply-threatening vulnerabilities and whether other, less public-minded parties may have been exploiting them all along.
The researchers surveyed in Greenberg's article relate their unsettling sense that they were surely not the first to discover these bugs. Many of them are quite certain that intelligence agencies were aware of the flaws, and perhaps even exploited them for years. For what it's worth, the National Security Agency denies that it even knew about the flaws, let alone abused them. Yet many Americans will surely find this hard to believe.
Regardless of whether or not intelligence agencies exploited these particular bugs, the fact of the matter is that they have stockpiled and deployed a trove of zero-day vulnerabilities for years in the service of surveillance missions. As the recent Shadow Brokers saga illustrates, these "assets" can quickly turn into dangerous liabilities should they be released into the wild. This kind of short-sighted bug hiding ultimately makes everyone less safe online, and can ironically even undermine the very intelligence goals for which the bugs were initially hoarded.
The bottom line is that security is hard. We need to both actively manage our own data sharing and promote a policy environment which protects and encourages responsible vulnerability disclosure. We can't afford to wait for the next security catastrophe before we start taking these necessary steps.
This is a good article and I appreciate the details.
I'm a little put off by phrases like 'public minded security researchers'. I've been part of the open source movement for years (which is admittedly not what the author of the article is talking about), and I can tell you I'm not doing this because I'm public minded. I'm doing it because high quality software takes a long time to develop and benefits from a large developer base. In other words, I'm doing it because I believe I'll get a better outcome than other alternatives.
Security researchers exist. It would be interesting to find out why they do what they do, so that the rest of us can try to give them incentives to continue.
I'd agree that "Public Minded" is probably stealing a base. I work closely with the security team at my company, and they in turn work closely with many third party security companies. Most of the public work that the security firms perform supports their profit-making activities. They get big consulting fees and (in my company's case) decent bounties for identifying exploits on our systems.
That said, I can see multiple reasons why people do what they do. I'm sure some believe they are civic warriors fighting the good fight against chaos. Others like the notoriety and fame of being the first to discover something. Still others are using this as a coordinated part of their PR strategy that lands them big corporate gigs. More likely, it is a combination of all of the above.
One incentive is to protect America and our way of life.
Not to get too patriotic or anything, but people undermining American symbols and fundamental qualities takes its toll on those who sacrifice to help keep America as free as possible. There are people who put their lives on the line for peanuts and do it to protect America's strongest qualities, like freedom of speech and the free market.
Life is cheap around the World and there are a lot of people who want to take what you have and what America has or at least see America fall.
There are a lot of American bureaucrats who don't get paid as much as civilian jobs would pay to enforce the law and spy on foreign powers and companies. Some of these people are dedicated but misguided by bureaucracy on how to protect America and some are corrupt assholes who violate the Constitution and American laws.
In the end, I think the US government's push for companies to create backdoors in all software and hardware for US intel to gain access was a long term mistake. Americans would be safer if the US had the best security software in the World.
This is probably how a certain someone keeps getting firsties.
The patch appears to be working.
Noooooooooooooo!
This is what happens when you allow your government to have a spy-on-your-own-citizens culture rather than a protect-your-own-citizens culture built on combined efforts to find the best possible computer security features.
With some speculation on my part and some confirmation, the USA exports a bunch of computer technology to infiltrate compromised hardware and software around the World. US intelligence agencies "activate" the exploits in our "enemy's" systems while leaving these exploits alone in our "allies". This has been the intelligence agency's strategy to protect America for decades.
The huge downside is that American systems are vulnerable too, to anyone aware of the exploits. Sooner or later other nations will build many of the chips and systems and add in their own exploits, and we are already there.
Now America will need years to switch to a defensive security strategy. That is, if short-sighted law enforcement will allow Apple to make its iPhones immune to any hack, because apparently catching a few criminals outweighs protecting all Americans.
My conspiratorial side agrees with you that these security flaws are not accidental, but if you think they are activated only for the "bad guys", I've got several bridges to sell you, cheap.
It isn't all the fault of the CPU. Speculative execution in the CPU, designed to improve performance, had to be combined with memory page table sharing in the OS, also designed to improve performance. The patch removes the page table sharing between the kernel and programs which will force more cache refreshes slowing memory access.
So you're saying that the issue is more nuanced than presented? That ruins the narrative, comrade.
Anything technical is always more nuanced than what most of the people reporting on it are capable of understanding.
As best as I could summarize:
The page table maps virtual memory, which is used by the kernel and programs, to physical memory. A buffer holds a subset of the page table in a CPU cache. Each program has a separate virtual memory pool and page table. Every time the CPU changes programs it has to access a different page table which flushes the buffer. To speed up processing most OS designs split the use of the buffer between the kernel and program so that the subset of the kernel page table is always in the buffer. The bug is that speculative execution allows a user program to then execute a memory read against the kernel's buffered page table. Although the retrieval to the program is blocked by the CPU, a carefully crafted program can leave artifacts in registers that can be read by the program and used to reconstruct data.
I am sorry, but you are absolutely wrong in the case of Meltdown. The Intel CPUs make memory accessible (through prefetch) without checking whether or not the executing program has the permission to read this memory. The fact that the prefetcher can even be convinced to read in certain addresses just adds to the pain by making it more deterministic to go through the kernel's memory. This is only an issue for Intel (and a few ARM designs), as a better hardware implementation would check whether the memory is permitted for the process before prefetch. Others (AMD, e.g.) got this right, Intel got it wrong, so it is absolutely the fault of the Intel CPU. Hence also why the disconnect of page tables only happens on affected platforms and not on all of them for the Linux patches and other open-source systems (where we can track this; I'd be surprised if Windows did it differently, but who knows).
Spectre is a different beast.
The prefetch is permitted because the kernel and program page table caches are in the same buffer by design of the OS. That is why it is OS patchable by separating the caches.
Not true, as demonstrated by AMD's implementation which wasn't susceptible to the meltdown bug.
Intel chips allow user programs to speculatively use kernel data, and the access check (to see if the kernel memory is accessible to a user program) happens some time after the instruction starts executing. The speculative execution is properly blocked, but the impact that speculation has on the processor's cache can be measured. With careful timing, this can be used to infer the values stored in kernel memory.
For systems with Intel chips, the impact is quite severe, as potentially any kernel memory can be read by user programs. It's this attack that the operating system patches are designed to fix. It works by removing the shared kernel mapping, an operating system design that has been a mainstay since the early 1990s due to the efficiency it provides. Without that shared mapping, there's no way for user programs to provoke the speculative reads of kernel memory, and hence no way to leak kernel information. But it comes at a cost: it makes every single call into the kernel a bit slower, because each switch to the kernel now requires the kernel page to be reloaded.
meltdownattack.com
Meltdown is a side-channel attack that uses cache access timing to pass information. It consists of 3 steps:
Step 1
The content of an attacker-chosen memory location, which is inaccessible to the attacker, is loaded into a register.
Step 2
A transient instruction accesses a cache line based on the secret content of the register.
Step 3
The attacker uses Flush+Reload to determine which cache line was accessed, and hence the secret stored at the chosen memory location.
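The three steps above can be followed end to end in a toy simulation. Nothing here touches real hardware: the CPU cache is modeled as a Python set, and all names (`PAGE`, `transient_access`, `flush_reload`) are illustrative rather than taken from the papers.

```python
# Toy model of the Meltdown read sequence. In a real attack the cache
# state would be inferred from memory-access latencies; here membership
# in a set stands in for "this line is cached."

PAGE = 4096  # one probe-array slot per possible byte value, one page apart

def transient_access(secret_byte, cache):
    """Steps 1-2: the transient load touches one probe-array page indexed
    by the secret byte. The architectural result is squashed, but the
    cache side effect survives."""
    cache.add(secret_byte * PAGE)

def flush_reload(cache):
    """Step 3: probe every possible page; the one that is 'fast'
    (already cached) reveals the secret byte."""
    for value in range(256):
        if value * PAGE in cache:   # real code compares access timings
            return value
    return None

secret = 0x2A        # the inaccessible byte the attacker wants
cache = set()        # stands in for the CPU data cache
transient_access(secret, cache)
recovered = flush_reload(cache)
print(f"recovered byte: {recovered:#04x}")  # recovered byte: 0x2a
```

The point of the model is that the secret is never returned to the attacker directly; it is encoded in *which* cache line got touched, and decoded purely by timing.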
Awful, half-informed article.
There are 3 issues: #1 and #2 are what is called Spectre. Spectre gives user programs access to user-program data. #3 (and #3a) is called Meltdown and gives access to kernel/operating-system/privileged data from user code. #1 and #2 apply to Intel, AMD, ARM, and a whole host of other CPUs, because they exploit features common to modern CPU architectures. Their fix is, because it is such a fundamental issue, not trivial, but the impact is smaller in both performance and security; still, I personally expect we will see similar attacks for years and play catch-up. #3 only applies to Intel and (for #3a, which is less severe) a few ARM designs as of now. The issue with Meltdown is that the CPU reads memory, making it accessible to a user process, without checking whether the process has the permission to do so. Much worse.
The paragraph on "personal data hygiene" is true in the sense that you should make sure to also have enough vitamins every day - doesn't help you much against a bullet wound though. This gives - in Meltdown's case - user software access to encryption keys. This is NOT about online sharing!
The thesis of the article is that somehow, with open security research, these things would not happen in the future. But open security research and responsible disclosure are already established, and they happened here. Or is the argument that open CPU designs will magically not have bugs? Or are we just shouting buzzwords from rooftops?
There are more inaccuracies.
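The Spectre half of that taxonomy (#1, bounds-check bypass) can also be sketched as a toy simulation. No real speculation occurs: the mispredicted branch is modeled explicitly with a flag, the cache is a set, and every name is illustrative.

```python
# Toy model of Spectre variant 1 (bounds-check bypass): a victim's own
# correctly-written bounds check is speculatively skipped, and the
# out-of-bounds value leaks through a cache side effect.

PAGE = 4096

# Victim memory layout: a small public array with a secret byte
# sitting "just past" its end (in reality, any in-process secret).
array1 = [1, 2, 3, 4]
secret = [0x53]

def victim(index, cache, mispredict):
    """Gadget shaped like: if index < len(array1): touch(probe[array1[index]]).
    With the branch predictor trained to take the branch, the body runs
    speculatively even for an out-of-bounds index."""
    in_bounds = index < len(array1)
    if in_bounds or mispredict:          # models the mispredicted branch
        value = array1[index] if in_bounds else secret[index - len(array1)]
        cache.add(value * PAGE)          # cache side effect survives squash

def attacker():
    cache = set()
    victim(len(array1), cache, mispredict=True)  # out-of-bounds "read"
    # Flush+Reload-style probe: find which slot the victim touched.
    return next(v for v in range(256) if v * PAGE in cache)

leaked = attacker()
print(hex(leaked))  # 0x53
```

Note the contrast with Meltdown: here the leak comes from tricking a victim program into mis-speculating past its own bounds check, not from the CPU deferring a kernel-privilege check, which is why the two need different mitigations.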
Needz moar blockchain.
Blockchain runs on these CPUs.
Yeah. Also, I am wondering what an "Apple CPU" is. Desktop products use run-of-the-mill Intel. Mobile products use ARM-derived cores. But if the latter are "Apple CPUs," why on earth are we not talking about Samsung or Qualcomm or MediaTek CPUs (which are also custom ARM designs and just as numerous)?
Also, is it *certain* that the Apple ARM core has a Meltdown bug? There are only a few ARM designs with issue #3a present.
I fear that the writer has so little knowledge of what's going on that the desktop CPUs (which are just Intel CPUs) are what is meant? Or maybe the operating system getting a patch?
Similar with "Apple hardware": why would that matter? Dell hardware has the issue - if it has Intel CPUs. The motherboard and shiny package (I know, that's what Apple customers pay good money for...) does not matter. It's only the CPU which has bugs and the operating system which does or does not have mitigation in place.
And "active content" makes it worse: web-pages nowadays are largely programs (in the JavaScript programming language) that describe to the browser how the page should be drawn on-screen. Unfortunately, JavaScript is quite powerful enough to exploit these bugs -- making it even more important to run browser-extensions like NoScript or Ghostery (or to simply turn off JavaScript completely, which unfortunately breaks _lots_ of web-pages).
And potentially the same problems show up in stand-alone documents, since "docx" and "pdf" are really programming languages, too.
What does that even mean? Who is this "we" that you're talking about? If you want to conduct "open security research", feel free to do so. If you want to pay for other people to conduct "open security research", feel free to pay them. Otherwise, fsck off and stop making collectivist pronouncements about what "we need".
I am aware of two bugs: Spectre and Meltdown. Spectre can't easily be fixed and will need the computer chips themselves to be redesigned and made secure, but Meltdown can be patched through an update.
It is a fundamental flaw in the way processors have been built over the last decades.
The division bug was a much more clear-cut fault: there was an error in the hardwired constants used in the division algorithm and IIRC no microcode mitigation was possible.