Tag Archives: Spectre

"Skyfall attack" was attention seeking

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/01/skyfall-attack-was-attention-seeking.html

After the Meltdown/Spectre attacks, somebody created a website promising related “Skyfall/Solace” attacks. They revealed today that it was a “hoax”.

It was a bad hoax. It wasn’t a clever troll, parody, or commentary. It was childish behavior seeking attention.
However much you hate the naming of security vulnerabilities, Meltdown/Spectre was important enough to deserve a name. Sure, from an infosec perspective, it was minor: we just patch and move on. But from an operating-system and CPU design perspective, these things were huge.
Page table isolation to fix Meltdown is a fundamental redesign of the operating system. What you learned in college about how Solaris, Windows, Linux, and BSD were designed is now out-of-date. It’s on the same scale of change as address space randomization.
The same is true of Spectre. It changes what capabilities are given to JavaScript (buffers and high resolution timers). It dramatically increases the paranoia we have of running untrusted code from the Internet. We’ve been cleansing JavaScript of things like buffer-overflows and type confusion errors, now we have to cleanse it of branch prediction issues.

Moreover, not only do we need to change software, we need to change the CPU. No, we won’t get rid of branch prediction and out-of-order execution, but there are things that can easily be done to mitigate these attacks. We won’t be recalling the billions of CPUs already shipped, and it will take a year before fixed CPUs appear on the market, but it’s still an important change. That we fix security through such a massive hardware change is by itself worthy of “names”.

Yes, the “naming” of vulnerabilities is annoying. A bunch of vulns named by their creators have disappeared, and we’ve stopped talking about them. On the other hand, we still talk about Heartbleed and Shellshock, because they were damn important. A decade from now, we’ll still be talking about Meltdown/Spectre. Even if they hadn’t been named by their creators, we still would’ve come up with nicknames to talk about them, because CVE numbers are so inconvenient.
Thus, the hoax’s mocking of the naming is invalid. It was largely incoherent rambling from somebody who really doesn’t understand the importance of these vulns and who used the hoax to promote themselves.

Kroah-Hartman: Meltdown and Spectre Linux Kernel Status – Update

Post Syndicated from corbet original https://lwn.net/Articles/744803/rss

Here’s a
brief update from Greg Kroah-Hartman
on the kernel’s handling of the
Meltdown and Spectre vulnerabilities. “This shows that my kernel is
properly mitigating the Meltdown problem by implementing PTI (Page Table
Isolation), and that my system is still vulnerable to the Spectre variant
1, but is trying really hard to resolve the variant 2, but is not quite
there (because I did not build my kernel with a compiler to properly
support the retpoline feature).”
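
For reference, kernels that carry these mitigations report their status through sysfs. Here is a minimal sketch, not part of Greg’s post, that reads those files; it assumes a kernel new enough to expose the /sys/devices/system/cpu/vulnerabilities directory:

import os

# Assumed interface: recent kernels expose one status file per issue under
# /sys/devices/system/cpu/vulnerabilities (older kernels may not have it).
VULNS_DIR = "/sys/devices/system/cpu/vulnerabilities"

for name in ("meltdown", "spectre_v1", "spectre_v2"):
    path = os.path.join(VULNS_DIR, name)
    try:
        with open(path) as f:
            print(f"{name}: {f.read().strip()}")
    except FileNotFoundError:
        print(f"{name}: no status file on this kernel")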

[$] Meltdown/Spectre mitigation for 4.15 and beyond

Post Syndicated from corbet original https://lwn.net/Articles/744287/rss

While some aspects of the kernel’s defenses against the Meltdown and
Spectre vulnerabilities were more-or-less in place when the problems were
disclosed on January 3, others were less fully formed. Additionally,
many of the mitigations (especially for the two Spectre variants) had not
been seen in public prior to the disclosure, meaning that there was a lot
of scope for discussion once they came out. Many of those discussions are
slowing down, and the kernel’s initial response has mostly come into
focus. The 4.15 kernel will include a broad set of mitigations, while some
others will have to wait for later; read on
for details on where things stand.

Kernel prepatch 4.15-rc8

Post Syndicated from corbet original https://lwn.net/Articles/744304/rss

The 4.15-rc8 kernel prepatch is out for
testing. Among other things, it includes the “retpoline” mechanism
intended to mitigate variant 2 of the Spectre vulnerability. Testing
of this change will be hard, though, since it requires a version of GCC
that almost nobody has — watch LWN for a full article in the near future.
“I’m still hoping that this will be the last
rc, despite all the Meltdown and Spectre hoopla. But we will just have to
see, it obviously requires this upcoming week to not come with any huge
surprises.”

Spectre & Meltdown Checker – Vulnerability Mitigation Tool For Linux

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/01/spectre-meltdown-checker-vulnerability-mitigation-tool-linux/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Spectre & Meltdown Checker is a simple shell script to tell whether your Linux installation is vulnerable to the three “speculative execution” CVEs that were made public in early 2018.

Without options, it’ll inspect your currently running kernel. You can also specify a kernel image on the command line if you’d like to inspect a kernel you’re not running.

The script will do its best to detect mitigations, including backported non-vanilla patches, regardless of the advertised kernel version number.


[$] A look at the handling of Meltdown and Spectre

Post Syndicated from jake original https://lwn.net/Articles/743363/rss

The Meltdown/Spectre debacle has,
deservedly, reached the mainstream press
and, likely, most of the public that has even a remote interest in computers
and security. It only took a day or so from the accelerated disclosure
date of January 3—it was originally scheduled for
January 9—before the bugs
were making big headlines. But Spectre has been known for at least six
months and Meltdown for nearly as long—at least to some in the industry.
Others that were affected were completely blindsided by the
announcements and have joined the scramble to mitigate these hardware bugs
before they bite users. Whatever else can be said about Meltdown and Spectre,
the handling (or, in truth, mishandling) of this whole incident has been a
horrific failure.

[$] Is it time for open processors?

Post Syndicated from corbet original https://lwn.net/Articles/743602/rss

The disclosure of the Meltdown and Spectre
vulnerabilities
has brought a
new level of attention to the security bugs that can lurk at the hardware
level. Massive amounts of work have gone into improving the (still poor)
security of our software, but all of that is in vain if the hardware gives
away the game. The CPUs that we run in our systems are highly proprietary
and have been shown to contain unpleasant surprises (the Intel management
engine, for example). It is thus natural to wonder whether it is time to
make a move to open-source hardware, much like we have done with our
software. Such a move may well be possible, and it would certainly offer
some benefits, but it would be no panacea.

Kroah-Hartman: Meltdown and Spectre Linux Kernel Status

Post Syndicated from corbet original https://lwn.net/Articles/743383/rss

Here’s an
update from Greg Kroah-Hartman
on the kernel’s response to Meltdown and
Spectre. “If you rely on any other kernel tree other than 4.4, 4.9, or 4.14 right now, and you do not have a distribution supporting you, you are out of luck. The lack of patches to resolve the Meltdown problem is so minor compared to the hundreds of other known exploits and bugs that your kernel version currently contains. You need to worry about that more than anything else at this moment, and get your systems up to date first.

Also, go yell at the people who forced you to run an obsoleted and insecure
kernel version, they are the ones that need to learn that doing so is a
totally reckless act.”

[$] Addressing Meltdown and Spectre in the kernel

Post Syndicated from corbet original https://lwn.net/Articles/743265/rss

When the Meltdown and Spectre vulnerabilities were disclosed on
January 3, attention quickly turned to mitigations. There was already
a clear defense against Meltdown in the form of kernel page-table isolation (KPTI), but the
defenses
against the two Spectre variants had not been developed in public and still
do not exist in the mainline kernel. Initial versions of proposed
defenses have now been disclosed. The resulting picture shows what has
been done to fend off Spectre-based attacks in the near future, but the
situation remains chaotic, to put it lightly.

Spectre and Meltdown Attacks Against Microprocessors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/spectre_and_mel_1.html

The security of pretty much every computer on the planet has just gotten a lot worse, and the only real solution — which of course is not a solution — is to throw them all away and buy new ones.

On Wednesday, researchers just announced a series of major security vulnerabilities in the microprocessors at the heart of the world’s computers for the past 15-20 years. They’ve been named Spectre and Meltdown, and they have to do with manipulating different ways processors optimize performance by rearranging the order of instructions or performing different instructions in parallel. An attacker who controls one process on a system can use the vulnerabilities to steal secrets elsewhere on the computer. (The research papers are here and here.)

This means that a malicious app on your phone could steal data from your other apps. Or a malicious program on your computer — maybe one running in a browser window from that sketchy site you’re visiting, or as a result of a phishing attack — can steal data elsewhere on your machine. Cloud services, which often share machines amongst several customers, are especially vulnerable. This affects corporate applications running on cloud infrastructure, and end-user cloud applications like Google Drive. Someone can run a process in the cloud and steal data from every other user on the same hardware.

Information about these flaws has been secretly circulating amongst the major IT companies for months as they researched the ramifications and coordinated updates. The details were supposed to be released next week, but the story broke early and everyone is scrambling. By now all the major cloud vendors have patched their systems against the vulnerabilities that can be patched against.

“Throw it away and buy a new one” is ridiculous security advice, but it’s what US-CERT recommends. It is also unworkable. The problem is that there isn’t anything to buy that isn’t vulnerable. Pretty much every major processor made in the past 20 years is vulnerable to some flavor of these vulnerabilities. Patching against Meltdown can degrade performance by almost a third. And there’s no patch for Spectre; the microprocessors have to be redesigned to prevent the attack, and that will take years. (Here’s a running list of who’s patched what.)

This is bad, but expect more of it. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.

The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computers and phones, these systems are designed and produced at a lower profit margin with less engineering expertise. There aren’t security teams on call to write patches, and there often aren’t mechanisms to push patches onto the devices. We’re already seeing this with home routers, digital video recorders, and webcams. The vulnerability that allowed them to be taken over by the Mirai botnet last August simply can’t be fixed.

The second is that some of the patches require updating the computer’s firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn’t get that update directly to users; it had to work with the individual hardware companies, and some of them just weren’t capable of getting the update to their customers.

We’re already seeing this. Some patches require users to disable the computer’s password, which means organizations can’t automate the patch. Some antivirus software blocks the patch, or — worse — crashes the computer. This results in a three-step process: patch your antivirus software, patch your operating system, and then patch the computer’s firmware.

The final reason is the nature of these vulnerabilities themselves. These aren’t normal software vulnerabilities, where a patch fixes the problem and everyone can move on. These vulnerabilities are in the fundamentals of how the microprocessor operates.

It shouldn’t be surprising that microprocessor designers have been building insecure hardware for 20 years. What’s surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren’t thinking about security. They didn’t have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

This isn’t to say you should immediately turn your computers and phones off and not use them for a few years. For the average user, this is just another attack method amongst many. All the major vendors are working on patches and workarounds for the attacks they can mitigate. All the normal security advice still applies: watch for phishing attacks, don’t click on strange e-mail attachments, don’t visit sketchy websites that might run malware on your browser, patch your systems regularly, and generally be careful on the Internet.

You probably won’t notice that performance hit once Meltdown is patched, except maybe in backup programs and networking applications. Embedded systems that do only one task, like your programmable thermostat or the computer in your refrigerator, are unaffected. Small microprocessors that don’t do all of the vulnerable fancy performance tricks are unaffected. Browsers will figure out how to mitigate this in software. Overall, the security of the average Internet-of-Things device is so bad that this attack is in the noise compared to the previously known risks.

It’s a much bigger problem for cloud vendors; the performance hit will be expensive, but I expect that they’ll figure out some clever way of detecting and blocking the attacks. All in all, as bad as Spectre and Meltdown are, I think we got lucky.

But more are coming, and they’ll be worse. 2018 will be the year of microprocessor vulnerabilities, and it’s going to be a wild ride.

Note: A shorter version of this essay previously appeared on CNN.com. My previous blog post on this topic contains additional links.

More details about mitigations for the CPU Speculative Execution issue (Google Security Blog)

Post Syndicated from jake original https://lwn.net/Articles/743269/rss

One of the main concerns about the mitigations for the Meltdown/Spectre speculative execution bugs has been performance. The Google Security Blog is reporting negligible performance impact on Google systems for two of the mitigations (kernel page-table isolation and Retpoline): “In response to the vulnerabilities that were discovered we developed a novel mitigation called “Retpoline” — a binary modification technique that protects against “branch target injection” attacks. We shared Retpoline with our industry partners and have deployed it on Google’s systems, where we have observed negligible impact on performance.
In addition, we have deployed Kernel Page Table Isolation (KPTI) — a general purpose technique for better protecting sensitive information in memory from other software running on a machine — to the entire fleet of Google Linux production servers that support all of our products, including Search, Gmail, YouTube, and Google Cloud Platform.
There has been speculation that the deployment of KPTI causes significant performance slowdowns. Performance can vary, as the impact of the KPTI mitigations depends on the rate of system calls made by an application. On most of our workloads, including our cloud infrastructure, we see negligible impact on performance.”
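
Since the KPTI cost scales with the number of user/kernel transitions, a syscall-bound loop is the worst case. The sketch below is a rough, hypothetical micro-benchmark (not from the Google post) of that worst case; running it on otherwise identical kernels with and without page-table isolation enabled (e.g. booting with the nopti parameter) would show the per-call difference:

import os, time

# Hypothetical micro-benchmark: time a tight loop of cheap system calls,
# since KPTI adds overhead on every user/kernel transition.
fd = os.open("/dev/null", os.O_RDONLY)
N = 1_000_000

start = time.perf_counter()
for _ in range(N):
    os.lseek(fd, 0, os.SEEK_SET)      # one system call per iteration
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{N} calls in {elapsed:.3f}s ({elapsed / N * 1e9:.0f} ns per call)")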

Why Raspberry Pi isn’t vulnerable to Spectre or Meltdown

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/

Over the last couple of days, there has been a lot of discussion about a pair of security vulnerabilities nicknamed Spectre and Meltdown. These affect all modern Intel processors, and (in the case of Spectre) many AMD processors and ARM cores. Spectre allows an attacker to bypass software checks to read data from arbitrary locations in the current address space; Meltdown allows an attacker to read arbitrary data from the operating system kernel’s address space (which should normally be inaccessible to user programs).

Both vulnerabilities exploit performance features (caching and speculative execution) common to many modern processors to leak data via a so-called side-channel attack. Happily, the Raspberry Pi isn’t susceptible to these vulnerabilities, because of the particular ARM cores that we use.

To help us understand why, here’s a little primer on some concepts in modern processor design. We’ll illustrate these concepts using simple programs in Python syntax like this one:

t = a+b
u = c+d
v = e+f
w = v+g
x = h+i
y = j+k

While the processor in your computer doesn’t execute Python directly, the statements here are simple enough that they roughly correspond to a single machine instruction. We’re going to gloss over some details (notably pipelining and register renaming) which are very important to processor designers, but which aren’t necessary to understand how Spectre and Meltdown work.

For a comprehensive description of processor design, and other aspects of modern computer architecture, you can’t do better than Hennessy and Patterson’s classic Computer Architecture: A Quantitative Approach.

What is a scalar processor?

The simplest sort of modern processor executes one instruction per cycle; we call this a scalar processor. Our example above will execute in six cycles on a scalar processor.

Examples of scalar processors include the Intel 486 and the ARM1176 core used in Raspberry Pi 1 and Raspberry Pi Zero.

What is a superscalar processor?

The obvious way to make a scalar processor (or indeed any processor) run faster is to increase its clock speed. However, we soon reach limits of how fast the logic gates inside the processor can be made to run; processor designers therefore quickly began to look for ways to do several things at once.

An in-order superscalar processor examines the incoming stream of instructions and tries to execute more than one at once, in one of several “pipes”, subject to dependencies between the instructions. Dependencies are important: you might think that a two-way superscalar processor could just pair up (or dual-issue) the six instructions in our example like this:

t, u = a+b, c+d
v, w = e+f, v+g
x, y = h+i, j+k

But this doesn’t make sense: we have to compute v before we can compute w, so the third and fourth instructions can’t be executed at the same time. Our two-way superscalar processor won’t be able to find anything to pair with the third instruction, so our example will execute in four cycles:

t, u = a+b, c+d
v    = e+f                   # second pipe does nothing here
w, x = v+g, h+i
y    = j+k

Examples of superscalar processors include the Intel Pentium, and the ARM Cortex-A7 and Cortex-A53 cores used in Raspberry Pi 2 and Raspberry Pi 3 respectively. Raspberry Pi 3 has only a 33% higher clock speed than Raspberry Pi 2, but has roughly double the performance: the extra performance is partly a result of Cortex-A53’s ability to dual-issue a broader range of instructions than Cortex-A7.

What is an out-of-order processor?

Going back to our example, we can see that, although we have a dependency between v and w, we have other independent instructions later in the program that we could potentially have used to fill the empty pipe during the second cycle. An out-of-order superscalar processor has the ability to shuffle the order of incoming instructions (again subject to dependencies) in order to keep its pipelines busy.

An out-of-order processor might effectively swap the definitions of w and x in our example like this:

t = a+b
u = c+d
v = e+f
x = h+i
w = v+g
y = j+k

allowing it to execute in three cycles:

t, u = a+b, c+d
v, x = e+f, h+i
w, y = v+g, j+k

Examples of out-of-order processors include the Intel Pentium 2 (and most subsequent Intel and AMD x86 processors), and many recent ARM cores, including Cortex-A9, -A15, -A17, and -A57.

What is speculation?

Reordering sequential instructions is a powerful way to recover more instruction-level parallelism, but as processors become wider (able to triple- or quadruple-issue instructions) it becomes harder to keep all those pipes busy. Modern processors have therefore grown the ability to speculate. Speculative execution lets us issue instructions which might turn out not to be required (because they are branched over): this keeps a pipe busy, and if it turns out that the instruction isn’t executed, we can just throw the result away.

To demonstrate the benefits of speculation, let’s look at another example:

t = a+b
u = t+c
v = u+d
if v:
   w = e+f
   x = w+g
   y = x+h

Now we have dependencies from t to u to v, and from w to x to y, so a two-way out-of-order processor without speculation won’t ever be able to fill its second pipe. It spends three cycles computing t, u, and v, after which it knows whether the body of the if statement will execute, in which case it then spends three cycles computing w, x, and y. Assuming the if (a branch instruction) takes one cycle, our example takes either four cycles (if v turns out to be zero) or seven cycles (if v is non-zero).

Speculation effectively shuffles the program like this:

t = a+b
u = t+c
v = u+d
w_ = e+f
x_ = w_+g
y_ = x_+h
if v:
   w, x, y = w_, x_, y_

so we now have additional instruction level parallelism to keep our pipes busy:

t, w_ = a+b, e+f
u, x_ = t+c, w_+g
v, y_ = u+d, x_+h
if v:
   w, x, y = w_, x_, y_

Cycle counting becomes less well defined in speculative out-of-order processors, but the branch and conditional update of w, x, and y are (approximately) free, so our example executes in (approximately) three cycles.

What is a cache?

In the good old days*, the speed of processors was well matched with the speed of memory access. My BBC Micro, with its 2MHz 6502, could execute an instruction roughly every 2µs (microseconds), and had a memory cycle time of 0.25µs. Over the ensuing 35 years, processors have become very much faster, but memory only modestly so: a single Cortex-A53 in a Raspberry Pi 3 can execute an instruction roughly every 0.5ns (nanoseconds), but can take up to 100ns to access main memory.

At first glance, this sounds like a disaster: every time we access memory, we’ll end up waiting for 100ns to get the result back. In this case, this example:

a = mem[0]
b = mem[1]

would take 200ns.

In practice, programs tend to access memory in relatively predictable ways, exhibiting both temporal locality (if I access a location, I’m likely to access it again soon) and spatial locality (if I access a location, I’m likely to access a nearby location soon). Caching takes advantage of these properties to reduce the average cost of access to memory.

A cache is a small on-chip memory, close to the processor, which stores copies of the contents of recently used locations (and their neighbours), so that they are quickly available on subsequent accesses. With caching, the example above will execute in a little over 100ns:

a = mem[0]    # 100ns delay, copies mem[0:15] into cache
b = mem[1]    # mem[1] is in the cache

From the point of view of Spectre and Meltdown, the important point is that if you can time how long a memory access takes, you can determine whether the address you accessed was in the cache (short time) or not (long time).
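
In the same Python-flavoured style as the rest of this post, a timing probe has roughly the shape sketched below. This is purely illustrative: real attacks use a high-resolution cycle counter (such as x86’s rdtsc), and a Python-level timer is far too coarse to observe genuine cache behaviour; mem and THRESHOLD_NS are hypothetical stand-ins.

import time

mem = list(range(1024))       # hypothetical stand-in for addressable memory
THRESHOLD_NS = 100            # assumed cut-off between "fast" and "slow"

def probe(index):
    """Return True if the timed access beat THRESHOLD_NS (a proxy for a cache hit)."""
    start = time.perf_counter_ns()
    _ = mem[index]                        # the access we are timing
    elapsed = time.perf_counter_ns() - start
    return elapsed < THRESHOLD_NS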

What is a side channel?

From Wikipedia:

“… a side-channel attack is any attack based on information gained from the physical implementation of a cryptosystem, rather than brute force or theoretical weaknesses in the algorithms (compare cryptanalysis). For example, timing information, power consumption, electromagnetic leaks or even sound can provide an extra source of information, which can be exploited to break the system.”

Spectre and Meltdown are side-channel attacks which deduce the contents of a memory location which should not normally be accessible by using timing to observe whether another location is present in the cache.

Putting it all together

Now let’s look at how speculation and caching combine to permit the Meltdown attack. Consider the following example, which is a user program that sometimes reads from an illegal (kernel) address:

t = a+b
u = t+c
v = u+d
if v:
   w = kern_mem[address]   # if we get here, crash
   x = w&0x100
   y = user_mem[x]

Now our out-of-order two-way superscalar processor shuffles the program like this:

t, w_ = a+b, kern_mem[address]
u, x_ = t+c, w_&0x100
v, y_ = u+d, user_mem[x_]

if v:
   # crash
   w, x, y = w_, x_, y_      # we never get here

Even though the processor always speculatively reads from the kernel address, it must defer the resulting fault until it knows that v was non-zero. On the face of it, this feels safe because either:

  • v is zero, so the result of the illegal read isn’t committed to w
  • v is non-zero, so the program crashes before the read is committed to w

However, suppose we flush our cache before executing the code, and arrange a, b, c, and d so that v is zero. Now, the speculative load in the third cycle:

v, y_ = u+d, user_mem[x_]

will read from either address 0x000 or address 0x100 depending on the eighth bit of the result of the illegal read. Because v is zero, the results of the speculative instructions will be discarded, and execution will continue. If we time a subsequent access to one of those addresses, we can determine which address is in the cache. Congratulations: you’ve just read a single bit from the kernel’s address space!
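
Spelled out in the same pseudocode style, the measurement step might look like the sketch below, where flush_cache, run_victim, and time_access are hypothetical helpers (along the lines of the timing probe sketched earlier) rather than anything from the post:

flush_cache()                          # evict user_mem[0x000] and user_mem[0x100]
run_victim()                           # speculation loads one of them; the result is discarded
t_lo = time_access(user_mem, 0x000)
t_hi = time_access(user_mem, 0x100)
bit = 1 if t_hi < t_lo else 0          # the faster (cached) probe reveals bit 8 of the kernel byte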

The real Meltdown exploit is more complex than this, but the principle is the same. Spectre uses a similar approach to subvert software array bounds checks.
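
For comparison, the bounds-check pattern that Spectre variant 1 abuses has roughly the following shape (this is the gadget from the Spectre paper rendered in the same pseudocode style, not code from this post). If the branch predictor guesses that the check will pass, the out-of-bounds read of array1[x] happens speculatively, and the secret-dependent second load leaves a footprint in the cache that can be recovered with the same timing trick:

if x < array1_size:                    # bounds check the CPU may speculatively ignore
    y = array2[array1[x] * 256]        # secret-dependent load leaves a cache footprint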

Conclusion

Modern processors go to great lengths to preserve the abstraction that they are in-order scalar machines that access memory directly, while in fact using a host of techniques including caching, instruction reordering, and speculation to deliver much higher performance than a simple processor could hope to achieve. Meltdown and Spectre are examples of what happens when we reason about security in the context of that abstraction, and then encounter minor discrepancies between the abstraction and reality.

The lack of speculation in the ARM1176, Cortex-A7, and Cortex-A53 cores used in Raspberry Pi renders us immune to attacks of this sort.

* days may not be that old, or that good


Spectre and Meltdown Attacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/spectre_and_mel.html

After a week or so of rumors, everyone is now reporting about the Spectre and Meltdown attacks against pretty much every modern processor out there.

These are side-channel attacks where one process can spy on other processes. They affect computers where an untrusted browser window can execute code, phones that have multiple apps running at the same time, and cloud computing networks that run lots of different processes at once. Fixing them either requires a patch that results in a major performance hit, or is impossible and requires a re-architecture of conditional execution in future CPU chips.

I’ll be writing something for publication over the next few days. This post is basically just a link repository.

EDITED TO ADD: Good technical explanation. And a Slashdot thread.

EDITED TO ADD (1/5): Another good technical description. And how the exploits work through browsers. A rundown of what vendors are doing. Nicholas Weaver on its effects on individual computers.

EDITED TO ADD (1/7): xkcd.

EDITED TO ADD (1/10): Another good technical description.