Tag Archives: Spectre

Kernel 4.17 released

Post Syndicated from corbet original https://lwn.net/Articles/756373/rss

Linus has released the 4.17 kernel, which
will indeed be called “4.17”.
No, I didn’t call it 5.0, even though all the git object count
numerology was in place for that. It will happen in the not _too_
distant future, and I’m told all the release scripts on kernel.org are
ready for it, but I didn’t feel there was any real reason for it.

Headline features in this release include improved load estimation in the CPU scheduler, raw BPF tracepoints, lazytime support in the XFS filesystem, full in-kernel TLS protocol support, histogram triggers for tracing, mitigations for the latest Spectre variants, and, of course, the removal of support for eight unloved processor architectures.

C is too low level

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/c-is-too-low-level.html

I’m in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming C is not a low-level language is bunk. C certainly has some problems, but it’s still the closest language to assembly. This is obvious from the fact that it’s still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we’ve been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, one where CPU designers instead target a language like LISP or Erlang. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.


The author criticizes things like “out-of-order” execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here’s the thing, the Sparc Tx was a failure. To be fair, it’s mostly a failure because most of the time, people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that “out-of-order”, single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because this theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order that they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, which are deeply out-of-order and support more than 100 instructions in flight.

The Cavium is especially telling. Its ThunderX CPU had 48 simple cores, which it replaced in the ThunderX2 with 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel’s dominance in the server space has learned the lesson of the Sparc Tx: many wimpy cores is a failure; you need fewer, beefier cores. Yes, they don’t need to be as beefy as Intel’s processors, but they need to be close.

Even Intel’s “Xeon Phi” custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit-wide “vector” (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads per core and focuses on basic number crunching, so in-order cores seem like the right approach, but Intel found that even here out-of-order processing still provided a benefit. Practice is different from theory.

As an academic, the author of the above article focuses on abstractions. The criticism of C is that it has the wrong abstractions which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.

This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like “intrinsics” (C ‘functions’ that map directly to assembly instructions). Programmers write libraries using these intrinsics, which the rest of us normal programmers then use. In other words, even if your criticism is that C itself is not low level enough, it still provides the best access to low-level capabilities.
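
To make this concrete (my example, not one from the article), here is what programming through intrinsics looks like: the _mm_* ‘functions’ from the immintrin.h header map essentially one-to-one onto SSE instructions, so the C below compiles down to a few vector operations rather than a scalar loop.

```c
#include <immintrin.h>   /* SSE/AVX intrinsics on GCC, Clang, and MSVC */
#include <stdio.h>

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float out[4];

    __m128 va = _mm_loadu_ps(a);      /* unaligned 128-bit load (MOVUPS) */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);   /* one ADDPS: four float additions at once */
    _mm_storeu_ps(out, vc);

    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```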

Given that C can access new functionality in CPUs, CPU designers keep adding new paradigms, from SIMD to transactional memory. In other words, while in the 1980s CPUs were designed to optimize C (stacks, scaled pointers), these days CPUs are designed to optimize tasks regardless of language.
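
Transactional memory is a good example of this. Intel’s TSX/RTM extension is exposed to C through intrinsics in that same immintrin.h header, so the new paradigm is available to C programmers as soon as the silicon ships. A rough sketch (mine, and it assumes a CPU with RTM support and compilation with -mrtm):

```c
#include <immintrin.h>   /* _xbegin()/_xend(); compile with -mrtm */

/* Move 'amount' between two counters atomically using a hardware
 * transaction.  Returns 0 on commit, -1 if the transaction aborted
 * (in which case a real program would retry or fall back to a lock). */
int transfer(long *from, long *to, long amount) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        *from -= amount;
        *to   += amount;
        _xend();          /* commit: both writes become visible atomically */
        return 0;
    }
    return -1;            /* aborted: execution resumed here with an abort code */
}
```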

The author of that article criticizes the memory/cache hierarchy, claiming it has problems. Yes, it has problems, but only compared to how well it normally works. The author praises the many-simple-cores/threads idea for hiding memory latency with little caching, but misses the point that caches also dramatically increase memory bandwidth. Intel processors are optimized to read a whopping 256 bits from L1 cache every clock cycle. Main memory bandwidth is orders of magnitude slower.

The author goes on to criticize cache coherency as a problem. C uses it, but other languages like Erlang don’t need it. But that’s largely due to the problems each language solves. Erlang solves the problem where a large number of threads work on largely independent tasks, needing to send only small messages to each other across threads. The problem C solves is when you need many threads working on a huge, common set of data.

For example, consider an “intrusion prevention system”. Any thread can process any incoming packet that corresponds to any region of memory. There’s no practical way of solving this problem without a huge coherent cache. It doesn’t matter which language or abstractions you use; it’s the fundamental constraint of the problem being solved. RDMA is an important concept that’s moved from supercomputer applications to the data center, such as with memcached. Again, we have the problem of huge quantities (terabytes’ worth) of data shared among threads rather than small quantities (kilobytes).

The fundamental issue the author of the paper is ignoring is decreasing marginal returns. Moore’s Law has gifted us more transistors than we can usefully use. We can’t apply those additional transistors to just one thing, because the useful returns we get diminish.

For example, Intel CPUs have two hardware threads per core. That’s because there are good returns from adding a single additional thread. However, the usefulness of adding a third or fourth thread decreases. That’s why many CPUs have only two threads, or sometimes four, but no CPU has 16 threads per core.

You can apply the same discussion to any aspect of the CPU, from register count, to SIMD width, to cache size, to out-of-order depth, and so on. Rather than focusing on one of these things and increasing it to the extreme, CPU designers make each a bit larger with every process shrink that adds more transistors to the chip.

The same applies to cores. It’s why the “more simpler cores” strategy fails: additional cores have their own decreasing marginal returns. Instead of adding cores tied to limited memory bandwidth, it’s better to add more cache. But cache already increases the size of the cores, so at some point it’s more effective to add a few out-of-order features to each core rather than more cores. And so on.

The question isn’t whether we can change this paradigm and radically redesign CPUs to match some academic’s view of the perfect abstraction. Instead, the goal is to find new uses for those additional transistors. For example, “message passing” is a useful abstraction in languages like Go and Erlang that’s often more useful than sharing memory. Today it’s implemented with shared memory and atomic instructions, but I can’t help thinking it could be done better with direct hardware support.
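
To see what that software implementation amounts to (my sketch, nothing from the article), here is a bare-bones single-producer, single-consumer channel built from ordinary shared memory and C11 atomics. Language runtimes build their channels out of pieces much like this, plus locking and scheduler integration.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 256                      /* must be a power of two */

struct channel {
    _Atomic size_t head;                   /* written only by the producer */
    _Atomic size_t tail;                   /* written only by the consumer */
    int slots[RING_SIZE];                  /* shared memory carrying the "messages" */
};

/* Producer side: returns false if the channel is full. */
static bool channel_send(struct channel *ch, int msg) {
    size_t head = atomic_load_explicit(&ch->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&ch->tail, memory_order_acquire);
    if (head - tail == RING_SIZE)
        return false;                      /* full */
    ch->slots[head & (RING_SIZE - 1)] = msg;
    /* The release store publishes the slot write to the consumer. */
    atomic_store_explicit(&ch->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side: returns false if the channel is empty. */
static bool channel_recv(struct channel *ch, int *msg) {
    size_t tail = atomic_load_explicit(&ch->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&ch->head, memory_order_acquire);
    if (tail == head)
        return false;                      /* empty */
    *msg = ch->slots[tail & (RING_SIZE - 1)];
    atomic_store_explicit(&ch->tail, tail + 1, memory_order_release);
    return true;
}
```

If CPU designers ever did add direct hardware support for message passing, code like this is what it would replace.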

Of course, as soon as they do that, it’ll become an intrinsic in C, then added to languages like Go and Erlang.

Summary

Academics live in an ideal world of abstractions; the rest of us live in practical reality. The reality is that the vast majority of programmers work with the C family of languages (JavaScript, Go, etc.), whereas academics love the epiphanies they learned using other languages, especially functional languages. CPUs are only superficially designed to run C and maintain “PDP-11 compatibility”. Instead, they keep adding features to support other abstractions, abstractions that are then made available to C. They are driven by decreasing marginal returns — they would love to add new abstractions to the hardware because it’s a cheap way to make use of additional transistors. Academics are wrong to believe that the entire system needs to be redesigned from scratch. Instead, they just need to come up with new abstractions that CPU designers can add.

Another Spectre-Like CPU Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/another_spectre.html

Google and Microsoft researchers have disclosed another Spectre-like CPU side-channel vulnerability, called “Speculative Store Bypass.” Like the others, the fix will slow the CPU down.

The German tech site Heise reports that more are coming.

I’m not surprised. Writing about Spectre and Meltdown in January, I predicted that we’ll be seeing a lot more of these sorts of vulnerabilities.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown.

I still predict that we’ll be seeing lots more of these in the coming months and years, as we learn more about this class of vulnerabilities.

Spectre variants 3a and 4

Post Syndicated from corbet original https://lwn.net/Articles/755114/rss

Intel has, finally, disclosed
two more Spectre variants, called 3a and 4. The first (“rogue system
register read”) allows system-configuration registers to be read
speculatively, while the second (“speculative store bypass”) could enable
speculative reads to data after a store operation has been speculatively
ignored. Some more information on variant 4 can be found in the Project Zero bug tracker. The fix is to install microcode updates, which are not yet available.

Schaller: Warming up for Fedora Workstation 28

Post Syndicated from corbet original https://lwn.net/Articles/752901/rss

Christian Schaller looks
forward to the Fedora 28 release
(which will evidently be the first on-time Fedora release ever).
The Spectre/Meltdown situation did hammer home to a lot of people
the need to have firmware updates easily available and easy to update. We
created the Linux Vendor Firmware service for Fedora Workstation users with
that in mind and it was great to see the service paying off for many Linux
users, not only on Fedora, but also on other distributions who started
using the service we provided. I would like to call out to Dell who was a
critical partner for the Linux Vendor Firmware effort from day 1 and thus
their users got the most benefit from it when Spectre and Meltdown
hit. Spectre and Meltdown also helped get a lot of other vendors off the
fence or to accelerate their efforts to support LVFS and Richard Hughes and
Peter Jones have been working closely with a lot of new vendors during this
cycle to get support for their hardware and devices into LVFS.

[$] Finding Spectre vulnerabilities with smatch

Post Syndicated from corbet original https://lwn.net/Articles/752408/rss

The furor over the Meltdown and Spectre vulnerabilities has calmed a bit —
for now, at least — but that does not mean that developers have stopped
worrying about them. Spectre variant 1 (the bounds-check bypass
vulnerability) has been of particular concern because, while the kernel is
thought to contain numerous vulnerable spots, nobody really knows how to
find them all. As a result, the defenses that have been developed for
variant 1 have only been deployed in a few places. Recently, though,
Dan Carpenter has enhanced the smatch tool to enable it to find possibly
vulnerable code in the kernel.
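
For readers who have not seen one, the variant-1 pattern being hunted for has a well-known shape; the example below is adapted from the original Spectre paper, not taken from Carpenter's work. The kernel's deployed defense is to clamp such attacker-controlled indices with helpers like array_index_nospec() from <linux/nospec.h>.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: the classic bounds-check-bypass (Spectre variant 1)
 * gadget.  'x' is attacker-controlled. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];

uint8_t victim_function(size_t x) {
    if (x < array1_size) {
        /* The CPU may speculate past the bounds check with an out-of-range
         * x.  The secret byte array1[x] then selects which line of array2
         * is pulled into the cache, and the attacker recovers its value
         * afterward with a cache-timing probe such as flush+reload. */
        return array2[array1[x] * 4096];
    }
    return 0;
}
```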

The 4.16 kernel is out

Post Syndicated from corbet original https://lwn.net/Articles/750693/rss

Linus has released the 4.16 kernel, as
expected. “We had a number of fixes and cleanups elsewhere, but none
of it made me go ‘uhhuh, better let this soak for another week’
”.
Some of the headline changes in this release include initial support for
the Jailhouse
hypervisor, the usercopy whitelisting
hardening patches, some improvements to the deadline scheduler and, of
course, a lot of Meltdown and Spectre mitigation work.

Another Branch Prediction Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/another_branch_.html

When Spectre and Meltdown were first announced earlier this year, pretty much everyone predicted that there would be many more attacks targeting branch prediction in microprocessors. Here’s another one:

In the new attack, an attacker primes the PHT by running branch instructions so that the PHT will always assume a particular branch is taken or not taken. The victim code then runs and makes a branch, potentially disturbing the PHT. The attacker then runs more branch instructions of its own to detect that disturbance to the PHT; the attacker knows that some branches should be predicted in a particular direction and tests to see if the victim’s code has changed that prediction.

The researchers looked only at Intel processors, using the attacks to leak information protected using Intel’s SGX (Software Guard Extensions), a feature found on certain chips to carve out small sections of encrypted code and data such that even the operating system (or virtualization software) cannot access it. They also described ways the attack could be used against address space layout randomization and to infer data in encryption and image libraries.

Research paper.

Qubes OS 4.0 has been released

Post Syndicated from ris original https://lwn.net/Articles/750318/rss

The security-focused distribution Qubes OS has released
version 4.0. “This release delivers on the features we promised in our announcement of Qubes 4.0-rc1, with some course corrections along the way, such as the switch from HVM to PVH for most VMs in response to Meltdown and Spectre. For more details, please see the full Release Notes.”

LLVM 6.0.0 released

Post Syndicated from corbet original https://lwn.net/Articles/748863/rss

Version 6.0.0 of the LLVM compiler suite is out.
This release is the result of the community’s work over the past six
months, including: retpoline Spectre variant 2 mitigation,
significantly improved CodeView debug info for Windows, GlobalISel by
default for AArch64 at -O0, improved scheduling on several x86
micro-architectures, Clang defaults to -std=gnu++14 instead of
-std=gnu++98, support for some upcoming C++2a features, improved
optimizations, new compiler warnings, many bug fixes, and more.

New Spectre/Meltdown Variants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/new_spectremelt.html

Researchers have discovered new variants of Spectre and Meltdown. The software mitigations for Spectre and Meltdown seem to block these variants, although the eventual CPU fixes will have to be expanded to account for these new attacks.

[$] Meltdown and Spectre mitigations — a February update

Post Syndicated from corbet original https://lwn.net/Articles/746551/rss

The initial panic over the Meltdown and Spectre processor vulnerabilities
has faded, and work on mitigations in the kernel has slowed since our mid-January report. That work has not
stopped, though. Fully equipping the kernel to protect systems from these
vulnerabilities is a task that may well require years. Read on for an
update on the current status of that work.

Huang: Spectre/Meltdown Pits Transparency Against Liability

Post Syndicated from corbet original https://lwn.net/Articles/746111/rss

Here’s a blog post
from “bunnie” Huang
on the tension between transparency and product
liability around hardware flaws. “The open source community could
use the Spectre/Meltdown crisis as an opportunity to reform the status
quo. Instead of suing Intel for money, what if we sue Intel for
documentation? If documentation and transparency have real value, then this
is a chance to finally put that value in economic terms that Intel
shareholders can understand. I propose a bargain somewhere along these
lines: if Intel releases comprehensive microarchitectural hardware design
specifications, microcode, firmware, and all software source code (e.g. for
AMT/ME) so that the community can band together to hammer out any other
security bugs hiding in their hardware, then Intel is absolved of any
payouts related to the Spectre/Meltdown exploits.”

[$] The effect of Meltdown and Spectre in our communities

Post Syndicated from jake original https://lwn.net/Articles/745674/rss

A late-breaking development in the computing world led to a somewhat
hastily arranged panel discussion at this year’s linux.conf.au in Sydney.
The embargo for the Meltdown and Spectre vulnerabilities broke on January 4; three weeks later, Jonathan Corbet convened
representatives from five separate parts of our community, from cloud to
kernel to the BSDs and beyond. As Corbet noted in the opening, the panel
itself was organized much like the response to the vulnerabilities
themselves, which is why it didn’t even make it onto the conference schedule
until a few hours earlier.

The 4.15 kernel is out

Post Syndicated from corbet original https://lwn.net/Articles/744875/rss

Linus has released the 4.15 kernel.
After a release cycle that was unusual in so many (bad) ways, this
last week was really pleasant. Quiet and small, and no last-minute
panics, just small fixes for various issues. I never got a feeling
that I’d need to extend things by yet another week, and 4.15 looks
fine to me.

Some of the more significant features in this release include:
the long-awaited CPU controller for the
version-2 control-group interface,
significant live-patching improvements,
initial support for the RISC-V architecture,
support for AMD’s secure encrypted virtualization feature, and
the MAP_SYNC mechanism for working
with nonvolatile memory.
This release also, of course, includes mitigations for the Meltdown and Spectre variant-2 vulnerabilities, though, as Linus points out in the announcement, the work of dealing with these issues is not yet done.

The Effects of the Spectre and Meltdown Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/the_effects_of_3.html

On January 3, the world learned about a series of major security vulnerabilities in modern microprocessors. Called Spectre and Meltdown, these vulnerabilities were discovered by several different researchers last summer, disclosed to the microprocessors’ manufacturers, and patched — at least to the extent possible.

This news isn’t really any different from the usual endless stream of security vulnerabilities and patches, but it’s also a harbinger of the sorts of security problems we’re going to be seeing in the coming years. These are vulnerabilities in computer hardware, not software. They affect virtually all high-end microprocessors produced in the last 20 years. Patching them requires large-scale coordination across the industry, and in some cases drastically affects the performance of the computers. And sometimes patching isn’t possible; the vulnerability will remain until the computer is discarded.

Spectre and Meltdown aren’t anomalies. They represent a new area to look for vulnerabilities and a new avenue of attack. They’re the future of security — and it doesn’t look good for the defenders.

Modern computers do lots of things at the same time. Your computer and your phone simultaneously run several applications — or apps. Your browser has several windows open. A cloud computer runs applications for many different computers. All of those applications need to be isolated from each other. For security, one application isn’t supposed to be able to peek at what another one is doing, except in very controlled circumstances. Otherwise, a malicious advertisement on a website you’re visiting could eavesdrop on your banking details, or the cloud service purchased by some foreign intelligence organization could eavesdrop on every other cloud customer, and so on. The companies that write browsers, operating systems, and cloud infrastructure spend a lot of time making sure this isolation works.

Both Spectre and Meltdown break that isolation, deep down at the microprocessor level, by exploiting performance optimizations that have been implemented for the past decade or so. Basically, microprocessors have become so fast that they spend a lot of time waiting for data to move in and out of memory. To increase performance, these processors guess what data they’re going to receive and execute instructions based on that. If the guess turns out to be correct, it’s a performance win. If it’s wrong, the microprocessors throw away what they’ve done without losing any time. This feature is called speculative execution.

Spectre and Meltdown attack speculative execution in different ways. Meltdown is more of a conventional vulnerability; the designers of the speculative-execution process made a mistake, so they just needed to fix it. Spectre is worse; it’s a flaw in the very concept of speculative execution. There’s no way to patch that vulnerability; the chips need to be redesigned in such a way as to eliminate it.

Since the announcement, manufacturers have been rolling out patches to these vulnerabilities to the extent possible. Operating systems have been patched so that attackers can’t make use of the vulnerabilities. Web browsers have been patched. Chips have been patched. From the user’s perspective, these are routine fixes. But several aspects of these vulnerabilities illustrate the sorts of security problems we’re only going to be seeing more of.

First, attacks against hardware, as opposed to software, will become more common. Last fall, vulnerabilities were discovered in Intel’s Management Engine, a remote-administration feature on its microprocessors. Like Spectre and Meltdown, they affected how the chips operate. Looking for vulnerabilities on computer chips is new. Now that researchers know this is a fruitful area to explore, security researchers, foreign intelligence agencies, and criminals will be on the hunt.

Second, because microprocessors are fundamental parts of computers, patching requires coordination between many companies. Even when manufacturers like Intel and AMD can write a patch for a vulnerability, computer makers and application vendors still have to customize and push the patch out to the users. This makes it much harder to keep vulnerabilities secret while patches are being written. Spectre and Meltdown were announced prematurely because details were leaking and rumors were swirling. Situations like this give malicious actors more opportunity to attack systems before they’re guarded.

Third, these vulnerabilities will affect computers’ functionality. In some cases, the patches for Spectre and Meltdown result in significant reductions in speed. The press initially reported 30%, but that only seems true for certain servers running in the cloud. For your personal computer or phone, the performance hit from the patch is minimal. But as more vulnerabilities are discovered in hardware, patches will affect performance in noticeable ways.

And then there are the unpatchable vulnerabilities. For decades, the computer industry has kept things secure by finding vulnerabilities in fielded products and quickly patching them. Now there are cases where that doesn’t work. Sometimes it’s because computers are in cheap products that don’t have a patch mechanism, like many of the DVRs and webcams that are vulnerable to the Mirai (and other) botnets — groups of Internet-connected devices sabotaged for coordinated digital attacks. Sometimes it’s because a computer chip’s functionality is so core to a computer’s design that patching it effectively means turning the computer off. This, too, is becoming more common.

Increasingly, everything is a computer: not just your laptop and phone, but your car, your appliances, your medical devices, and global infrastructure. These computers are and always will be vulnerable, but Spectre and Meltdown represent a new class of vulnerability. Unpatchable vulnerabilities in the deepest recesses of the world’s computer hardware is the new normal. It’s going to leave us all much more vulnerable in the future.

This essay previously appeared on TheAtlantic.com.

MagPi 66: Raspberry Pi media projects for your home

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-66-media-pi/

Hey folks, Rob from The MagPi here! Issue 66 of The MagPi is out right now, with the ultimate guide to powering your home media with Raspberry Pi. We think the Pi is the perfect replacement or upgrade for many media devices, so in this issue we show you how to build a range of Raspberry Pi media projects.

MagPi 66

Yes, it does say Pac-Man robotics on the cover. They’re very cool.

The article covers file servers for sharing media across your network, music streaming boxes that connect to Spotify, a home theatre PC to make your TV-watching more relaxing, a futuristic Pi-powered moving photoframe, and even an Alexa voice assistant to control all these devices!

More to see

That’s not all though — The MagPi 66 also shows you how to build a Raspberry Pi cluster computer, how to control LEGO robots using the GPIO, and why your Raspberry Pi isn’t affected by Spectre and Meltdown.




In addition, you’ll also find our usual selection of product reviews and excellent project showcases.

Get The MagPi 66

Issue 66 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Want to support the Raspberry Pi Foundation and the magazine, and get some cool free stuff? If you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

I hope you enjoy this issue! See you next month.

The post MagPi 66: Raspberry Pi media projects for your home appeared first on Raspberry Pi.

On that Spectre mitigations discussion

Post Syndicated from corbet original https://lwn.net/Articles/745111/rss

By now, almost everybody has probably seen the press coverage of Linus Torvalds’s remarks about one of the
patches addressing Spectre variant 2. Less noted, but much more
informative, is David Woodhouse’s response
on why those patches are the way they are. “That’s why my initial
idea, as implemented in this RFC patchset, was to stick with IBRS on
Skylake, and use retpoline everywhere else. I’ll give you ‘garbage
patches’, but they weren’t being ‘just mindlessly sent around’. If we’re
going to drop IBRS support and accept the caveats, then let’s do it as a
conscious decision having seen what it would look like, not just drop it
quietly because poor Davey is too scared that Linus might shout at him
again.”