Tag Archives: gchq

Hacking the GCHQ Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/hacking_the_gch.html

Last week, I evaluated the security of a recent GCHQ backdoor proposal for communications systems. Furthering the debate, Nate Cardozo and Seth Schoen of EFF explain how this sort of backdoor can be detected:

In fact, we think when the ghost feature is active — silently inserting a secret eavesdropping member into an otherwise end-to-end encrypted conversation in the manner described by the GCHQ authors — it could be detected (by the target as well as certain third parties) with at least four different techniques: binary reverse engineering, cryptographic side channels, network-traffic analysis, and crash log analysis. Further, crash log analysis could lead unrelated third parties to find evidence of the ghost in use, and it’s even possible that binary reverse engineering could lead researchers to find ways to disable the ghost capability on the client side. It should be obvious that none of these possibilities are desirable for law enforcement or society as a whole. And while we’ve theorized some types of mitigations that might make the ghost less detectable by particular techniques, they could also impose considerable costs to the network when deployed at the necessary scale, as well as creating new potential security risks or detection methods.

Other critiques of the system were written by Susan Landau and Matthew Green.

EDITED TO ADD (1/26): Good commentary on how to defeat the backdoor detection.

Evaluating the GCHQ Exceptional Access Proposal

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/evaluating_the_.html

The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI — and some of their peer agencies in the UK, Australia, and elsewhere — argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it’s about end-user devices. On the other side of this debate are pretty much all technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.

A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK’s GCHQ (the British signals-intelligence agency — basically, its NSA). It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved — they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

On the surface, this isn’t a big ask. It doesn’t affect the encryption that protects the communications. It only affects the authentication that assures people they are talking to whom they think they are. But it’s no less dangerous a backdoor than any others that have been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.

In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications, but also changing the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.

The basic problem is that a backdoor is a technical capability — a vulnerability — that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.

That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit — not just the police — but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).

The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA — and similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop on, mirroring that old physical system with computers. It is not the same thing, though. It doesn’t have those same physical limitations that make it more secure. It can be administered remotely. And it’s implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.

This isn’t a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the same kind of lawful-intercept eavesdropping capability that CALEA mandates. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.

There’s nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine that a chat program added this GCHQ backdoor. It would have to add a feature that allows additional parties to be added to a chat from somewhere in the system — and not by the people at the endpoints. It would have to suppress any messages alerting users to another party being added to that chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the “tell me who is in this chat conversation” feature, which amounts to the same thing.
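To make that concrete, here is a minimal, purely hypothetical sketch in Python of the membership logic a secure messenger might contain and the silent exception a ghost feature would require. The class names and fields are invented for illustration; no real messenger is claimed to work this way.

```python
# Hypothetical sketch of how "who is in this chat" logic would have to
# change to support a GCHQ-style ghost participant. All names are invented.
from dataclasses import dataclass, field

@dataclass
class Participant:
    user_id: str
    public_key: bytes
    hidden: bool = False  # the ghost flag a backdoored client would need

@dataclass
class GroupChat:
    members: list = field(default_factory=list)

    def add_member(self, new_member: Participant) -> None:
        self.members.append(new_member)
        # An honest client tells every user whenever membership changes,
        # so each endpoint can re-verify who now holds the message keys.
        if not new_member.hidden:
            self.notify_all(new_member.user_id + " joined the chat")
        # A ghost is added silently: the notification is suppressed and
        # the roster shown to users omits the extra key holder.

    def visible_members(self) -> list:
        # The "tell me who is in this chat" feature now lies by omission.
        return [m for m in self.members if not m.hidden]

    def notify_all(self, text: str) -> None:
        for m in self.visible_members():
            print("[to " + m.user_id + "] " + text)
```

The point of the sketch is that the dishonesty has to live in the client itself: every copy of the app would ship with a code path that can hide a key holder from its own user, and that code path is exactly what the reverse engineering, crash-log, and traffic-analysis techniques described above could reveal.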

And once that’s in place, every government will try to hack it for its own purposes — just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the backdoor mechanism Google put in place to meet law-enforcement requests. In 2015, someone — we don’t know who — hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven’t been made public.

Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country’s government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?

Of course not.

Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate — but the resultant confusion caused some people to abandon the encrypted messaging service.

Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that “any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users,” this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.

In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They shift our reliance from security technologies we know how to do well — cryptography — to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it’s a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.

The foregoing discussion is a specific example of a broader debate that we need to have, and it’s about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement — and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and — unavoidably — law enforcement as well?

This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?

I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be by weakening security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it’s one that we would be better served by closing than exploiting.

This essay originally appeared on Lawfare.com.

EDITED TO ADD (1/30): More commentary.

GCHQ on Quantum Key Distribution

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/gchq_on_quantum.html

The UK’s GCHQ delivers a brutally blunt assessment of quantum key distribution:

QKD protocols address only the problem of agreeing keys for encrypting data. Ubiquitous on-demand modern services (such as verifying identities and data integrity, establishing network sessions, providing access control, and automatic software updates) rely more on authentication and integrity mechanisms — such as digital signatures — than on encryption.

QKD technology cannot replace the flexible authentication mechanisms provided by contemporary public key signatures. QKD also seems unsuitable for some of the grand future challenges such as securing the Internet of Things (IoT), big data, social media, or cloud applications.

I agree with them. It’s a clever idea, but basically useless in practice. I don’t even think it’s anything more than a niche solution in a world where quantum computers have broken our traditional public-key algorithms.

Read the whole thing. It’s short.

GCHQ Found — and Disclosed — a Windows 10 Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/gchq_found_--_a.html

Now this is good news. The UK’s National Cyber Security Centre (NCSC) — part of GCHQ — found a serious vulnerability in Windows Defender (Microsoft’s anti-virus component). Instead of keeping it secret, and all of us vulnerable, it alerted Microsoft.

I’d like to believe the US does this, too.

Decrypt messages and calculate Pi: new OctaPi projects

Post Syndicated from Laura Sach original https://www.raspberrypi.org/blog/pi-enigma-octapi/

Back in July, we collaborated with GCHQ to bring you two fantastic free resources: the first showed you how to build an OctaPi, a Raspberry Pi cluster computer. The second showed you how to use the cluster to learn about public key cryptography. Since then, we and GCHQ have been hard at work, and now we’re presenting two more exciting projects to make with your OctaPi!

A happy cartoon octopus holds a Raspberry Pi in each tentacle.

Maker level

These new free resources are at the Maker level of the Raspberry Pi Foundation Digital Making Curriculum — they are intended for learners with a fair amount of experience, introducing them to some intriguing new concepts.

Whilst both resources make use of the OctaPi in their final steps, you can work through the majority of the projects on any computer running Python 3.

Calculate Pi

A cartoon octopus is struggling to work out the value of Pi

3.14159…ummm…

Calculating Pi teaches you two ways of calculating the value of Pi with varying accuracy. Along the way, you’ll also learn how computers store numbers with a fractional part, why your computer can limit how accurate your calculation of Pi is, and how to distribute the calculation across the OctaPi cluster.
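The resource itself lives on the Raspberry Pi projects site; as a rough flavour of the kind of code involved (this is not the project’s actual code), here is a short Python 3 sketch of one classic approach, a Monte Carlo estimate, whose accuracy improves with the number of samples:

```python
# Illustrative only: a Monte Carlo estimate of Pi in Python 3.
# Random points are thrown at the unit square; the fraction landing
# inside the quarter circle of radius 1 approaches pi/4.
import random

def estimate_pi(samples: int) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

if __name__ == "__main__":
    for n in (1_000, 100_000, 10_000_000):
        print(n, estimate_pi(n))
```

Because each sample is independent, work like this splits naturally across the nodes of an OctaPi: every Pi estimates Pi from its own batch of samples and the results are averaged. Accuracy is ultimately limited by how many samples you can afford and by the finite precision with which the computer stores numbers with a fractional part.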

Brute-force Enigma

A cartoon octopus tries to break an Enigma code

Decrypt the message before time runs out!

Brute-force Enigma sends you back in time to take up the position of a WWII Enigma operator. Learn how to encrypt and decrypt messages using an Enigma machine simulated entirely in Python. Then switch roles and become a Bletchley Park code breaker — except this time, you’ve got a cluster computer on your side! You will use the OctaPi to launch a brute-force attack on an Enigma-encrypted message, and you’ll gain an appreciation of just how difficult this decryption task was without computers.
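To give a sense of the shape of that attack (a sketch only, not the project’s code: `enigma_decrypt` is a stand-in for whichever Python Enigma simulator you use), a brute-force search simply tries every rotor start position and keeps the settings whose output contains a known plaintext fragment, or crib:

```python
# Illustrative brute-force search over Enigma rotor start positions.
# `enigma_decrypt` is a stand-in for a Python Enigma simulator and is
# assumed to take the ciphertext plus a three-letter rotor start position.
from itertools import product

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def brute_force(ciphertext: str, crib: str, enigma_decrypt) -> list:
    hits = []
    for start in product(ALPHABET, repeat=3):   # 26**3 = 17,576 positions
        setting = "".join(start)
        plaintext = enigma_decrypt(ciphertext, setting)
        if crib in plaintext:                   # e.g. crib = "WETTERBERICHT"
            hits.append((setting, plaintext))
    return hits
```

Even this simplified search space of 17,576 start positions, which ignores rotor choice, ring settings, and the plugboard, is tedious by hand but easily divided between the nodes of a cluster, which is exactly the appreciation the project aims to give.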

Our own OctaPi

A GIF of the OctaPi cluster computer at Pi Towers
GCHQ has kindly sent us a fully assembled, very pretty OctaPi of our own to play with at Pi Towers — it even has eight snazzy Unicorn HATs which let you display light patterns and visualize simulations! Visitors to the Raspberry Jam at Pi Towers can have a go at running their own programs on the OctaPi, while we’ll be using it to continue to curate more free resources for you.

OctaPi: cluster computing and cryptography

Post Syndicated from Laura Sach original https://www.raspberrypi.org/blog/octapi/

When I was a teacher, a question I was constantly asked by curious students was, “Can you teach us how to hack?” Turning this idea on its head, and teaching the techniques behind some of our most important national cyber security measures, is an excellent way of motivating students to do good. This is why the Raspberry Pi Foundation and GCHQ have been working together to bring you exciting new resources!

More computing power with the OctaPi

You may have read about GCHQ’s OctaPi computer in Issue 58 of the MagPi. The OctaPi is a cluster computer joining together the power of eight Raspberry Pis (i.e. 32 cores) in a distributed computer system to execute computations much faster than a single Pi could perform them.

OctaPi cluster

Can you feel the power?
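As a minimal illustration of the distributed idea, on a single machine rather than across a network (the real OctaPi dispatches jobs between separate Pis, which this sketch does not attempt), Python’s standard multiprocessing module can split an embarrassingly parallel job across cores:

```python
# Illustrative only: splitting a computation into independent jobs with
# Python's multiprocessing on one machine. An OctaPi instead sends such
# jobs over the network to its eight Raspberry Pis.
from multiprocessing import Pool

def count_primes(bounds):
    # Count primes in [lo, hi) by deliberately naive trial division.
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the range 0..200000 into eight chunks, one per worker.
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(8)]
    with Pool(processes=8) as pool:
        print("primes below 200000:", sum(pool.map(count_primes, chunks)))
```

The cluster version of this pattern, with each chunk handed to a different Pi, is what the OctaPi build guide and the follow-on projects walk you through.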

We have created a brand-new tutorial on how to build your own OctaPi at home. Don’t have eight Raspberry Pis lying around? Build a TetraPi (4) or a HexaPi (6) instead! You could even build the OctaPi with Pi Zero Ws if you wish. You will be able to run any programs you like on your new cluster computer, as it has all the software of a regular Pi, but is more powerful.

OctaPi at the Cheltenham Science Festival

Understanding cryptography

You probably use public key cryptography online every day without even realising it, but now you can use your OctaPi to understand exactly how it keeps your data safe. Our new OctaPi: public key cryptography resource walks you through the invention of this type of encryption (spoiler: Diffie and Hellman weren’t the first to invent it!). In it, you’ll also learn how a public key is created, whether a brute-force attack using the OctaPi could be used to crack a public key, and you will be able to try breaking an encryption example yourself.
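By way of illustration (a toy example with deliberately tiny numbers, not the resource’s own code), the Diffie-Hellman key exchange the resource builds on, and the reason brute force only works at toy scale, fits in a few lines of Python 3:

```python
# Toy Diffie-Hellman key exchange with deliberately tiny numbers.
# Real deployments use primes thousands of bits long, which is why a
# brute-force attack is only feasible at this toy scale.
import random

p = 23   # small public prime (real p is enormous)
g = 5    # public generator

a = random.randint(2, p - 2)   # Alice's private key
b = random.randint(2, p - 2)   # Bob's private key

A = pow(g, a, p)   # Alice's public value, sent in the clear
B = pow(g, b, p)   # Bob's public value, sent in the clear

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both now hold the same shared key

# An eavesdropper who sees only p, g, A and B can try every exponent
# until one reproduces A: trivial here, hopeless at real key sizes.
for guess in range(1, p):
    if pow(g, guess, p) == A:
        print("recovered Alice's private key:", guess)
        break
```

Scaling that final loop up and sharing it out across the eight Pis of the cluster is the kind of experiment the resource lets you try, and it makes vivid why realistic key sizes put brute force out of reach.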

These resources are some of our most advanced educational materials yet, and fit in with the “Maker” level of the Raspberry Pi Foundation Digital Making Curriculum. The projects are ideal for older students, perhaps those looking to study Computer Science at university. And there’s more to come: we have two other OctaPi resources in the pipeline to make use of the OctaPi’s full capabilities, so watch this space!

GCHQ Discloses Two OS X Vulnerabilities to Apple

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/gchq_discloses_.html

This is good news:

Communications and Electronics Security Group (CESG), the information security arm of GCHQ, was credited with the discovery of two vulnerabilities that were patched by Apple last week.

The flaws could allow hackers to corrupt memory and cause a denial of service through a crafted app or execute arbitrary code in a privileged context.

The memory handling vulnerabilities (CVE-2016-1822 and CVE-2016-1829) affect OS X El Capitan v10.11 and later operating systems, according to Apple’s 2016-003 security update. The memory corruption vulnerabilities allowed hackers to execute arbitrary code with kernel privileges.

There’s still a lot that needs to be said about this equities process.