Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/enigma_typex_an.html
GCHQ has put simulators for the Enigma, Typex, and Bombe on the Internet.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/hacking_the_gch.html
Last week, I evaluated the security of a recent GCHQ backdoor proposal for communications systems. Furthering the debate, Nate Cardozo and Seth Schoen of EFF explain how this sort of backdoor can be detected:
In fact, we think when the ghost feature is active — silently inserting a secret eavesdropping member into an otherwise end-to-end encrypted conversation in the manner described by the GCHQ authors — it could be detected (by the target as well as certain third parties) with at least four different techniques: binary reverse engineering, cryptographic side channels, network-traffic analysis, and crash log analysis. Further, crash log analysis could lead unrelated third parties to find evidence of the ghost in use, and it’s even possible that binary reverse engineering could lead researchers to find ways to disable the ghost capability on the client side. It should be obvious that none of these possibilities are desirable for law enforcement or society as a whole. And while we’ve theorized some types of mitigations that might make the ghost less detectable by particular techniques, they could also impose considerable costs to the network when deployed at the necessary scale, as well as creating new potential security risks or detection methods.
EDITED TO ADD (1/26): Good commentary on how to defeat the backdoor detection.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/evaluating_the_.html
The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI — and some of their peer agencies in the UK, Australia, and elsewhere — argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it’s about end-user devices. On the other side of this debate is pretty much all technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.
A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK’s GCHQ (the British signals-intelligence agency — basically, its NSA). It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:
In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved — they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.
On the surface, this isn’t a big ask. It doesn’t affect the encryption that protects the communications. It only affects the authentication that assures people they are talking to whom they think they are talking to. But it’s no less dangerous a backdoor than any others that have been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.
In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications, but it also means changing the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.
The basic problem is that a backdoor is a technical capability — a vulnerability — that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.
That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit — not just the police — but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).
The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA — and similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop, mirroring that old physical system with computers. It is not the same thing, though. It doesn’t have those same physical limitations that make it more secure. It can be administered remotely. And it’s implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.
This isn’t a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the eavesdropping feature mandated by CALEA. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.
There’s nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine a chat program that added this GCHQ backdoor. It would need a feature that adds additional parties to a chat from somewhere in the system — and not at the request of the people at the endpoints. It would have to suppress any messages alerting users that another party had been added to the chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the “tell me who is in this chat conversation” feature, which amounts to the same thing.
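One way to make the suppression problem concrete: a client could compare the recipient keys the server distributes for a session against the membership roster it shows the user, and flag any mismatch. This is a hedged sketch of that idea in Python; the function names, data shapes, and protocol details are hypothetical and do not describe any real messenger's API:

```python
# Hypothetical client-side ghost-user check: any session key that does not
# belong to a visible roster member indicates a silently added participant.
def detect_ghost(visible_roster, session_recipient_keys, key_owner):
    """Return the owners of any session keys not matching a visible member.

    visible_roster         -- set of usernames the UI shows as chat members
    session_recipient_keys -- set of key IDs the server says to encrypt to
    key_owner              -- mapping from key ID to the username it belongs to
    """
    visible_keys = {k for k, owner in key_owner.items()
                    if owner in visible_roster}
    extra_keys = session_recipient_keys - visible_keys
    return {key_owner.get(k, "<unknown>") for k in extra_keys}

# Example: the server asks the client to encrypt to a third key ("kg")
# even though the UI only shows alice and bob in the chat.
owners = {"ka": "alice", "kb": "bob", "kg": "intercept-service"}
ghosts = detect_ghost({"alice", "bob"}, {"ka", "kb", "kg"}, owners)
```

This is exactly the check a GCHQ-style backdoor would have to defeat, which is why the proposal requires changing the client software itself.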
And once that’s in place, every government will try to hack it for its own purposes — just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the back-door mechanism Google put in place to meet law-enforcement requests. In 2015, someone — we don’t know who — hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven’t been made public.
Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country’s government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?
Of course not.
Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate — but the resultant confusion caused some people to abandon the encrypted messaging service.
Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that “any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users,” this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.
In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They reduce our reliance on security technologies we know how to do well — cryptography — to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it’s a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.
The foregoing discussion is a specific example of a broader discussion that we need to have, and it’s about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement — and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and — unavoidably — law enforcement as well?
This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?
I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be to weaken security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it’s one that we would be better served by closing than exploiting.
This essay originally appeared on Lawfare.com.
EDITED TO ADD (1/30): More commentary.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/gchq_on_quantum.html
The UK’s GCHQ delivers a brutally blunt assessment of quantum key distribution:
QKD protocols address only the problem of agreeing keys for encrypting data. Ubiquitous on-demand modern services (such as verifying identities and data integrity, establishing network sessions, providing access control, and automatic software updates) rely more on authentication and integrity mechanisms — such as digital signatures — than on encryption.
QKD technology cannot replace the flexible authentication mechanisms provided by contemporary public key signatures. QKD also seems unsuitable for some of the grand future challenges such as securing the Internet of Things (IoT), big data, social media, or cloud applications.
I agree with them. It’s a clever idea, but basically useless in practice. I don’t even think it’s anything more than a niche solution in a world where quantum computers have broken our traditional public-key algorithms.
Read the whole thing. It’s short.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/gchq_found_--_a.html
Now this is good news. The UK’s National Cyber Security Centre (NCSC) — part of GCHQ — found a serious vulnerability in Windows Defender (their anti-virus component). Instead of keeping it secret and all of us vulnerable, it alerted Microsoft.
I’d like to believe the US does this, too.
Post Syndicated from Laura Sach original https://www.raspberrypi.org/blog/pi-enigma-octapi/
Back in July, we collaborated with GCHQ to bring you two fantastic free resources: the first showed you how to build an OctaPi, a Raspberry Pi cluster computer. The second showed you how to use the cluster to learn about public key cryptography. Since then, we and GCHQ have been hard at work, and now we’re presenting two more exciting projects to make with your OctaPi!
These new free resources are at the Maker level of the Raspberry Pi Foundation Digital Making Curriculum — they are intended for learners with a fair amount of experience, introducing them to some intriguing new concepts.
Whilst both resources make use of the OctaPi in their final steps, you can work through the majority of the projects on any computer running Python 3.
Calculating Pi teaches you two ways of calculating the value of Pi with varying accuracy. Along the way, you’ll also learn how computers store numbers with a fractional part, why your computer can limit how accurate your calculation of Pi is, and how to distribute the calculation across the OctaPi cluster.
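As a taste of what the resource covers, here is a minimal sketch of two standard approximations, the Leibniz series and Monte Carlo sampling, in plain Python 3. The resource's own methods and code may differ:

```python
import random

def pi_leibniz(terms):
    """Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

def pi_monte_carlo(samples, seed=0):
    """Estimate pi from the fraction of random points in the unit square
    that fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / samples
```

Both estimates converge slowly, and the floating-point limits the resource discusses show up quickly: each term of the series is rounded to a 64-bit double, so past a certain number of terms extra work stops improving the answer. The independent samples in the Monte Carlo version are also what make the calculation easy to distribute across a cluster.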
Brute-force Enigma sends you back in time to take up the position of a WWII Enigma operator. Learn how to encrypt and decrypt messages using an Enigma machine simulated entirely in Python. Then switch roles and become a Bletchley Park code breaker — except this time, you’ve got a cluster computer on your side! You will use the OctaPi to launch a brute-force crypt attack on an Enigma-encrypted message, and you’ll gain an appreciation of just how difficult this decryption task was without computers.
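To give a flavour of the brute-force idea, here is a heavily simplified Enigma in plain Python 3 (historical rotor I-III and reflector B wirings, but no plugboard, ring settings, or double-stepping) plus an exhaustive search of all 26³ rotor starting positions for a known crib. It is an illustrative sketch, not the resource's actual code:

```python
import string
from itertools import product

ALPHA = string.ascii_uppercase
ROTORS = ["EKMFLGDQVZNTOWYHXUSPAIBRCJ",   # rotor I
          "AJDKSIRUXBLHWTMCQGZNPYFVOE",   # rotor II
          "BDFHJLCPRTXVZNYEIWGAKMUSQO"]   # rotor III
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # reflector B (an involution)

def encipher(text, positions):
    """Simplified Enigma: only the right-hand rotor steps. Because the
    reflector makes each keystroke an involution, the same function both
    encrypts and decrypts."""
    pos = list(positions)
    out = []
    for ch in text:
        pos[2] = (pos[2] + 1) % 26            # step the fast rotor
        c = ALPHA.index(ch)
        for r in (2, 1, 0):                   # forward through the rotors
            c = (ALPHA.index(ROTORS[r][(c + pos[r]) % 26]) - pos[r]) % 26
        c = ALPHA.index(REFLECTOR[c])         # bounce off the reflector
        for r in (0, 1, 2):                   # back through the rotors
            c = (ROTORS[r].index(ALPHA[(c + pos[r]) % 26]) - pos[r]) % 26
        out.append(ALPHA[c])
    return "".join(out)

def brute_force(ciphertext, crib):
    """Try every starting position; keep those whose output contains the crib."""
    return [(guess, plain)
            for guess in product(range(26), repeat=3)
            if crib in (plain := encipher(ciphertext, guess))]

secret = encipher("ATTACKATDAWN", [4, 11, 7])  # unknown settings, to an attacker
hits = brute_force(secret, "ATTACK")           # crib: a word we expect to appear
```

Even this toy version makes the resource's point: 17,576 trial decryptions take a computer a moment, but would have been hopeless by hand, and the real machine's plugboard multiplies the key space enormously.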
GCHQ has kindly sent us a fully assembled, very pretty OctaPi of our own to play with at Pi Towers — it even has eight snazzy Unicorn HATs which let you display light patterns and visualize simulations! Visitors to the Raspberry Jam at Pi Towers can have a go at running their own programs on the OctaPi, while we’ll be using it to continue to curate more free resources for you.
The post Decrypt messages and calculate Pi: new OctaPi projects appeared first on Raspberry Pi.
When I was a teacher, a question I was constantly asked by curious students was, “Can you teach us how to hack?” Turning this idea on its head, and teaching the techniques behind some of our most important national cyber security measures, is an excellent way of motivating students to do good. This is why the Raspberry Pi Foundation and GCHQ have been working together to bring you exciting new resources!
You may have read about GCHQ’s OctaPi computer in Issue 58 of the MagPi. The OctaPi is a cluster computer joining together the power of eight Raspberry Pis (i.e. 32 cores) in a distributed computer system to execute computations much faster than a single Pi could perform them.
We have created a brand-new tutorial on how to build your own OctaPi at home. Don’t have eight Raspberry Pis lying around? Build a TetraPi (4) or a HexaPi (6) instead! You could even build the OctaPi with Pi Zero Ws if you wish. You will be able to run any programs you like on your new cluster computer, as it has all the software of a regular Pi, but is more powerful.
You probably use public key cryptography online every day without even realising it, but now you can use your OctaPi to understand exactly how it keeps your data safe. Our new OctaPi: public key cryptography resource walks you through the invention of this type of encryption (spoiler: Diffie and Hellman weren’t the first to invent it!). In it, you’ll also learn how a public key is created, whether a brute force attack using the OctaPi could be used to find out a public key, and you will be able to try breaking an encryption example yourself.
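To illustrate the kind of experiment the resource describes, here is a toy Diffie-Hellman exchange in Python with a deliberately tiny prime, together with the brute-force attack that recovers a private exponent. The numbers are illustrative textbook values, not taken from the resource; real deployments use moduli of 2048 bits or more, where this search is hopeless:

```python
import secrets

# Toy public parameters: a tiny prime modulus and a generator.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private key
b = secrets.randbelow(p - 2) + 1   # Bob's private key
A = pow(g, a, p)                   # Alice's public value, sent in the clear
B = pow(g, b, p)                   # Bob's public value, sent in the clear

# Each side combines its private key with the other's public value and
# arrives at the same shared secret: (g^b)^a = (g^a)^b (mod p).
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

def brute_force_dlog(public, g, p):
    """Recover a private exponent by trying every candidate. Feasible
    only because p is tiny; this is the discrete-logarithm problem."""
    for x in range(1, p):
        if pow(g, x, p) == public:
            return x
    return None
```

Scaling the brute force across the OctaPi's cores speeds it up by a constant factor, which is exactly why it still fails against real key sizes: doubling the modulus length squares the search space.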
These resources are some of our most advanced educational materials yet, and fit in with the “Maker” level of the Raspberry Pi Foundation Digital Making Curriculum. The projects are ideal for older students, perhaps those looking to study Computer Science at university. And there’s more to come: we have two other OctaPi resources in the pipeline to make use of the OctaPi’s full capabilities, so watch this space!
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/12/new_nsa_stories.html
Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/what-yahoo-nsa-mightve-looked-for.html
The vague story about Yahoo searching emails for the NSA was cleared up today with various stories from other outlets. It seems clear a FISA court order was used to compel Yahoo to search all their customers’ email for a pattern (or patterns). But there’s an important detail still missing: what specifically were they searching for? In this post, I give an example.
The NYTimes article explains the search thusly:
Investigators had learned that agents of the foreign terrorist organization were communicating using Yahoo’s email service and with a method that involved a “highly unique” identifier or signature, but the investigators did not know which specific email accounts those agents were using, the officials said.
What they are likely referring to is software like “Mujahideen Secrets”, which terrorists have been using for about a decade to encrypt messages. It includes a unique fingerprint/signature that can easily be searched for, as shown below.
In the screenshot below, I use this software to type in a secret message:
I then hit the “encrypt” button, and get the following, a chunk of random looking text:
This software encrypts, but does not send/receive messages. You have to do that manually yourself. It’s intended that terrorists will copy/paste this text into emails. They may also paste the messages into forum posts. Encryption is so good that nobody, not even the NSA, can crack properly encrypted messages, so it’s okay to post them to public forums, and still maintain secrecy.
In my case, I copy/pasted this encrypted message into an email from one of my accounts and sent it to one of my Yahoo! email accounts. I received the message shown below:
The obvious “highly unique signature” the FBI should be looking for, to catch this software, is the string:
### Begin ASRAR El Mojahedeen v2.0 Encrypted Message ###
Indeed, if this is the program the NSA/FBI was looking for, they’ve now caught this message in their dragnet of incoming Yahoo! mail. This is a bit creepy, which is why I added a plea to the message, in unencrypted form, asking them not to rendition or drone strike me. Since the NSA can use such signatures to search traffic from websites, as well as email traffic, there’s a good chance you’ve been added to their “list” simply for reading this blog post. For fun, send this blog post to family or friends you don’t particularly like, in order to get them on the watch list as well.
The thing to note about this is that the string is both content and metadata. As far as the email system is concerned, it is content like anything else you might paste into a message. As far as the terrorists are concerned, the content is encrypted, and this string is just metadata describing how the content was encrypted. I suspect the FISA court might consider content and metadata differently, and that they might issue such an order to search for this metadata while not being willing to order searches of patterns within content.
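For concreteness, here is a minimal sketch of what scanning each incoming message body for this fixed signature string could look like. It is hypothetical illustration, not Yahoo's actual system, which by all accounts operated deeper in the mail pipeline:

```python
# Hypothetical server-side dragnet: flag any message body containing the
# encryption tool's fixed header line. Note the scanner never needs to (and
# cannot) decrypt the content -- it matches only the tool's metadata.
SIGNATURE = "### Begin ASRAR El Mojahedeen v2.0 Encrypted Message ###"

def flag_message(body):
    """Return True if the message contains the tool's signature line."""
    return SIGNATURE in body

# Example mailbox: ordinary mail passes, a message carrying the tool's
# output is flagged regardless of who sent or received it.
mailbox = [
    "Lunch at noon?",
    SIGNATURE + "\nqW9xRmT4... (ciphertext)",
]
flagged = [m for m in mailbox if flag_message(m)]
```

The point of the sketch is the content/metadata ambiguity discussed above: to the mail system this is a plain substring match on content, yet the string itself reveals nothing about what the encrypted payload says.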
Regardless of what FISA decides, though, this is still mass surveillance of American citizens. All Yahoo! mail is scanned for such a pattern. I’m not sure how this can possibly be constitutional. Well, I do know how — we can’t get any details about what the government is doing, because national security, and thus we have no “standing” in court to challenge what they are doing.
Note that one reason Yahoo! may have had to act in 2015 is because after the Snowden revelations, and at the behest of activists, email providers started to use STARTTLS encryption between email servers. If the NSA had servers passively listening to email traffic before, they’d need to be replaced with a new system that tapped more actively into the incoming email stream, behind the initial servers. Thus, we may be able to blame activists for this system (or credit, as the case may be :).
In any case, while the newer stories do a much better job of describing what details are available, no story is complete on this issue. This blog post suggests one possible scenario that matches the available descriptions, to show more concretely what’s going on.
If you want to be a troublemaker, add the above string to your email signature, so that it gets sent as part of every email you send. It’s hard to imagine the NSA or GCHQ aren’t looking for this string, so it’ll jam up their system.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/gchq_discloses_.html
This is good news:
Communications and Electronics Security Group (CESG), the information security arm of GCHQ, was credited with the discovery of two vulnerabilities that were patched by Apple last week.
The flaws could allow hackers to corrupt memory and cause a denial of service through a crafted app or execute arbitrary code in a privileged context.
The memory handling vulnerabilities (CVE-2016-1822 and CVE-2016-1829) affect OS X El Capitan v10.11 and later operating systems, according to Apple’s 2016-003 security update. The memory corruption vulnerabilities allowed hackers to execute arbitrary code with kernel privileges.