Tag Archives: backdoors

Backdoor Built into Android Firmware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/backdoor_built_.html

In 2017, some Android phones came with a backdoor pre-installed:

Criminals in 2017 managed to get an advanced backdoor preinstalled on Android devices before they left the factories of manufacturers, Google researchers confirmed on Thursday.

Triada first came to light in 2016 in articles published by Kaspersky here and here, the first of which said the malware was “one of the most advanced mobile Trojans” the security firm’s analysts had ever encountered. Once installed, Triada’s chief purpose was to install apps that could be used to send spam and display ads. It employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and the means to modify the Android OS’ all-powerful Zygote process. That meant the malware could directly tamper with every installed app. Triada also connected to no fewer than 17 command and control servers.
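
To see why control of Zygote matters so much: Android launches every app by forking the Zygote process, so anything planted in Zygote is inherited by every app on the device. Here is a minimal sketch of that fork-inheritance model, written in Python rather than Android's actual runtime; the hook and all names are invented for illustration.

    # Toy model (Python, Unix-only) of the Zygote fork pattern, not Android
    # code: a parent process preloads shared state, then forks one child per
    # "app". Anything injected into the parent rides along into every child.
    import os

    def preload():
        # Framework state loaded once in the Zygote-like parent. A hook
        # planted here (as Triada reportedly did) taints every future app.
        return {"hook": "injected-before-fork"}

    def launch_app(name, preloaded):
        pid = os.fork()  # each "app" starts as a copy-on-write fork
        if pid == 0:
            # Child: inherits the parent's memory, hook included.
            print(name, "sees", preloaded["hook"])
            os._exit(0)
        os.waitpid(pid, 0)

    if __name__ == "__main__":
        state = preload()
        for app in ("mail", "browser", "bank"):
            launch_app(app, state)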

In July 2017, security firm Dr. Web reported that its researchers had found Triada built into the firmware of several Android devices, including the Leagoo M5 Plus, Leagoo M8, Nomu S10, and Nomu S20. The attackers used the backdoor to surreptitiously download and install modules. Because the backdoor was embedded into one of the OS libraries and located in the system section, it couldn’t be deleted using standard methods, the report said.

On Thursday, Google confirmed the Dr. Web report, although it stopped short of naming the manufacturers. Thursday’s report also said the supply chain attack was pulled off by one or more partners the manufacturers used in preparing the final firmware image used in the affected devices.

This is a supply chain attack. It seems to be the work of criminals, but it could just as easily have been a nation-state.

Cryptanalyzing a Pair of Russian Encryption Algorithms

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/cryptanalyzing_.html

A pair of Russian-designed cryptographic algorithms — the Kuznyechik block cipher and the Streebog hash function — have the same flawed S-box that is almost certainly an intentional backdoor. It’s just not the kind of mistake you make by accident, not in 2014.
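
For context, an 8-bit S-box is just a fixed 256-entry substitution table, and it is the main source of nonlinearity in both algorithms. The published analysis (by Léo Perrin) found that the table the two algorithms share has hidden algebraic structure that a randomly chosen permutation would essentially never have. Here is a minimal sketch of what such a table does; a seeded toy permutation stands in for the real table, which is omitted.

    # A toy 8-bit S-box: a fixed permutation of 0..255 applied byte-wise.
    # The real Kuznyechik/Streebog table is omitted; a seeded shuffle
    # stands in for it here.
    import random

    rng = random.Random(0)
    sbox = list(range(256))
    rng.shuffle(sbox)                 # stand-in for the real shared table
    inv_sbox = [0] * 256
    for x, y in enumerate(sbox):
        inv_sbox[y] = x

    def substitute(block: bytes) -> bytes:
        # The nonlinear layer of a substitution-permutation cipher.
        return bytes(sbox[b] for b in block)

    def unsubstitute(block: bytes) -> bytes:
        return bytes(inv_sbox[b] for b in block)

    assert unsubstitute(substitute(b"attack at dawn")) == b"attack at dawn"

Because this one table is the only nonlinear component, hidden structure in it can propagate through the entire cipher, which is why a deliberately structured table raises the backdoor concern.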

Cybersecurity for the Public Interest

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/cybersecurity_f_2.html

The Crypto Wars have been raging off and on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption in order to access the devices and communications of terrorists and criminals. On the other side is pretty much every cryptographer and computer security expert, repeatedly explaining that there’s no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It’s an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism — as practiced by the Internet companies that are already spying on everyone — matters. So do society’s underlying security needs. There is a security benefit to giving access to law enforcement, even though doing so would inevitably give that access to others as well. However, there is also a security benefit in having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?
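
To make the key-escrow option concrete: in such a scheme, every message key is also wrapped under an escrow key held by some authority. Below is a deliberately toy sketch (XOR stands in for real encryption, and all names are invented) of the structural consequence that worries cryptographers: compromising that one escrow key exposes all traffic, retroactively.

    # Deliberately toy sketch (XOR stands in for real encryption; names
    # invented) of the structural weakness of key escrow: every
    # per-message key is also wrapped under a single escrow key.
    import os

    def xor(a: bytes, b: bytes) -> bytes:    # toy cipher, NOT real crypto
        return bytes(x ^ y for x, y in zip(a, b))

    escrow_key = os.urandom(16)              # held by the escrow agent
    wrapped_keys = []                        # escrow copies accumulate here

    def send(message: bytes) -> bytes:
        k = os.urandom(16)                   # fresh per-message key
        wrapped_keys.append(xor(k, escrow_key))
        return xor(message[:16].ljust(16, b"\x00"), k)

    ciphertexts = [send(b"meet at noon"), send(b"new safehouse")]

    # Anyone who obtains escrow_key alone recovers every message key:
    stolen = escrow_key
    for c, w in zip(ciphertexts, wrapped_keys):
        print(xor(c, xor(w, stolen)).rstrip(b"\x00"))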

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals — which occasionally become law — that are technological disasters.

This isn’t sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand — and are involved in — policy. We need public-interest technologists.

Let’s pause at that term. The Ford Foundation defines public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics recently wrote that public-interest technologists are people who “study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good.” Tim Berners-Lee has called them “philosophical engineers.” I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it’s not the best term — and I know not everyone likes it — but it’s a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved not only in the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world are critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn’t want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn’t new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it’s really not. There aren’t enough people doing it, there aren’t enough people who know it needs to be done, and there aren’t enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There’s a report titled A Pivotal Moment that includes this quote: “While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress.”

That quote speaks to the three places for intervention. One: the supply side. There just isn’t enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with “clinics” at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they’ve learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: in the executive, legislative, and judicial branches. President Obama’s US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges — places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth of a normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

This essay previously appeared in the January/February 2019 issue of IEEE Security & Privacy. I maintain a public-interest tech resources page here.

Vulnerability in French Government Tchap Chat App

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/vulnerability_i_1.html

A researcher found a vulnerability in the French government WhatsApp replacement app: Tchap. The vulnerability allows anyone to surreptitiously join any conversation.

Of course the developers will fix this vulnerability. But it is amusing to point out that this is exactly the backdoor that GCHQ is proposing.

G7 Comes Out in Favor of Encryption Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/g7_comes_out_in.html

From a G7 meeting of interior ministers in Paris this month, an “outcome document”:

Encourage Internet companies to establish lawful access solutions for their products and services, including data that is encrypted, for law enforcement and competent authorities to access digital evidence, when it is removed or hosted on IT servers located abroad or encrypted, without imposing any particular technology and while ensuring that assistance requested from internet companies is underpinned by the rule of law and due process protection. Some G7 countries highlight the importance of not prohibiting, limiting, or weakening encryption;

There is a weird belief amongst policymakers that hacking an encryption system’s key management is fundamentally different from hacking the system’s encryption algorithm. The difference is only technical; the effect is the same. Both are ways of weakening encryption.

China Spying on Undersea Internet Cables

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/china_spying_on.html

Supply chain security is an insurmountably hard problem. The recent focus is on Chinese 5G equipment, but the problem is much broader. This opinion piece looks at undersea communications cables:

But now the Chinese conglomerate Huawei Technologies, the leading firm working to deliver 5G telephony networks globally, has gone to sea. Under its Huawei Marine Networks component, it is constructing or improving nearly 100 submarine cables around the world. Last year it completed a cable stretching nearly 4,000 miles from Brazil to Cameroon. (The cable is partly owned by China Unicom, a state-controlled telecom operator.) Rivals claim that Chinese firms are able to lowball the bidding because they receive subsidies from Beijing.

Just as the experts are justifiably concerned about the inclusion of espionage “back doors” in Huawei’s 5G technology, Western intelligence professionals oppose the company’s engagement in the undersea version, which provides a much bigger bang for the buck because so much data rides on so few cables.

This shouldn’t surprise anyone. For years, the US and the Five Eyes have had a monopoly on spying on the Internet around the globe. Other countries want in.

As I have repeatedly said, we need to decide if we are going to build our future Internet systems for security or surveillance. Either everyone gets to spy, or no one gets to spy. And I believe we must choose security over surveillance, and implement a defense-dominant strategy.

NSA-Inspired Vulnerability Found in Huawei Laptops

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/nsa-inspired_vu.html

This is an interesting story of a serious vulnerability in a Huawei driver that Microsoft found. The vulnerability is similar in style to the NSA’s DOUBLEPULSAR that was leaked by the Shadow Brokers — believed to be the Russian government — and it’s obvious that this attack copied that technique.

What is less clear is whether the vulnerability — which has been fixed — was put into the Huawei driver accidentally or on purpose.

Critical Flaw in Swiss Internet Voting System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/critical_flaw_i.html

Researchers have found a critical flaw in the Swiss Internet voting system. I was going to write an essay about how this demonstrates that Internet voting is a stupid idea and should never be attempted — and that this system in particular should never be deployed, even if the found flaw is fixed — but Cory Doctorow beat me to it:

The belief that companies can be trusted with this power defies all logic, but it persists. Someone found Swiss Post’s embrace of the idea too odious to bear, and they leaked the source code that Swiss Post had shared under its nondisclosure terms, and then an international team of some of the world’s top security experts (including some of our favorites, like Matthew Green) set about analyzing that code, and (as every security expert who doesn’t work for an e-voting company has predicted since the beginning of time), they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.

And, as everyone who’s ever advocated for the right of security researchers to speak in public without permission from the companies whose products they were assessing has predicted since the beginning of time, Swiss Post and Scytl downplayed the importance of this objectively very, very, very important bug. Swiss Post’s position is that since the bug only allows elections to be stolen by Swiss Post employees, it’s not a big deal, because Swiss Post employees wouldn’t steal an election.

But when Swiss Post agreed to run the election, they promised an e-voting system based on “zero knowledge” proofs that would allow voters to trust the outcome of the election without having to trust Swiss Post. Swiss Post is now moving the goalposts, saying that it wouldn’t be such a big deal if you had to trust Swiss Post implicitly to trust the outcome of the election.

You might be thinking, “Well, what is the big deal? If you don’t trust the people administering an election, you can’t trust the election’s outcome, right?” Not really: we design election systems so that multiple, uncoordinated people all act as checks and balances on each other. To suborn a well-run election takes massive coordination at many polling- and counting-places, as well as independent scrutineers from different political parties, as well as outside observers, etc.

Read the whole thing. It’s excellent.

More info.

Japanese Government Will Hack Citizens’ IoT Devices

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/japanese_govern.html

The Japanese government is going to run penetration tests against all the IoT devices in their country, in an effort to (1) figure out what’s insecure, and (2) help consumers secure them:

The survey is scheduled to kick off next month, when authorities plan to test the password security of over 200 million IoT devices, beginning with routers and web cameras. Devices in people’s homes and on enterprise networks will be tested alike.

[…]

The Japanese government’s decision to log into users’ IoT devices has sparked outrage in Japan. Many have argued that this is an unnecessary step, as the same results could be achieved by just sending a security alert to all users, as there’s no guarantee that the users found to be using default or easy-to-guess passwords would change their passwords after being notified in private.

However, the government’s plan has its technical merits. Many of today’s IoT and router botnets are being built by hackers who take over devices with default or easy-to-guess passwords.

Hackers can also build botnets with the help of exploits and vulnerabilities in router firmware, but the easiest way to assemble a botnet is by collecting the ones that users have failed to secure with custom passwords.

Securing these devices is often a pain, as some expose Telnet or SSH ports online without the users’ knowledge, and for which very few users know how to change passwords. Further, other devices also come with secret backdoor accounts that in some cases can’t be removed without a firmware update.
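
As a rough illustration of the kind of default-credential check described above, here is a hypothetical sketch. The credential list, target address, and success heuristic are all invented, and any real survey of this sort requires legal authority, rate limiting, and far more robust protocol handling.

    # Hypothetical sketch of a default-credential check against a device's
    # Telnet port. Everything here is illustrative; never scan hosts you
    # are not authorized to test.
    import socket

    DEFAULT_CREDS = [("admin", "admin"), ("root", "root"), ("admin", "1234")]

    def accepts_default_login(host: str, port: int = 23, timeout: float = 3.0) -> bool:
        for user, password in DEFAULT_CREDS:
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    s.settimeout(timeout)
                    s.recv(1024)                       # login banner/prompt
                    s.sendall(user.encode() + b"\r\n")
                    s.recv(1024)                       # password prompt
                    s.sendall(password.encode() + b"\r\n")
                    reply = s.recv(1024).lower()
                    if b"incorrect" not in reply and b"failed" not in reply:
                        return True                    # crude: no rejection seen
            except OSError:                            # refused, timed out, reset
                continue
        return False

    print(accepts_default_login("192.0.2.10"))         # TEST-NET address, illustrative only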

I am interested in the results of this survey. Japan isn’t very different from other industrialized nations in this regard, so its findings should generalize. I am less optimistic about the country’s ability to secure all of this stuff — especially before the 2020 Summer Olympics.

Hacking the GCHQ Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/hacking_the_gch.html

Last week, I evaluated the security of a recent GCHQ backdoor proposal for communications systems. Furthering the debate, Nate Cardozo and Seth Schoen of EFF explain how this sort of backdoor can be detected:

In fact, we think when the ghost feature is active — silently inserting a secret eavesdropping member into an otherwise end-to-end encrypted conversation in the manner described by the GCHQ authors — it could be detected (by the target as well as certain third parties) with at least four different techniques: binary reverse engineering, cryptographic side channels, network-traffic analysis, and crash log analysis. Further, crash log analysis could lead unrelated third parties to find evidence of the ghost in use, and it’s even possible that binary reverse engineering could lead researchers to find ways to disable the ghost capability on the client side. It should be obvious that none of these possibilities are desirable for law enforcement or society as a whole. And while we’ve theorized some types of mitigations that might make the ghost less detectable by particular techniques, they could also impose considerable costs to the network when deployed at the necessary scale, as well as creating new potential security risks or detection methods.
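
To give one concrete flavor of the network-traffic analysis EFF mentions: in a hypothetical pairwise-fanout design where the sender wraps the message key once per recipient, a silently added ghost changes observable payload sizes. This sketch invents the sizes and the message format; real systems differ.

    # Hypothetical sketch of the traffic-analysis detection idea. Assume a
    # pairwise-fanout design in which the sender wraps the message key once
    # per recipient; the wrapped-key size and format are invented.
    WRAPPED_KEY_LEN = 48    # assumed size of one per-recipient wrapped key

    def payload_size(n_recipients: int, body_len: int) -> int:
        return n_recipients * WRAPPED_KEY_LEN + body_len

    def inferred_recipients(observed_size: int, body_len: int) -> int:
        return (observed_size - body_len) // WRAPPED_KEY_LEN

    visible_members = 2
    observed = payload_size(visible_members + 1, 120)   # a ghost is present
    if inferred_recipients(observed, 120) > visible_members:
        print("more wrapped keys than visible members: possible ghost")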

Other critiques of the system were written by Susan Landau and Matthew Green.

EDITED TO ADD (1/26): Good commentary on how to defeat the backdoor detection.

Evaluating the GCHQ Exceptional Access Proposal

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/evaluating_the_.html

The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI — and some of their peer agencies in the UK, Australia, and elsewhere — argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it’s about end-user devices. On the other side of this debate is pretty much every technologist working in computer security and cryptography, arguing that adding eavesdropping features fundamentally makes those systems less secure.

A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK’s GCHQ (the British signals-intelligence agency — basically, its NSA). It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved — they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

On the surface, this isn’t a big ask. It doesn’t affect the encryption that protects the communications. It only affects the authentication that assures people of whom they are talking to. But it’s no less dangerous a backdoor than any others that have been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.

In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires changing not only the cloud computers that oversee communications, but also the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.

The basic problem is that a backdoor is a technical capability — a vulnerability — that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.

That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit — not just the police — but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).

The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA — and of similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop on, mirroring that old physical system with computers. It is not the same thing, though. It doesn’t have the physical limitations that made the old system more secure. It can be administered remotely. And it’s implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.

This isn’t a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the eavesdropping feature mandated by CALEA. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.

There’s nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine that a chat program added this GCHQ backdoor. It would have to add a feature that added additional parties to a chat from somewhere in the system — and not by the people at the endpoints. It would have to suppress any messages alerting users to another party being added to that chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the “tell me who is in this chat conversation” feature, which amounts to the same thing.
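
A toy model, with no cryptography and invented names, makes the required dishonesty explicit: the server delivers every message to the ghost while the roster it shows users never changes.

    # Toy model (no cryptography; names invented) of the ghost mechanic:
    # the server delivers traffic to a hidden member while the roster it
    # shows users never changes.
    class ChatServer:
        def __init__(self, members):
            self.members = set(members)    # who actually receives messages
            self.visible = set(members)    # who the clients are told about

        def add_member(self, who):         # honest path: everyone is notified
            self.members.add(who)
            self.visible.add(who)

        def add_ghost(self, who):          # ghost path: no notification sent
            self.members.add(who)

        def roster(self):
            return sorted(self.visible)

        def deliver(self, sender, plaintext):
            return {m: plaintext for m in self.members if m != sender}

    chat = ChatServer({"alice", "bob"})
    chat.add_ghost("agency")
    print(chat.roster())                       # ['alice', 'bob']: the lie
    print(sorted(chat.deliver("alice", "hi"))) # ['agency', 'bob']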

And once that’s in place, every government will try to hack it for its own purposes — just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the back-door mechanism Google put in place to meet law-enforcement requests. In 2015, someone — we don’t know who — hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven’t been made public.

Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country’s government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?

Of course not.

Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate — but the resultant confusion caused some people to abandon the encrypted messaging service.

Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that “any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users,” this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.

In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They reduce our reliance on security technologies we know how to do well — cryptography — to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it’s a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.

The foregoing discussion is a specific example of a broader discussion that we need to have, and it’s about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement — and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and — unavoidably — law enforcement as well?

This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?

I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be to weaken security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it’s one that we would be better served by closing than exploiting.

This essay originally appeared on Lawfare.com.

EDITED TO ADD (1/30): More commentary.

El Chapo’s Encryption Defeated by Turning His IT Consultant

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/el_chapos_encry.html

Impressive police work:

In a daring move that placed his life in danger, the I.T. consultant eventually gave the F.B.I. his system’s secret encryption keys in 2011 after he had moved the network’s servers from Canada to the Netherlands during what he told the cartel’s leaders was a routine upgrade.

A Dutch article says that it’s a BlackBerry system.

El Chapo had his IT person install “…spyware called FlexiSPY on the ‘special phones’ he had given to his wife, Emma Coronel Aispuro, as well as to two of his lovers, including one who was a former Mexican lawmaker.” That same software was used by the FBI when his IT person turned over the keys. Yet again we learn the lesson that a backdoor can be used against you.

And it doesn’t have to be with the IT person’s permission. A good intelligence agency can use the IT person’s authorizations without his knowledge or consent. This is why the NSA hunts sysadmins.

Slashdot thread. Hacker News thread. Boing Boing post.

New Australian Backdoor Law

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/new_australian_.html

Last week, Australia passed a law giving the government the ability to demand backdoors in computers and communications systems. Details are still to be defined, but it’s really bad.

Note: Many people e-mailed me to ask why I haven’t blogged this yet. One, I was busy with other things. And two, there’s nothing I can say that I haven’t said many times before.

If there are more good links or commentary, please post them in the comments.

EDITED TO ADD (12/13): The Australian government response is kind of embarrassing.

That Bloomberg Supply-Chain-Hack Story

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/that_bloomberg_.html

Back in October, Bloomberg reported that China had managed to install backdoors into server equipment that ended up in networks belonging to — among others — Apple and Amazon. Pretty much everybody has denied it (including the US DHS and the UK NCSC). Bloomberg has stood by its story — and is still standing by it.

I don’t think it’s real. Yes, it’s plausible. But first of all, if someone actually surreptitiously put malicious chips onto motherboards en masse, we would have seen a photo of the alleged chip already. And second, there are easier, more effective, and less obvious ways of adding backdoors to networking equipment.

More on the Five Eyes Statement on Encryption and Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/more_on_the_fiv.html

Earlier this month, I wrote about a statement by the Five Eyes countries about encryption and back doors. (Short summary: they like them.) One of the weird things about the statement is that it was clearly written from a law-enforcement perspective, though we normally think of the Five Eyes as a consortium of intelligence agencies.

Susan Landau examines the details of the statement, explains what’s going on, and why the statement is a lot less than what it might seem.

Security Risks of Government Hacking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/security_risks_14.html

Some of us — myself included — have proposed lawful government hacking as an alternative to backdoors. A new report from the Center for Internet and Society looks at the security risks of allowing government hacking. They include:

  • Disincentive for vulnerability disclosure
  • Cultivation of a market for surveillance tools
  • Attackers co-opt hacking tools over which governments have lost control
  • Attackers learn of vulnerabilities through government use of malware
  • Government incentives to push for less-secure software and standards
  • Government malware affects innocent users

These risks are real, but I think they’re much smaller than the risks of mandating backdoors for everyone. From the report’s conclusion:

Government hacking is often lauded as a solution to the “going dark” problem. It is too dangerous to mandate encryption backdoors, but targeted hacking of endpoints could ensure investigators access to same or similar necessary data with less risk. Vulnerabilities will never affect everyone, contingent as they are on software, network configuration, and patch management. Backdoors, however, mean everybody is vulnerable and a security failure fails catastrophically. In addition, backdoors are often secret, while eventually, vulnerabilities will typically be disclosed and patched.

The key to minimizing the risks is to ensure that law enforcement (or whoever) reports all vulnerabilities discovered through the normal process, and uses them for lawful hacking only during the period between reporting and patching. Yes, that’s a big ask, but the alternatives are worse.

This is the canonical lawful hacking paper.

Five-Eyes Intelligence Services Choose Surveillance Over Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/five-eyes_intel.html

The Five Eyes — the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) — have issued a “Statement of Principles on Access to Evidence and Encryption” where they claim their needs for surveillance outweigh everyone’s needs for security and privacy.

…the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution.

Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority.

The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.

To put it bluntly, this is reckless and shortsighted. I’ve repeatedly written about why this can’t be done technically, and why trying results in insecurity. But there’s a greater principle at stake: we need to decide, as nations and as a society, to put defense first. We need a “defense dominant” strategy for securing the Internet and everything attached to it.

This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are critical to our society and — now that they can affect the world in a direct physical manner — to our lives and property as well.

This is what I just wrote, in Click Here to Kill Everybody:

There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There’s no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world.

This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It’s actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals. Fixing those locks would also mean that criminals’ safe houses would be more secure, but it’s pretty clear that this downside would be worth the trade-off of protecting everyone’s house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses.

Yes, increasing our security will make it harder for us to eavesdrop on, and attack, our enemies in cyberspace. (It won’t make it impossible for law enforcement to solve crimes; I’ll get to that later in this chapter.) Regardless, it’s worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We’ve got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.

We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions.

Cory Doctorow has a good reaction.

Slashdot post.

New Report on Police Digital Forensics Techniques

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/new_report_on_p.html

According to a new CSIS report, “going dark” is not the most pressing problem facing law enforcement in the age of digital data:

Over the past year, we conducted a series of interviews with federal, state, and local law enforcement officials, attorneys, service providers, and civil society groups. We also commissioned a survey of law enforcement officers from across the country to better understand the full range of difficulties they are facing in accessing and using digital evidence in their cases. Survey results indicate that accessing data from service providers — much of which is not encrypted — is the biggest problem that law enforcement currently faces in leveraging digital evidence.

This is a problem that has not received adequate attention or resources to date. An array of federal and state training centers, crime labs, and other efforts have arisen to help fill the gaps, but they are able to fill only a fraction of the need. And there is no central entity responsible for monitoring these efforts, taking stock of the demand, and providing the assistance needed. The key federal entity with an explicit mission to assist state and local law enforcement with their digital evidence needs — the National Domestic Communications Assistance Center (NDCAC) — has a budget of $11.4 million, spread among several different programs designed to distribute knowledge about service providers’ policies and products, develop and share technical tools, and train law enforcement on new services and technologies, among other initiatives.

From a news article:

In addition to bemoaning the lack of guidance and help from tech companies — a quarter of survey respondents said their top issue was convincing companies to hand over suspects’ data — law enforcement officials also reported receiving barely any digital evidence training. Local police said they’d received only 10 hours of training in the past 12 months; state police received 13 and federal officials received 16. A plurality of respondents said they only received annual training. Only 16 percent said their organizations scheduled training sessions at least twice per year.

This is a point that Susan Landau has repeatedly made, and also one I make in my new book. The FBI needs technical expertise, not backdoors.

Here’s the report.

IEEE Statement on Strong Encryption vs. Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/ieee_statement_.html

The IEEE came out in favor of strong encryption:

IEEE supports the use of unfettered strong encryption to protect confidentiality and integrity of data and communications. We oppose efforts by governments to restrict the use of strong encryption and/or to mandate exceptional access mechanisms such as “backdoors” or “key escrow schemes” in order to facilitate government access to encrypted data. Governments have legitimate law enforcement and national security interests. IEEE believes that mandating the intentional creation of backdoors or escrow schemes — no matter how well intentioned — does not serve those interests well and will lead to the creation of vulnerabilities that would result in unforeseen effects as well as some predictable negative consequences.

The full statement is here.