Tag Archives: iPhone

Massive iPhone Hack Targets Uyghurs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/massive_iphone_.html

China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

Earlier this year, Google’s Project Zero found a series of websites that had been using zero-day vulnerabilities to indiscriminately install malware on any iPhone that visited them. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

Earlier this year Google’s Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Four more news stories.

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Google’s announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

Subsequent reporting added that malware targeting Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

EDITED TO ADD: New data on the value of smartphone exploits:

According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

EDITED TO ADD (9/7): More on Apple’s pushbacks.

Cellebrite Claims It Can Unlock Any iPhone

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/cellebrite_clai.html

The digital forensics company Cellebrite now claims it can unlock any iPhone.

I dithered before blogging this, not wanting to give the company more publicity. But I decided that everyone who wants to know already knows, and that Apple already knows. It’s all of us that need to know.

iPhone Apps Surreptitiously Communicated with Unknown Servers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/iphone_apps_sur.html

Long news article (alternate source) on iPhone privacy, specifically the enormous amount of data your apps are collecting without your knowledge. A lot of this happens in the middle of the night, when you’re probably not otherwise using your phone:

IPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic.

How Apple’s "Find My" Feature Works

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/how_apples_find.html

Matthew Green intelligently speculates about how Apple’s new “Find My” feature works.

If you haven’t already been inspired by the description above, let me phrase the question you ought to be asking: how is this system going to avoid being a massive privacy nightmare?

Let me count the concerns:

  • If your device is constantly emitting a BLE signal that uniquely identifies it, the whole world is going to have (yet another) way to track you. Marketers already use WiFi and Bluetooth MAC addresses to do this: Find My could create yet another tracking channel.
  • It also exposes the phones that are doing the tracking. These people are now going to be sending their current location to Apple (which they may or may not already be doing). Now they’ll also be potentially sharing this information with strangers who “lose” their devices. That could go badly.

  • Scammers might also run active attacks in which they fake the location of your device. While this seems unlikely, people will always surprise you.

The good news is that Apple claims that their system actually does provide strong privacy, and that it accomplishes this using clever cryptography. But as is typical, they’ve declined to give out the details of how they’re going to do it. Andy Greenberg talked me through an incomplete technical description that Apple provided to Wired, so that provides many hints. Unfortunately, what Apple provided still leaves huge gaps, and it’s those gaps that I’m going to fill in with my best guess for what Apple is actually doing.
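
Here is a minimal sketch, in Python with the PyNaCl library, of the general shape Green speculates about: the device derives a fresh keypair per broadcast interval so the BLE identifier rotates, and finders encrypt locations to the broadcast public key so only the owner can read them. The key derivation, the epoch scheme, and the use of SealedBox are all my assumptions for illustration, not Apple’s actual protocol:

import hashlib
from nacl.public import PrivateKey, SealedBox

master_seed = hashlib.sha256(b"device-master-secret").digest()  # hypothetical seed

def epoch_key(epoch: int) -> PrivateKey:
    # derive a fresh keypair per broadcast interval, so the BLE
    # identifier rotates and can't serve as a long-term tracking beacon
    material = hashlib.sha256(master_seed + epoch.to_bytes(8, "big")).digest()
    return PrivateKey(material)

# the lost device broadcasts only the epoch's *public* key over BLE
broadcast_pub = epoch_key(42).public_key

# a passing phone encrypts its own location to that key and uploads it;
# without the private half, neither Apple nor the finder can read it
report = SealedBox(broadcast_pub).encrypt(b"lat=37.33,lon=-122.01")

# the owner re-derives the epoch key from the seed and decrypts
assert SealedBox(epoch_key(42)).decrypt(report) == b"lat=37.33,lon=-122.01"

A design along these lines would answer the first two concerns above: the broadcast changes every interval, and the finder’s upload is encrypted end-to-end.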

iOS Shortcut for Recording the Police

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/ios_shortcut_fo.html

“Hey Siri, I’m getting pulled over” can be a shortcut:

Once the shortcut is installed and configured, you just have to say, for example, “Hey Siri, I’m getting pulled over.” Then the program pauses music you may be playing, turns down the brightness on the iPhone, and turns on “do not disturb” mode.

It also sends a quick text to a predetermined contact to tell them you’ve been pulled over, and it starts recording using the iPhone’s front-facing camera. Once you’ve stopped recording, it can text or email the video to a different predetermined contact and save it to Dropbox.

Fingerprinting iPhones

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/fingerprinting_7.html

This clever attack allows a website you visit to uniquely identify your phone, based on data from the accelerometer, gyroscope, and magnetometer sensors.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Overall, our attack has the following advantages:

  • The attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you.
  • The attack takes less than one second to generate a fingerprint.
  • The attack can generate a globally unique fingerprint for iOS devices.
  • The calibration fingerprint never changes, even after a factory reset.
  • The attack provides an effective means to track you as you browse across the web and move between apps on your phone.

Following our disclosure, Apple has patched this vulnerability in iOS 12.2.

Research paper.
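
To see why calibration data makes such a good fingerprint, here is a toy model in Python. Each device ships with slightly different factory calibration constants, and an attacker who can estimate those constants has a stable identifier. Note that this sketch cheats for brevity: it assumes access to both raw and calibrated readings, whereas the actual attack described in the paper recovers the calibration from the quantized sensor output alone:

import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-device factory calibration, fixed at manufacture
true_gain, true_offset = 1.0173, -0.042

raw = rng.uniform(-1, 1, 200)            # raw sensor values
out = raw * true_gain + true_offset      # calibrated output
out += rng.normal(0, 1e-4, 200)          # a little measurement noise

# fit gain/offset by least squares; the estimates are the fingerprint
A = np.vstack([raw, np.ones_like(raw)]).T
gain_hat, offset_hat = np.linalg.lstsq(A, out, rcond=None)[0]

# gain_hat and offset_hat are properties of the hardware, which is why
# the fingerprint survives reboots, browser changes, and factory resets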

iPhone FaceTime Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/iphone_facetime.html

This is kind of a crazy iPhone vulnerability: it’s possible to call someone on FaceTime and listen in on their microphone — and see from their camera — before they accept the call.

This is definitely an embarrassment, and Apple was right to disable Group FaceTime until it’s fixed. But it’s hard to imagine how an adversary can operationalize this in any useful way.

New York governor Andrew M. Cuomo wrote: “The FaceTime bug is an egregious breach of privacy that puts New Yorkers at risk.” Kinda, I guess.

EDITED TO ADD (1/30): This bug/vulnerability was first discovered by a 14-year-old, whose mother tried to alert Apple with no success.

iOS 12.1 Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/ios_121_vulnera.html

This is really just to point out that computer security is really hard:

Almost as soon as Apple released iOS 12.1 on Tuesday, a Spanish security researcher discovered a bug that exploits group Facetime calls to give anyone access to an iPhone user’s contact information with no need for a passcode.

[…]

A bad actor would need physical access to the phone that they are targeting and would have a few options for viewing the victim’s contact information. They would need to either call the phone from another iPhone or have the phone call itself. Once the call connects they would need to:

  • Select the Facetime icon
  • Select “Add Person”
  • Select the plus icon
  • Scroll through the contacts and use 3D touch on a name to view all contact information that’s stored.

Making the phone call itself without entering a passcode can be accomplished by either telling Siri the phone number or, if they don’t know the number, they can say “call my phone.” We tested this with both the owner’s voice and a stranger’s voice; in both cases, Siri initiated the call.

Cell Phone Security and Heads of State

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/cell_phone_secu_1.html

Earlier this week, the New York Times reported that the Russians and the Chinese were eavesdropping on President Donald Trump’s personal cell phone and using the information gleaned to better influence his behavior. This should surprise no one. Security experts have been talking about the potential security vulnerabilities in Trump’s cell phone use since he became president. And President Barack Obama bristled at — but acquiesced to — the security rules prohibiting him from using a “regular” cell phone throughout his presidency.

Three broader questions obviously emerge from the story. Who else is listening in on Trump’s cell phone calls? What about the cell phones of other world leaders and senior government officials? And — most personal of all — what about my cell phone calls?

There are two basic places to eavesdrop on pretty much any communications system: at the end points and during transmission. This means that a cell phone attacker can either compromise one of the two phones or eavesdrop on the cellular network. Both approaches have their benefits and drawbacks. The NSA seems to prefer bulk eavesdropping on the planet’s major communications links and then picking out individuals of interest. In 2016, WikiLeaks published a series of classified documents listing “target selectors”: phone numbers the NSA searches for and records. These included senior government officials of Germany — among them Chancellor Angela Merkel — France, Japan, and other countries.

Other countries don’t have the same worldwide reach that the NSA has, and must use other methods to intercept cell phone calls. We don’t know details of which countries do what, but we know a lot about the vulnerabilities. Insecurities in the phone network itself are so easily exploited that 60 Minutes eavesdropped on a US congressman’s phone live on camera in 2016. Back in 2005, unknown attackers targeted the cell phones of many Greek politicians by hacking the country’s phone network and turning on an already-installed eavesdropping capability. The NSA even implanted eavesdropping capabilities in networking equipment destined for the Syrian Telephone Company.

Alternatively, an attacker could intercept the radio signals between a cell phone and a tower. Encryption ranges from very weak to possibly strong, depending on which flavor the system uses. Don’t think the attacker has to put his eavesdropping antenna on the White House lawn; the Russian Embassy is close enough.

The other way to eavesdrop on a cell phone is by hacking the phone itself. This is the technique favored by countries with less sophisticated intelligence capabilities. In 2017, the public-interest forensics group Citizen Lab uncovered an extensive eavesdropping campaign against Mexican lawyers, journalists, and opposition politicians — presumably run by the government. Just last month, the same group found eavesdropping capabilities in products from the Israeli cyberweapons manufacturer NSO Group operating in Algeria, Bangladesh, Greece, India, Kazakhstan, Latvia, South Africa — 45 countries in all.

These attacks generally involve downloading malware onto a smartphone that then records calls, text messages, and other user activities, and forwards them to some central controller. Here, it matters which phone is being targeted. iPhones are harder to hack, which is reflected in the prices companies pay for new exploit capabilities. In 2016, the vulnerability broker Zerodium offered $1.5 million for an unknown iOS exploit and only $200,000 for a similar Android exploit. Earlier this year, a new Dubai start-up announced even higher prices. These vulnerabilities are resold to governments and cyberweapons manufacturers.

Some of the price difference is due to the ways the two operating systems are designed and used. Apple has much more control over the software on an iPhone than Google does on an Android phone. Also, Android phones are generally designed, built, and sold by third parties, which means they are much less likely to get timely security updates. This is changing. Google now has its own phone — Pixel — that gets security updates quickly and regularly, and Google is now trying to pressure Android-phone manufacturers to update their phones more regularly. (President Trump reportedly uses an iPhone.)

Another way to hack a cell phone is to install a backdoor during the design process. This is a real fear; earlier this year, US intelligence officials warned that phones made by the Chinese companies ZTE and Huawei might be compromised by that government, and the Pentagon ordered stores on military bases to stop selling them. This is why China’s recommendation that if Trump wanted security, he should use a Huawei phone, was an amusing bit of trolling.

Given the wealth of insecurities and the array of eavesdropping techniques, it’s safe to say that lots of countries are spying on the phones of both foreign officials and their own citizens. Many of these techniques are within the capabilities of criminal groups, terrorist organizations, and hackers. If I were guessing, I’d say that the major international powers like China and Russia are using the more passive interception techniques to spy on Trump, and that the smaller countries are too scared of getting caught to try to plant malware on his phone.

It’s safe to say that President Trump is not the only one being targeted; so are members of Congress, judges, and other senior officials — especially because no one is trying to tell any of them to stop using their cell phones (although cell phones still are not allowed on either the House or the Senate floor).

As for the rest of us, it depends on how interesting we are. It’s easy to imagine a criminal group eavesdropping on a CEO’s phone to gain an advantage in the stock market, or a country doing the same thing for an advantage in a trade negotiation. We’ve seen governments use these tools against dissidents, reporters, and other political enemies. The Chinese and Russian governments are already targeting the US power grid; it makes sense for them to target the phones of those in charge of that grid.

Unfortunately, there’s not much you can do to improve the security of your cell phone. Unlike computer networks, for which you can buy antivirus software, network firewalls, and the like, your phone is largely controlled by others. You’re at the mercy of the company that makes your phone, the company that provides your cellular service, and the communications protocols developed when none of this was a problem. If one of those companies doesn’t want to bother with security, you’re vulnerable.

This is why the current debate about phone privacy, with the FBI on one side wanting the ability to eavesdrop on communications and unlock devices, and users on the other side wanting secure devices, is so important. Yes, there are security benefits to the FBI being able to use this information to help solve crimes, but there are far greater benefits to the phones and networks being so secure that all the potential eavesdroppers — including the FBI — can’t access them. We can give law enforcement other forensics tools, but we must keep foreign governments, criminal groups, terrorists, and everyone else out of everyone’s phones. The president may be taking heat for his love of his insecure phone, but each of us is using just as insecure a phone. And for a surprising number of us, making those phones more private is a matter of national security.

This essay previously appeared in the Atlantic.

EDITED TO ADD: Steven Bellovin and Susan Landau have a good essay on the same topic, as does Wired. Slashdot post.

Defeating the iPhone Restricted Mode

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/defeating_the_i.html

Recently, Apple introduced restricted mode to protect iPhones from attacks by companies like Cellebrite and Grayshift, which allow attackers to recover information from a phone without the password or fingerprint. Elcomsoft just announced that it can easily bypass it.

There is an important lesson in this: security is hard. Apple Computer has one of the best security teams on the planet. This feature was not tossed out in a day; it was designed and implemented with a lot of thought and care. If this team could make a mistake like this, imagine how bad a security feature is when implemented by a team without this kind of expertise.

This is the reason actual cryptographers and security engineers are very skeptical when a random company announces that their product is “secure.” We know that they don’t have the requisite security expertise to design and implement security properly. We know they didn’t take the time and care. We know that their engineers think they understand security, and designed something to a level of security that they themselves couldn’t break.

Getting security right is hard for the best teams in the world. It’s impossible for average teams.

Russian Censorship of Telegram

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/russian_censors.html

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today’s Internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the Internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors’ demands, how long can Internet freedom last?

The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender’s phone and decrypted on the receiver’s phone; no part of the network can eavesdrop on the messages.

Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn’t a fixed website, it doesn’t need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all.
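
The client side of that cat-and-mouse game can be as simple as walking a pool of addresses until one connects. A minimal Python sketch of the idea, with placeholder addresses; a real Telegram client layers its own MTProto transport on top:

import socket

# hypothetical pool; Telegram reportedly bought tens of thousands of
# addresses and rotated through them
ENDPOINT_POOL = ["203.0.113.10", "203.0.113.11", "198.51.100.7"]

def connect(pool, port=443, timeout=3):
    # try each address in turn; a blocked or blackholed IP simply
    # fails, and the client moves on without the user noticing
    for ip in pool:
        try:
            return socket.create_connection((ip, port), timeout=timeout)
        except OSError:
            continue
    raise ConnectionError("entire pool blocked")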

A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.

More recently, Russia has been pressuring Apple not to offer the Telegram app in its iPhone App Store. As of this writing, Apple has not complied, and the company has allowed Telegram to download a critical software update to iPhone users (after what the app’s founder called a delay last month). Roskomnadzor could further pressure Apple, though, including by threatening to turn off its entire iPhone app business in Russia.

Telegram might seem a weird app for Russia to focus on. Those of us who work in security don’t recommend the program, primarily because of the nature of its cryptographic protocols. In general, proprietary cryptography has numerous fatal security flaws. We generally recommend Signal for secure SMS messaging, or, if having that program on your computer is somehow incriminating, WhatsApp. (More than 1.5 billion people worldwide use WhatsApp.) What Telegram has going for it is that it works really well on lousy networks. That’s why it is so popular in places like Iran and Afghanistan. (Iran is also trying to ban the app.)

What the Russian government doesn’t like about Telegram is its anonymous broadcast feature — channel capability and chats — which makes it an effective platform for political debate and citizen journalism. The Russians might not like that Telegram is encrypted, but odds are good that they can simply break the encryption. Telegram’s role in facilitating uncontrolled journalism is the real issue.

Iran’s attempts to block Telegram have been more successful than Russia’s, less because Iran’s censorship technology is more sophisticated than because Telegram is not willing to go as far to defend Iranian users. The reasons are not rooted in business decisions. Simply put, Telegram is a Russian product and the designers are more motivated to poke Russia in the eye. Pavel Durov, Telegram’s founder, has pledged millions of dollars to help fight Russian censorship.

For the moment, Russia has lost. But this battle is far from over. Russia could easily come back with more targeted pressure on Google, Amazon and Apple. A year earlier, Zello used the same trick Telegram is using to evade Russian censors. Then, Roskomnadzor threatened to block all of Amazon Web Services and Google Cloud; and in that instance, both companies forced Zello to stop its IP-hopping censorship-evasion tactic.

Russia could also further develop its censorship infrastructure. If its capabilities were as finely honed as China’s, it would be able to more effectively block Telegram from operating. Right now, Russia can block only specific IP addresses, which is too coarse a tool for this issue. Telegram’s voice capabilities in Russia are significantly degraded, however, probably because high-capacity IP addresses are easier to block.

Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another and more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned — and technically blocked — the practice of “domain fronting,” a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website — in this case Google.com — to fool censors into believing the traffic was intended for Google.com. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.
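
Mechanically, domain fronting exploits the fact that the destination name appears in two places: the TLS SNI field, which the censor can see, and the HTTP Host header, which is encrypted. A hedged Python sketch with hypothetical hostnames; as noted above, the big providers no longer route requests this way:

import socket
import ssl

FRONT = "front.example.com"    # innocuous domain the censor allows (visible SNI)
HIDDEN = "hidden.example.com"  # censored service on the same CDN (encrypted Host)

ctx = ssl.create_default_context()
with socket.create_connection((FRONT, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=FRONT) as tls:
        # the censor sees a handshake for FRONT; the CDN routes the
        # request by the Host header it decrypts inside the tunnel
        request = f"GET / HTTP/1.1\r\nHost: {HIDDEN}\r\nConnection: close\r\n\r\n"
        tls.sendall(request.encode())
        print(tls.recv(4096).decode(errors="replace"))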

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that Internet freedom is increasingly in the hands of the world’s largest Internet companies. And while freedom may have its advocates — the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban — actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

In 1993, John Gilmore famously said that “The Internet interprets censorship as damage and routes around it.” That was technically true when he said it but only because the routing structure of the Internet was so distributed. As centralization increases, the Internet loses that robustness, and censorship by governments and companies becomes easier.

This essay previously appeared on Lawfare.com.

New iPhone OS May Include Device-Unlocking Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/new_iphone_os_m.html

iOS 12, the next release of Apple’s iPhone operating system, may include features to prevent someone from unlocking your phone without your permission:

The feature essentially forces users to unlock the iPhone with the passcode when connecting it to a USB accessory every time the phone has not been unlocked for one hour. That includes the iPhone unlocking devices that companies such as Cellebrite or GrayShift make, which police departments all over the world use to hack into seized iPhones.

“That pretty much kills [GrayShift’s product] GrayKey and Cellebrite,” Ryan Duff, a security researcher who has studied iPhone and is Director of Cyber Solutions at Point3 Security, told Motherboard in an online chat. “If it actually does what it says and doesn’t let ANY type of data connection happen until it’s unlocked, then yes. You can’t exploit the device if you can’t communicate with it.”

This is part of a bunch of security enhancements in iOS 12:

Other enhancements include tools for generating strong passwords, storing them in the iCloud keychain, and automatically entering them into Safari and iOS apps across all of a user’s devices. Previously, standalone apps such as 1Password have done much the same thing. Now, Apple is integrating the functions directly into macOS and iOS. Apple also debuted new programming interfaces that allow users to more easily access passwords stored in third-party password managers directly from the QuickType bar. The company also announced a new feature that will flag reused passwords, an interface that autofills one-time passwords provided by authentication apps, and a mechanism for sharing passwords among nearby iOS devices, Macs, and Apple TVs.

A separate privacy enhancement is designed to prevent websites from tracking people when using Safari. It’s specifically designed to prevent share buttons and comment code on webpages from tracking people’s movements across the Web without permission or from collecting a device’s unique settings such as fonts, in an attempt to fingerprint the device.

The last additions of note are new permission dialogues macOS Mojave will display before allowing apps to access a user’s camera or microphone. The permissions are designed to thwart malicious software that surreptitiously turns on these devices in an attempt to spy on users. The new protections will largely mimic those previously available only through standalone apps such as one called Oversight, developed by security researcher Patrick Wardle. Apple said similar dialog permissions will protect the file system, mail database, message history, and backups.

No, Ray Ozzie hasn’t solved crypto backdoors

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/no-ray-ozzie-hasnt-solved-crypto.html

According to this Wired article, Ray Ozzie may have a solution to the crypto backdoor problem. No, he hasn’t. He’s only solving the part we already know how to solve. He’s deliberately ignoring the stuff we don’t know how to solve. We know how to make backdoors, we just don’t know how to secure them.

The vault doesn’t scale

Yes, Apple has a vault where they’ve successfully protected important keys. No, it doesn’t mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

A good analogy to Ozzie’s solution is LetsEncrypt for getting SSL certificates for your website, which is fairly scalable, using a private key locked in a vault for signing hundreds of thousands of certificates. That this scales seems to validate Ozzie’s proposal.

But at the same time, LetsEncrypt is easily subverted. LetsEncrypt uses DNS to verify your identity. But spoofing DNS is easy, as was shown in the recent BGP attack against a cryptocurrency. Attackers can create fraudulent SSL certificates with enough effort. We’ve got other protections against this, such as discovering and revoking the bad SSL certificate, so while damaging, it’s not catastrophic.

But with Ozzie’s scheme, equivalent attacks would be catastrophic, as it would lead to unlocking the phone and stealing all of somebody’s secrets.

In particular, consider what would happen if LetsEncrypt’s certificate was stolen (as Matthew Green points out). The consequence is that this would be detected and mass revocations would occur. If Ozzie’s master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works — but then his scheme includes none of the many protections necessary to make SSL work.

What I’m trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down — quickly. We have so much experience with failure at scale that we can judge Ozzie’s scheme as woefully incomplete. It’s not even up to the standard of SSL, and we have a long list of SSL problems.

Cryptography is about people more than math

We have a mathematically pure encryption algorithm called the “One Time Pad”. It can’t ever be broken, provably so with mathematics.

It’s also perfectly useless, as it’s not something humans can use. That’s why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be. (I learned the fallacy of One Time Pads on my grandfather’s knee — he was a WW II codebreaker who broke German messages trying to futz with One Time Pads).
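
The One Time Pad itself is a few lines of code, which is exactly the point. A quick Python illustration; the mathematics is trivial, and everything hard lives in the final comment:

import secrets

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))                  # truly random, used once
ciphertext = bytes(m ^ p for m, p in zip(message, pad))  # provably unbreakable
recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
assert recovered == message
# the unusable part: the pad must be as long as all the traffic you will
# ever send, never reused, and delivered to the recipient in secret; that
# key-management burden is the human problem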

The same is true with Ozzie’s scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don’t know how to secure is the human element.
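
To make that concrete, the mathematical core of a key-escrow scheme fits in a few lines. Here is a sketch using PyNaCl, with every detail hypothetical: the phone seals its unlock key to an escrow public key at provisioning, and a warranted unlock is just a decryption inside the vault. Nothing in the code answers the human questions that follow:

from nacl.public import PrivateKey, SealedBox

# hypothetical escrow keypair, held in the vault
escrow_sk = PrivateKey.generate()
escrow_pk = escrow_sk.public_key

# at provisioning, the phone seals its unlock key to the escrow key
# and stores the blob; only the vault key can open it
device_unlock_key = b"\x01" * 32  # stand-in for the real key
blob = SealedBox(escrow_pk).encrypt(device_unlock_key)

# a "lawful access" request is then just a decryption in the vault
assert SealedBox(escrow_sk).decrypt(blob) == device_unlock_key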

How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can’t be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren’t. Consider financial transactions. It used to be common that you could just email your bank/broker to wire funds into an account for such things as buying a house. Hackers have subverted that, intercepting messages, changing account numbers, and stealing millions. Most banks/brokers require additional verification before doing such transfers.

Let me repeat: Ozzie has only solved the part we already know how to solve. He hasn’t addressed these issues that confound us.

We still can’t secure security, much less secure backdoors

We already know how to decrypt iPhones: just wait a year or two for somebody to discover a vulnerability. The FBI claims it’s “going dark”, but that’s only for timely decryption of phones. If they are willing to wait a year or two, a vulnerability will eventually be found that allows decryption.

That’s what’s happened with the “GrayKey” device that’s been all over the news lately. Apple is fixing it so that it won’t work on new phones, but it works on old phones.

Ozzie’s solution is based on the assumption that iPhones are already secure against things like GrayKey. Like his assumption “if Apple already has a vault for private keys, then we have such vaults for backdoor keys”, Ozzie is saying “if Apple already had secure hardware/software to secure the phone, then we can use the same stuff to secure the backdoors”. But we don’t really have secure vaults and we don’t really have secure hardware/software to secure the phone.

Again, to stress this point, Ozzie is solving the part we already know how to solve, but ignoring the stuff we don’t know how to solve. His solution is insecure for the same reason phones are already insecure.

Locked phones aren’t the problem

Phones are general purpose computers. That means anybody can install an encryption app on the phone regardless of whatever other security the phone might provide. The police are powerless to stop this. Even if they make such encryption a crime, criminals will still use it.

That leads to a strange situation that the only data the FBI will be able to decrypt is that of people who believe they are innocent. Those who know they are guilty will install encryption apps like Signal that have no backdoors.

In the past this was rare, as people found learning new apps a barrier. These days, apps like Signal are so easy even drug dealers can figure out how to use them.

We know how to get Apple to give us a backdoor: just pass a law forcing them to. It may look like Ozzie’s scheme, or it may be something more secure designed by Apple’s engineers. Sure, it will weaken security on the phone for everyone, but those who truly care will just install Signal. But again we are back to the problem that Ozzie is solving the problem we know how to solve while ignoring the much larger problem, that of preventing people from installing their own encryption.

The FBI isn’t necessarily the problem

Ozzie phrases his solution in terms of U.S. law enforcement. Well, what about Europe? What about Russia? What about China? What about North Korea?

Technology is borderless. A solution in the United States that allows “legitimate” law enforcement requests will inevitably be used by repressive states for what we believe would be “illegitimate” law enforcement requests.

Ozzie sees himself as the hero helping law enforcement protect 300 million American citizens. He doesn’t see what he really is: the villain helping oppress 1.4 billion Chinese, 144 million Russians, and another couple billion living under oppressive governments around the world.

Conclusion

Ozzie pretends the problem is political, that he’s created a solution that appeases both sides. He hasn’t. He’s solved the problem we already know how to solve. He’s ignored all the problems we struggle with, the problems we claim make secure backdoors essentially impossible. I’ve listed some in this post, but there are many more. Any famous person can create a solution that convinces fawning editors at Wired Magazine, but if Ozzie wants to move forward he’s going to have to work harder to appease doubting cryptographers.

Why the crypto-backdoor side is morally corrupt

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/why-crypto-backdoor-side-is-morally.html

Crypto-backdoors for law enforcement is a reasonable position, but the side that argues for it adds things that are either outright lies or morally corrupt. Every year, the amount of digital evidence law enforcement has to solve crimes increases, yet they outrageously lie, claiming they are “going dark”, losing access to evidence. A weirder claim is that those who oppose crypto-backdoors are nonetheless ethically required to make them work. This is morally corrupt.

That’s the point of this Lawfare post, which claims:

What I am saying is that those arguing that we should reject third-party access out of hand haven’t carried their research burden. … There are two reasons why I think there hasn’t been enough research to establish the no-third-party access position. First, research in this area is “taboo” among security researchers. … the second reason why I believe more research needs to be done: the fact that prominent non-government experts are publicly willing to try to build secure third-party-access solutions should make the information-security community question the consensus view. 

This is nonsense. It’s like claiming we haven’t cured the common cold because researchers haven’t spent enough effort at it. When researchers claim they’ve tried 10,000 ways to make something work, it’s like insisting they haven’t done enough because they haven’t tried 10,001 times.
Certainly, half the community doesn’t want to make such things work. Any solution for the “legitimate” law enforcement of the United States means a solution for illegitimate states like China and Russia which would use the feature to oppress their own people. Even if I believe it’s a net benefit to the United States, I would never attempt such research because of China and Russia.
But computer scientists notoriously ignore ethics in pursuit of developing technology. That describes the other half of the crypto community who would gladly work on the problem. The reason they haven’t come up with solutions is because the problem is hard, really hard.
The second reason the above argument is wrong: it says we should believe a solution is possible because some outsiders are willing to try. But as Yoda says, do or do not, there is no try. Our opinions on the difficulty of the problem don’t change simply because people are trying. Our opinions change when people are succeeding. People are always trying the impossible, that’s not evidence it’s possible.
The paper cherry-picks things, like Intel CPU features, to make it seem like they are making forward progress. No. Intel’s SGX extensions are there for other reasons. Sure, it’s a new development, and new developments may change our opinion on the feasibility of law enforcement backdoors. But nowhere in talking about this new development have they actually proposed a solution to the backdoor problem. New developments happen all the time, and the pro-backdoor side is going to seize upon each and every one to claim that this, finally, solves the backdoor problem, without showing exactly how it solves the problem.

The Lawfare post does make one good argument, that there is no such thing as “absolute security”, and thus the argument is stupid that “crypto-backdoors would be less than absolute security”. Too often in the cybersecurity community we reject solutions that don’t provide “absolute security” while failing to acknowledge that “absolute security” is impossible.
But that’s not really what’s going on here. Cryptographers aren’t certain we’ve achieved even “adequate security” with current crypto regimes like SSL/TLS/HTTPS. Every few years we find horrible flaws in the old versions and have to develop new versions. If you steal somebody’s iPhone today, it’s so secure you can’t decrypt anything on it. But then if you hold it for 5 years, somebody will eventually figure out a hole and then you’ll be able to decrypt it — a hole that won’t affect Apple’s newer phones.
The reason we think we can’t get crypto-backdoors correct is simply because we can’t get crypto completely correct. It’s implausible that we can get the backdoors working securely when we still have so much trouble getting encryption working correctly in the first place.
Thus, we aren’t talking about “insignificantly less security”, we are talking about going from “barely adequate security” to “inadequate security”. Negotiating keys between you and a website is hard enough without simultaneously having to juggle keys with law enforcement organizations.

And finally, even if cryptographers do everything correctly, law enforcement themselves haven’t proven reliable. The NSA had its exploits exposed (like the infamous ETERNALBLUE), and OPM lost all its security clearance records. If they can’t keep those secrets, it’s unreasonable to believe they can hold onto backdoor secrets. One of the problems cryptographers are expected to solve is partly this: to make it work in such a way that makes it unlikely law enforcement will lose its secrets.

Summary

This argument by the pro-backdoor side, that we in the crypto-community should do more to solve backdoors, is simply wrong. We’ve spent a lot of effort at this already. Many continue to work on this problem — the reason you haven’t heard much from them is because they haven’t had much success. It’s like blaming doctors for not doing more to work on interrogation drugs (truth serums). Sure, a lot of doctors won’t work on this because it’s distasteful, but at the same time, there are many drug companies who would love to profit from them. The reason they don’t exist is not because they aren’t spending enough money researching them, it’s because there is no plausible solution in sight.
Crypto-backdoors designed for law-enforcement will significantly harm your security. This may change in the future, but that’s the state of crypto today. You should trust the crypto experts on this, not lawyers.

GreyKey iPhone Unlocker

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/greykey_iphone_.html

Some details about the iPhone unlocker from the US company Grayshift, with photos.

Little is known about Grayshift or its sales model at this point. We don’t know whether sales are limited to US law enforcement, or if it is also selling in other parts of the world. Regardless of that, it’s highly likely that these devices will ultimately end up in the hands of agents of an oppressive regime, whether directly from Grayshift or indirectly through the black market.

It’s also entirely possible, based on the history of the IP-Box, that Grayshift devices will end up being available to anyone who wants them and can find a way to purchase them, perhaps by being reverse-engineered and reproduced by an enterprising hacker, then sold for a couple hundred bucks on eBay.

Forbes originally wrote about this, and I blogged that article.

Raspbian update: supporting different screen sizes

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/raspbian-update-screen-sizes/

You may have noticed that we released an updated Raspbian software image yesterday. While the main reason for the new image was to provide support for the new Raspberry Pi 3 Model B+, the image also includes, alongside the usual set of bug fixes and minor tweaks, one significant chunk of new functionality that is worth pointing out.

Updating Raspbian on your Raspberry Pi

How to update to the latest version of Raspbian on your Raspberry Pi.

Compatibility

As a software developer, one of the most awkward things to deal with is what is known as platform fragmentation: having to write code that works on all the different devices and configurations people use. In my spare time, I write applications for iOS, and this has become increasingly painful over the last few years. When I wrote my first iPhone application, it only had to work on the original iPhone, but nowadays any iOS application has to work across several models of iPhone and iPad (which all have different processors and screens), and also across the various releases of iOS. And that’s before you start to consider making your code run on Android as well…

Screenshot of clean Raspbian desktop

The good thing about developing for Raspberry Pi is that there is only a relatively small number of different models of Pi hardware. We try our best to make sure that, wherever possible, the Raspberry Pi Desktop software works on every model of Pi ever sold, and we’ve managed to do this for most of the software in the image. The only exceptions are some of the more recent applications like Chromium, which won’t run on the older ARMv6 processors in the Pi 1 and the Pi Zero, and some applications that run very slowly due to needing more memory than the older platforms have.

Raspbian with different screen resolutions

But there is one area where we have no control over the hardware, and that is screen resolution. The HDMI port on the Pi supports a wide range of resolutions, and when you include the composite port and display connector as well, people can be using the desktop on a huge number of different screen sizes.

Supporting a range of screen sizes is harder than you might think. One problem is that the Linux desktop environment is made up of a large selection of bits of software from various different developers, and not all of these support resizing. And the bits of software that do support resizing don’t all do it in the same way, so making everything resize at once can be awkward.

This is why one of the first things I did when I first started working on the desktop was to create the Appearance Settings application in order to bring a lot of the settings for things like font and icon sizes into one place. This avoids users having to tweak several configuration files whenever they wanted to change something.

Screenshot of appearance settings application in Raspbian

The Appearance Settings application was a good place to start regarding support of different screen sizes. One of the features I originally included was a button to set everything to a default value. This was really a default setting for screens of an average size, and the resulting defaults would not have worked that well on much smaller or much larger screens. Now, there is no longer a single defaults button, but a new Defaults tab with multiple options:

Screenshot of appearance settings application in Raspbian

These three options adjust font size, icon size, and various other settings to values which ought to work well on screens with a high or low resolution. (The For medium screens option has the same effect as the previous defaults button.) The results will not be perfect in all circumstances and for all applications — as mentioned above, there are many different components used to create the desktop, and some of them don’t provide any way of resizing what they draw. But using these options should set the most important parts of the desktop and installed applications, such as icons, fonts, and toolbars, to a suitable size.

Pixel doubling

We’ve added one other option for supporting high resolution screens. At the bottom of the System tab in the Raspberry Pi Configuration application, there is now an option for pixel doubling:

Screenshot of configuration application in Raspbian

We included this option to facilitate the use of the x86 version of Raspbian with ultra-high-resolution screens that have very small pixels, such as Apple’s Retina displays. When running our desktop on one of these, the tininess of the pixels made everything too small for comfortable use.

Enabling pixel doubling simply draws every pixel in the desktop as a 2×2 block of pixels on the screen, making everything exactly twice the size and resulting in a usable desktop on, for example, a MacBook Pro’s Retina display. We’ve included the option on the version of the desktop for the Pi as well, because we know that some people use their Pi with large-screen HDMI TVs.

As pixel doubling magnifies everything on the screen by a factor of two, it’s also a useful option for people with visual impairments.
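
The operation itself is about as simple as graphics code gets; in Python with NumPy, for illustration:

import numpy as np

img = np.arange(12).reshape(3, 4)  # stand-in for a tiny 3x4 framebuffer

# draw every source pixel as a 2x2 block: (3, 4) becomes (6, 8),
# so everything on screen appears at exactly twice the size
doubled = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)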

How to update

As mentioned above, neither of these new functionalities is a perfect solution to dealing with different screen sizes, but we hope they will make life slightly easier for you if you’re trying to run the desktop on a small or large screen. The features are included in the new image we have just released to support the Pi 3B+. If you want to add them to your existing image, the standard upgrade from apt will do so. As shown in the video above, you can just open a terminal window and enter the following to update Raspbian:

sudo apt-get update
sudo apt-get dist-upgrade

As always, your feedback, either in comments here or on the forums, is very welcome.

The post Raspbian update: supporting different screen sizes appeared first on Raspberry Pi.

What John Oliver gets wrong about Bitcoin

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/what-john-oliver-gets-wrong-about.html

John Oliver covered bitcoin/cryptocurrencies last night. I thought I’d describe a bunch of things he gets wrong.

How Bitcoin works

Nowhere in the show does it describe what Bitcoin is and how it works.
Discussions should always start with Satoshi Nakamoto’s original paper. The thing Satoshi points out is that there is an important cost to normal transactions, namely, the entire legal system designed to protect you against fraud, such as the way you can reverse the transactions on your credit card if it gets stolen. The point of Bitcoin is that there is no way to reverse a charge. A transaction is done via cryptography: to transfer money to me, you sign the transaction with your private key, assigning the coins to my public key, handing ownership over to me with no third party involved that can reverse the transaction, and essentially no overhead.
All the rest of the stuff, like the decentralized blockchain and mining, is all about making that work.
Bitcoin crazies forget about the original genesis of Bitcoin. For example, they talk about adding features to stop fraud, reversing transactions, and having a central authority that manages that. This misses the point, because the existing electronic banking system already does that, and does a better job at it than cryptocurrencies ever can. If you want to mock cryptocurrencies, talk about the “DAO”, which did exactly that — and collapsed in a big fraudulent scheme where insiders made money and outsiders didn’t.
Sticking to Satoshi’s original ideas is a lot better than trying to repeat how the crazy fringe activists define Bitcoin.

How does any money have value?

Oliver’s answer is currencies have value because people agree that they have value, like how they agree a Beanie Baby is worth $15,000.
This is wrong. A better way of asking the question is: why does the value of money change? The dollar has been losing roughly 2% of its value each year for decades. This is called “inflation”: as the dollar loses value, it takes more dollars to buy things, which means the price of things (in dollars) goes up, and employers have to pay us more dollars so that we can buy the same amount of things.
The reason the value of the dollar changes is largely because the Federal Reserve manages the supply of dollars, using the same law of Supply and Demand. As you know, if a supply decreases (like oil), then the price goes up, or if the supply of something increases, the price goes down. The Fed manages money the same way: when prices rise (the dollar is worth less), the Fed reduces the supply of dollars, causing it to be worth more. Conversely, if prices fall (or don’t rise fast enough), the Fed increases supply, so that the dollar is worth less.
The reason money follows the law of Supply and Demand is that people use money: they consume it like they do other goods and services, like gasoline, tax preparation, food, dance lessons, and so forth. It’s not like a fine art painting, a stamp collection or a Beanie Baby — money is a product. It’s just that people have a hard time thinking of it as a consumer product since, in their experience, money is what they use to buy consumer products. But it’s a symmetric operation: when you buy gasoline with dollars, you are actually selling dollars in exchange for gasoline. That you call one side of this transaction “money” and the other “goods” is purely arbitrary; you could just as well call gasoline the money and dollars the good being bought and sold for gasoline.
The reason dollars are a product is that trying to use gasoline as money is a pain in the neck. Storing it and exchanging it is difficult. Goods like this do become money, such as famously how prisons often use cigarettes as a medium of exchange, even for non-smokers, but it has to be a good that is fungible, storable, and easily exchanged. Dollars are the most fungible, the most storable, and the most easily exchanged, so they have the most value as “money”. Sure, the mechanic can fix the farmer’s car for three chickens instead, but most of the time, both parties in the transaction would rather exchange the same value using dollars than chickens.
So the value of dollars is not like the value of Beanie Babies, which people might buy for $15,000 and whose value changes purely on the whims of investors. Instead, a dollar is like gasoline, which obeys the law of Supply and Demand.
This brings us back to the question of where Bitcoin gets its value. While Bitcoin is indeed used like dollars to buy things, that’s only a tiny use of the currency, so its value isn’t determined by Supply and Demand. Instead, the value of Bitcoin is a lot like that of Beanie Babies, obeying the laws of investments. So in this respect, Oliver is right about where the value of Bitcoin comes from, but wrong about where the value of dollars comes from.

Why Bitcoin conference didn’t take Bitcoin

John Oliver points out the irony of a Bitcoin conference that stopped accepting payments in Bitcoin for tickets.
The biggest reason for this is because Bitcoin has become so popular that transaction fees have gone up. Instead of being proof of failure, it’s proof of popularity. What John Oliver is saying is the old joke that nobody goes to that popular restaurant anymore because it’s too crowded and you can’t get a reservation.
Moreover, the point of Bitcoin is not to replace everyday currencies for everyday transactions. If you read Satoshi Nakamoto’s whitepaper, its only goal is to replace certain types of transactions, like purely electronic transactions where electronic goods and services are being exchanged. Where real-life goods/services are being exchanged, existing currencies work just fine. It’s only the crazy activists who claim Bitcoin will eventually replace real world currencies — the saner people see it co-existing with real-world currencies, each with a different value to consumers.

Turning a McNugget back into a chicken

John Oliver uses the metaphor that while you can process a chicken into McNuggets, you can’t reverse the process and turn McNuggets back into a chicken. It’s a funny metaphor.
But it’s not clear what the heck this metaphor is trying to explain. That’s not a metaphor for the blockchain, but a metaphor for a “cryptographic hash”, where each block is a chicken, and the McNugget is the hash of the block (well, of the block plus the hash of the previous block, forming a chain).
Even then, that metaphor has problems. The McNugget produced from each chicken must be unique to that chicken for the metaphor to accurately describe a cryptographic hash. You can therefore identify the original chicken simply by looking at the McNugget. A slight change in the original chicken, like losing a feather, results in a completely different McNugget. Thus, nuggets can be used to tell if the original chicken has changed.
This then leads to the key property of the blockchain: it is unalterable. You can’t go back and change any of the blocks of data, because the fingerprints, the nuggets, will also change, and break the nugget chain.
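
That property is easy to demonstrate. A minimal hash chain in Python: change one chicken and every nugget from that point on changes too:

import hashlib

def nugget_chain(blocks):
    prev, digests = b"", []
    for block in blocks:
        # each digest covers the block plus the previous digest
        digest = hashlib.sha256(prev + block).hexdigest()
        digests.append(digest)
        prev = digest.encode()
    return digests

original = nugget_chain([b"chicken 1", b"chicken 2", b"chicken 3"])
tampered = nugget_chain([b"chicken 1", b"chicken X", b"chicken 3"])
assert original[0] == tampered[0]   # blocks before the edit match
assert original[1] != tampered[1]   # the edit changes this digest...
assert original[2] != tampered[2]   # ...and every digest after it
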
The point is that John Oliver is laughing at what he thinks is a silly metaphor because he totally misses the point of the metaphor.
Oliver rightly says “don’t worry if you don’t understand it — most people don’t”, but that includes the big companies that John Oliver names. Some companies do get it, and are producing reasonable things (like JP Morgan, by all accounts), but some don’t. IBM and other big consultancies are charging companies millions of dollars to consult with them on blockchain products where nobody involved, the customer or the consultancy, actually understands any of it. That doesn’t stop them from happily charging customers on one side and happily spending money on the other.
Thus, rather than Oliver explaining the problem, he’s just being part of the problem. His explanation of blockchain left you dumber than before.

ICOs

John Oliver mocks the Brave ICO ($35 million in 30 seconds), claiming it’s all driven by YouTube personalities and people who aren’t looking at the fundamentals.
And while it is true that most ICOs are bunk, the Brave ICO actually had a business model behind it. Brave is a Chrome-like web-browser whose distinguishing feature is that it protects your privacy from advertisers. If you don’t use Brave or a browser with an ad block extension, you have no idea how bad things are for you. However, this presents a problem for websites that fund themselves via advertisements, which is most of them, because visitors no longer see ads. Brave has a fix for this. Most people wouldn’t mind supporting the websites they visit often, like the New York Times. That’s where the Brave ICO “token” comes in: it’s not simply stock in Brave, but a token for micropayments to websites. Users buy tokens, then use them for micropayments to websites like the New York Times. The New York Times then sells the tokens back to the market for dollars. The buying and selling of tokens happens without a centralized middleman.
This is still all speculative, of course, and it remains to be seen how successful Brave will be, but it’s a serious effort. It has well-respected VCs behind the company, a well-respected founder (despite the fact he invented JavaScript), and well-respected employees. It’s not a scam, it’s a legitimate venture.

How do you make money from Bitcoin?

The last part of the show is dedicated to describing all the scams out there, advising people to be careful, and to be “responsible”. This is garbage.
It’s like my simple two step process to making lots of money via Bitcoin: (1) buy when the price is low, and (2) sell when the price is high. My advice is correct, of course, but useless. Same as “be careful” and “invest responsibly”.
The truth about investing in cryptocurrencies is “don’t”. The only responsible way to invest is to buy low-overhead market index funds and hold for retirement. No, you won’t get super rich doing this, but anything other than this is irresponsible gambling.
It’s a hard lesson to learn, because everyone is telling you the opposite. The entire channel CNBC is devoted to day traders, who buy and sell stocks at a high rate based on the same principle as a ponzi scheme, basing their judgment not on the fundamentals (like long-term dividends) but on the animal spirits of whatever stock is hot or cold at the moment. This is the same reason people buy or sell Bitcoin, not because they can describe its fundamental value, but because they believe in a bigger fool down the road who will buy it for even more.
For things like Bitcoin, the trick to making money is to have bought it over 7 years ago when it was essentially worthless, except to nerds who were into that sort of thing. It’s the same trick to making a lot of money in Magic: The Gathering trading cards, which nerds bought decades ago and which are worth a ton of money now. Or, to have bought Apple stock back in 2009 when the iPhone was new, when nerds could understand the potential of real Internet access and apps that Wall Street could not.
That was my strategy: be a nerd, who gets into things. I’ve made a good amount of money on all these things because as a nerd, I was into Magic: The Gathering, Bitcoin, and the iPhone before anybody else was, and bought in at the point where these things were essentially valueless.
At this point with cryptocurrencies, with the non-nerds now flooding the market, there’s little chance of making it rich. The lottery is probably a better bet. Instead, if you want to make money, become a nerd, obsess about a thing, understand it when it’s new, and cash out once the rest of the market figures it out. That might be Brave, for example, but buy into it because you’ve spent the last year studying the browser advertisement ecosystem, the market’s willingness to pay for content, and how their Basic Attention Token delivers value to websites — not because you want in on the ICO craze.

Conclusion

John Oliver spends 25 minutes explaining Bitcoin, cryptocurrencies, and the blockchain to you. Sure, it’s funny, but it leaves you worse off than when it started. He admits they “simplify” the explanation, but they simplified it to the point where they removed all useful information.

Apple to Store Encryption Keys in China

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/apple_to_store_.html

Apple is bowing to pressure from the Chinese government and storing encryption keys in China. While I would prefer it if it would take a stand against China, I really can’t blame it for putting its business model ahead of its desires for customer privacy.

Two more articles.