Tag Archives: Apple

Apple Abandoned Plans for Encrypted iCloud Backup after FBI Complained

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/apple_abandoned.html

This is new from Reuters:

More than two years ago, Apple told the FBI that it planned to offer users end-to-end encryption when storing their phone data on iCloud, according to one current and three former FBI officials and one current and one former Apple employee.

Under that plan, primarily designed to thwart hackers, Apple would no longer have a key to unlock the encrypted data, meaning it would not be able to turn material over to authorities in a readable form even under court order.

In private talks with Apple soon after, representatives of the FBI’s cyber crime agents and its operational technology division objected to the plan, arguing it would deny them the most effective means for gaining evidence against iPhone-using suspects, the government sources said.

When Apple spoke privately to the FBI about its work on phone security the following year, the end-to-end encryption plan had been dropped, according to the six sources. Reuters could not determine why exactly Apple dropped the plan.
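The design Reuters describes is, at its core, client-side encryption: the backup is sealed on the device under a key derived from a secret only the user holds, so the server stores ciphertext it cannot read, court order or not. Here is a minimal sketch of that property, assuming a passphrase-derived key and Python's cryptography library (an illustration of client-side encryption in general, not Apple's unpublished design):

```python
# A minimal sketch of the property described above: the client encrypts the
# backup under a key derived from a secret only the user holds, so the server
# (a stand-in for iCloud here) stores ciphertext it cannot read. This
# illustrates client-side (end-to-end) encryption in general, not Apple's
# actual, unpublished design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def encrypt_backup(backup: bytes, passphrase: bytes) -> dict:
    salt = os.urandom(16)
    key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, backup, None)
    # Only salt, nonce, and ciphertext leave the device; the key never does.
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

def decrypt_backup(blob: dict, passphrase: bytes) -> bytes:
    key = Scrypt(salt=blob["salt"], length=32, n=2**15, r=8, p=1).derive(passphrase)
    return AESGCM(key).decrypt(blob["nonce"], blob["ciphertext"], None)

blob = encrypt_backup(b"contacts, photos, messages...", b"user-held secret")
assert decrypt_backup(blob, b"user-held secret") == b"contacts, photos, messages..."
```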

ToTok Is an Emirati Spying Tool

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/totok_is_an_emi.html

The smartphone messaging app ToTok is actually an Emirati spying tool:

But the service, ToTok, is actually a spying tool, according to American officials familiar with a classified intelligence assessment and a New York Times investigation into the app and its developers. It is used by the government of the United Arab Emirates to try to track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.

ToTok, introduced only months ago, was downloaded millions of times from the Apple and Google app stores by users throughout the Middle East, Europe, Asia, Africa and North America. While the majority of its users are in the Emirates, ToTok surged to become one of the most downloaded social apps in the United States last week, according to app rankings and App Annie, a research firm.

Apple and Google have removed it from their app stores. If you have it on your phone, delete it now.

Fake: DMCA Notice Targeting Apple Jailbreaks on Reddit Was Fraudulent

Post Syndicated from Andy original https://torrentfreak.com/fake-dmca-notice-targeting-apple-jailbreaks-on-reddit-was-fraudulent-191213/

Earlier this week, black clouds began to form over the passionate iOS jailbreaking community. Although jailbreaking is tolerated by Apple through gritted teeth, thanks to legal protections under the DMCA, the company took the unusual step of sending a DMCA notice targeting a developer’s tweet containing an encryption key.

While that tweet was later restored, the takedown came as a complete surprise, and the knock-on effect from this unsettling act set the scene for the company being blamed for additional, similar acts, this time on Reddit.

In the wake of the Twitter action, a moderator of the /r/jailbreak sub-Reddit revealed that Reddit’s legal team had removed five posts detailing iOS jailbreak releases checkra1n and unc0ver. All of the posts were deleted by Reddit’s admins after receiving a DMCA notice, ostensibly sent by Apple.

What followed was an hours-long information blackout, during which /r/jailbreak’s moderators sought but failed to obtain information from Reddit’s admins. Fearing, credibly, that more notices could be filed and /r/jailbreak labeled a repeat offender under the DMCA, its moderators put the forum into lockdown.

Right from the very beginning there was no clear proof that Apple had sent any DMCA notices to Reddit, despite news headlines blaming the tech company for going to war against jailbreakers. It now transpires that waiting for proof would’ve been a more prudent option.

As revealed by checkra1n development team member ‘qwertyoruiopz’, the notice that targeted his project was actually a fake.

And, according to fellow developer ‘axi0mX’, the fake notice wasn’t particularly well constructed either.

“We reviewed it and confirmed that it was someone impersonating Apple. It was not sent from their law firm, which is Kilpatrick Townsend. There are issues with grammar and spelling,” he revealed.

“This notice was obviously not submitted in good faith, and it was not done by someone authorized to represent Apple. Not cool. They could be sued for damages or face criminal charges for perjury.”

Being sued for sending a fake notice sounds like a reasonable remedy in theory but history tells us, one particularly notable case aside, that it is unlikely to happen. However, it’s clear that more can be done to mitigate the effects of malicious takedowns, starting with more transparency from Reddit’s admins.

While the moderators of /r/jailbreak knew about the complaints early on, they were given no information about who sent them or on what basis. This meant that the people against whom the complaints were made weren’t in a position to counter them, at least with knowledge on their side.

“My personal take on all this is that this should provide plenty of food for thought about the state of copyright laws in the US. A site like Reddit risks losing legal safe harbor protections if they don’t immediately act on such notices,” qwertyoruiopz says.

“Not sharing the notices by default is however very bad policy on Reddit’s end; I would even call this a vulnerability. It allows nefarious parties to create false-flag takedowns that can spark infighting and have chilling effects (albeit temporary) on non-infringing content.”

There can be little doubt that Reddit takes its DMCA obligations very seriously, so it could be argued that taking down the posts in response to a complaint was the safest legal option. However, if a cursory review by those targeted revealed clear fraud within minutes, there is a very good case for sharing the notices quickly to ensure that fraudulent ones don’t have their desired effect.

While Reddit has shown no signs of sharing DMCA notices with the Lumen Database recently, quickly sharing them with those who have allegedly infringed would be a good first step.


Apple Hits Encryption Key With DMCA Notice, Panic Shuts Down the Jailbreak Sub-Reddit

Post Syndicated from Andy original https://torrentfreak.com/apple-hits-encryption-key-with-dmca-notice-panic-shuts-down-the-jailbreak-sub-reddit-191212/


To most users, mobile computing devices such as phones and tablets exist to be used however the consumer sees fit. However, the majority are locked down to prevent the adventurous from doing whatever they like with their own hardware.

To bypass these restrictions, users can utilize a so-called jailbreak tool. These unlock the digital handcuffs deployed on a device and grant additional freedoms that aren’t available as standard. As such, they are popular with modders who enjoy customizing their hardware with new features that otherwise wouldn’t exist.

Since Apple is viewed as one of the most restrictive manufacturers, its hardware and software face almost continual ‘attacks’ from people wanting to jailbreak its devices. There are many communities online dedicated to this scene, including Reddit’s 462,000-member /r/jailbreak forum.

Yesterday, however, chaos reigned after Reddit’s legal team received multiple DMCA notices against a number of threads detailing a pair of prominent jailbreak tools – checkra1n and unc0ver.

“Reddit Legal have removed 5 posts (all release posts) for checkra1n and unc0ver. We don’t know what exactly was the copyright about. Admins never told us, we just saw their actions in our mod log,” a moderator explained.

Perhaps unsurprisingly, many linked the issues facing /r/jailbreak to an earlier drama on Twitter when an iOS hacker called S1guza published an Apple decryption key that led to his tweet being taken down following a DMCA notice. It took a few hours but the tweet was ultimately reinstated last evening. No specific reasons were given for taking it down, and none were provided for putting it back up.

The Twitter takedown was sent by Kilpatrick Townsend & Stockton LLP, a company that has acted on Apple’s behalf in the past. The notice itself, published on the Lumen Database thanks to Twitter, also provides no useful details as to why the tweet was targeted.

Since Apple was behind the takedown on Twitter and the most obvious culprit in respect of the DMCA takedowns on Reddit, many fingers were pointed towards the Cupertino-based company. However, despite the best efforts of the moderators on /r/jailbreak, Reddit’s admins would not provide the necessary information to identify who filed the DMCA notices or on what grounds.

With uncertainty apparently the order of the day, moderators of the discussion forum took the drastic decision to put their platform into lockdown.

“Locking down the subreddit to prevent new threads is one of the ‘standard’ responses moderators take to show the admins that the mod team isn’t playing, and that they are serious and ready to remedy the issue,” a post from the mods reads.

“Too many DMCA notices eventually end up with a warn and a ban (or just a ban) from the admins to whatever subreddit these notices are being sent to.”

While the DMCA notices in themselves are clearly the biggest issue here, unlike Twitter and Google, for example, Reddit does not routinely share DMCA notices it receives with an external database such as Lumen. If it did, the additional transparency would perhaps help to shine some light on the topic and prevent heavy self-imposed actions, such as the voluntary lockdown of the jailbreak sub.

Moderators report that Reddit’s admins were initially unresponsive to requests for information and that a database that tracks DMCA notices sent to Reddit didn’t provide any helpful details on the sender of the notices.

Last evening, however, one of the affected jailbreak developers, ‘qwertyoruiopz’, announced on Twitter that things were well on their way to being resolved on Reddit and that the sub had been taken out of ‘lockdown mode’.

Soon after, a welcome response from Reddit’s admins was published, effectively signaling the all-clear.

While the message was well-received, /r/jailbreak shouldn’t have been obliged to take such serious action to preserve its existence. The jailbreaking of iOS devices is considered legal in the US and the DMCA notices filed against Reddit clearly caught everyone by surprise.

It remains unknown whether they were indeed sent by Apple, which leaves open the possibility that they came from some kind of imposter trying to unsettle the community. Nevertheless, it is good news that all complaints have been lifted due to the claims being invalid, as per Reddit’s admins.

Without transparency from Reddit, however, the true nature of what happened is likely to remain a mystery. That being said, the moderators of /r/jailbreak deserve a big pat on the back for taking decisive action, quickly. Things could have really spiraled out of control, but by showing good intent early on, the moderators brought the situation back into line relatively quickly.

Now, let’s see those notices to determine who sent them, and why.


Fooling Voice Assistants with Lasers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/fooling_voice_a.html

Interesting:

Siri, Alexa, and Google Assistant are vulnerable to attacks that use lasers to inject inaudible — and sometimes invisible — commands into the devices and surreptitiously cause them to unlock doors, visit websites, and locate, unlock, and start vehicles, researchers report in a research paper published on Monday. Dubbed Light Commands, the attack works against Facebook Portal and a variety of phones.

Shining a low-powered laser into these voice-activated systems allows attackers to inject commands of their choice from as far away as 360 feet (110m). Because voice-controlled systems often don’t require users to authenticate themselves, the attack can frequently be carried out without the need of a password or PIN. Even when the systems require authentication for certain actions, it may be feasible to brute force the PIN, since many devices don’t limit the number of guesses a user can make. Among other things, light-based commands can be sent from one building to another and penetrate glass when a vulnerable device is kept near a closed window.

Massive iPhone Hack Targets Uyghurs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/massive_iphone_.html

China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

Earlier this year, Google’s Project Zero found a series of websites that have been using zero-day vulnerabilities to indiscriminately install malware on the iPhones of anyone who visited them. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

Earlier this year Google’s Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Four more news stories.

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Google’s announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

Subsequent reporting added that malware targeting Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

EDITED TO ADD: New data on the value of smartphone exploits:

According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

EDITED TO ADD (9/7): More on Apple’s pushbacks.

Bypassing Apple FaceID’s Liveness Detection Feature

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/bypassing_apple.html

Apple’s FaceID has a liveness detection feature, which prevents someone from unlocking a victim’s phone by putting it in front of his face while he’s sleeping. That feature has been hacked:

Researchers on Wednesday during Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim’s FaceID and log into their phone simply by putting a pair of modified glasses on their face. By merely placing tape carefully over the lenses of a pair of glasses and placing them on the victim’s face, the researchers demonstrated how they could bypass Apple’s FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

Apple Needs to Tackle ‘Pirate’ Music Apps, Labels Insist

Post Syndicated from Andy original https://torrentfreak.com/apple-needs-to-tackle-pirate-music-apps-labels-insist-190712/

The popularity of smartphones and their accompanying software ecosystems has given rise to large volumes of applications that appear to infringe copyright.

With its side-loading ability, Android is by far the most affected platform, with apps easily installable on millions of devices granting access to unlicensed content, including music, movies, and TV shows.

However, even when apps are pre-vetted for availability on Google Play or Apple’s App Store, some rogue tools slip through the net. This situation is unacceptable to most rightsholders but given the manner in which music is often consumed these days, recording labels tend to be the most dissatisfied.

This has prompted a large coalition of music-focused industry groups, headed up by the Recording Industry Association of Japan (RIAJ), to write to Apple demanding change.

In a joint request the RIAJ, the Japan Association of Music Enterprises, the Music Publishers Association of Japan, and the Federation of Music Producers Japan, to name just four, seek assurances from the US-based tech giant that it will “tighten up” its processes to prevent “unauthorized” streaming apps ending up on its platform.

According to the industry groups, “unauthorized” means any app that allows a user to stream music in “ways that fall beyond the intention of the music’s copyright and neighboring right holders.”

The groups don’t offer any specifics but it seems extremely likely that given the pressure on sites and tools that rip, source, or otherwise cull content from YouTube, these are prime candidates for Apple’s attention.

“The recent torrent of Unauthorized Music Apps flooding the industry is enabling users to listen to music for free, resulting in these app operators to gain unfair profits through advertising sales,” the groups write.

“These operators are not only committing copyright infringement, but also stealing profit from the music’s rightful copyright owners and legitimate service providers—profit that they would have otherwise gained through CD sales, downloads, and streaming.”

That CD sales are placed at the head of the list is unsurprising. Despite much of the world ditching plastic discs in favor of digital streaming, Japan still has a love affair with the format, albeit one that’s on the wane.

According to figures published by the RIAJ, 88.65 million CDs were produced in Japan during 2018, down 13 percent on the previous year. That’s compared to 52 million units sold across the entire US during 2018. In 2014, around 85% of music sales in Japan came from CDs. Around 54% of consumption now comes from streaming.

The RIAJ acknowledges that Apple removes “unlicensed apps” from its App Store in response to takedown requests. However, removed apps sometimes reappear on the platform after being disguised as new tools. As a result, the RIAJ wants to be involved in the app approval process, to ensure that rogue software doesn’t appear on the App Store.

Calling for Apple to “strengthen its review process”, the RIAJ says the US company should begin “contacting and working with RIAJ for apps suspected to be Unauthorized Music Apps” while expediting takedowns for tools that violate Apple’s own terms and conditions.

“The music industry associations and music streaming service providers will continue to discuss and engage in efforts to tighten control over Unauthorized Music Apps, strive to build an honest and fair market, and demand speedy amendment of the Copyright Act that regulates leeching sites and apps,” the RIAJ concludes.


Cellebrite Claims It Can Unlock Any iPhone

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/cellebrite_clai.html

The digital forensics company Cellebrite now claims it can unlock any iPhone.

I dithered before blogging this, not wanting to give the company more publicity. But I decided that everyone who wants to know already knows, and that Apple already knows. It’s all of us that need to know.

iPhone Apps Surreptitiously Communicated with Unknown Servers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/iphone_apps_sur.html

Long news article (alternate source) on iPhone privacy, specifically the enormous amount of data your apps are collecting without your knowledge. A lot of this happens in the middle of the night, when you’re probably not otherwise using your phone:

iPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic.

How Apple’s “Find My” Feature Works

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/how_apples_find.html

Matthew Green intelligently speculates about how Apple’s new “Find My” feature works.

If you haven’t already been inspired by the description above, let me phrase the question you ought to be asking: how is this system going to avoid being a massive privacy nightmare?

Let me count the concerns:

  • If your device is constantly emitting a BLE signal that uniquely identifies it, the whole world is going to have (yet another) way to track you. Marketers already use WiFi and Bluetooth MAC addresses to do this: Find My could create yet another tracking channel.
  • It also exposes the phones that are doing the tracking. These people are now going to be sending their current location to Apple (which they may or may not already be doing). Now they’ll also be potentially sharing this information with strangers who “lose” their devices. That could go badly.
  • Scammers might also run active attacks in which they fake the location of your device. While this seems unlikely, people will always surprise you.

The good news is that Apple claims that their system actually does provide strong privacy, and that it accomplishes this using clever cryptography. But as is typical, they’ve declined to give out the details of how they’re going to do it. Andy Greenberg talked me through an incomplete technical description that Apple provided to Wired, so that provides many hints. Unfortunately, what Apple provided still leaves huge gaps. I’m going to fill in those gaps with my best guess for what Apple is actually doing.
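As a rough sketch of the general shape such a system could take (my own illustration under assumed primitives, not Apple's published protocol): the lost device broadcasts a rotating public key over BLE, nearby finders encrypt their location to that key, and only the owner, holding the matching private key, can decrypt the reports.

```python
# A toy sketch of one plausible shape for such a system, NOT Apple's actual
# protocol: the lost device broadcasts a (rotating) public key over BLE; a
# nearby finder encrypts its own location to that key, ECIES-style, and uploads
# the result; only the owner, holding the private key, can decrypt the report.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def _derive_key(shared_secret: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"find-my-sketch").derive(shared_secret)

# Owner's device: the public half of this keypair is what gets broadcast
# (in practice it would rotate periodically to frustrate tracking).
device_priv = X25519PrivateKey.generate()
broadcast_pub = device_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

def finder_report(broadcast_pub: bytes, location: bytes):
    """A finder encrypts its location to the broadcast key and uploads the result."""
    eph = X25519PrivateKey.generate()
    shared = eph.exchange(X25519PublicKey.from_public_bytes(broadcast_pub))
    nonce = os.urandom(12)
    ciphertext = AESGCM(_derive_key(shared)).encrypt(nonce, location, None)
    eph_pub = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return eph_pub, nonce, ciphertext  # the blob handed to the server

def owner_decrypt(eph_pub: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    shared = device_priv.exchange(X25519PublicKey.from_public_bytes(eph_pub))
    return AESGCM(_derive_key(shared)).decrypt(nonce, ciphertext, None)

report = finder_report(broadcast_pub, b"lat=37.33,lon=-122.03")
assert owner_decrypt(*report) == b"lat=37.33,lon=-122.03"
```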

Fingerprinting iPhones

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/fingerprinting_7.html

This clever attack allows a website you visit to uniquely identify your phone, based on data from its accelerometer, gyroscope, and magnetometer sensors.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Overall, our attack has the following advantages:

  • The attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you.
  • The attack takes less than one second to generate a fingerprint.
  • The attack can generate a globally unique fingerprint for iOS devices.
  • The calibration fingerprint never changes, even after a factory reset.
  • The attack provides an effective means to track you as you browse across the web and move between apps on your phone.

Following our disclosure, Apple has patched this vulnerability in iOS 12.2.

Research paper.
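As a toy illustration of why factory calibration data makes such a stable fingerprint, the simulation below uses made-up numbers; the paper's actual technique is more subtle and recovers the calibration from the calibrated sensor output a web page can read.

```python
# A toy simulation of WHY per-device calibration makes a good fingerprint: each
# device applies its own factory-set gain and bias, and those constants are
# stable across sessions and factory resets. Numbers are made up; the paper's
# real attack recovers the calibration from the calibrated output alone.
import numpy as np

rng = np.random.default_rng(0)

def make_device():
    """Per-device calibration constants written at the factory (hypothetical)."""
    gain = 1 + rng.normal(0, 0.01)   # scale factor for one axis
    bias = rng.normal(0, 0.05)       # offset for the same axis
    return gain, bias

def sample_fingerprint(gain, bias, n=500):
    """Estimate (gain, bias) from noisy paired raw/calibrated readings."""
    raw = rng.normal(0, 1, size=n)
    calibrated = raw * gain + bias + rng.normal(0, 1e-4, size=n)
    est_gain, est_bias = np.polyfit(raw, calibrated, 1)
    return np.array([est_gain, est_bias])

dev_a, dev_b = make_device(), make_device()
same_device = np.linalg.norm(sample_fingerprint(*dev_a) - sample_fingerprint(*dev_a))
different   = np.linalg.norm(sample_fingerprint(*dev_a) - sample_fingerprint(*dev_b))
print(same_device)  # tiny: two sessions on one device agree
print(different)    # orders of magnitude larger: devices are distinguishable
```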

iPhone FaceTime Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/iphone_facetime.html

This is kind of a crazy iPhone vulnerability: it’s possible to call someone on FaceTime and listen on their microphone — and see from their camera — before they accept the call.

This is definitely an embarrassment, and Apple was right to disable Group FaceTime until it’s fixed. But it’s hard to imagine how an adversary can operationalize this in any useful way.

New York governor Andrew M. Cuomo wrote: “The FaceTime bug is an egregious breach of privacy that puts New Yorkers at risk.” Kinda, I guess.

EDITED TO ADD (1/30): This bug/vulnerability was first discovered by a 14-year-old, whose mother tried to alert Apple with no success.

iOS 12.1 Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/ios_121_vulnera.html

This is really just to point out that computer security is really hard:

Almost as soon as Apple released iOS 12.1 on Tuesday, a Spanish security researcher discovered a bug that exploits group Facetime calls to give anyone access to an iPhone user’s contact information with no need for a passcode.

[…]

A bad actor would need physical access to the phone that they are targeting and would have a few options for viewing the victim’s contact information. They would need to either call the phone from another iPhone or have the phone call itself. Once the call connects they would need to:

  • Select the Facetime icon
  • Select “Add Person”
  • Select the plus icon
  • Scroll through the contacts and use 3D touch on a name to view all contact information that’s stored.

Making the phone call itself without entering a passcode can be accomplished by either telling Siri the phone number or, if they don’t know the number, saying “call my phone.” We tested this with both the owner’s voice and a stranger’s voice; in both cases, Siri initiated the call.

Defeating the iPhone Restricted Mode

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/defeating_the_i.html

Recently, Apple introduced restricted mode to protect iPhones from attacks by companies like Cellebrite and Grayshift, which allow attackers to recover information from a phone without the password or fingerprint. Elcomsoft just announced that it can easily bypass it.

There is an important lesson in this: security is hard. Apple Computer has one of the best security teams on the planet. This feature was not tossed out in a day; it was designed and implemented with a lot of thought and care. If this team could make a mistake like this, imagine how bad a security feature is when implemented by a team without this kind of expertise.

This is the reason actual cryptographers and security engineers are very skeptical when a random company announces that their product is “secure.” We know that they don’t have the requisite security expertise to design and implement security properly. We know they didn’t take the time and care. We know that their engineers think they understand security, and designed something to a level that they themselves couldn’t break.

Getting security right is hard for the best teams in the world. It’s impossible for average teams.

Bypassing Passcodes in iOS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/bypassing_passc.html

Last week, a story was going around explaining how to brute-force an iOS password. Basically, the trick was to plug the phone into an external keyboard and try every PIN at once:

We reported Friday on Hickey’s findings, which claimed to be able to send all combinations of a user’s possible passcode in one go, by enumerating each code from 0000 to 9999, and concatenating the results in one string with no spaces. He explained that because this doesn’t give the software any breaks, the keyboard input routine takes priority over the device’s data-erasing feature.
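The input Hickey described is easy to picture; a couple of lines of Python (my own illustration, not his tooling) show what "every passcode concatenated with no spaces" looks like:

```python
# Build the single input string Hickey described: every passcode from 0000 to
# 9999, concatenated with no spaces, to be sent as one long keyboard input.
pins = "".join(f"{i:04d}" for i in range(10000))
print(len(pins))         # 40000 characters
print(pins[:20] + "...") # 00000001000200030004...
```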

I didn’t write about it, because it seemed too good to be true. A few days later, Apple pushed back on the findings — and it seems that it doesn’t work.

This isn’t to say that no one can break into an iPhone. We know that companies like Cellebrite and Grayshift are renting/selling iPhone unlock tools to law enforcement — which means governments and criminals can do the same thing — and that Apple is releasing a new feature called “restricted mode” that may make those hacks obsolete.

Grayshift is claiming that its technology will still work.

Former Apple security engineer Braden Thomas, who now works for a company called Grayshift, warned customers who had bought his GrayKey iPhone unlocking tool that iOS 11.3 would make it a bit harder for cops to get evidence and data out of seized iPhones. A change in the beta didn’t break GrayKey, but would require cops to use GrayKey on phones within a week of them being last unlocked.

“Starting with iOS 11.3, iOS saves the last time a device has been unlocked (either with biometrics or passcode) or was connected to an accessory or computer. If a full seven days (168 hours) elapse [sic] since the last time iOS saved one of these events, the Lightning port is entirely disabled,” Thomas wrote in a blog post published in a customer-only portal, which Motherboard obtained. “You cannot use it to sync or to connect to accessories. It is basically just a charging port at this point. This is termed USB Restricted Mode and it affects all devices that support iOS 11.3.”

Whether that’s real or marketing, we don’t know.

Perverse Vulnerability from Interaction between 2-Factor Authentication and iOS AutoFill

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/perverse_vulner.html

Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.

Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:

Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.

Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here — which is among the most common methods currently used — the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.

This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user’s phone accesses the bank’s legitimate online banking service.

This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.
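To see the failure mode Gutmann describes, imagine how a naive code extractor might work. The sketch below is hypothetical (Apple's implementation isn't public): once the digits are pulled out of the message, a login OTP and a transaction TAN are indistinguishable, and the amount and destination the user was supposed to verify are gone.

```python
# A hypothetical sketch of naive security-code extraction from an SMS. Nothing
# here is Apple's implementation; it only shows that a login OTP and a
# transaction TAN look identical once the surrounding words are stripped away.
import re

def extract_code(sms):
    """Return the first 4-8 digit run in the message, as a naive extractor might."""
    match = re.search(r"\b\d{4,8}\b", sms)
    return match.group() if match else None

otp_sms = "Your login code is 492817."
tan_sms = "Transfer EUR 4,820.00 to IBAN DE89...? Confirm with TAN 492817."

# Both messages yield the same bare code; the amount and destination in the
# second message, which the user is supposed to verify, are discarded.
print(extract_code(otp_sms))  # 492817
print(extract_code(tan_sms))  # 492817
```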

Russian Censorship of Telegram

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/russian_censors.html

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today’s Internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the Internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors’ demands, how long can Internet freedom last?

The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender’s phone and decrypted on the receiver’s phone; no part of the network can eavesdrop on the messages.

Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn’t a fixed website, it doesn’t need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all.

A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.

More recently, Russia has been pressuring Apple not to offer the Telegram app in its iPhone App Store. As of this writing, Apple has not complied, and the company has allowed Telegram to download a critical software update to iPhone users (after what the app’s founder called a delay last month). Roskomnadzor could further pressure Apple, though, including by threatening to turn off its entire iPhone app business in Russia.

Telegram might seem a weird app for Russia to focus on. Those of us who work in security don’t recommend the program, primarily because of the nature of its cryptographic protocols. In general, proprietary cryptography has numerous fatal security flaws. We generally recommend Signal for secure SMS messaging, or, if having that program on your computer is somehow incriminating, WhatsApp. (More than 1.5 billion people worldwide use WhatsApp.) What Telegram has going for it is that it works really well on lousy networks. That’s why it is so popular in places like Iran and Afghanistan. (Iran is also trying to ban the app.)

What the Russian government doesn’t like about Telegram is its anonymous broadcast feature — channel capability and chats — which makes it an effective platform for political debate and citizen journalism. The Russians might not like that Telegram is encrypted, but odds are good that they can simply break the encryption. Telegram’s role in facilitating uncontrolled journalism is the real issue.

Iran’s attempts to block Telegram have been more successful than Russia’s, less because Iran’s censorship technology is more sophisticated and more because Telegram is not willing to go as far to defend Iranian users. The reasons are not rooted in business decisions. Simply put, Telegram is a Russian product and the designers are more motivated to poke Russia in the eye. Pavel Durov, Telegram’s founder, has pledged millions of dollars to help fight Russian censorship.

For the moment, Russia has lost. But this battle is far from over. Russia could easily come back with more targeted pressure on Google, Amazon and Apple. A year earlier, Zello used the same trick Telegram is using to evade Russian censors. Then, Roskomnadzor threatened to block all of Amazon Web Services and Google Cloud; and in that instance, both companies forced Zello to stop its IP-hopping censorship-evasion tactic.

Russia could also further develop its censorship infrastructure. If its capabilities were as finely honed as China’s, it would be able to more effectively block Telegram from operating. Right now, Russia can block only specific IP addresses, which is too coarse a tool for this issue. Telegram’s voice capabilities in Russia are significantly degraded, however, probably because high-capacity IP addresses are easier to block.

Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another and more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned — and technically blocked — the practice of “domain fronting,” a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website — in this case Google.com — to fool censors into believing the traffic was intended for Google.com. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.
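Mechanically, domain fronting exploits the gap between the hostname in the TLS handshake (the SNI, visible to the censor) and the hostname in the encrypted Host header (visible only to the front provider, which routes on it). A minimal sketch with hypothetical hostnames follows; note that Google and Amazon no longer honor the mismatch, so this is illustrative only.

```python
# A minimal sketch of the domain-fronting mechanism: the censor-visible TLS SNI
# names the innocuous front, while the Host header inside the encrypted tunnel
# names the real (hypothetical) backend. The big providers stopped honoring
# this mismatch in 2018, so treat this as an illustration of the idea.
import socket
import ssl

FRONT = "front.example.com"         # hypothetical front domain (what the censor sees)
HIDDEN = "blocked-service.example"  # hypothetical real backend (seen only inside TLS)

context = ssl.create_default_context()
with socket.create_connection((FRONT, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=FRONT) as tls:
        request = (
            "GET / HTTP/1.1\r\n"
            f"Host: {HIDDEN}\r\n"      # the front provider routes on this header
            "Connection: close\r\n\r\n"
        )
        tls.sendall(request.encode())
        print(tls.recv(4096).decode(errors="replace")[:200])
```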

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that Internet freedom is increasingly in the hands of the world’s largest Internet companies. And while freedom may have its advocates — the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban — actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

In 1993, John Gilmore famously said that “The Internet interprets censorship as damage and routes around it.” That was technically true when he said it but only because the routing structure of the Internet was so distributed. As centralization increases, the Internet loses that robustness, and censorship by governments and companies becomes easier.

This essay previously appeared on Lawfare.com.

New iPhone OS May Include Device-Unlocking Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/new_iphone_os_m.html

iOS 12, the next release of Apple’s iPhone operating system, may include features to prevent someone from unlocking your phone without your permission:

The feature essentially forces users to unlock the iPhone with the passcode when connecting it to a USB accessory every time the phone has not been unlocked for one hour. That includes the iPhone unlocking devices that companies such as Cellebrite or GrayShift make, which police departments all over the world use to hack into seized iPhones.

“That pretty much kills [GrayShift’s product] GrayKey and Cellebrite,” Ryan Duff, a security researcher who has studied iPhone and is Director of Cyber Solutions at Point3 Security, told Motherboard in an online chat. “If it actually does what it says and doesn’t let ANY type of data connection happen until it’s unlocked, then yes. You can’t exploit the device if you can’t communicate with it.”

This is part of a bunch of security enhancements in iOS 12:

Other enhancements include tools for generating strong passwords, storing them in the iCloud keychain, and automatically entering them into Safari and iOS apps across all of a user’s devices. Previously, standalone apps such as 1Password have done much the same thing. Now, Apple is integrating the functions directly into macOS and iOS. Apple also debuted new programming interfaces that allow users to more easily access passwords stored in third-party password managers directly from the QuickType bar. The company also announced a new feature that will flag reused passwords, an interface that autofills one-time passwords provided by authentication apps, and a mechanism for sharing passwords among nearby iOS devices, Macs, and Apple TVs.

A separate privacy enhancement is designed to prevent websites from tracking people when using Safari. It’s specifically designed to prevent share buttons and comment code on webpages from tracking people’s movements across the Web without permission or from collecting a device’s unique settings such as fonts, in an attempt to fingerprint the device.

The last additions of note are new permission dialogues macOS Mojave will display before allowing apps to access a user’s camera or microphone. The permissions are designed to thwart malicious software that surreptitiously turns on these devices in an attempt to spy on users. The new protections will largely mimic those previously available only through standalone apps such as one called Oversight, developed by security researcher Patrick Wardle. Apple said similar dialog permissions will protect the file system, mail database, message history, and backups.