Tag Archives: smartphones

The NSA on the Risks of Exposing Location Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/the_nsa_on_the_.html

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

    Of course, turning off your wireless devices is itself a signal that something is going on. It’s hard to be clandestine in our always connected world.

    News articles.

    Hacking Voice Assistants with Ultrasonic Waves

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/hacking_voice_a_1.html

    I previously wrote about hacking voice assistants with lasers. Turns out you can do much the same thing with ultrasonic waves:

    Voice assistants — the demo targeted Siri, Google Assistant, and Bixby — are designed to respond when they detect the owner’s voice after noticing a trigger phrase such as ‘Ok, Google’.

    Ultimately, commands are just sound waves, which other researchers have already shown can be emulated using ultrasonic waves which humans can’t hear, providing an attacker has a line of sight on the device and the distance is short.

    What SurfingAttack adds to this is the ability to send the ultrasonic commands through a solid glass or wood table on which the smartphone was sitting using a circular piezoelectric disc connected to its underside.

    Although the distance was only 43cm (17 inches), hiding the disc under a surface represents a more plausible, easier-to-conceal attack method than previous techniques.

    Research paper. Demonstration video.

    Modern Mass Surveillance: Identify, Correlate, Discriminate

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/modern_mass_sur.html

    Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program in advance of the effective date of a new statewide law declaring it illegal. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

    These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

    In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

    Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.
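Mechanically, the matching step is a nearest-neighbor search over face "embeddings," the numeric vectors a machine-learning model produces from camera output. A toy sketch, with invented vectors, names, and threshold (real systems use hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical embeddings: a real system would produce 128-512 dimensions
# from a neural network, not these toy 4-dimensional vectors.
database = {
    "person_a": [0.9, 0.1, 0.3, 0.4],
    "person_b": [0.2, 0.8, 0.5, 0.1],
}

def identify(probe, db, threshold=0.9):
    """Return the best-matching identity above the threshold, else None."""
    best_name, best_score = None, threshold
    for name, embedding in db.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

print(identify([0.88, 0.12, 0.31, 0.38], database))  # matches person_a
```

Identification here is just "closest stored vector above a similarity cutoff," which is why the same machinery works for gaits, heartbeats, or any other measurable signature.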

    But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.

    Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are — using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.

    There is a huge — and almost entirely unregulated — data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are, it’s that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

    The whole purpose of this process is for companies — and governments — to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

    The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
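That fragility is easy to see in code: a single real-name transaction joins against the entire pseudonymous trail. A toy sketch with invented records:

```python
# Browsing events keyed only by an anonymous cookie ID.
browsing_log = [
    {"cookie": "c-7f3a", "page": "/shoes", "time": "09:02"},
    {"cookie": "c-7f3a", "page": "/checkout", "time": "09:10"},
    {"cookie": "c-19bd", "page": "/news", "time": "09:11"},
]

# A single checkout ties the cookie to a credit-card name.
purchases = [{"cookie": "c-7f3a", "card_name": "A. Nonymous", "time": "09:10"}]

def deanonymize(browsing, purchases):
    """Attach a real name to every event sharing a cookie with a purchase."""
    names = {p["cookie"]: p["card_name"] for p in purchases}
    return [dict(event, name=names.get(event["cookie"])) for event in browsing]

for event in deanonymize(browsing_log, purchases):
    print(event)
```

One purchase retroactively names every event the cookie ever produced; the second cookie stays anonymous only until its owner makes the same mistake.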

    Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

    Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

    Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

    Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.

    This essay previously appeared in the New York Times.

    EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s usage of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. It’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which lead to more wins. I should have been kinder to those in the trenches.

    If you want to help, sign the petition from Public Voice calling on a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

    Smartphone Election in Washington State

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/smartphone_elec.html

    This year:

    King County voters will be able to use their name and birthdate to log in to a Web portal through the Internet browser on their phones, says Bryan Finney, the CEO of Democracy Live, the Seattle-based voting company providing the technology.

    Once voters have completed their ballots, they must verify their submissions and then submit a signature on the touch screen of their device.

    Finney says election officials in Washington are adept at signature verification because the state votes entirely by mail. That will be the way people are caught if they log in to the system under false pretenses and try to vote as someone else.

    The King County elections office plans to print out the ballots submitted electronically by voters whose signatures match and count the papers alongside the votes submitted through traditional routes.

    While advocates say this creates an auditable paper trail, many security experts say that because the ballots cross the Internet before they are printed, any subsequent audits on them would be moot. If a cyberattack occurred, an audit could essentially require double-checking ballots that may already have been altered, says Buell.

    Of course it’s not an auditable paper trail. There’s a reason why security experts use the phrase “voter-verifiable paper ballots.” A centralized printout of a received Internet message is not voter verifiable.

    Another news article.

    Technical Report of the Bezos Phone Hack

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/technical_repor.html

    Motherboard obtained and published the technical report on the hack of Jeff Bezos’s phone, which is being attributed to Saudi Arabia, specifically to Crown Prince Mohammed bin Salman.

    …investigators set up a secure lab to examine the phone and its artifacts and spent two days poring over the device but were unable to find any malware on it. Instead, they only found a suspicious video file sent to Bezos on May 1, 2018 that “appears to be an Arabic language promotional film about telecommunications.”

    That file shows an image of the Saudi Arabian flag and Swedish flags and arrived with an encrypted downloader. Because the downloader was encrypted, this delayed or further prevented “study of the code delivered along with the video.”

    Investigators determined the video or downloader were suspicious only because Bezos’ phone subsequently began transmitting large amounts of data. “[W]ithin hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter,” the report states.

    “The amount of data being transmitted out of Bezos’ phone changed dramatically after receiving the WhatsApp video file and never returned to baseline. Following execution of the encrypted downloader sent from MBS’ account, egress on the device immediately jumped by approximately 29,000 percent,” it notes. “Forensic artifacts show that in the six (6) months prior to receiving the WhatsApp video, Bezos’ phone had an average of 430KB of egress per day, fairly typical of an iPhone. Within hours of the WhatsApp video, egress jumped to 126MB. The phone maintained an unusually high average of 101MB of egress data per day for months thereafter, including many massive and highly atypical spikes of egress data.”
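The report's percentages are consistent with the byte counts it quotes, as a quick back-of-the-envelope check shows:

```python
# Figures taken from the forensic report quoted above.
baseline_kb_per_day = 430    # average daily egress before the video
spike_mb = 126               # egress within hours of the video
sustained_mb_per_day = 101   # average daily egress for months afterward

# Assuming decimal megabytes (1 MB = 1000 KB), as the report appears to.
spike_kb = spike_mb * 1000
increase_pct = (spike_kb - baseline_kb_per_day) / baseline_kb_per_day * 100
print(f"jump: {increase_pct:,.0f}%")  # about 29,200%, matching the report's ~29,000%

sustained_ratio = sustained_mb_per_day * 1000 / baseline_kb_per_day
print(f"sustained egress: {sustained_ratio:.0f}x baseline")
```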

    The Motherboard article also quotes forensic experts on the report:

    A mobile forensic expert told Motherboard that the investigation as depicted in the report is significantly incomplete and would only have provided the investigators with about 50 percent of what they needed, especially if this is a nation-state attack. She says the iTunes backup and other extractions they did would get them only messages, photo files, contacts and other files that the user is interested in saving from their applications, but not the core files.

    “They would need to use a tool like Graykey or Cellebrite Premium or do a jailbreak to get a look at the full file system. That’s where that state-sponsored malware is going to be found. Good state-sponsored malware should never show up in a backup,” said Sarah Edwards, an author and teacher of mobile forensics for the SANS Institute.

    “The full file system is getting into the device and getting every single file on there — the whole operating system, the application data, the databases that will not be backed up. So really the in-depth analysis should be done on that full file system, for this level of investigation anyway. I would have insisted on that right from the start.”

    The investigators do note on the last page of their report that they need to jailbreak Bezos’s phone to examine the root file system. Edwards said this would indeed get them everything they would need to search for persistent spyware like the kind created and sold by the NSO Group. But the report doesn’t indicate if that did get done.

    SIM Hijacking

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/sim_hijacking.html

    SIM hijacking — or SIM swapping — is an attack where a fraudster contacts your cell phone provider and convinces them to switch your account to a phone that they control. Since your smartphone often serves as a security measure or backup verification system, this allows the fraudster to take over other accounts of yours. Sometimes this involves people inside the phone companies.

    Phone companies have added security measures since this attack became popular and public, but a new study (news article) shows that the measures aren’t helping:

    We examined the authentication procedures used by five pre-paid wireless carriers when a customer attempted to change their SIM card. These procedures are an important line of defense against attackers who seek to hijack victims’ phone numbers by posing as the victim and calling the carrier to request that service be transferred to a SIM card the attacker possesses. We found that all five carriers used insecure authentication challenges that could be easily subverted by attackers. We also found that attackers generally only needed to target the most vulnerable authentication challenges, because the rest could be bypassed.

    It’s a classic security vs. usability trade-off. The phone companies want to provide easy customer service for their legitimate customers, and that system is what’s being exploited by the SIM hijackers. Companies could make the fraud harder, but it would necessarily also make it harder for legitimate customers to modify their accounts.

    ToTok Is an Emirati Spying Tool

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/totok_is_an_emi.html

    The smartphone messaging app ToTok is actually an Emirati spying tool:

    But the service, ToTok, is actually a spying tool, according to American officials familiar with a classified intelligence assessment and a New York Times investigation into the app and its developers. It is used by the government of the United Arab Emirates to try to track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.

    ToTok, introduced only months ago, was downloaded millions of times from the Apple and Google app stores by users throughout the Middle East, Europe, Asia, Africa and North America. While the majority of its users are in the Emirates, ToTok surged to become one of the most downloaded social apps in the United States last week, according to app rankings and App Annie, a research firm.

    Apple and Google have removed it from their app stores. If you have it on your phone, delete it now.

    Security Vulnerabilities in Android Firmware

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/security_vulner_20.html

    Researchers have discovered and revealed 146 vulnerabilities in various incarnations of Android smartphone firmware. The vulnerabilities were found by scanning the phones of 29 different Android makers, and each is unique to a particular phone or maker. They were found using automatic tools, and it is extremely likely that many of the vulnerabilities are not exploitable — making them bugs but not security concerns. There is no indication that any of these vulnerabilities were put there on purpose, although it is reasonable to assume that other organizations do this same sort of scanning and use the findings for attack. And since they’re firmware bugs, in many cases there is no ability to patch them.

    I see this as yet another demonstration of how hard supply chain security is.

    News article.

    Wi-Fi Hotspot Tracking

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/wi-fi_hotspot_t.html

    Free Wi-Fi hotspots can track your location, even if you don’t connect to them. This is because your phone or computer broadcasts a unique MAC address.

    What distinguishes location-based marketing hotspot providers like Zenreach and Euclid is that the personal information you enter in the captive portal — like your email address, phone number, or social media profile — can be linked to your laptop or smartphone’s Media Access Control (MAC) address. That’s the unique alphanumeric ID that devices broadcast when Wi-Fi is switched on.

    As Euclid explains in its privacy policy, “…if you bring your mobile device to your favorite clothing store today that is a Location — and then a popular local restaurant a few days later that is also a Location — we may know that a mobile device was in both locations based on seeing the same MAC Address.”

    MAC addresses alone don’t contain identifying information besides the make of a device, such as whether a smartphone is an iPhone or a Samsung Galaxy. But as long as a device’s MAC address is linked to someone’s profile, and the device’s Wi-Fi is turned on, the movements of its owner can be followed by any hotspot from the same provider.

    “After a user signs up, we associate their email address and other personal information with their device’s MAC address and with any location history we may previously have gathered (or later gather) for that device’s MAC address,” according to Zenreach’s privacy policy.
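The linking both privacy policies describe amounts to grouping hotspot sightings by MAC address and labeling any group that ever matched a sign-up. A toy sketch with invented data:

```python
from collections import defaultdict

# Hypothetical sightings logged by hotspots run by a single provider.
sightings = [
    {"mac": "a4:5e:60:01:02:03", "location": "clothing store", "day": 1},
    {"mac": "d8:96:95:0a:0b:0c", "location": "clothing store", "day": 1},
    {"mac": "a4:5e:60:01:02:03", "location": "restaurant", "day": 4},
]

# One captive-portal sign-up links a MAC address to an email address.
profiles = {"a4:5e:60:01:02:03": "user@example.com"}

def movement_trails(sightings, profiles):
    """Group sightings by MAC, labeling trails with any known profile."""
    trails = defaultdict(list)
    for s in sightings:
        trails[s["mac"]].append((s["day"], s["location"]))
    return {profiles.get(mac, mac): sorted(visits) for mac, visits in trails.items()}

print(movement_trails(sightings, profiles))
```

Note that the unlabeled trail is still a trail: the provider can follow the second device across locations and attach a name to it whenever its owner signs in anywhere.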

    The defense is to turn Wi-Fi off on your phone when you’re not using it.

    EDITED TO ADD: Note that the article is from 2018. Not that I think anything is different today….

    A Feminist Take on Information Privacy

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/a_feminist_take.html

    Maria Farrell has a really interesting framing of information/device privacy:

    What our smartphones and relationship abusers share is that they both exert power over us in a world shaped to tip the balance in their favour, and they both work really, really hard to obscure this fact and keep us confused and blaming ourselves. Here are some of the ways our unequal relationship with our smartphones is like an abusive relationship:

    • They isolate us from deeper, competing relationships in favour of superficial contact — ‘user engagement’ — that keeps their hold on us strong. Working with social media, they insidiously curate our social lives, manipulating us emotionally with dark patterns to keep us scrolling.

    • They tell us the onus is on us to manage their behavior. It’s our job to tiptoe around them and limit their harms. Spending too much time on a literally-designed-to-be-behaviorally-addictive phone? They send company-approved messages about our online time, but ban from their stores the apps that would really cut our use. We just need to use willpower. We just need to be good enough to deserve them.

    • They betray us, leaking data / spreading secrets. What we shared privately with them is suddenly public. Sometimes this destroys lives, but hey, we only have ourselves to blame. They fight nasty and under-handed, and are so, so sorry when they get caught that we’re meant to feel bad for them. But they never truly change, and each time we take them back, we grow weaker.

    • They love-bomb us when we try to break away, piling on the free data or device upgrades, making us click through page after page of dark pattern, telling us no one understands us like they do, no one else sees everything we really are, no one else will want us.

    • It’s impossible to just cut them off. They’ve wormed themselves into every part of our lives, making life without them unimaginable. And anyway, the relationship is complicated. There is love in it, or there once was. Surely we can get back to that if we just manage them the way they want us to?

    Nope. Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true.

    More on Law Enforcement Backdoor Demands

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/more_on_law_enf.html

    The Carnegie Endowment for International Peace and Princeton University’s Center for Information Technology Policy convened an Encryption Working Group to attempt progress on the “going dark” debate. They have released their report: “Moving the Encryption Policy Conversation Forward.”

    The main contribution seems to be that attempts to backdoor devices like smartphones shouldn’t also backdoor communications systems:

    Conclusion: There will be no single approach for requests for lawful access that can be applied to every technology or means of communication. More work is necessary, such as that initiated in this paper, to separate the debate into its component parts, examine risks and benefits in greater granularity, and seek better data to inform the debate. Based on our attempt to do this for one particular area, the working group believes that some forms of access to encrypted information, such as access to data at rest on mobile phones, should be further discussed. If we cannot have a constructive dialogue in that easiest of cases, then there is likely none to be had with respect to any of the other areas. Other forms of access to encrypted information, including encrypted data-in-motion, may not offer an achievable balance of risk vs. benefit, and as such are not worth pursuing and should not be the subject of policy changes, at least for now. We believe that to be productive, any approach must separate the issue into its component parts.

    I don’t believe that backdoor access to encryption data at rest offers “an achievable balance of risk vs. benefit” either, but I agree that the two aspects should be treated independently.

    EDITED TO ADD (9/12): This report does an important job moving the debate forward. It advises that policymakers break the issues into component parts. Instead of talking about restricting all encryption, it separates encrypted data at rest (storage) from encrypted data in motion (communication). It advises that policymakers pick the problems they have some chance of solving, and not demand systems that put everyone in danger. For example: no key escrow, and no use of software updates to break into devices.

    Data in motion poses challenges that are not present for data at rest. For example, modern cryptographic protocols for data in motion use a separate “session key” for each message, unrelated to the private/public key pairs used to initiate communication, to preserve the message’s secrecy independent of other messages (consistent with a concept known as “forward secrecy”). While there are potential techniques for recording, escrowing, or otherwise allowing access to these session keys, by their nature, each would break forward secrecy and related concepts and would create a massive target for criminal and foreign intelligence adversaries. Any technical steps to simplify the collection or tracking of session keys, such as linking keys to other keys or storing keys after they are used, would represent a fundamental weakening of all the communications.
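Forward secrecy itself is easy to illustrate: each session's key is derived from ephemeral values that are discarded afterward, so recording traffic now and stealing long-term keys later yields nothing. A toy Diffie-Hellman sketch using only the Python standard library (the parameters are deliberately tiny and insecure; real protocols use X25519 or large MODP groups):

```python
import hashlib
import secrets

# Toy parameters for illustration only: a 64-bit prime offers no security.
P = 0xFFFFFFFFFFFFFFC5   # the largest prime below 2**64
G = 5

def ephemeral_keypair():
    """Fresh (private, public) pair generated per session, then discarded."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def session_key(my_priv, their_pub):
    """Derive a symmetric session key from the shared DH secret."""
    shared = pow(their_pub, my_priv, P)
    return hashlib.sha256(shared.to_bytes(8, "big")).hexdigest()

# Two separate sessions: each side makes a brand-new ephemeral key pair.
for session in (1, 2):
    a_priv, a_pub = ephemeral_keypair()
    b_priv, b_pub = ephemeral_keypair()
    assert session_key(a_priv, b_pub) == session_key(b_priv, a_pub)
    print(f"session {session} key: {session_key(a_priv, b_pub)[:16]}...")

# Because the private values are thrown away after each session, an adversary
# who recorded the traffic cannot recover past session keys later. Escrowing
# or linking these keys, as the quoted passage notes, breaks exactly this.
```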

    These are all big steps forward given who signed on to the report. Not just the usual suspects, but also Jim Baker — former general counsel of the FBI — and Chris Inglis, former deputy director of the NSA.

    Massive iPhone Hack Targets Uyghurs

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/massive_iphone_.html

    China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

    Earlier this year, Google’s Project Zero found a series of websites that have been using zero-day vulnerabilities to indiscriminately install malware on iPhones that would visit the site. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

    Earlier this year Google’s Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

    There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

    TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

    Four more news stories.

    This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

    None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

    Google’s announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

    Subsequent reporting added that malware targeting Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

    So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

    EDITED TO ADD: New data on the value of smartphone exploits:

    According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

    EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

    EDITED TO ADD (9/7): More on Apple’s pushbacks.

    AT&T Employees Took Bribes to Unlock Smartphones

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/att_employees_t.html

    This wasn’t a small operation:

    A Pakistani man bribed AT&T call-center employees to install malware and unauthorized hardware as part of a scheme to fraudulently unlock cell phones, according to the US Department of Justice. Muhammad Fahd, 34, was extradited from Hong Kong to the US on Friday and is being detained pending trial.

    An indictment alleges that “Fahd recruited and paid AT&T insiders to use their computer credentials and access to disable AT&T’s proprietary locking software that prevented ineligible phones from being removed from AT&T’s network,” a DOJ announcement yesterday said. “The scheme resulted in millions of phones being removed from AT&T service and/or payment plans, costing the company millions of dollars. Fahd allegedly paid the insiders hundreds of thousands of dollars — paying one co-conspirator $428,500 over the five-year scheme.”

    In all, AT&T insiders received more than $1 million in bribes from Fahd and his co-conspirators, who fraudulently unlocked more than 2 million cell phones, the government alleged. Three former AT&T customer service reps from a call center in Bothell, Washington, already pleaded guilty and agreed to pay the money back to AT&T.

    Phone Farming for Ad Fraud

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/phone_farming_f.html

    Interesting article on people using banks of smartphones to commit ad fraud for profit.

    No one knows how prevalent ad fraud is on the Internet. I believe it is surprisingly high — here’s an article that places losses between $6.5 and $19 billion annually — and something companies like Google and Facebook would prefer remain unresearched.