China’s Olympics App Is Horribly Insecure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/chinas-olympics-app-is-horribly-insecure.html

China is mandating that athletes download and use a health and travel app when they attend the Winter Olympics next month. Citizen Lab examined the app and found it riddled with security holes.

Key Findings:

  • MY2022, an app mandated for use by all attendees of the 2022 Olympic Games in Beijing, has a simple but devastating flaw where encryption protecting users’ voice audio and file transfers can be trivially sidestepped. Health customs forms which transmit passport details, demographic information, and medical and travel history are also vulnerable. Server responses can also be spoofed, allowing an attacker to display fake instructions to users.
  • MY2022 is fairly straightforward about the types of data it collects from users in its public-facing documents. However, as the app collects a range of highly sensitive medical information, it is unclear with whom or which organization(s) it shares this information.
  • MY2022 includes features that allow users to report “politically sensitive” content. The app also includes a censorship keyword list, which, while presently inactive, targets a variety of political topics including domestic issues such as Xinjiang and Tibet as well as references to Chinese government agencies.
  • While the vendor did not respond to our security disclosure, we find that the app’s security deficits may not only violate Google’s Unwanted Software Policy and Apple’s App Store guidelines but also China’s own laws and national standards pertaining to privacy protection, providing potential avenues for future redress.
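The flaw class Citizen Lab describes — encryption that can be "trivially sidestepped" and server responses that can be spoofed — is characteristic of clients that fail to validate TLS certificates. As a hedged illustration of the bug class (not MY2022's actual code), here is the difference between a client that disables validation and one that uses the defaults:

```python
import ssl

# Illustrative sketch of the bug class only. A client that skips
# certificate validation will accept any server that answers, so its
# "encrypted" traffic can be intercepted and its server responses spoofed.

def insecure_context() -> ssl.SSLContext:
    # Disabling hostname checks and chain verification defeats TLS's
    # man-in-the-middle protection entirely.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def secure_context() -> ssl.SSLContext:
    # The default context validates the certificate chain and hostname.
    return ssl.create_default_context()
```

The fix is as simple as not disabling the defaults, which is what makes this kind of flaw so hard to excuse.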

News article:

It’s not clear whether the security flaws were intentional or not, but the report speculated that proper encryption might interfere with some of China’s ubiquitous online surveillance tools, especially systems that allow local authorities to snoop on phones using public wireless networks or internet cafes. Still, the researchers added that the flaws were probably unintentional, because the government will already be receiving data from the app, so there wouldn’t be a need to intercept the data as it was being transferred.

[…]

The app also included a list of 2,422 political keywords, described within the code as “illegalwords.txt,” that worked as a keyword censorship list, according to Citizen Lab. The researchers said the list appeared to be a latent function that the app’s chat and file transfer function was not actively using.
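A latent keyword-censorship function of this kind needs very little machinery. This is a hedged sketch of how a bundled blocklist like "illegalwords.txt" could gate chat messages; the function names and matching rule are illustrative assumptions, not MY2022's actual implementation:

```python
# Illustrative sketch: a bundled blocklist file, one keyword per line,
# checked against outgoing messages by substring match (as keyword
# filters commonly do). Not MY2022's actual code.

def load_blocklist(text: str) -> list[str]:
    # Parse the file contents, ignoring blank lines.
    return [line.strip() for line in text.splitlines() if line.strip()]

def is_censored(message: str, blocklist: list[str]) -> bool:
    # Flag the message if it contains any listed keyword.
    return any(word in message for word in blocklist)
```

The point is that "presently inactive" means almost nothing: activating a function like this is a one-line change.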

The US government has already advised athletes to leave their personal phones and laptops home and bring burners.

San Francisco Police Illegally Spying on Protesters

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/san-francisco-police-illegally-spying-on-protesters.html

Last summer, the San Francisco police illegally used surveillance cameras at the George Floyd protests. The EFF is suing the police:

This surveillance invaded the privacy of protesters, targeted people of color, and chills and deters participation and organizing for future protests. The SFPD also violated San Francisco’s new Surveillance Technology Ordinance. It prohibits city agencies like the SFPD from acquiring, borrowing, or using surveillance technology, without prior approval from the city’s Board of Supervisors, following an open process that includes public participation. Here, the SFPD went through no such process before spying on protesters with this network of surveillance cameras.

It feels like a pretty easy case. There’s a law, and the SF police didn’t follow it.

Tech billionaire Chris Larsen is on the side of the police. He thinks that the surveillance is a good thing, and wrote an op-ed defending it.

I wouldn’t be writing about this at all except that Chris is a board member of EPIC, and used his EPIC affiliation in the op-ed to bolster his own credentials. (Bizarrely, he linked to an EPIC page that directly contradicts his position.) In his op-ed, he mischaracterized the EFF’s actions and the facts of the lawsuit. It’s a mess.

The plaintiffs in the lawsuit wrote a good rebuttal to Larsen’s piece. And this week, EPIC published what is effectively its own rebuttal:

One of the fundamental principles that underlies EPIC’s work (and the work of many other groups) on surveillance oversight is that individuals should have the power to decide whether surveillance tools are used in their communities and to impose limits on their use. We have fought for years to shed light on the development, procurement, and deployment of such technologies and have worked to ensure that they are subject to independent oversight through hearings, legal challenges, petitions, and other public forums. The CCOPS model, which was developed by ACLU affiliates and other coalition partners in California and implemented through the San Francisco ordinance, is a powerful mechanism to enable public oversight of dangerous surveillance tools. The access, retention, and use policies put in place by the neighborhood business associations operating these networks provide necessary, but not sufficient, protections against abuse. Strict oversight is essential to promote both privacy and community safety, which includes freedom from arbitrary police action and the freedom to assemble.

So far, EPIC has not done anything about Larsen still being on its board. (Others have criticized them for keeping him on.) I don’t know if I have an opinion on this. Larsen has done good work on financial privacy regulations, which is a good thing. But he seems to be funding all these surveillance cameras in San Francisco, which is really bad.

Using Foreign Nationals to Bypass US Surveillance Restrictions

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/using-foreign-nationals-to-bypass-us-surveillance-restrictions.html

Remember when the US and Australian police surreptitiously owned and operated the encrypted cell phone app ANOM? They arrested 800 people in 2021 based on that operation.

New documents received by Motherboard show that over 100 of those phones were shipped to users in the US, far more than previously believed.

What’s most interesting to me about this new information is how the US used the Australians to get around domestic spying laws:

For legal reasons, the FBI did not monitor outgoing messages from Anom devices determined to be inside the U.S. Instead, the Australian Federal Police (AFP) monitored them on behalf of the FBI, according to previously published court records. In those court records unsealed shortly before the announcement of the Anom operation, FBI Special Agent Nicholas Cheviron wrote that the FBI received Anom user data three times a week, which contained the messages of all of the users of Anom with some exceptions, including “the messages of approximately 15 Anom users in the U.S. sent to any other Anom device.”

[…]

Stewart Baker, partner at Steptoe & Johnson LLP, and Bryce Klehm, associate editor of Lawfare, previously wrote that “The ‘threat to life’ standard echoes the provision of U.S. law that allows communications providers to share user data with law enforcement without legal process under 18 U.S.C. § 2702. Whether the AFP was relying on this provision of U.S. law or a more general moral imperative to take action to prevent imminent threats is not clear.” That section of law discusses the voluntary disclosure of customer communications or records.

When asked about the practice of Australian law enforcement monitoring devices inside the U.S. on behalf of the FBI, Senator Ron Wyden told Motherboard in a statement “Multiple intelligence community officials have confirmed to me, in writing, that intelligence agencies cannot ask foreign partners to conduct surveillance that the U.S. would be legally prohibited from doing itself. The FBI should follow this same standard. Allegations that the FBI outsourced warrantless surveillance of Americans to a foreign government raise troubling questions about the Justice Department’s oversight of these practices.”

I and others have long suspected that the NSA uses foreign nationals to get around restrictions that prevent it from spying on Americans. It is interesting to see the FBI using the same trick.

Apple AirTags Are Being Used to Track People and Cars

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/12/apple-airtags-are-being-used-to-track-people-and-cars.html

This development surprises no one who has been paying attention:

Researchers now believe AirTags, which are equipped with Bluetooth technology, could be revealing a more widespread problem of tech-enabled tracking. They emit a digital signal that can be detected by devices running Apple’s mobile operating system. Those devices then report where an AirTag has last been seen. Unlike similar tracking products from competitors such as Tile, Apple added features to prevent abuse, including notifications like the one Ms. Estrada received and automatic beeping. (Tile plans to release a feature to prevent the tracking of people next year, a spokeswoman for that company said.)

[…]

A person who doesn’t own an iPhone might have a harder time detecting an unwanted AirTag. AirTags aren’t compatible with Android smartphones. Earlier this month, Apple released an Android app that can scan for AirTags — but you have to be vigilant enough to download it and proactively use it.
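Scanner apps like this work by listening for the Bluetooth Low Energy advertisements AirTags broadcast. Based on public reverse engineering of the Find My protocol (the specific byte values below are assumptions from that research, not anything Apple documents), the filtering step looks roughly like this:

```python
# Hedged sketch of the detection step a scanner app performs. The byte
# values come from public reverse engineering of Find My advertisements
# and may change; this is not Apple's or Tracker Detect's actual code.

APPLE_COMPANY_ID = 0x004C     # Bluetooth SIG company identifier for Apple
OFFLINE_FINDING_TYPE = 0x12   # Find My (offline finding) payload type, per research

def looks_like_findmy_advert(company_id: int, payload: bytes) -> bool:
    # Flag manufacturer-specific advertisement data that matches the
    # Find My broadcast format.
    return (company_id == APPLE_COMPANY_ID
            and len(payload) >= 2
            and payload[0] == OFFLINE_FINDING_TYPE)
```

The hard part isn't the filtering; it's that a non-iPhone user has to run a scanner like this continuously and deliberately, where iPhones do it automatically.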

Apple declined to say if it was working with Google on technology that would allow Android phones to automatically detect its trackers.

People who said they have been tracked have called Apple’s safeguards insufficient. Ms. Estrada said she was notified four hours after her phone first noticed the rogue gadget. Others said it took days before they were made aware of an unknown AirTag. According to Apple, the timing of the alerts can vary depending on the iPhone’s operating system and location settings.

The European Parliament Voted to Ban Remote Biometric Surveillance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/10/the-european-parliament-voted-to-ban-remote-biometric-surveillance.html

It’s not actually banned in the EU yet — the legislative process is much more complicated than that — but it’s a step: a total ban on biometric mass surveillance.

To respect “privacy and human dignity,” MEPs said that EU lawmakers should pass a permanent ban on the automated recognition of individuals in public spaces, saying citizens should only be monitored when suspected of a crime.

The parliament has also called for a ban on the use of private facial recognition databases — such as the controversial AI system created by U.S. startup Clearview (also already in use by some police forces in Europe) — and said predictive policing based on behavioural data should also be outlawed.

MEPs also want to ban social scoring systems which seek to rate the trustworthiness of citizens based on their behaviour or personality.

Surveillance of the Internet Backbone

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/surveillance-of-the-internet-backbone.html

Vice has an article about how data brokers sell access to the Internet backbone. This is netflow data. It’s useful for cybersecurity forensics, but can also be used for things like tracing VPN activity.

At a high level, netflow data creates a picture of traffic flow and volume across a network. It can show which server communicated with another, information that may ordinarily only be available to the server owner or the ISP carrying the traffic. Crucially, this data can be used for, among other things, tracking traffic through virtual private networks, which are used to mask where someone is connecting to a server from, and by extension, their approximate physical location.
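To make concrete what netflow-style records expose, here is a minimal sketch. The record format is invented for illustration (real NetFlow/IPFIX carries similar 5-tuple and volume fields): even with no payloads at all, an observer can tally who exchanged traffic with a known VPN server, when, and how much — the raw material for the VPN-tracing described above.

```python
from collections import defaultdict

# Toy flow records: (src_ip, dst_ip, bytes, minute). The format is
# invented for illustration; real netflow data carries similar fields.
flows = [
    ("203.0.113.7",  "198.51.100.9", 52000, 0),   # client -> VPN server
    ("198.51.100.9", "192.0.2.44",   51000, 0),   # VPN server -> destination
    ("203.0.113.8",  "198.51.100.9",   900, 3),   # another client
]

def peers_of(records, server_ip):
    # Total bytes exchanged with the server, per counterparty IP.
    totals = defaultdict(int)
    for src, dst, nbytes, _minute in records:
        if dst == server_ip:
            totals[src] += nbytes
        elif src == server_ip:
            totals[dst] += nbytes
    return dict(totals)
```

Matching a client's inbound volume and timing against the server's outbound flows is then enough to link the client to the destination the VPN was supposed to hide.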

In the hands of some governments, that could be dangerous.

More on Apple’s iPhone Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/more-on-apples-iphone-backdoor.html

In this post, I’ll collect links on Apple’s iPhone backdoor for scanning CSAM images. Previous links are here and here.

Apple says that hash collisions in its CSAM detection system were expected, and not a concern. I’m not convinced that this secondary system was originally part of the design, since it wasn’t discussed in the original specification.

Good op-ed from a group of Princeton researchers who developed a similar system:

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
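The researchers' point is structural: the matching machinery is indifferent to what the database contains. A hedged sketch, using a toy "average hash" as a stand-in for a real perceptual hash like Apple's NeuralHash:

```python
# Illustrative sketch of the repurposing concern. The hash database is
# just data; swap the set and the identical code flags CSAM, leaked
# documents, or political imagery. The toy hash below stands in for a
# real perceptual hash and is not Apple's algorithm.

def toy_hash(pixels: list[int]) -> int:
    # Bit i is set when pixel i is above the image's mean brightness --
    # a crude "average hash" that survives small brightness changes.
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def matches_database(pixels: list[int], hash_db: set[int]) -> bool:
    # The detection step: nothing here constrains what hash_db holds.
    return toy_hash(pixels) in hash_db
```

Whoever controls `hash_db` controls what gets reported, and the person using the service would be none the wiser.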

EDITED TO ADD (8/30): Good essays by Matthew Green and Alex Stamos, Ross Anderson, Edward Snowden, and Susan Landau. And also Kurt Opsahl.

Apple Adds a Backdoor to iMessage and iCloud Storage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/apple-adds-a-backdoor-to-imesssage-and-icloud-storage.html

Apple’s announcement that it’s going to start scanning photos for child abuse material is a big deal. (Here are five news stories.) I have been following the details, and discussing it in several different email lists. I don’t have time right now to delve into the details, but wanted to post something.

EFF writes:

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts — that is, accounts designated as owned by a minor — for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door for all sorts of other surveillance, since now that the system is built it can be used for all sorts of other messages. And it breaks end-to-end encryption, despite Apple’s denials:

Does this break end-to-end encryption in Messages?

No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.

Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver. A third party is alerted if the message meets certain criteria.

This is a security disaster. Read tweets by Matthew Green and Edward Snowden. Also this. I’ll post more when I see it.

Beware the Four Horsemen of the Information Apocalypse. They’ll scare you into accepting all sorts of insecure systems.

EDITED TO ADD: This is a really good write-up of the problems.

EDITED TO ADD: Alex Stamos comments.

An open letter to Apple criticizing the project.

A leaked Apple memo responding to the criticisms. (What are the odds that Apple did not intend this to leak?)

EDITED TO ADD: John Gruber’s excellent analysis.

EDITED TO ADD (8/11): Paul Rosenzweig wrote an excellent policy discussion.

EDITED TO ADD (8/13): Really good essay by EFF’s Kurt Opsahl. Ross Anderson did an interview with Glenn Beck. And this news article talks about dissent within Apple about this feature.

The Economist has a good take. Apple responds to criticisms. (It’s worth watching the Wall Street Journal video interview as well.)

EDITED TO ADD (8/14): Apple released a threat model.

EDITED TO ADD (8/20): Follow-on blog posts here and here.

Paragon: Yet Another Cyberweapons Arms Manufacturer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/paragon-yet-another-cyberweapons-arms-manufacturer.html

Forbes has the story:

Paragon’s product will also likely get spyware critics and surveillance experts alike rubbernecking: It claims to give police the power to remotely break into encrypted instant messaging communications, whether that’s WhatsApp, Signal, Facebook Messenger or Gmail, the industry sources said. One other spyware industry executive said it also promises to get longer-lasting access to a device, even when it’s rebooted.

[…]

Two industry sources said they believed Paragon was trying to set itself apart further by promising to get access to the instant messaging applications on a device, rather than taking complete control of everything on a phone. One of the sources said they understood that Paragon’s spyware exploits the protocols of end-to-end encrypted apps, meaning it would hack into messages via vulnerabilities in the core ways in which the software operates.

Read that last sentence again: Paragon uses unpatched zero-day exploits in the software to hack messaging apps.

De-anonymization Story

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/de-anonymization-story.html

This is important:

Monsignor Jeffrey Burrill was general secretary of the US Conference of Catholic Bishops (USCCB), effectively the highest-ranking priest in the US who is not a bishop, before records of Grindr usage obtained from data brokers were correlated with his apartment, place of work, vacation home, family members’ addresses, and more.

[…]

The data that resulted in Burrill’s ouster was reportedly obtained through legal means. Mobile carriers sold­ — and still sell — ­location data to brokers who aggregate it and sell it to a range of buyers, including advertisers, law enforcement, roadside services, and even bounty hunters. Carriers were caught in 2018 selling real-time location data to brokers, drawing the ire of Congress. But after carriers issued public mea culpas and promises to reform the practice, investigations have revealed that phone location data is still popping up in places it shouldn’t. This year, T-Mobile even broadened its offerings, selling customers’ web and app usage data to third parties unless people opt out.

The publication that revealed Burrill’s private app usage, The Pillar, a newsletter covering the Catholic Church, did not say exactly where or how it obtained Burrill’s data. But it did say how it de-anonymized aggregated data to correlate Grindr app usage with a device that appears to be Burrill’s phone.

The Pillar says it obtained 24 months’ worth of “commercially available records of app signal data” covering portions of 2018, 2019, and 2020, which included records of Grindr usage and locations where the app was used. The publication zeroed in on addresses where Burrill was known to frequent and singled out a device identifier that appeared at those locations. Key locations included Burrill’s office at the USCCB, his USCCB-owned residence, and USCCB meetings and events in other cities where he was in attendance. The analysis also looked at other locations farther afield, including his family lake house, his family members’ residences, and an apartment in his Wisconsin hometown where he reportedly has lived.
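The correlation The Pillar describes is a simple set intersection. A hedged sketch, with an invented record format: from an "anonymized" feed of (device ID, location) pings, intersect the devices seen at each location the target is known to frequent, and very few locations suffice to single out one device.

```python
# Illustrative sketch of the de-anonymization step. The record format
# (device_id, location) is invented; real broker data carries ad IDs,
# coordinates, and timestamps, but the logic is the same.

def devices_at(records: list[tuple[str, str]], location: str) -> set[str]:
    # All device IDs that pinged from a given location.
    return {dev for dev, loc in records if loc == location}

def correlate(records: list[tuple[str, str]], known_locations: list[str]) -> set[str]:
    # Devices present at every known location. With a handful of
    # locations this usually narrows to a single device -- the target.
    result = devices_at(records, known_locations[0])
    for loc in known_locations[1:]:
        result &= devices_at(records, loc)
    return result
```

No names appear anywhere in the data; the pattern of places is the identifier.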

Location data is not anonymous. It cannot be made anonymous. I hope stories like these will teach people that.

Commercial Location Data Used to Out Priest

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/commercial-location-data-used-to-out-priest.html

A Catholic priest was outed through commercially available surveillance data. Vice has a good analysis:

The news starkly demonstrates not only the inherent power of location data, but how the chance to wield that power has trickled down from corporations and intelligence agencies to essentially any sort of disgruntled, unscrupulous, or dangerous individual. A growing market of data brokers that collect and sell data from countless apps has made it so that anyone with a bit of cash and effort can figure out which phone in a so-called anonymized dataset belongs to a target, and abuse that information.

There is a whole industry devoted to re-identifying anonymized data. This is something Snowden showed the NSA could do. Now it’s available to everyone.

Banning Surveillance-Based Advertising

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/06/banning-surveillance-based-advertising.html

The Norwegian Consumer Council just published a fantastic new report: “Time to Ban Surveillance-Based Advertising.” From the Introduction:

The challenges caused and entrenched by surveillance-based advertising include, but are not limited to:

  • privacy and data protection infringements
  • opaque business models
  • manipulation and discrimination at scale
  • fraud and other criminal activity
  • serious security risks

In the following chapters, we describe various aspects of these challenges and point out how today’s dominant model of online advertising is a threat to consumers, democratic societies, the media, and even to advertisers themselves. These issues are significant and serious enough that we believe that it is time to ban these detrimental practices.

A ban on surveillance-based practices should be complemented by stronger enforcement of existing legislation, including the General Data Protection Regulation, competition regulation, and the Unfair Commercial Practices Directive. However, enforcement currently consumes significant time and resources, and usually happens after the damage has already been done. Banning surveillance-based advertising in general will force structural changes to the advertising industry and alleviate a number of significant harms to consumers and to society at large.

A ban on surveillance-based advertising does not mean that one can no longer finance digital content using advertising. To illustrate this, we describe some possible ways forward for advertising-funded digital content, and point to alternative advertising technologies that may contribute to a safer and healthier digital economy for both consumers and businesses.

Press release. Press coverage.

I signed their open letter.

Insider Attack on Home Surveillance Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/insider-attack-on-home-surveillance-systems.html

No one who reads this blog regularly will be surprised:

A former employee of prominent home security company ADT has admitted that he hacked into the surveillance feeds of dozens of customer homes, doing so primarily to spy on naked women or to leer at unsuspecting couples while they had sex.

[…]

Authorities say that the IT technician “took note of which homes had attractive women, then repeatedly logged into these customers’ accounts in order to view their footage for sexual gratification.” He did this by adding his personal email address to customer accounts, which ultimately hooked him into “real-time access to the video feeds from their homes.”

Slashdot thread.

Cell Phone Location Privacy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/cell-phone-location-privacy.html

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication tokens (using Chaum blind signatures) for each time slice (e.g., hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature of the token. If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.)
  4. On demand: The user uses the phone normally.
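The blind signature in step 2 is what unlinks payment from usage: the MVNO signs a token without ever seeing it, so when the token is redeemed in step 3 it can't be tied back to the paying subscriber. A minimal sketch using textbook RSA blind signatures with toy parameters (a real deployment uses proper padding and key sizes, and PGPP's actual construction may differ in detail):

```python
# Toy RSA blind signature (Chaum's scheme). Parameters are deliberately
# tiny for illustration; do not use textbook RSA in practice.

p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public verification exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private signing exponent

def blind(token: int, r: int) -> int:
    # User hides the token with blinding factor r before sending it.
    return (token * pow(r, e, n)) % n

def sign(blinded: int) -> int:
    # Signer (the MVNO) signs blindly; it never learns `token`.
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    # User strips r, leaving a valid ordinary signature on the token.
    return (blind_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    # Anyone with the public key can check the signature at redemption.
    return pow(sig, e, n) == token % n
```

The signature the MVNO produces on the blinded value and the signature the tower later sees on the unblinded token look unrelated, which is the whole point.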

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.

Eavesdropping on Phone Taps from Voice Assistants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/eavesdropping-on-phone-taps-from-voice-assistants.html

The microphones on voice assistants are very sensitive, and can snoop on all sorts of data:

In Hey Alexa what did I just type? we show that when sitting up to half a meter away, a voice assistant can still hear the taps you make on your phone, even in the presence of noise. Modern voice assistants have two to seven microphones, so they can do directional localisation, just as human ears do, but with greater sensitivity. We assess the risk and show that a lot more work is needed to understand the privacy implications of the always-on microphones that are increasingly infesting our work spaces and our homes.

From the paper:

Abstract: Voice assistants are now ubiquitous and listen in on our everyday lives. Ever since they became commercially available, privacy advocates worried that the data they collect can be abused: might private conversations be extracted by third parties? In this paper we show that privacy threats go beyond spoken conversations and include sensitive data typed on nearby smartphones. Using two different smartphones and a tablet we demonstrate that the attacker can extract PIN codes and text messages from recordings collected by a voice assistant located up to half a meter away. This shows that remote keyboard-inference attacks are not limited to physical keyboards but extend to virtual keyboards too. As our homes become full of always-on microphones, we need to work through the implications.
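The directional localisation the authors mention rests on a standard signal-processing step: the delay between microphone channels reveals a sound's bearing. Real tap-inference pipelines are far more elaborate; this hedged sketch only shows the time-difference-of-arrival (TDOA) estimate on synthetic data.

```python
import numpy as np

def estimate_delay(ch_a: np.ndarray, ch_b: np.ndarray) -> int:
    # Number of samples by which ch_a lags ch_b: the peak of the
    # cross-correlation gives the sample offset between channels.
    corr = np.correlate(ch_a, ch_b, mode="full")
    return int(np.argmax(corr)) - (len(ch_b) - 1)

# Synthetic "tap": a short click that arrives 5 samples later at mic B,
# as it would if the tap happened closer to mic A.
rng = np.random.default_rng(0)
click = rng.standard_normal(20)
mic_a = np.concatenate([click, np.zeros(30)])
mic_b = np.concatenate([np.zeros(5), click, np.zeros(25)])

delay = estimate_delay(mic_b, mic_a)   # 5-sample lag at mic B
```

With several microphone pairs, each pairwise delay constrains the direction, and the array can distinguish taps at different positions on a nearby screen.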

US Schools Are Buying Cell Phone Unlocking Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/us-schools-are-buying-cell-phone-unlocking-systems.html

Gizmodo is reporting that schools in the US are buying equipment to unlock cell phones from companies like Cellebrite:

Gizmodo has reviewed similar accounting documents from eight school districts, seven of which are in Texas, showing that administrators paid as much as $11,582 for the controversial surveillance technology. Known as mobile device forensic tools (MDFTs), this type of tech is able to siphon text messages, photos, and application data from students’ devices. Together, the districts encompass hundreds of schools, potentially exposing hundreds of thousands of students to invasive cell phone searches.

The eighth district was in Los Angeles.

Mexican Drug Cartels with High-Tech Spyware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/mexican-drug-cartels-with-high-tech-spyware.html

Sophisticated spyware, sold by surveillance tech companies to Mexican government agencies, is ending up in the hands of drug cartels:

As many as 25 private companies — including the Israeli company NSO Group and the Italian firm Hacking Team — have sold surveillance software to Mexican federal and state police forces, but there is little or no regulation of the sector — and no way to control where the spyware ends up, said the officials.

Lots of details in the article. The cyberweapons arms business is immoral in many ways. This is just one of them.

The NSA is Refusing to Disclose its Policy on Backdooring Commercial Products

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/the-nsa-is-refusing-to-disclose-its-policy-on-backdooring-commercial-products.html

Senator Ron Wyden asked, and the NSA didn’t answer:

The NSA has long sought agreements with technology companies under which they would build special access for the spy agency into their products, according to disclosures by former NSA contractor Edward Snowden and reporting by Reuters and others.

These so-called back doors enable the NSA and other agencies to scan large amounts of traffic without a warrant. Agency advocates say the practice has eased collection of vital intelligence in other countries, including interception of terrorist communications.

The agency developed new rules for such practices after the Snowden leaks in order to reduce the chances of exposure and compromise, three former intelligence officials told Reuters. But aides to Senator Ron Wyden, a leading Democrat on the Senate Intelligence Committee, say the NSA has stonewalled on providing even the gist of the new guidelines.

[…]

The agency declined to say how it had updated its policies on obtaining special access to commercial products. NSA officials said the agency has been rebuilding trust with the private sector through such measures as offering warnings about software flaws.

“At NSA, it’s common practice to constantly assess processes to identify and determine best practices,” said Anne Neuberger, who heads NSA’s year-old Cybersecurity Directorate. “We don’t share specific processes and procedures.”

Three former senior intelligence agency figures told Reuters that the NSA now requires that before a back door is sought, the agency must weigh the potential fallout and arrange for some kind of warning if the back door gets discovered and manipulated by adversaries.

The article goes on to talk about Juniper Networks equipment, which had the NSA-created DUAL_EC PRNG backdoor in its products. That backdoor was taken advantage of by an unnamed foreign adversary.
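The Dual EC backdoor is worth seeing in miniature. Here is a toy demonstration using a discrete-log analogue instead of elliptic-curve points, and with no output truncation, so the algebra fits in a few lines; the real DUAL_EC_DRBG works the same way in spirit: whoever chose the public constants P and Q can know a secret relation between them, and a single output then reveals the generator's next state.

```python
# Toy analogue of the Dual EC backdoor. Real Dual EC uses elliptic-curve
# points and truncates outputs; this multiplicative-group version keeps
# only the essential structure: P = Q^e for a designer-known secret e.

p = 1019                  # small prime modulus (toy-sized)
Q = 2                     # public constant Q
e = 123                   # the designer's secret relation: P = Q^e
P = pow(Q, e, p)          # public constant P

def step(state: int) -> tuple[int, int]:
    # Generator: emit an output, advance the internal state.
    output = pow(Q, state, p)
    new_state = pow(P, state, p)
    return output, new_state

def backdoor_predict(output: int) -> int:
    # Attacker who knows e: output^e = Q^(s*e) = (Q^e)^s = P^s,
    # which is exactly the generator's next internal state.
    return pow(output, e, p)

s = 5                       # secret initial state
r1, s_next = step(s)        # one public output leaks everything
predicted = backdoor_predict(r1)
```

Anyone without `e` sees random-looking outputs; anyone with it can predict every future output from one observation, which is precisely what made swapping in an attacker-controlled Q in Juniper's firewalls so valuable.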

Juniper Networks got into hot water over Dual EC two years later. At the end of 2015, the maker of internet switches disclosed that it had detected malicious code in some firewall products. Researchers later determined that hackers had turned the firewalls into their own spy tool by altering Juniper’s version of Dual EC.

Juniper said little about the incident. But the company acknowledged to security researcher Andy Isaacson in 2016 that it had installed Dual EC as part of a “customer requirement,” according to a previously undisclosed contemporaneous message seen by Reuters. Isaacson and other researchers believe that customer was a U.S. government agency, since only the U.S. is known to have insisted on Dual EC elsewhere.

Juniper has never identified the customer, and declined to comment for this story.

Likewise, the company never identified the hackers. But two people familiar with the case told Reuters that investigators concluded the Chinese government was behind it. They declined to detail the evidence they used.

Okay, lots of unsubstantiated claims and innuendo here. And Neuberger is right; the NSA shouldn’t share specific processes and procedures. But as long as this is a democratic country, the NSA has an obligation to disclose its general processes and procedures so we all know what they’re doing in our name. And if it’s still putting surveillance ahead of security.

IMSI-Catchers from Canada

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/imsi-catchers-from-canada.html

Gizmodo is reporting that Harris Corp. is no longer selling Stingray IMSI-catchers (and, presumably, its follow-on models Hailstorm and Crossbow) to local governments:

L3Harris Technologies, formerly known as the Harris Corporation, notified police agencies last year that it planned to discontinue sales of its surveillance boxes at the local level, according to government records. Additionally, the company would no longer offer access to software upgrades or replacement parts, effectively slapping an expiration date on boxes currently in use. Any advancements in cellular technology, such as the rollout of 5G networks in most major U.S. cities, would render them obsolete.

The article goes on to talk about replacement surveillance systems from the Canadian company Octasic.

Octasic’s Nyxcell V800 can target most modern phones while maintaining the ability to capture older GSM devices. Florida’s state police agency described the device, made for in-vehicle use, as capable of targeting eight frequency bands including GSM (2G), CDMA2000 (3G), and LTE (4G).

[…]

A 2018 patent assigned to Octasic claims that Nyxcell forces a connection with nearby mobile devices when its signal is stronger than the nearest legitimate cellular tower. Once connected, Nyxcell prompts devices to divulge information about their signal strength relative to nearby cell towers. These reported signal strengths (intra-frequency measurement reports) are then used to triangulate the position of a phone.
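To make the triangulation step concrete, here is a hedged sketch: convert each reported signal strength to a distance with a textbook log-distance path-loss model, then solve the resulting circle equations in least squares. The constants and the model are standard assumptions for illustration, not Octasic's actual method.

```python
import numpy as np

TX_DBM_AT_1M = -40.0   # assumed received power at 1 m (illustrative)
PATH_LOSS_EXP = 2.0    # free-space path-loss exponent (illustrative)

def rssi_to_distance(rssi: float) -> float:
    # Invert the log-distance path-loss model: rssi = TX - 10*n*log10(d).
    return 10 ** ((TX_DBM_AT_1M - rssi) / (10 * PATH_LOSS_EXP))

def trilaterate(towers: np.ndarray, dists: np.ndarray) -> np.ndarray:
    # Each tower i gives a circle |x - t_i|^2 = d_i^2. Subtracting the
    # first equation from the rest linearizes the system, which is then
    # solved in least squares.
    A = 2 * (towers[1:] - towers[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

Three or more reference points and noisy distance estimates are enough to localize a handset to within tens of meters, which is all a device like this needs.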

Octasic appears to lean heavily on the work of Indian engineers and scientists overseas. A self-published biography of the company notes that while the company is headquartered in Montreal, it has “R&D facilities in India,” as well as a “worldwide sales support network.” Nyxcell’s website, which is only a single page requesting contact information, does not mention Octasic by name. Gizmodo was, however, able to recover domain records identifying Octasic as the owner.