All posts by Bruce Schneier

Collecting and Selling Mobile Phone Location Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/collecting_and_.html

The Wall Street Journal has an article about a company called Anomaly Six LLC that has an SDK that’s used by “more than 500 mobile applications.” Through that SDK, the company collects location data from users, which it then sells.

Anomaly Six is a federal contractor that provides global-location-data products to branches of the U.S. government and private-sector clients. The company told The Wall Street Journal it restricts the sale of U.S. mobile phone movement data only to nongovernmental, private-sector clients.

[…]

Anomaly Six was founded by defense-contracting veterans who worked closely with government agencies for most of their careers and built a company to cater in part to national-security agencies, according to court records and interviews.

Just one of the many Internet companies spying on our every move for profit. And I’m sure they sell to the US government; it’s legal and why would they forgo those sales?

Smart Lock Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/smart_lock_vuln.html

Yet another Internet-connected door lock is insecure:

Sold by retailers including Amazon, Walmart, and Home Depot, U-Tec’s $139.99 UltraLoq is marketed as a “secure and versatile smart deadbolt that offers keyless entry via your Bluetooth-enabled smartphone and code.”

Users can share temporary codes and ‘Ekeys’ to friends and guests for scheduled access, but according to Tripwire researcher Craig Young, a hacker able to sniff out the device’s MAC address can help themselves to an access key, too.

UltraLoq eventually fixed the vulnerabilities, but not in a way that should give you any confidence that they know what they’re doing.
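To illustrate why a key that can be recovered from a sniffed MAC address is no key at all: a Bluetooth device broadcasts its MAC in advertising packets, so any credential derived deterministically from it is public. The sketch below assumes a hypothetical hash-of-MAC scheme for illustration; the actual UltraLoq key derivation may differ in detail.

```python
import hashlib

def derive_unlock_token(mac_address: str) -> str:
    """Hypothetical key derivation: token is a hash of the device's MAC.

    A BLE device broadcasts its MAC address in advertising packets,
    so anyone in radio range learns the only "secret" input.
    """
    normalized = mac_address.replace(":", "").lower()
    return hashlib.md5(normalized.encode()).hexdigest()

# The lock and the legitimate app both compute the same token...
lock_token = derive_unlock_token("A4:C1:38:12:34:56")

# ...but so can a passive eavesdropper who sniffed the advertisement.
attacker_token = derive_unlock_token("A4:C1:38:12:34:56")
assert attacker_token == lock_token  # no secret material involved
```

The fix is to involve material the attacker cannot observe: a per-device random secret provisioned at pairing time, used in an authenticated challenge-response rather than a static token.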

EDITED TO ADD (8/12): More.

Friday Squid Blogging: New SQUID

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/friday_squid_bl_740.html

There’s a new SQUID:

A new device that relies on flowing clouds of ultracold atoms promises potential tests of the intersection between the weirdness of the quantum world and the familiarity of the macroscopic world we experience every day. The atomtronic Superconducting QUantum Interference Device (SQUID) is also potentially useful for ultrasensitive rotation measurements and as a component in quantum computers.

“In a conventional SQUID, the quantum interference in electron currents can be used to make one of the most sensitive magnetic field detectors,” said Changhyun Ryu, a physicist with the Material Physics and Applications Quantum group at Los Alamos National Laboratory. “We use neutral atoms rather than charged electrons. Instead of responding to magnetic fields, the atomtronic version of a SQUID is sensitive to mechanical rotation.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

The NSA on the Risks of Exposing Location Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/the_nsa_on_the_.html

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

Of course, turning off your wireless devices is itself a signal that something is going on. It’s hard to be clandestine in our always-connected world.

News articles.

BlackBerry Phone Cracked

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/blackberry_phon.html

Australia is reporting that a BlackBerry device has been cracked after five years:

An encrypted BlackBerry device that was cracked five years after it was first seized by police is poised to be the key piece of evidence in one of the state’s longest-running drug importation investigations.

In April, new technology “capabilities” allowed authorities to probe the encrypted device….

No details about those capabilities.

Twitter Hacker Arrested

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/twitter_hacker_.html

A 17-year-old Florida boy was arrested and charged with last week’s Twitter hack.

News articles. Boing Boing post. Florida state attorney press release.

This is a developing story. Post any additional news in the comments.

EDITED TO ADD (8/1): Two others have been charged as well.

EDITED TO ADD (8/11): The online bail hearing was hacked.

Friday Squid Blogging: Squid Proteins for a Better Face Mask

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/friday_squid_bl_739.html

Researchers are synthesizing squid proteins to create a face mask that better survives cleaning. (And you thought there was no connection between squid and COVID-19.) The military thinks this might have applications for self-healing robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Fake Stories in Real News Sites

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/fake_stories_in.html

FireEye is reporting that a hacking group called Ghostwriter broke into the content management systems of Eastern European news sites to plant fake stories.

From a Wired story:

The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and the US troops in Poland and the Baltics; they’ve posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content.

That hacking campaign, targeting media sites from Poland to Lithuania, has spread false stories about US military aggression, NATO soldiers spreading coronavirus, NATO planning a full-on invasion of Belarus, and more.

Survey of Supply Chain Attacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/survey_of_suppl.html

The Atlantic Council has released a report that looks at the history of computer supply chain attacks.

Key trends from their summary:

1. Deep Impact from State Actors: There were at least 27 different state attacks against the software supply chain, including from Russia, China, North Korea, and Iran, as well as India, Egypt, the United States, and Vietnam. States have targeted software supply chains with great effect, as the majority of cases surveyed here did, or could have, resulted in remote code execution. Examples: CCleaner, NotPetya, Kingslayer, SimDisk, and ShadowPad.

2. Abusing Trust in Code Signing: These attacks undermine public key cryptography and certificates used to ensure the integrity of code. Overcoming these protections is a critical step to enabling everything from simple alterations of open-source code to complex nation-state espionage campaigns. Examples: ShadowHammer, Naid/McRAT, and BlackEnergy 3.

3. Hijacking Software Updates: 27% of these attacks targeted software updates to insert malicious code against sometimes millions of targets. These attacks are generally carried out by extremely capable actors and poison updates from legitimate vendors. Examples: Flame, CCleaner 1 & 2, NotPetya, and Adobe pwdum7v71.

4. Poisoning Open-Source Code: These incidents saw attackers either modify open-source code by gaining account access or post their own packages with names similar to common examples. Attacks targeted some of the most widely used open source tools on the internet. Examples: Cdorked/Darkleech, RubyGems Backdoor, Colourama, and JavaScript 2018 Backdoor.

5. Targeting App Stores: 22% of these attacks targeted app stores like the Google Play Store, Apple’s App Store, and other third-party app hubs to spread malware to mobile devices. Some attacks even targeted developer tools, meaning every app later built using that tool was potentially compromised. Examples: ExpensiveWall, BankBot, Gooligan, Sandworm’s Android attack, and XcodeGhost.
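The typosquatting tactic in trend 4 (packages with names similar to popular ones, like Colourama mimicking colorama) can be screened for with a simple string-similarity check. This is a minimal sketch, not any registry's actual defense; the package list and threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Illustrative allowlist of well-known package names.
POPULAR_PACKAGES = {"requests", "numpy", "urllib3", "colorama"}

def looks_typosquatted(name: str, threshold: float = 0.85) -> bool:
    """Flag names suspiciously close to, but not equal to, a popular package."""
    name = name.lower()
    if name in POPULAR_PACKAGES:
        return False  # it *is* the real package
    return any(
        SequenceMatcher(None, name, known).ratio() >= threshold
        for known in POPULAR_PACKAGES
    )

# "colourama" was a real malicious look-alike of "colorama".
print(looks_typosquatted("colourama"))  # True
print(looks_typosquatted("requests"))   # False
```

Real registries would combine this with download counts, account age, and install-script analysis; a similarity score alone produces false positives for legitimately similar names.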

Recommendations included in the report. The entirely open and freely available dataset is here.

Friday Squid Blogging: Introducing the Seattle Kraken

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/friday_squid_bl_738.html

The Kraken is the name of Seattle’s new NHL franchise.

I have always really liked collective nouns as sports team names (like the Utah Jazz or the Minnesota Wild), mostly because it’s hard to describe individual players.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Update on NIST’s Post-Quantum Cryptography Program

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/update_on_nists.html

NIST has posted an update on their post-quantum cryptography program:

After spending more than three years examining new approaches to encryption and data protection that could defeat an assault from a quantum computer, the National Institute of Standards and Technology (NIST) has winnowed the 69 submissions it initially received down to a final group of 15. NIST has now begun the third round of public review. This “selection round” will help the agency decide on the small subset of these algorithms that will form the core of the first post-quantum cryptography standard.

[…]

For this third round, the organizers have taken the novel step of dividing the remaining candidate algorithms into two groups they call tracks. The first track contains the seven algorithms that appear to have the most promise.

“We’re calling these seven the finalists,” Moody said. “For the most part, they’re general-purpose algorithms that we think could find wide application and be ready to go after the third round.”

The eight alternate algorithms in the second track are those that either might need more time to mature or are tailored to more specific applications. The review process will continue after the third round ends, and eventually some of these second-track candidates could become part of the standard. Because all of the candidates still in play are essentially survivors from the initial group of submissions from 2016, there will also be future consideration of more recently developed ideas, Moody said.

“The likely outcome is that at the end of this third round, we will standardize one or two algorithms for encryption and key establishment, and one or two others for digital signatures,” he said. “But by the time we are finished, the review process will have been going on for five or six years, and someone may have had a good idea in the interim. So we’ll find a way to look at newer approaches too.”

Details are here. This is all excellent work, and exemplifies NIST at its best. The quantum-resistant algorithms will be standardized far in advance of any practical quantum computer, which is how we all want this sort of thing to go.

Adversarial Machine Learning and the CFAA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/adversarial_mac_1.html

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial Machine Learning is booming with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask, “What are the potential legal risks to adversarial ML researchers when they attack ML systems?” Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that Adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
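To make one of the attack classes named in the abstract concrete: membership inference exploits the fact that overfit models tend to be more confident on examples they were trained on. The toy sketch below uses a bare confidence threshold with made-up numbers; it is a simplified illustration of the idea, not the paper's analysis or any production attack.

```python
# Simulated top-class confidences (hypothetical numbers): an overfit
# model is typically more confident on its own training examples than
# on previously unseen ones.
train_confidences = [0.99, 0.97, 0.95, 0.98]  # inputs the model saw
test_confidences = [0.70, 0.62, 0.88, 0.55]   # inputs it did not

def infer_membership(confidence: float, threshold: float = 0.9) -> bool:
    """Guess 'training-set member' when model confidence exceeds a threshold."""
    return confidence > threshold

members_flagged = sum(infer_membership(c) for c in train_confidences)
nonmembers_flagged = sum(infer_membership(c) for c in test_confidences)
print(members_flagged, nonmembers_flagged)  # 4 0
```

The legal question the paper raises is whether running such queries against a commercial model's API, without authorization to probe it, could itself trigger CFAA liability.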

Medium post on the paper. News article, which uses our graphic without attribution.

Fawkes: Digital Image Cloaking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/fawkes_digital_.html

Fawkes is a system for manipulating digital images so that they aren’t recognized by facial recognition systems.

At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.
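The "invisible to the human eye" property comes from bounding how far any pixel moves. Fawkes computes its perturbation by optimizing against feature extractors; the sketch below substitutes random noise just to show the L-infinity bound that keeps changes imperceptible. All values here are illustrative assumptions.

```python
import random

def cloak(pixels, epsilon=2, seed=0):
    """Add a small bounded perturbation to each 0-255 pixel value.

    This is NOT Fawkes' actual algorithm (which optimizes the
    perturbation to distort face-recognition features); it only
    demonstrates the perceptual constraint: every pixel moves by
    at most `epsilon` intensity levels.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [
        max(0, min(255, p + rng.randint(-epsilon, epsilon)))
        for p in pixels
    ]

original = [10, 128, 250, 255, 0]
cloaked = cloak(original)
# The perturbation stays within the imperceptibility bound:
assert all(abs(a - b) <= 2 for a, b in zip(original, cloaked))
```

The hard part, which Fawkes solves with an optimization loop, is making such a tiny perturbation nonetheless move the image far away in a recognition model's feature space.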

Research paper.

Hacking a Power Supply

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/hacking_a_power.html

This hack targets the firmware on modern power supplies. (Yes, power supplies are also computers.)

Normally, when a phone is connected to a power brick with support for fast charging, the phone and the power adapter communicate with each other to determine the proper amount of electricity that can be sent to the phone without damaging the device — the more juice the power adapter can send, the faster it can charge the phone.

However, by hacking the fast charging firmware built into a power adapter, Xuanwu Labs demonstrated that bad actors could potentially manipulate the power brick into sending more electricity than a phone can handle, thereby overheating the phone, melting internal components, or as Xuanwu Labs discovered, setting the device on fire.
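The root of the problem is that the negotiation trusts the adapter. A defensive sketch of the device-side logic, with hypothetical voltage values (this is not the actual Quick Charge or USB-PD protocol):

```python
# Hypothetical device-side negotiation: never trust the adapter's
# advertised capability; clamp any offer to the device's own safe limit.
DEVICE_MAX_VOLTAGE = 9.0  # volts this handset can safely accept (assumed)

def negotiate_voltage(adapter_offer: float) -> float:
    """Accept the adapter's offered voltage only up to the device's limit.

    A compromised adapter may offer more than the device can handle;
    clamping on the device side limits what a lying charger can
    negotiate for.
    """
    return min(adapter_offer, DEVICE_MAX_VOLTAGE)

print(negotiate_voltage(5.0))   # 5.0 — normal charging, offer accepted
print(negotiate_voltage(20.0))  # 9.0 — excessive offer clamped
```

Note that software clamping alone is not sufficient: a malicious adapter can simply deliver overvoltage regardless of what was negotiated, which is why the researchers' recommendations also involve hardware overvoltage protection in the device's charging circuitry.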

Research paper, in Chinese.

On the Twitter Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/on_the_twitter_.html

Twitter was hacked this week. Not a few people’s Twitter accounts, but all of Twitter. Someone compromised the entire Twitter network, probably by stealing the log-in credentials of one of Twitter’s system administrators. Those are the people trusted to ensure that Twitter functions smoothly.

The hacker used that access to send tweets from a variety of popular and trusted accounts, including those of Joe Biden, Bill Gates, and Elon Musk, as part of a mundane scam — stealing bitcoin — but it’s easy to envision more nefarious scenarios. Imagine a government using this sort of attack against another government, coordinating a series of fake tweets from hundreds of politicians and other public figures the day before a major election, to affect the outcome. Or to escalate an international dispute. Done well, it would be devastating.

Whether the hackers had access to Twitter direct messages is not known. These DMs are not end-to-end encrypted, meaning that they are unencrypted inside Twitter’s network and could have been available to the hackers. Those messages — between world leaders, industry CEOs, reporters and their sources, health organizations — are much more valuable than bitcoin. (If I were a national-intelligence agency, I might even use a bitcoin scam to mask my real intelligence-gathering purpose.) Back in 2018, Twitter said it was exploring encrypting those messages, but it hasn’t yet.

Internet communications platforms — such as Facebook, Twitter, and YouTube — are crucial in today’s society. They’re how we communicate with one another. They’re how our elected leaders communicate with us. They are essential infrastructure. Yet they are run by for-profit companies with little government oversight. This is simply no longer sustainable. Twitter and companies like it are essential to our national dialogue, to our economy, and to our democracy. We need to start treating them that way, and that means both requiring them to do a better job on security and breaking them up.

In the Twitter case this week, the hacker’s tactics weren’t particularly sophisticated. We will almost certainly learn about security lapses at Twitter that enabled the hack, possibly including a SIM-swapping attack that targeted an employee’s cellular service provider, or maybe even a bribed insider. The FBI is investigating.

This kind of attack is known as a “class break.” Class breaks are endemic to computerized systems, and they’re not something that we as users can defend against with better personal security. It didn’t matter whether individual accounts had a complicated and hard-to-remember password, or two-factor authentication. It didn’t matter whether the accounts were normally accessed via a Mac or a PC. There was literally nothing any user could do to protect against it.

Class breaks are security vulnerabilities that break not just one system, but an entire class of systems. They might exploit a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs on that system’s software. Or a vulnerability in internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet. Or a single vulnerability in the Twitter network that allows an attacker to take over every account.

For Twitter users, this attack was a double whammy. Many people rely on Twitter’s authentication systems to know that someone who purports to be a certain celebrity, politician, or journalist is really that person. When those accounts were hijacked, trust in that system took a beating. And then, after the attack was discovered and Twitter temporarily shut down all verified accounts, the public lost a vital source of information.

There are many security technologies companies like Twitter can implement to better protect themselves and their users; that’s not the issue. The problem is economic, and fixing it requires doing two things. One is regulating these companies, and requiring them to spend more money on security. The second is reducing their monopoly power.

The security regulations for banks are complex and detailed. If a low-level banking employee were caught messing around with people’s accounts, or if she mistakenly gave her log-in credentials to someone else, the bank would be severely fined. Depending on the details of the incident, senior banking executives could be held personally liable. The threat of these actions helps keep our money safe. Yes, it costs banks money; sometimes it severely cuts into their profits. But the banks have no choice.

The opposite is true for these tech giants. They get to decide what level of security you have on your accounts, and you have no say in the matter. If you are offered security and privacy options, it’s because they decided you can have them. There is no regulation. There is no accountability. There isn’t even any transparency. Do you know how secure your data is on Facebook, or in Apple’s iCloud, or anywhere? You don’t. No one except those companies do. Yet they’re crucial to the country’s national security. And they’re the rare consumer product or service allowed to operate without significant government oversight.

For example, President Donald Trump’s Twitter account wasn’t hacked as Joe Biden’s was, because that account has “special protections,” the details of which we don’t know. We also don’t know which other world leaders have those protections, or the decision process surrounding who gets them. Are they manual? Can they scale? Can all verified accounts have them? Your guess is as good as mine.

In addition to security measures, the other solution is to break up the tech monopolies. Companies like Facebook and Twitter have so much power because they are so large, and they face no real competition. This is a national-security risk as well as a personal-security risk. Were there 100 different Twitter-like companies, and enough compatibility so that all their feeds could merge into one interface, this attack wouldn’t have been such a big deal. More important, the risk of a similar but more politically targeted attack wouldn’t be so great. If there were competition, different platforms would offer different security options, as well as different posting rules, different authentication guidelines — different everything. Competition is how our economy works; it’s how we spur innovation. Monopolies have more power to do what they want in the quest for profits, even if it harms people along the way.

This wasn’t Twitter’s first security problem involving trusted insiders. In 2017, on his last day of work, an employee shut down President Donald Trump’s account. In 2019, two people were charged with spying for the Saudi government while they were Twitter employees.

Maybe this hack will serve as a wake-up call. But if past incidents involving Twitter and other companies are any indication, it won’t. Underspending on security, and letting society pay the eventual price, is far more profitable. I don’t blame the tech companies. Their corporate mandate is to make as much money as is legally possible. Fixing this requires changes in the law, not changes in the hearts of the company’s leaders.

This essay previously appeared on TheAtlantic.com.