Tag Archives: vulnerabilities

Russia’s SolarWinds Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/russias-solarwinds-attack.html

Recent news articles have all been talking about the massive Russian cyberattack against the United States, but that’s wrong on two counts. It wasn’t a cyberattack in international relations terms, it was espionage. And the victim wasn’t just the US, it was the entire world. But it was massive, and it is dangerous.

Espionage is internationally allowed in peacetime. The problem is that both espionage and cyberattacks require the same computer and network intrusions, and the difference is only a few keystrokes. And since this Russian operation isn’t at all targeted, the entire world is at risk — and not just from Russia. Many countries carry out these sorts of operations, none more extensively than the US. The solution is to prioritize security and defense over espionage and attack.

Here’s what we know: Orion is a network management product from a company named SolarWinds, with over 300,000 customers worldwide. Sometime before March, hackers working for the Russian SVR — previously known as the KGB — hacked into SolarWinds and slipped a backdoor into an Orion software update. (We don’t know how, but last year the company’s update server was protected by the password “solarwinds123” — something that speaks to a lack of security culture.) Users who downloaded and installed that corrupted update between March and June unwittingly gave SVR hackers access to their networks.

This is called a supply-chain attack, because it targets a supplier to an organization rather than an organization itself — and can affect all of a supplier’s customers. It’s an increasingly common way to attack networks. Other examples of this sort of attack include fake apps in the Google Play store, and hacked replacement screens for your smartphone.

SolarWinds has removed its customer list from its website, but the Internet Archive saved it: all five branches of the US military, the state department, the White House, the NSA, 425 of the Fortune 500 companies, all five of the top five accounting firms, and hundreds of universities and colleges. In an SEC filing, SolarWinds said that it believes “fewer than 18,000” of those customers installed this malicious update, another way of saying that more than 17,000 did.

That’s a lot of vulnerable networks, and it’s inconceivable that the SVR penetrated them all. Instead, it chose carefully from its cornucopia of targets. Microsoft’s analysis identified 40 customers who were infiltrated using this vulnerability. The great majority of those were in the US, but networks in Canada, Mexico, Belgium, Spain, the UK, Israel and the UAE were also targeted. This list includes governments, government contractors, IT companies, thinktanks, and NGOs — and it will certainly grow.

Once inside a network, SVR hackers followed a standard playbook: establish persistent access that will remain even if the initial vulnerability is fixed; move laterally around the network by compromising additional systems and accounts; and then exfiltrate data. Not being a SolarWinds customer is no guarantee of security; this SVR operation used other initial infection vectors and techniques as well. These are sophisticated and patient hackers, and we’re only just learning some of the techniques involved here.

Recovering from this attack isn’t easy. Because any SVR hackers would establish persistent access, the only way to ensure that your network isn’t compromised is to burn it to the ground and rebuild it, similar to reinstalling your computer’s operating system to recover from a bad hack. This is how a lot of sysadmins are going to spend their Christmas holiday, and even then they can’t be sure. There are many ways to establish persistent access that survive rebuilding individual computers and networks. We know, for example, of an NSA exploit that remains on a hard drive even after it is reformatted. Code for that exploit was part of the Equation Group tools that the Shadow Brokers — again believed to be Russia — stole from the NSA and published in 2016. The SVR probably has the same kinds of tools.

Even without that caveat, many network administrators won’t go through the long, painful, and potentially expensive rebuilding process. They’ll just hope for the best.

It’s hard to overstate how bad this is. We are still learning about US government organizations breached: the state department, the treasury department, homeland security, the Los Alamos and Sandia National Laboratories (where nuclear weapons are developed), the National Nuclear Security Administration, the National Institutes of Health, and many more. At this point, there’s no indication that any classified networks were penetrated, although that could change easily. It will take years to learn which networks the SVR has penetrated, and where it still has access. Much of that will probably be classified, which means that we, the public, will never know.

And now that the Orion vulnerability is public, other governments and cybercriminals will use it to penetrate vulnerable networks. I can guarantee you that the NSA is using the SVR’s hack to infiltrate other networks; why would they not? (Do any Russian organizations use Orion? Probably.)

While this is a security failure of enormous proportions, it is not, as Senator Richard Durbin said, “virtually a declaration of war by Russia on the United States.” While President-elect Biden said he will make this a top priority, it’s unlikely that he will do much to retaliate.

The reason is that, by international norms, Russia did nothing wrong. This is the normal state of affairs. Countries spy on each other all the time. There are no rules or even norms, and it’s basically “buyer beware.” The US regularly fails to retaliate against espionage operations — such as China’s hack of the Office of Personnel Management (OPM) and previous Russian hacks — because we do it, too. Speaking of the OPM hack, the then director of national intelligence, James Clapper, said: “You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don’t think we’d hesitate for a minute.”

We don’t, and I’m sure NSA employees are grudgingly impressed with the SVR. The US has by far the most extensive and aggressive intelligence operation in the world. The NSA’s budget is the largest of any intelligence agency. It aggressively leverages the US’s position controlling most of the internet backbone and most of the major internet companies. Edward Snowden disclosed many targets of its efforts around 2014, which then included 193 countries, the World Bank, the IMF and the International Atomic Energy Agency. We are undoubtedly running an offensive operation on the scale of this SVR operation right now, and it’ll probably never be made public. In 2016, President Obama boasted that we have “more capacity than anybody both offensively and defensively.”

He may have been too optimistic about our defensive capability. The US prioritizes and spends many times more on offense than on defensive cybersecurity. In recent years, the NSA has adopted a strategy of “persistent engagement,” sometimes called “defending forward.” The idea is that instead of passively waiting for the enemy to attack our networks and infrastructure, we go on the offensive and disrupt attacks before they get to us. This strategy was credited with foiling a plot by the Russian Internet Research Agency to disrupt the 2018 elections.

But if persistent engagement is so effective, how could it have missed this massive SVR operation? It seems that pretty much the entire US government was unknowingly sending information back to Moscow. If we had been watching everything the Russians were doing, we would have seen some evidence of this. The Russians’ success under the watchful eye of the NSA and US Cyber Command shows that this is a failed approach.

And how did US defensive capability miss this? The only reason we know about this breach is because, earlier this month, the security company FireEye discovered that it had been hacked. During its own audit of its network, it uncovered the Orion vulnerability and alerted the US government. Why don’t organizations like the Departments of State, Treasury and Homeland Security regularly conduct that level of audit on their own systems? The government’s intrusion detection system, Einstein 3, failed here because it doesn’t detect new sophisticated attacks — a deficiency pointed out in 2018 but never fixed. We shouldn’t have to rely on a private cybersecurity company to alert us to a major nation-state attack.

If anything, the US’s prioritization of offense over defense makes us less safe. In the interests of surveillance, the NSA has pushed for an insecure cell phone encryption standard and a backdoor in random number generators (important for secure encryption). The DoJ has never relented in its insistence that the world’s popular encryption systems be made insecure through back doors — another hot point where attack and defense are in conflict. In other words, we allow for insecure standards and systems, because we can use them to spy on others.

We need to adopt a defense-dominant strategy. As computers and the internet become increasingly essential to society, cyberattacks are likely to be the precursor to actual war. We are simply too vulnerable when we prioritize offense, even if we have to give up the advantage of using those insecurities to spy on others.

Our vulnerability is magnified as eavesdropping may bleed into a direct attack. The SVR’s access allows them not only to eavesdrop, but also to modify data, degrade network performance, or erase entire networks. The first might be normal spying, but the second certainly could be considered an act of war. Russia is almost certainly laying the groundwork for future attack.

This preparation would not be unprecedented. There’s a lot of attack going on in the world. In 2010, the US and Israel attacked the Iranian nuclear program. In 2012, Iran attacked the Saudi national oil company. North Korea attacked Sony in 2014. Russia attacked the Ukrainian power grid in 2015 and 2016. Russia is hacking the US power grid, and the US is hacking Russia’s power grid — just in case the capability is needed someday. All of these attacks began as a spying operation. Security vulnerabilities have real-world consequences.

We’re not going to be able to secure our networks and systems in this no-rules, free-for-all every-network-for-itself world. The US needs to willingly give up part of its offensive advantage in cyberspace in exchange for a vastly more secure global cyberspace. We need to invest in securing the world’s supply chains from this type of attack, and to press for international norms and agreements prioritizing cybersecurity, like the 2018 Paris Call for Trust and Security in Cyberspace or the Global Commission on the Stability of Cyberspace. Hardening widely used software like Orion (or the core internet protocols) helps everyone. We need to dampen this offensive arms race rather than exacerbate it, and work towards cyber peace. Otherwise, hypocritically criticizing the Russians for doing the same thing we do every day won’t help create the safer world in which we all want to live.

This essay previously appeared in the Guardian.

Impressive iPhone Exploit

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/impressive-iphone-exploit.html

This is a scarily impressive vulnerability:

Earlier this year, Apple patched one of the most breathtaking iPhone vulnerabilities ever: a memory corruption bug in the iOS kernel that gave attackers remote access to the entire device — over Wi-Fi, with no user interaction required at all. Oh, and exploits were wormable — meaning radio-proximity exploits could spread from one nearby device to another, once again, with no user interaction needed.

[…]

Beer’s attack worked by exploiting a buffer overflow bug in a driver for AWDL, an Apple-proprietary mesh networking protocol that makes things like AirDrop work. Because drivers reside in the kernel — one of the most privileged parts of any operating system — the AWDL flaw had the potential for serious hacks. And because AWDL parses Wi-Fi packets, exploits can be transmitted over the air, with no indication that anything is amiss.

[…]

Beer developed several different exploits. The most advanced one installs an implant that has full access to the user’s personal data, including emails, photos, messages, and passwords and crypto keys stored in the keychain. The attack uses a laptop, a Raspberry Pi, and some off-the-shelf Wi-Fi adapters. It takes about two minutes to install the prototype implant, but Beer said that with more work a better written exploit could deliver it in a “handful of seconds.” Exploits work only on devices that are within Wi-Fi range of the attacker.

There is no evidence that this vulnerability was ever used in the wild.

EDITED TO ADD: Slashdot thread.

SAD DNS Explained

Post Syndicated from Marek Vavruša original https://blog.cloudflare.com/sad-dns-explained/


This week, at the ACM CCS 2020 conference, researchers from UC Riverside and Tsinghua University announced a new attack against the Domain Name System (DNS) called SAD DNS (Side channel AttackeD DNS). This attack leverages recent features of the networking stack in modern operating systems (like Linux) to allow attackers to revive a classic attack category: DNS cache poisoning. As part of a coordinated disclosure effort earlier this year, the researchers contacted Cloudflare and other major DNS providers and we are happy to announce that 1.1.1.1 Public Resolver is no longer vulnerable to this attack.

In this post, we’ll explain what the vulnerability was, how it relates to previous attacks of this sort, what mitigation measures we have taken to protect our users, and future directions the industry should consider to prevent this class of attacks from being a problem in the future.

DNS Basics

The Domain Name System (DNS) is what allows users of the Internet to get around without memorizing long sequences of numbers. What’s often called the “phonebook of the Internet” is more like a helpful system of translators that take natural language domain names (like blog.cloudflare.com or gov.uk) and translate them into the native language of the Internet: IP addresses (like 192.0.2.254 or [2001:db8::cf]). This translation happens behind the scenes so that users only need to remember hostnames and don’t have to get bogged down with remembering IP addresses.
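A quick way to see this translation in action is to ask the operating system's own stub resolver. This is a minimal Python sketch; the hostname is just an example, and the addresses returned will vary:

    import socket

    # Ask the OS stub resolver to translate a hostname into IP addresses.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "blog.cloudflare.com", 443, proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])   # e.g. AF_INET 104.x.x.x, AF_INET6 2606:...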

DNS is both a system and a protocol. It refers to the hierarchical system of computers that manage the data related to naming on a network and it refers to the language these computers use to speak to each other to communicate answers about naming. The DNS protocol consists of pairs of messages that correspond to questions and responses. Each DNS question (query) and answer (reply) follows a standard format and contains a set of parameters that contain relevant information such as the name of interest (such as blog.cloudflare.com) and the type of response record desired (such as A for IPv4 or AAAA for IPv6).

The DNS Protocol and Spoofing

These DNS messages are exchanged over a network between machines using a transport protocol. Originally, DNS used UDP, a simple stateless protocol in which messages are endowed with a set of metadata indicating a source port and a destination port. More recently, DNS has adapted to use more complex transport protocols such as TCP and even advanced protocols like TLS or HTTPS, which incorporate encryption and strong authentication into the mix (see Peter Wu’s blog post about DNS protocol encryption).

Still, the most common transport protocol for message exchange is UDP, which has the advantages of being fast, ubiquitous and requiring no setup. Because UDP is stateless, the pairing of a response to an outstanding query is based on two main factors: the source address and port pair, and information in the DNS message. Given that UDP is both stateless and unauthenticated, anyone, and not just the recipient, can send a response with a forged source address and port, which opens up a range of potential problems.

[Figure: DNS query and response message layout. The blue portions contribute randomness.]

Since the transport layer is inherently unreliable and untrusted, the DNS protocol was designed with additional mechanisms to protect against forged responses. The first two bytes in the message form a message or transaction ID that must be the same in the query and response. When a DNS client sends a query, it will set the ID to a random value and expect the value in the response to match. This unpredictability introduces entropy into the protocol, which makes it less likely that a malicious party will be able to construct a valid DNS reply without first seeing the query. Other fields, such as the DNS query name and query type, are also used to pair query and response, but these are trivial to guess and don’t introduce additional entropy.
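To make that pairing concrete, here is a minimal Python sketch (not production resolver code) that builds a DNS query by hand, sends it over UDP to 1.1.1.1, and verifies that the 16-bit transaction ID in the reply matches the randomly chosen one; the query name is just an example:

    import random
    import socket
    import struct

    def build_query(name, qtype=1):                        # qtype 1 = A record
        txid = random.randrange(1 << 16)                   # 16 bits of entropy
        header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD flag set, 1 question
        question = b"".join(
            bytes([len(label)]) + label.encode() for label in name.split(".")
        ) + b"\x00" + struct.pack(">HH", qtype, 1)         # QTYPE, QCLASS=IN
        return txid, header + question

    txid, query = build_query("blog.cloudflare.com")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(query, ("1.1.1.1", 53))                    # the OS picks a random source port
    reply, _ = sock.recvfrom(512)
    reply_id = struct.unpack(">H", reply[:2])[0]
    print("IDs match:", reply_id == txid)                  # the only check plain UDP DNS gives us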

Those paying close attention to the diagram may notice that the amount of entropy introduced by this measure is only around 16 bits, which means that there are only 65,536 possibilities to go through to find the matching reply to a given query. More on this later.

The DNS Ecosystem

DNS servers fall into a few main categories: recursive resolvers (like 1.1.1.1 or 8.8.8.8) and authoritative nameservers (like the DNS root servers or Cloudflare Authoritative DNS). There are also elements of the ecosystem that act as “forwarders” such as dnsmasq. In a typical DNS lookup, these DNS servers work together to complete the task of delivering the IP address for a specified domain to the client (the client is usually a stub resolver – a simple resolver built into an operating system). For more detailed information about the DNS ecosystem, take a look at our learning site. The SAD DNS attack targets the communication between recursive resolvers and nameservers.

Each of the participants in DNS (client, resolver, nameserver) uses the DNS protocol to communicate with each other. Most of the latest innovations in DNS revolve around upgrading the transport between users and recursive resolvers to use encryption. Upgrading the transport protocol between resolvers and authoritative servers is a bit more complicated, as it requires a new discovery mechanism to instruct the resolver when to (and when not to) use a more secure channel. Aside from a few examples like our work with Facebook to encrypt recursive-to-authoritative traffic with DNS-over-TLS, most of these exchanges still happen over UDP. This is the core issue that enables this new attack on DNS, and one that we’ve seen before.

Kaminsky’s Attack

Prior to 2008, recursive resolvers typically used a single open port (usually port 53) to send and receive messages to authoritative nameservers. This made guessing the source port trivial, so the only variable an attacker needed to guess to forge a response to a query was the 16-bit message ID. The attack Kaminsky described was relatively simple: whenever a recursive resolver queried the authoritative name server for a given domain, an attacker would flood the resolver with DNS responses for some or all of the 65 thousand or so possible message IDs. If the malicious answer with the right message ID arrived before the response from the authoritative server, then the DNS cache would be effectively poisoned, returning the attacker’s chosen answer instead of the real one for as long as the DNS response was valid (called the TTL, or time-to-live).

For popular domains, resolvers contact authoritative servers once per TTL (which can be as short as 5 minutes), so there are plenty of opportunities to mount this attack. Forwarders that cache DNS responses are also vulnerable to this type of attack.
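Some rough, assumed numbers show why those repeated opportunities matter. If an attacker can land about 100 forged replies inside each race window, the expected time to a successful poisoning is measured in days, not years:

    ids = 2 ** 16                 # possible message IDs when the source port is fixed
    forged_per_window = 100       # assumption: forged replies that beat the real answer
    p_win = forged_per_window / ids
    windows = 1 / p_win           # expected number of race windows needed
    hours = windows * 5 / 60      # one window per 5-minute TTL expiry
    print(f"chance per window: {p_win:.2%}, expected windows: {windows:.0f} (~{hours:.0f} hours)")

That works out to roughly a 0.15% chance per window and about 655 windows, or a couple of days of continuous attempts.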


In response to this attack, DNS resolvers started doing source port randomization and careful checking of the security ranking of cached data. To poison these updated resolvers, forged responses would not only need to guess the message ID, but they would also have to guess the source port, bringing the number of guesses from the tens of thousands to over four billion. This made the attack effectively infeasible. Furthermore, the IETF published RFC 5452 on how to harden DNS from guessing attacks.
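A back-of-the-envelope calculation (assuming a full 16 bits of usable port entropy) shows why combining the two is so much harder to brute force:

    message_ids = 2 ** 16
    source_ports = 2 ** 16        # in practice slightly fewer usable ephemeral ports
    print(f"ID only:   {message_ids:,}")                     # 65,536
    print(f"ID x port: {message_ids * source_ports:,}")      # about 4.3 billion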

It should be noted that this attack did not work for DNSSEC-signed domains since their answers are digitally signed. However, even now in 2020, DNSSEC is far from universal.

Defeating Source Port Randomization with Fragmentation

Another way to avoid having to guess the source port number and message ID is to split the DNS response in two. As is often the case in computer security, old attacks become new again when attackers discover new capabilities. In 2012, researchers Amir Herzberg and Haya Schulman from Bar Ilan University discovered that it was possible for a remote attacker to defeat the protections provided by source port randomization. This new attack leveraged another feature of UDP: fragmentation. For a primer on the topic of UDP fragmentation, check out our previous blog post on the subject by Marek Majkowski.

The key to this attack is the fact that all the randomness that needs to be guessed in a DNS poisoning attack is concentrated at the beginning of the DNS message (UDP header and DNS header). If the UDP response packet (sometimes called a datagram) is split into two fragments, the first half containing the message ID and source port and the second containing part of the DNS response, then all an attacker needs to do is forge the second fragment and make sure that the fake second fragment arrives at the resolver before the true second fragment does. When a datagram is fragmented, each fragment is assigned a 16-bit ID (called the IP-ID), which is used to reassemble it at the other end of the connection. Since the second fragment only has the IP-ID as entropy (again, this is a familiar refrain in this area), this attack is feasible with a relatively small number of forged packets. The downside of this attack is the precondition that the response must be fragmented in the first place, and the fragment must be carefully altered to pass the original section counts and UDP checksum.


Also discussed in the original and follow-up papers is a method of forcing two remote servers to send packets between each other which are fragmented at an attacker-controlled point, making this attack much more feasible. The details are in the paper, but it boils down to the fact that the control mechanism for describing the maximum transmission unit (MTU) between two servers — which determines at which point packets are fragmented — can be set via a forged UDP packet.


We explored this risk in a previous blog post in the context of certificate issuance last year when we introduced our multi-path DCV service, which mitigates this risk in the context of certificate issuance by making DNS queries from multiple vantage points. Nevertheless, fragmentation-based attacks are proving less and less effective as DNS providers move to eliminate support for fragmented DNS packets (one of the major goals of DNS Flag Day 2020).

Defeating Source Port Randomization via ICMP error messages

Another way to defeat the source port randomization is to use some measurable property of the server that makes the source port easier to guess. If the attacker could ask the server which port number is being used for a pending query, that would make the construction of a spoofed packet much easier. No such thing exists, but it turns out there is something close enough – the attacker can discover which ports are surely closed (and thus avoid having to send traffic). One such mechanism is the ICMP “port unreachable” message.

Let’s say the target receives a UDP datagram destined for its IP and some port. The datagram either ends up being accepted (and possibly silently discarded by the application), or rejected because the port is closed. If the port is closed, or more importantly, closed to the IP address that the UDP datagram was sent from, the target will send back an ICMP message notifying the attacker that the port is closed. This is handy to know, since the attacker now doesn’t have to bother trying to guess the pending message ID on that port and can move on to other ports. A single scan of the server effectively reduces the search space of valid UDP responses from 2^32 (over four billion) to 2^17 (around a hundred thousand), at least in theory: instead of guessing port and message ID together, the attacker can first find the open port with at most 2^16 probes and only then guess the 2^16 possible message IDs, turning a product into a sum.

This trick doesn’t always work. Many resolvers use “connected” UDP sockets instead of “open” UDP sockets to exchange messages between the resolver and nameserver. Connected sockets are tied to the peer address and port on the OS layer, which makes it impossible for an attacker to guess which “connected” UDP sockets are established between the target and the victim, and since the attacker isn’t the victim, it can’t directly observe the outcome of the probe.

To overcome this, the researchers found a very clever trick: they leverage ICMP rate limits as a side channel to reveal whether a given port is open or not. ICMP rate limiting was introduced (somewhat ironically, given this attack) as a security feature to prevent a server from being used as an unwitting participant in a reflection attack. In broad terms, it is used to limit how many ICMP responses a server will send out in a given time period. Say an attacker wanted to scan 10,000 ports and sent a burst of 10,000 UDP packets to a server configured with an ICMP rate limit of 50 per second, then only the first 50 would get an ICMP “port unreachable” message in reply.

Rate limiting seems innocuous until you remember one of the core rules of data security: don’t let private information influence publicly measurable metrics. ICMP rate limiting violates this rule because the rate limiter’s behavior can be influenced by an attacker making guesses as to whether a “secret” port number is open or not.

don’t let private information influence publicly measurable metrics

An attacker wants to know whether the target has an open port, so it sends a spoofed UDP message from the authoritative server to that port. If the port is open, no ICMP reply is sent and the rate counter remains unchanged. If the port is inaccessible, then an ICMP reply is sent (back to the authoritative server, not to the attacker) and the rate is increased by one. Although the attacker doesn’t see the ICMP response, it has influenced the counter. The counter itself isn’t known outside the server, but whether it has hit the rate limit or not can be measured by any outside observer by sending a UDP packet and waiting for a reply. If an ICMP “port unreachable” reply comes back, the rate limit hasn’t been reached. No reply means the rate limit has been met. This leaks one bit of information about the counter to the outside observer, which in the end is enough to reveal the supposedly secret information (whether the spoofed request got through or not).

[Figure: diagram inspired by the original paper]

Concretely, the attack works as follows: the attacker sends a bunch (large enough to trigger the rate limiting) of probe messages to the target, but with a forged source address of the victim. In the case where there are no open ports in the probed set, the target will send out the same amount of ICMP “port unreachable” responses back to the victim and trigger the rate limit on outgoing ICMP messages. The attacker can now send an additional verification message from its own address and observe whether an ICMP response comes back or not. If it does then there was at least one port open in the set and the attacker can divide the set and try again, or do a linear scan by inserting the suspected port number into a set of known closed ports. Using this approach, the attacker can narrow down to the open ports and try to guess the message ID until it is successful or gives up, similarly to the original Kaminsky attack.
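The following toy Python simulation sketches that logic under simplified assumptions (one secret open port, a global budget of 50 ICMP replies per "second", no packet loss or timing noise). It is meant only to illustrate the side channel, not to reproduce the real attack:

    import random

    RATE_LIMIT = 50                                   # ICMP replies the target will send per second
    SECRET_OPEN_PORT = random.randrange(1024, 65536)  # the ephemeral port the attacker wants to learn

    def batch_has_open_port(ports):
        """Simulate one second: spoofed probes (source address forged as the
        victim nameserver's) to `ports`, then one probe from the attacker's own
        address. Returns True if the attacker's probe still gets an ICMP reply."""
        icmp_used = sum(1 for p in ports if p != SECRET_OPEN_PORT)  # closed ports consume the budget
        return icmp_used < RATE_LIMIT      # budget left over -> some probe hit an open port

    def scan(lo=1024, hi=65536):
        ports = list(range(lo, hi))
        # Phase 1: find which batch of 50 ports contains the open port.
        start = next(b for b in range(0, len(ports), RATE_LIMIT)
                     if batch_has_open_port(ports[b:b + RATE_LIMIT]))
        # Phase 2: within that batch, test each candidate padded with known-closed ports.
        padding = [0] * (RATE_LIMIT - 1)   # port 0 stands in for known-closed ports
        return next(p for p in ports[start:start + RATE_LIMIT]
                    if batch_has_open_port([p] + padding))

    print("found:", scan(), "actual:", SECRET_OPEN_PORT)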

In practice there are some hurdles to successfully mounting this attack.

  • First, the target IP, or a set of target IPs, must be discovered. This might be trivial in some cases – a single forwarder, or a fixed set of IPs that can be discovered by probing and observing attacker-controlled zones – but more difficult if the target IPs are partitioned across zones, as the attacker can’t see the resolver egress IP unless she can monitor the traffic for the victim domain.
  • The attack also requires a large enough ICMP outgoing rate limit in order to be able to scan with a reasonable speed. The scan speed is critical, as it must be completed while the query to the victim nameserver is still pending. As the scan speed is effectively fixed, the paper instead describes a method to potentially extend the window of opportunity by triggering the victim’s response rate limiting (RRL), a technique to protect against floods of forged DNS queries. This may work if the victim implements RRL and the target resolver doesn’t implement a retry over TCP (A Quantitative Study of the Deployment of DNS Rate Limiting shows about 16% of nameservers implement some sort of RRL).
  • Generally, busy resolvers will have ephemeral ports opening and closing, which introduces false positive open ports for the attacker, and ports open for different pending queries than the one being attacked.

We’ve implemented an additional mitigation in 1.1.1.1 to prevent message ID guessing – if the resolver detects an ID enumeration attempt, it will stop accepting any more guesses and switch over to TCP. This reduces the number of attempts available to the attacker even if it guesses the IP address and port correctly, similarly to how the number of password login attempts is limited.
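A rough sketch of that kind of defense (not Cloudflare's actual implementation) might look like the following: count UDP replies whose message ID does not match the pending query, and once a threshold of bad guesses is reached, retry the query over TCP, which cannot be blindly spoofed:

    GUESS_LIMIT = 16            # assumed threshold; the real value is an implementation detail

    class PendingQuery:
        def __init__(self, txid):
            self.txid = txid
            self.bad_ids = 0

        def on_udp_reply(self, reply_txid):
            if reply_txid == self.txid:
                return "accept"
            self.bad_ids += 1                # right port, wrong ID: a possible guess
            if self.bad_ids >= GUESS_LIMIT:  # looks like an ID enumeration attempt
                return "retry-over-tcp"
            return "drop"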

Outlook

Ultimately these are just mitigations, and the attacker might be willing to play the long game. As long as the transport layer is insecure and DNSSEC is not widely deployed, there will be different methods of chipping away at these mitigations.

It should be noted that trying to hide source IPs or open port numbers is a form of security through obscurity. Without strong cryptographic authentication, it will always be possible to use spoofing to poison DNS resolvers. The silver lining here is that DNSSEC exists, and is designed to protect against this type of attack, and DNS servers are moving to explore cryptographically strong transports such as TLS for communicating between resolvers and authoritative servers.

At Cloudflare, we’ve been helping to reduce the friction of DNSSEC deployment, while also helping to improve transport security in the long run. There are also efforts to increase entropy in DNS messages with DNS Cookies (RFC 7873 – Domain Name System (DNS) Cookies) and to make DNS-over-TCP support mandatory (RFC 7766 – DNS Transport over TCP – Implementation Requirements), with even more documentation around ways to mitigate this type of issue available in different places. All of these efforts are complementary, which is a good thing. The DNS ecosystem consists of many different parties and software with different requirements and opinions; as long as operators support at least one of the preventive measures, these types of attacks will become more and more difficult.

If you are an operator of an authoritative DNS server, you should consider taking the steps discussed above to protect yourself from this attack: sign your zones with DNSSEC, support DNS over TCP (RFC 7766), enable DNS Cookies (RFC 7873), and consider encrypted transports such as DNS over TLS for resolver-to-authoritative traffic.

We’d like to thank the researchers for responsibly disclosing this attack and look forward to working with them in the future on efforts to strengthen the DNS.

Tracking Users on Waze

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/tracking-users-on-waze.html

A security researcher discovered a vulnerability in Waze that breaks the anonymity of users:

I found out that I can visit Waze from any web browser at waze.com/livemap so I decided to check how are those driver icons implemented. What I found is that I can ask Waze API for data on a location by sending my latitude and longitude coordinates. Except the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eyes was that identification numbers (ID) associated with the icons were not changing over time. I decided to track one driver and after some time she really appeared in a different place on the same road.

The vulnerability has been fixed. More interesting is that the researcher was able to de-anonymize some of the Waze users, proving yet again that anonymity is hard when we’re all so different.

NSA Advisory on Chinese Government Hacking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/nsa-advisory-on-chinese-government-hacking.html

The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers.

This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching.

Hacking Apple for Profit

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/hacking-apple-for-profit.html

Five researchers hacked Apple Computer’s networks — not their products — and found fifty-five vulnerabilities. So far, they have received $289K.

One of the worst of all the bugs they found would have allowed criminals to create a worm that would automatically steal all the photos, videos, and documents from someone’s iCloud account and then do the same to the victim’s contacts.

Lots of details in this blog post by one of the hackers.

Hacking a Coffee Maker

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/hacking-a-coffee-maker.html

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the older coffee makers to see what kinds of hacks he could do with it. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord.

[…]

In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker — ­and possibly other appliances made by Smarter — ­to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

New Bluetooth Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/new-bluetooth-vulnerability.html

There’s a new unpatched Bluetooth vulnerability:

The issue is with a protocol called Cross-Transport Key Derivation (or CTKD, for short). When, say, an iPhone is getting ready to pair up with a Bluetooth-powered device, CTKD’s role is to set up two separate authentication keys for that phone: one for a “Bluetooth Low Energy” device, and one for a device using what’s known as the “Basic Rate/Enhanced Data Rate” standard. Different devices require different amounts of data — and battery power — from a phone. Being able to toggle between the standards needed for Bluetooth devices that take a ton of data (like a Chromecast), and those that require a bit less (like a smartwatch) is more efficient. Incidentally, it might also be less secure.

According to the researchers, if a phone supports both of those standards but doesn’t require some sort of authentication or permission on the user’s end, a hackery sort who’s within Bluetooth range can use its CTKD connection to derive its own competing key. With that connection, according to the researchers, this sort of ersatz authentication can also allow bad actors to weaken the encryption that these keys use in the first place — which can open its owner up to more attacks further down the road, or perform “man in the middle” style attacks that snoop on unprotected data being sent by the phone’s apps and services.

Another article:

Patches are not immediately available at the time of writing. The only way to protect against BLURtooth attacks is to control the environment in which Bluetooth devices are paired, in order to prevent man-in-the-middle attacks, or pairings with rogue devices carried out via social engineering (tricking the human operator).

However, patches are expected to be available at one point. When they’ll be, they’ll most likely be integrated as firmware or operating system updates for Bluetooth capable devices.

The timeline for these updates is, for the moment, unclear, as device vendors and OS makers usually work on different timelines, and some may not prioritize security patches as much as others. The number of vulnerable devices is also unclear and hard to quantify.

Many Bluetooth devices can’t be patched.

Final note: this seems to be another example of simultaneous discovery:

According to the Bluetooth SIG, the BLURtooth attack was discovered independently by two groups of academics from the École Polytechnique Fédérale de Lausanne (EPFL) and Purdue University.

Smart Lock Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/smart_lock_vuln.html

Yet another Internet-connected door lock is insecure:

Sold by retailers including Amazon, Walmart, and Home Depot, U-Tec’s $139.99 UltraLoq is marketed as a “secure and versatile smart deadbolt that offers keyless entry via your Bluetooth-enabled smartphone and code.”

Users can share temporary codes and ‘Ekeys’ to friends and guests for scheduled access, but according to Tripwire researcher Craig Young, a hacker able to sniff out the device’s MAC address can help themselves to an access key, too.

UltraLoq eventually fixed the vulnerabilities, but not in a way that should give you any confidence that they know what they’re doing.

EDITED TO ADD (8/12): More.

CVE-2020-5902: Helping to protect against the F5 TMUI RCE vulnerability

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/cve-2020-5902-helping-to-protect-against-the-f5-tmui-rce-vulnerability/


Cloudflare has deployed a new managed rule protecting customers against a remote code execution vulnerability that has been found in F5 BIG-IP’s web-based Traffic Management User Interface (TMUI). Any customer who has access to the Cloudflare Web Application Firewall (WAF) is automatically protected by the new rule (100315) that has a default action of BLOCK.

Initial testing on our network has shown that attackers started probing and trying to exploit this vulnerability starting on July 3.

F5 has published detailed instructions on how to patch affected devices, how to detect if attempts have been made to exploit the vulnerability on a device and instructions on how to add a custom mitigation. If you have an F5 device, read their detailed mitigations before reading the rest of this blog post.

The most popular probe URL appears to be /tmui/login.jsp/..;/tmui/locallb/workspace/fileRead.jsp followed by /tmui/login.jsp/..;/tmui/util/getTabSet.jsp, /tmui/login.jsp/..;/tmui/system/user/authproperties.jsp and /tmui/login.jsp/..;/tmui/locallb/workspace/tmshCmd.jsp. All contain the critical pattern “..;”, which is at the heart of the vulnerability.

On July 3 we saw O(1k) probes ramping to O(1m) yesterday. This is because simple test patterns have been added to scanning tools and small test programs made available by security researchers.


The Vulnerability

The vulnerability was disclosed by the vendor on July 1 and allows both authenticated and unauthenticated users to perform remote code execution (RCE).

Remote Code Execution is a type of code injection which provides the attacker the ability to run any arbitrary code on the target application, allowing them, in most scenarios such as this one, to gain privileged access and perform a full system takeover.

The vulnerability affects the administration interface only (the management dashboard), not the underlying data plane provided by the application.

How to Mitigate

If updating the application is not possible, the attack can be mitigated by blocking all requests that match the following regular expression in the URL:

.*\.\.;.*

The above regular expression matches two dot characters (.) followed by a semicolon within any sequence of characters.
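As a quick sanity check, the pattern can be exercised in Python's regex dialect against the probe URLs listed earlier (the WAF rule itself is, of course, not written in Python):

    import re

    pattern = re.compile(r".*\.\.;.*")
    probes = [
        "/tmui/login.jsp/..;/tmui/locallb/workspace/fileRead.jsp",
        "/tmui/login.jsp/..;/tmui/util/getTabSet.jsp",
        "/tmui/login.jsp/..;/tmui/system/user/authproperties.jsp",
        "/tmui/login.jsp/..;/tmui/locallb/workspace/tmshCmd.jsp",
    ]
    for url in probes:
        assert pattern.match(url), url      # every known probe URL matches the rule
    print("all probe URLs match")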

Customers who are using the Cloudflare WAF, that have their F5 BIG-IP TMUI interface proxied behind Cloudflare, are already automatically protected from this vulnerability with rule 100315. If you wish to turn off the rule or change the default action:

  1. Head over to the Cloudflare Firewall, click on Managed Rules, and open the advanced link under the Cloudflare Managed Rule set,
  2. Search for rule ID: 100315,
  3. Select any appropriate action or disable the rule.

Facebook Helped Develop a Tails Exploit

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/facebook_helped.html

This is a weird story:

Hernandez was able to evade capture for so long because he used Tails, a version of Linux designed for users at high risk of surveillance and which routes all inbound and outbound connections through the open-source Tor network to anonymize it. According to Vice, the FBI had tried to hack into Hernandez’s computer but failed, as the approach they used “was not tailored for Tails.” Hernandez then proceeded to mock the FBI in subsequent messages, two Facebook employees told Vice.

Facebook had tasked a dedicated employee to unmasking Hernandez, developed an automated system to flag recently created accounts that messaged minors, and made catching Hernandez a priority for its security teams, according to Vice. They also paid a third party contractor “six figures” to help develop a zero-day exploit in Tails: a bug in its video player that enabled them to retrieve the real I.P. address of a person viewing a clip. Three sources told Vice that an intermediary passed the tool onto the FBI, who then obtained a search warrant to have one of the victims send a modified video file to Hernandez (a tactic the agency has used before).

[…]

Facebook also never notified the Tails team of the flaw — breaking with a long industry tradition of disclosure in which the relevant developers are notified of vulnerabilities in advance of them becoming public so they have a chance at implementing a fix. Sources told Vice that since an upcoming Tails update was slated to strip the vulnerable code, Facebook didn’t bother to do so, though the social media company had no reason to believe Tails developers had ever discovered the bug.

[…]

“The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls,” a Facebook spokesperson told Vice. “This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice.”

I agree with that last paragraph. I’m fine with the FBI using vulnerabilities: lawful hacking, it’s called. I’m less okay with Facebook paying for a Tails exploit, giving it to the FBI, and then keeping its existence secret.

Another article.

EDITED TO ADD: This post has been translated into Portuguese.

Another Intel Speculative Execution Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/another_intel_s.html

Remember Spectre and Meltdown? Back in early 2018, I wrote:

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

That has turned out to be true. Here’s a new vulnerability:

On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel’s Software Guard eXtension, by far the most sensitive region of the company’s processors.

[…]

The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same. An attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that’s a tall bar, it’s precisely the scenario that SGX is supposed to defend against.

Another news article.

Security Analysis of the Democracy Live Online Voting System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/security_analys_7.html

New research: “Security Analysis of the Democracy Live Online Voting System“:

Abstract: Democracy Live’s OmniBallot platform is a web-based system for blank ballot delivery, ballot marking, and (optionally) online voting. Three states — Delaware, West Virginia, and New Jersey — recently announced that they will allow certain voters to cast votes online using OmniBallot, but, despite the well established risks of Internet voting, the system has never been the subject of a public, independent security review.

We reverse engineered the client-side portion of OmniBallot, as used in Delaware, in order to detail the system’s operation and analyze its security. We find that OmniBallot uses a simplistic approach to Internet voting that is vulnerable to vote manipulation by malware on the voter’s device and by insiders or other attackers who can compromise Democracy Live, Amazon, Google, or Cloudflare. In addition, Democracy Live, which appears to have no privacy policy, receives sensitive personally identifiable information — including the voter’s identity, ballot selections, and browser fingerprint — that could be used to target political ads or disinformation campaigns. Even when OmniBallot is used to mark ballots that will be printed and returned in the mail, the software sends the voter’s identity and ballot choices to Democracy Live, an unnecessary security risk that jeopardizes the secret ballot. We recommend changes to make the platform safer for ballot delivery and marking. However, we conclude that using OmniBallot for electronic ballot return represents a severe risk to election security and could allow attackers to alter election results without detection.

News story.

EDITED TO ADD: This post has been translated into Portuguese.

Bluetooth Vulnerability: BIAS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/bluetooth_vulne_1.html

This is new research on a Bluetooth vulnerability (called BIAS) that allows someone to impersonate a trusted device:

Abstract: Bluetooth (BR/EDR) is a pervasive technology for wireless communication used by billions of devices. The Bluetooth standard includes a legacy authentication procedure and a secure authentication procedure, allowing devices to authenticate to each other using a long term key. Those procedures are used during pairing and secure connection establishment to prevent impersonation attacks. In this paper, we show that the Bluetooth specification contains vulnerabilities enabling to perform impersonation attacks during secure connection establishment. Such vulnerabilities include the lack of mandatory mutual authentication, overly permissive role switching, and an authentication procedure downgrade. We describe each vulnerability in detail, and we exploit them to design, implement, and evaluate master and slave impersonation attacks on both the legacy authentication procedure and the secure authentication procedure. We refer to our attacks as Bluetooth Impersonation AttackS (BIAS).

Our attacks are standard compliant, and are therefore effective against any standard compliant Bluetooth device regardless the Bluetooth version, the security mode (e.g., Secure Connections), the device manufacturer, and the implementation details. Our attacks are stealthy because the Bluetooth standard does not require to notify end users about the outcome of an authentication procedure, or the lack of mutual authentication. To confirm that the BIAS attacks are practical, we successfully conduct them against 31 Bluetooth devices (28 unique Bluetooth chips) from major hardware and software vendors, implementing all the major Bluetooth versions, including Apple, Qualcomm, Intel, Cypress, Broadcom, Samsung, and CSR.

News articles.

Attack Against PC Thunderbolt Port

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/attack_against_2.html

The attack requires physical access to the computer, but it’s pretty devastating:

On Thunderbolt-enabled Windows or Linux PCs manufactured before 2019, his technique can bypass the login screen of a sleeping or locked computer — and even its hard disk encryption — to gain full access to the computer’s data. And while his attack in many cases requires opening a target laptop’s case with a screwdriver, it leaves no trace of intrusion and can be pulled off in just a few minutes. That opens a new avenue to what the security industry calls an “evil maid attack,” the threat of any hacker who can get alone time with a computer in, say, a hotel room. Ruytenberg says there’s no easy software fix, only disabling the Thunderbolt port altogether.

“All the evil maid needs to do is unscrew the backplate, attach a device momentarily, reprogram the firmware, reattach the backplate, and the evil maid gets full access to the laptop,” says Ruytenberg, who plans to present his Thunderspy research at the Black Hat security conference this summer or the virtual conference that may replace it. “All of this can be done in under five minutes.”

Lots of details in the article above, and in the attack website. (We know it’s a modern hack, because it comes with its own website and logo.)

Intel responds.

EDITED TO ADD (5/14): More.

iOS XML Bug

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/ios_xml_bug.html

This is a good explanation of an iOS bug that allowed someone to break out of the application sandbox. A summary:

What a crazy bug, and Siguza’s explanation is very cogent. Basically, it comes down to this:

  • XML is terrible.
  • iOS uses XML for Plists, and Plists are used everywhere in iOS (and MacOS).
  • iOS’s sandboxing system depends upon three different XML parsers, which interpret slightly invalid XML input in slightly different ways.

So Siguza’s exploit – which granted an app full access to the entire file system, and more – uses malformed XML comments constructed in a way that one of iOS’s XML parsers sees its declaration of entitlements one way, and another XML parser sees it another way. The XML parser used to check whether an application should be allowed to launch doesn’t see the fishy entitlements because it thinks they’re inside a comment. The XML parser used to determine whether an already running application has permission to do things that require entitlements sees the fishy entitlements and grants permission.
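A toy Python example (unrelated to Apple's real parsers, and using a made-up key name) shows how two comment-handling strategies can disagree about what a malformed comment hides:

    import re

    plist = '<dict><!-- <!-- --> <key>platform-application</key><true/> --></dict>'

    lazy   = re.sub(r"<!--.*?-->", "", plist)   # comment ends at the FIRST "-->"
    greedy = re.sub(r"<!--.*-->",  "", plist)   # comment ends at the LAST "-->"

    print("platform-application" in lazy)    # True:  this "parser" sees the entitlement
    print("platform-application" in greedy)  # False: this "parser" never sees it

Two components that disagree in this way can be led to answer the question "what entitlements does this app have?" differently, which is the essence of the bug described above.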

This is fixed in the new iOS release, 13.5 beta 3.

Comment:

Implementing 4 different parsers is just asking for trouble, and the “fix” is of the crappiest sort, bolting on more crap to check they’re doing the right thing in this single case. None of this is encouraging.

More commentary. Hacker News thread.

Vulnerability Finding Using Machine Learning

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/vulnerability_f.html

Microsoft is training a machine-learning system to find software bugs:

At Microsoft, 47,000 developers generate nearly 30 thousand bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high priority security bugs, 97 percent of the time.
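For illustration only, a minimal scikit-learn sketch of that kind of supervised triage might look like the following, with a handful of made-up bug titles standing in for Microsoft's 13 million work items:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    titles = [
        "buffer overflow in packet parser",      # security
        "auth token leaked in debug logs",       # security
        "SQL injection in search endpoint",      # security
        "button misaligned on settings page",    # non-security
        "typo in error message",                 # non-security
        "crash when importing empty CSV",        # non-security
    ]
    labels = [1, 1, 1, 0, 0, 0]                  # 1 = security bug

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(titles, labels)
    print(model.predict(["use-after-free in image decoder",
                         "dark mode colors look wrong"]))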

News article.

I wrote about this in 2018:

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic — and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.