Tag Archives: cyberattack

SonicWall Zero-Day

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/sonicwall-zero-day.html

Hackers are exploiting a zero-day in SonicWall:

In an email, an NCC Group spokeswoman wrote: “Our team has observed signs of an attempted exploitation of a vulnerability that affects the SonicWall SMA 100 series devices. We are working closely with SonicWall to investigate this in more depth.”

In Monday’s update, SonicWall representatives said the company’s engineering team confirmed that the submission by NCC Group included a “critical zero-day” in the SMA 100 series 10.x code. SonicWall is tracking it as SNWLID-2021-0001. The SMA 100 series is a line of secure remote access appliances.

The disclosure makes SonicWall at least the fifth large company to report in recent weeks that it was targeted by sophisticated hackers. Other companies include network management tool provider SolarWinds, Microsoft, FireEye, and Malwarebytes. CrowdStrike also reported being targeted but said the attack wasn’t successful.

Neither SonicWall nor NCC Group said that the hack involving the SonicWall zero-day was linked to the larger hack campaign involving SolarWinds. Based on the timing of the disclosure and some of the details in it, however, there is widespread speculation that the two are connected.

The speculation is just that — speculation. I have no opinion on the matter. This could easily be part of the SolarWinds campaign, which targeted other security companies. But there are a lot of “highly sophisticated threat actors” — that’s how NCC Group described them — out there, and this could easily be a coincidence.

Were I working for a national intelligence organization, I would try to disguise my operations as being part of the SolarWinds attack.

EDITED TO ADD (2/9): SonicWall has patched the vulnerability.

Sophisticated Watering Hole Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/sophisticated-watering-hole-attack.html

Google’s Project Zero has exposed a sophisticated watering-hole attack targeting both Windows and Android:

Some of the exploits were zero-days, meaning they targeted vulnerabilities that at the time were unknown to Google, Microsoft, and most outside researchers (both companies have since patched the security flaws). The hackers delivered the exploits through watering-hole attacks, which compromise sites frequented by the targets of interest and lace the sites with code that installs malware on visitors’ devices. The boobytrapped sites made use of two exploit servers, one for Windows users and the other for users of Android.

The use of zero-days and complex infrastructure isn’t in itself a sign of sophistication, but it does show above-average skill by a professional team of hackers. Combined with the robustness of the attack code — which chained together multiple exploits in an efficient manner — the campaign demonstrates it was carried out by a “highly sophisticated actor.”

[…]

The modularity of the payloads, the interchangeable exploit chains, and the logging, targeting, and maturity of the operation also set the campaign apart, the researcher said.

No attribution was made, but the list of countries likely to be behind this isn’t very large. If you were to ask me to guess based on available information, I would guess it was the US — specifically, the NSA. It shows a care and precision that it’s known for. But I have no actual evidence for that guess.

All the vulnerabilities were fixed by last April.

Russia’s SolarWinds Attack and Software Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/russias-solarwinds-attack-and-software-security.html

The information that is emerging about Russia’s extensive cyberintelligence operation against the United States and other countries should be increasingly alarming to the public. The magnitude of the hacking, now believed to have affected more than 250 federal agencies and businesses — primarily through a malicious update of the SolarWinds network management software — may have slipped under most people’s radar during the holiday season, but its implications are stunning.

According to a Washington Post report, this is a massive intelligence coup by Russia’s foreign intelligence service (SVR). And a massive security failure on the part of the United States is also to blame. Our insecure Internet infrastructure has become a critical national security risk — one that we need to take seriously and spend money to reduce.

President-elect Joe Biden’s initial response spoke of retaliation, but there really isn’t much the United States can do beyond what it already does. Cyberespionage is business as usual among countries and governments, and the United States is aggressively offensive in this regard. We benefit from the lack of norms in this area and are unlikely to push back too hard because we don’t want to limit our own offensive actions.

Biden took a more realistic tone last week when he spoke of the need to improve US defenses. The initial focus will likely be on how to clean the hackers out of our networks, why the National Security Agency and US Cyber Command failed to detect this intrusion and whether the 2-year-old Cybersecurity and Infrastructure Security Agency has the resources necessary to defend the United States against attacks of this caliber. These are important discussions to have, but we also need to address the economic incentives that led to SolarWinds being breached and how that insecure software ended up in so many critical US government networks.

Software has become incredibly complicated. Most of us don’t know all of the software running on our laptops or what it’s doing. We don’t know where it’s connecting to on the Internet — not even which countries it’s connecting to — and what data it’s sending. We typically don’t know what third-party libraries are in the software we install. We don’t know what software any of our cloud services are running. And we’re rarely alone in our ignorance. Finding all of this out is incredibly difficult.

This is even more true for software that runs our large government networks, or even the Internet backbone. Government software comes from large companies, small suppliers, open source projects and everything in between. Obscure software packages can have hidden vulnerabilities that affect the security of these networks, and sometimes the entire Internet. Russia’s SVR leveraged one of those vulnerabilities when it gained access to SolarWinds’ update server, tricking thousands of customers into downloading a malicious software update that gave the Russians access to those networks.

The fundamental problem is one of economic incentives. The market rewards quick development of products. It rewards new features. It rewards spying on customers and users: collecting and selling individual data. The market does not reward security, safety or transparency. It doesn’t reward reliability past a bare minimum, and it doesn’t reward resilience at all.

This is what happened at SolarWinds. A New York Times report noted the company ignored basic security practices. To cut costs, it moved software development to Eastern Europe, where Russia has more influence and could potentially subvert programmers.

Short-term profit was seemingly prioritized over product security.

Companies have the right to make decisions like this. The real question is why the US government bought such shoddy software for its critical networks. This is a problem that Biden can fix, and he needs to do so immediately.

The United States needs to improve government software procurement. Software is now critical to national security. Any system for acquiring software needs to evaluate the security of the software and the security practices of the company, in detail, to ensure they are sufficient to meet the security needs of the network they’re being installed in. Procurement contracts need to include security controls for the software development process. They need security attestations on the part of the vendors, with substantial penalties for misrepresentation or failure to comply. The government also needs to publish detailed best practices that both agencies and private companies can follow.

Some of the groundwork for an approach like this has already been laid by the federal government, which has sponsored the development of a “Software Bill of Materials” that would set out a process for software makers to identify the components used to assemble their software.
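
To make the idea concrete, here is a minimal sketch of what a software bill of materials entry might look like, loosely following the CycloneDX JSON layout (one of the common SBOM formats). The component names, versions, and digests are hypothetical placeholders, not real SolarWinds components; the point is only that each shipped component is named, versioned, and hashed so a buyer can audit what is actually inside a delivered package.

import json

# Minimal, illustrative SBOM in the spirit of the CycloneDX JSON layout.
# All names, versions, and digests below are hypothetical placeholders.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-logging-lib",   # hypothetical third-party dependency
            "version": "2.14.1",
            "purl": "pkg:maven/org.example/example-logging-lib@2.14.1",
            "hashes": [{"alg": "SHA-256", "content": "0" * 64}],  # placeholder digest
        },
        {
            "type": "library",
            "name": "example-crypto-lib",    # hypothetical third-party dependency
            "version": "1.0.2",
            "purl": "pkg:pypi/example-crypto-lib@1.0.2",
        },
    ],
}

# A procurement reviewer (or automated tooling) can diff this manifest against
# vulnerability databases and against what actually ships in the installer.
print(json.dumps(sbom, indent=2))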

This scrutiny can’t end with purchase. These security requirements need to be monitored throughout the software’s life cycle, along with what software is being used in government networks.

None of this is cheap, and we should be prepared to pay substantially more for secure software. But there’s a benefit to these practices. If the government evaluations are public, along with the list of companies that meet them, all network buyers can benefit from them. The US government acting purely in the realm of procurement can improve the security of nongovernmental networks worldwide.

This is important, but it isn’t enough. We need to set minimum safety and security standards for all software: from the code in that Internet of Things appliance you just bought to the code running our critical national infrastructure. It’s all one network, and a vulnerability in your refrigerator’s software can be used to attack the national power grid.

The IoT Cybersecurity Improvement Act, signed into law last month, is a start in this direction.

The Biden administration should prioritize minimum security standards for all software sold in the United States, not just to the government but to everyone. Long gone are the days when we can let the software industry decide how much emphasis to place on security. Software security is now a matter of personal safety: whether it’s ensuring your car isn’t hacked over the Internet or that the national power grid isn’t hacked by the Russians.

This regulation is the only way to force companies to provide safety and security features for customers — just as legislation was necessary to mandate food safety measures and require auto manufacturers to install life-saving features such as seat belts and air bags. Smart regulations that incentivize innovation create a market for security features. And they improve security for everyone.

It’s true that creating software in this sort of regulatory environment is more expensive. But if we truly value our personal and national security, we need to be prepared to pay for it.

The truth is that we’re already paying for it. Today, software companies increase their profits by secretly pushing risk onto their customers. We pay the cost of insecure personal computers, just as the government is now paying the cost to clean up after the SolarWinds hack. Fixing this requires both transparency and regulation. And while the industry will resist both, they are essential for national security in our increasingly computer-dependent world.

This essay previously appeared on CNN.com.

Extracting Personal Information from Large Language Models Like GPT-2

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/extracting-personal-information-from-large-language-models-like-gpt-2.html

Researchers have been able to find all sorts of personal information within GPT-2. This information was part of the training data, and can be extracted with the right sorts of queries.

Paper: “Extracting Training Data from Large Language Models.”

Abstract: It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model’s training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences are included in just one document in the training data.

We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. For example, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.

From a blog post:

We generated a total of 600,000 samples by querying GPT-2 with three different sampling strategies. Each sample contains 256 tokens, or roughly 200 words on average. Among these samples, we selected 1,800 samples with abnormally high likelihood for manual inspection. Out of the 1,800 samples, we found 604 that contain text which is reproduced verbatim from the training set.

The rest of the blog post discusses the types of data they found.
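
For readers who want a feel for the mechanics, here is a minimal sketch of the “generate many samples, then rank them by likelihood” step described in the quoted passage, written against the Hugging Face transformers library. This is an assumption-laden toy, not the paper’s actual pipeline: the authors used several sampling strategies and more sophisticated membership heuristics, and worked at a scale of hundreds of thousands of samples.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sample_texts(n_samples=10, max_length=256):
    """Draw unconditioned samples from the model (the paper drew 600,000)."""
    input_ids = torch.full((n_samples, 1), tokenizer.bos_token_id, dtype=torch.long)
    with torch.no_grad():
        out = model.generate(
            input_ids,
            do_sample=True,
            top_k=40,
            max_length=max_length,
            pad_token_id=tokenizer.eos_token_id,
        )
    return [tokenizer.decode(seq, skip_special_tokens=True) for seq in out]

def perplexity(text):
    """Model perplexity of a text; unusually low values flag candidate memorization."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

samples = sample_texts()
# Rank by perplexity: the highest-likelihood samples are the ones a human
# would inspect for text reproduced verbatim from the training set.
for ppl, text in sorted((perplexity(t), t) for t in samples)[:3]:
    print(f"{ppl:8.2f}  {text[:80]!r}")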

Russia’s SolarWinds Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/russias-solarwinds-attack.html

Recent news articles have all been talking about the massive Russian cyberattack against the United States, but that’s wrong on two counts. It wasn’t a cyberattack in international relations terms, it was espionage. And the victim wasn’t just the US, it was the entire world. But it was massive, and it is dangerous.

Espionage is internationally allowed in peacetime. The problem is that both espionage and cyberattacks require the same computer and network intrusions, and the difference is only a few keystrokes. And since this Russian operation isn’t at all targeted, the entire world is at risk — and not just from Russia. Many countries carry out these sorts of operations, none more extensively than the US. The solution is to prioritize security and defense over espionage and attack.

Here’s what we know: Orion is a network management product from a company named SolarWinds, with over 300,000 customers worldwide. Sometime before March, hackers working for the Russian SVR — previously known as the KGB — hacked into SolarWinds and slipped a backdoor into an Orion software update. (We don’t know how, but last year the company’s update server was protected by the password “solarwinds123” — something that speaks to a lack of security culture.) Users who downloaded and installed that corrupted update between March and June unwittingly gave SVR hackers access to their networks.

This is called a supply-chain attack, because it targets a supplier to an organization rather than an organization itself — and can affect all of a supplier’s customers. It’s an increasingly common way to attack networks. Other examples of this sort of attack include fake apps in the Google Play store, and hacked replacement screens for your smartphone.
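
As a small aside on supply-chain hygiene: the narrow question of “do the bytes I downloaded match what the vendor published?” is easy to check, and a sketch is below (the file and digest arguments are hypothetical). To be clear, a check like this only catches tampering after publication, such as a poisoned mirror or download server; it would not have caught the Orion backdoor, which was inserted into SolarWinds’ own build pipeline and shipped with a legitimate vendor signature.

import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so large installers don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_update.py <update-file> <expected-sha256>
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(path)
    if actual != expected:
        print(f"DIGEST MISMATCH: expected {expected}, got {actual}")
        sys.exit(1)
    print("Digest matches the published value.")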

SolarWinds has removed its customer list from its website, but the Internet Archive saved it: all five branches of the US military, the state department, the White House, the NSA, 425 of the Fortune 500 companies, all five of the top five accounting firms, and hundreds of universities and colleges. In an SEC filing, SolarWinds said that it believes “fewer than 18,000” of those customers installed this malicious update, another way of saying that more than 17,000 did.

That’s a lot of vulnerable networks, and it’s inconceivable that the SVR penetrated them all. Instead, it chose carefully from its cornucopia of targets. Microsoft’s analysis identified 40 customers who were infiltrated using this vulnerability. The great majority of those were in the US, but networks in Canada, Mexico, Belgium, Spain, the UK, Israel and the UAE were also targeted. This list includes governments, government contractors, IT companies, thinktanks, and NGOs — and it will certainly grow.

Once inside a network, SVR hackers followed a standard playbook: establish persistent access that will remain even if the initial vulnerability is fixed; move laterally around the network by compromising additional systems and accounts; and then exfiltrate data. Not being a SolarWinds customer is no guarantee of security; this SVR operation used other initial infection vectors and techniques as well. These are sophisticated and patient hackers, and we’re only just learning some of the techniques involved here.

Recovering from this attack isn’t easy. Because any SVR hackers would establish persistent access, the only way to ensure that your network isn’t compromised is to burn it to the ground and rebuild it, similar to reinstalling your computer’s operating system to recover from a bad hack. This is how a lot of sysadmins are going to spend their Christmas holiday, and even then they can’t be sure. There are many ways to establish persistent access that survive rebuilding individual computers and networks. We know, for example, of an NSA exploit that remains on a hard drive even after it is reformatted. Code for that exploit was part of the Equation Group tools that the Shadow Brokers — again believed to be Russia — stole from the NSA and published in 2016. The SVR probably has the same kinds of tools.

Even without that caveat, many network administrators won’t go through the long, painful, and potentially expensive rebuilding process. They’ll just hope for the best.

It’s hard to overstate how bad this is. We are still learning about US government organizations breached: the state department, the treasury department, homeland security, the Los Alamos and Sandia National Laboratories (where nuclear weapons are developed), the National Nuclear Security Administration, the National Institutes of Health, and many more. At this point, there’s no indication that any classified networks were penetrated, although that could change easily. It will take years to learn which networks the SVR has penetrated, and where it still has access. Much of that will probably be classified, which means that we, the public, will never know.

And now that the Orion vulnerability is public, other governments and cybercriminals will use it to penetrate vulnerable networks. I can guarantee you that the NSA is using the SVR’s hack to infiltrate other networks; why would they not? (Do any Russian organizations use Orion? Probably.)

While this is a security failure of enormous proportions, it is not, as Senator Richard Durbin said, “virtually a declaration of war by Russia on the United States.” While President-elect Biden said he will make this a top priority, it’s unlikely that he will do much to retaliate.

The reason is that, by international norms, Russia did nothing wrong. This is the normal state of affairs. Countries spy on each other all the time. There are no rules or even norms, and it’s basically “buyer beware.” The US regularly fails to retaliate against espionage operations — such as China’s hack of the Office of Personnel Management (OPM) and previous Russian hacks — because we do it, too. Speaking of the OPM hack, the then director of national intelligence, James Clapper, said: “You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don’t think we’d hesitate for a minute.”

We don’t, and I’m sure NSA employees are grudgingly impressed with the SVR. The US has by far the most extensive and aggressive intelligence operation in the world. The NSA’s budget is the largest of any intelligence agency. It aggressively leverages the US’s position controlling most of the internet backbone and most of the major internet companies. Edward Snowden disclosed many targets of its efforts around 2014, which then included 193 countries, the World Bank, the IMF and the International Atomic Energy Agency. We are undoubtedly running an offensive operation on the scale of this SVR operation right now, and it’ll probably never be made public. In 2016, President Obama boasted that we have “more capacity than anybody both offensively and defensively.”

He may have been too optimistic about our defensive capability. The US prioritizes and spends many times more on offense than on defensive cybersecurity. In recent years, the NSA has adopted a strategy of “persistent engagement,” sometimes called “defending forward.” The idea is that instead of passively waiting for the enemy to attack our networks and infrastructure, we go on the offensive and disrupt attacks before they get to us. This strategy was credited with foiling a plot by the Russian Internet Research Agency to disrupt the 2018 elections.

But if persistent engagement is so effective, how could it have missed this massive SVR operation? It seems that pretty much the entire US government was unknowingly sending information back to Moscow. If we had been watching everything the Russians were doing, we would have seen some evidence of this. The Russians’ success under the watchful eye of the NSA and US Cyber Command shows that this is a failed approach.

And how did US defensive capability miss this? The only reason we know about this breach is because, earlier this month, the security company FireEye discovered that it had been hacked. During its own audit of its network, it uncovered the Orion vulnerability and alerted the US government. Why don’t organizations like the Departments of State, Treasury and Homeland Security regularly conduct that level of audit on their own systems? The government’s intrusion detection system, Einstein 3, failed here because it doesn’t detect new sophisticated attacks — a deficiency pointed out in 2018 but never fixed. We shouldn’t have to rely on a private cybersecurity company to alert us of a major nation-state attack.

If anything, the US’s prioritization of offense over defense makes us less safe. In the interests of surveillance, the NSA has pushed for an insecure cell phone encryption standard and a backdoor in random number generators (important for secure encryption). The DoJ has never relented in its insistence that the world’s popular encryption systems be made insecure through back doors — another hot point where attack and defense are in conflict. In other words, we allow for insecure standards and systems, because we can use them to spy on others.

We need to adopt a defense-dominant strategy. As computers and the internet become increasingly essential to society, cyberattacks are likely to be the precursor to actual war. We are simply too vulnerable when we prioritize offense, even if we have to give up the advantage of using those insecurities to spy on others.

Our vulnerability is magnified as eavesdropping may bleed into a direct attack. The SVR’s access allows them not only to eavesdrop, but also to modify data, degrade network performance, or erase entire networks. The first might be normal spying, but the second certainly could be considered an act of war. Russia is almost certainly laying the groundwork for future attack.

This preparation would not be unprecedented. There’s a lot of attack going on in the world. In 2010, the US and Israel attacked the Iranian nuclear program. In 2012, Iran attacked the Saudi national oil company. North Korea attacked Sony in 2014. Russia attacked the Ukrainian power grid in 2015 and 2016. Russia is hacking the US power grid, and the US is hacking Russia’s power grid — just in case the capability is needed someday. All of these attacks began as a spying operation. Security vulnerabilities have real-world consequences.

We’re not going to be able to secure our networks and systems in this no-rules, free-for-all every-network-for-itself world. The US needs to willingly give up part of its offensive advantage in cyberspace in exchange for a vastly more secure global cyberspace. We need to invest in securing the world’s supply chains from this type of attack, and to press for international norms and agreements prioritizing cybersecurity, like the 2018 Paris Call for Trust and Security in Cyberspace or the Global Commission on the Stability of Cyberspace. Hardening widely used software like Orion (or the core internet protocols) helps everyone. We need to dampen this offensive arms race rather than exacerbate it, and work towards cyber peace. Otherwise, hypocritically criticizing the Russians for doing the same thing we do every day won’t help create the safer world in which we all want to live.

This essay previously appeared in the Guardian.

On That Dusseldorf Hospital Ransomware Attack and the Resultant Death

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/on-that-dusseldorf-hospital-ransomware-attack-and-the-resultant-death.html

Wired has a detailed story about the ransomware attack on a Düsseldorf hospital, the one that resulted in an ambulance being redirected to a more distant hospital and the patient dying. The police wanted to prosecute the ransomware attackers for negligent homicide, but the details were more complicated:

After a detailed investigation involving consultations with medical professionals, an autopsy, and a minute-by-minute breakdown of events, Hartmann believes that the severity of the victim’s medical diagnosis at the time she was picked up was such that she would have died regardless of which hospital she had been admitted to. “The delay was of no relevance to the final outcome,” Hartmann says. “The medical condition was the sole cause of the death, and this is entirely independent from the cyberattack.” He likens it to hitting a dead body while driving: while you might be breaking the speed limit, you’re not responsible for the death.

So while this might not be an example of death by cyberattack, the article correctly notes that it’s only a matter of time:

But it’s only a matter of time, Hartmann believes, before ransomware does directly cause a death. “Where the patient is suffering from a slightly less severe condition, the attack could certainly be a decisive factor,” he says. “This is because the inability to receive treatment can have severe implications for those who require emergency services.” Success at bringing a charge might set an important precedent for future cases, thereby deepening the toolkit of prosecutors beyond the typical cybercrime statutes.

“The main hurdle will be one of proof,” Urban says. “Legal causation will be there as soon as the prosecution can prove that the person died earlier, even if it’s only a few hours, because of the hack, but this is never easy to prove.” With the Düsseldorf attack, it was not possible to establish that the victim could have survived much longer, but in general it’s “absolutely possible” that hackers could be found guilty of manslaughter, Urban argues.

And where causation is established, Hartmann points out that exposure for criminal prosecution stretches beyond the hackers. Instead, anyone who can be shown to have contributed to the hack may also be prosecuted, he says. In the Düsseldorf case, for example, his team was preparing to consider the culpability of the hospital’s IT staff. Could they have better defended the hospital by monitoring the network more closely, for instance?

Documented Death from a Ransomware Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/documented-death-from-a-ransomware-attack.html

A Düsseldorf woman died when a ransomware attack against a hospital forced her to be taken to a different hospital in another city.

I think this is the first documented case of a cyberattack causing a fatality. UK hospitals had to redirect patients during the 2017 WannaCry ransomware attack, but there were no documented fatalities from that event.

The police are treating this as a homicide.

The Unintended Harms of Cybersecurity

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/the_unintended_.html

Interesting research: “Identifying Unintended Harms of Cybersecurity Countermeasures“:

Abstract: Well-meaning cybersecurity risk owners will deploy countermeasures (technologies or procedures) to manage risks to their services or systems. In some cases, those countermeasures will produce unintended consequences, which must then be addressed. Unintended consequences can potentially induce harm, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including other services or countermeasures). Here we propose a framework for preemptively identifying unintended harms of risk countermeasures in cybersecurity. The framework identifies a series of unintended harms which go beyond technology alone, to consider the cyberphysical and sociotechnical space: displacement, insecure norms, additional costs, misuse, misclassification, amplification, and disruption. We demonstrate our framework through application to the complex, multi-stakeholder challenges associated with the prevention of cyberbullying as an applied example. Our framework aims to illuminate harmful consequences, not to paralyze decision-making, but so that potential unintended harms can be more thoroughly considered in risk management strategies. The framework can support identification and preemptive planning to identify vulnerable populations and preemptively insulate them from harm. There are opportunities to use the framework in coordinating risk management strategy across stakeholders in complex cyberphysical environments.

Security is always a trade-off. I appreciate work that examines the details of that trade-off.

Examining the US Cyber Budget

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/examining_the_u.html

Jason Healey takes a detailed look at the US federal cybersecurity budget and reaches an important conclusion: the US keeps saying that we need to prioritize defense, but in fact we prioritize attack.

To its credit, this budget does reveal an overall growth in cybersecurity funding of about 5 percent above the fiscal 2019 estimate. However, federal cybersecurity spending on civilian departments like the departments of Homeland Security, State, Treasury and Justice is overshadowed by that going toward the military:

  • The Defense Department’s cyber-related budget is nearly 25 percent higher than the total going to all civilian departments, including the departments of Homeland Security, Treasury and Energy, which not only have to defend their own critical systems but also partner with critical infrastructure to help secure the energy, finance, transportation and health sectors ($9.6 billion compared to $7.8 billion).
  • The funds to support just the headquarters element — that is, not even the operational teams in facilities outside of headquarters — of U.S. Cyber Command are 33 percent higher than all the cyber-related funding to the State Department ($532 million compared to $400 million).

  • Just the increased funding to Defense was 30 percent higher than the total Homeland Security budget to improve the security of federal networks ($909 million compared to $694.1 million).

  • The Defense Department is budgeted two and a half times as much just for cyber operations as the Cybersecurity and Infrastructure Security Agency (CISA), which is nominally in charge of cybersecurity ($3.7 billion compared to $1.47 billion). In fact, the cyber operations budget is higher than the budgets for the CISA, the FBI and the Department of Justice’s National Security Division combined ($3.7 billion compared to $2.21 billion).

  • The Defense Department’s cyber operations have nearly 10 times the funding as the relevant Homeland Security defensive operational element, the National Cybersecurity and Communications Integration Center (NCCIC) ($3.7 billion compared to $371.4 million).

  • The U.S. government budgeted as much on military construction for cyber units as it did for the entirety of Homeland Security ($1.9 billion for each).

We cannot ignore what the money is telling us. The White House and National Cyber Strategy emphasize the need to protect the American people and our way of life, yet the budget does not reflect those values. Rather, the budget clearly shows that the Defense Department is the government’s main priority. Of course, the exact Defense numbers for how much is spent on offense are classified.
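
The comparisons in the excerpt are simple ratios of published budget lines, and they are easy to reproduce. Here is a quick sketch that checks the arithmetic using only the dollar figures quoted above (a sanity check, not new data):

# Figures are taken directly from the quoted analysis.
comparisons = [
    ("DoD cyber budget vs. all civilian departments ($B)", 9.6, 7.8),
    ("Cyber Command HQ vs. State Department cyber ($M)", 532, 400),
    ("DoD increase vs. DHS federal-network security ($M)", 909, 694.1),
    ("DoD cyber operations vs. CISA ($B)", 3.7, 1.47),
    ("DoD cyber operations vs. CISA + FBI + DOJ NSD ($B)", 3.7, 2.21),
    ("DoD cyber operations vs. NCCIC ($3.7B vs. $371.4M)", 3700, 371.4),
]

for label, offense, defense in comparisons:
    ratio = offense / defense
    print(f"{label}: {ratio:.2f}x ({(ratio - 1) * 100:.0f}% higher)")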

New US Electronic Warfare Platform

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/new_us_electron.html

The Army is developing a new electronic warfare pod capable of being put on drones and on trucks.

…the Silent Crow pod is now the leading contender for the flying flagship of the Army’s rebuilt electronic warfare force. Army EW was largely disbanded after the Cold War, except for short-range jammers to shut down remote-controlled roadside bombs. Now it’s being urgently rebuilt to counter Russia and China, whose high-tech forces — unlike Afghan guerrillas — rely heavily on radio and radar systems, whose transmissions US forces must be able to detect, analyze and disrupt.

It’s hard to tell what this thing can do. Possibly a lot, but it’s all still in the prototype stage.

Historically, cyber operations occurred over landline networks and electronic warfare over radio-frequency (RF) airwaves. The rise of wireless networks has caused the two to blur. The military wants to move away from traditional high-powered jamming, which filled the frequencies the enemy used with blasts of static, to precisely targeted techniques, designed to subtly disrupt the enemy’s communications and radar networks without their realizing they’re being deceived. There are even reports that “RF-enabled cyber” can transmit computer viruses wirelessly into an enemy network, although Wojnar declined to confirm or deny such sensitive details.

[…]

The pod’s digital brain also uses machine-learning algorithms to analyze enemy signals it detects and compute effective countermeasures on the fly, instead of having to return to base and download new data to human analysts. (Insiders call this cognitive electronic warfare). Lockheed also offers larger artificial intelligences to assist post-mission analysis on the ground, Wojnar said. But while an AI small enough to fit inside the pod is necessarily less powerful, it can respond immediately in a way a traditional system never could.

EDITED TO ADD (5/14): Here are two reports on Russian electronic warfare capabilities.

On Cyber Warranties

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/on_cyber_warran.html

Interesting article discussing cyber-warranties, and whether they are an effective way to transfer risk (as envisioned by Akerlof’s “market for lemons”) or a marketing trick.

The conclusion:

Warranties must transfer non-negligible amounts of liability to vendors in order to meaningfully overcome the market for lemons. Our preliminary analysis suggests the majority of cyber warranties cover the cost of repairing the device alone. Only cyber-incident warranties cover first-party costs from cyber-attacks — why all such warranties were offered by firms selling intangible products is an open question. Consumers should question whether warranties can function as a costly signal when narrow coverage means vendors accept little risk.

Worse still, buyers cannot compare across cyber-incident warranty contracts due to the diversity of obligations and exclusions. Ambiguous definitions of the buyer’s obligations and excluded events create uncertainty over what is covered. Moving toward standardized terms and conditions may help consumers, as has been pursued in cyber insurance, but this is in tension with innovation and product diversity.

[…]

Theoretical work suggests both the breadth of the warranty and the price of a product determine whether the warranty functions as a quality signal. Our analysis has not touched upon the price of these products. It could be that firms with ineffective products pass the cost of the warranty on to buyers via higher prices. Future studies could analyze warranties and price together to probe this issue.

In conclusion, cyber warranties — particularly cyber-product warranties — do not transfer enough risk to be a market fix as imagined in Woods. But this does not mean they are pure marketing tricks either. The most valuable feature of warranties is in preventing vendors from exaggerating what their products can do. Consumers who read the fine print can place greater trust in marketing claims so long as the functionality is covered by a cyber-incident warranty.

WhatsApp Sues NSO Group

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/whatsapp_sues_n.html

WhatsApp is suing the Israeli cyberweapons arms manufacturer NSO Group in California court:

WhatsApp’s lawsuit, filed in a California court on Tuesday, has demanded a permanent injunction blocking NSO from attempting to access WhatsApp computer systems and those of its parent company, Facebook.

It has also asked the court to rule that NSO violated US federal law and California state law against computer fraud, breached their contracts with WhatsApp and “wrongfully trespassed” on Facebook’s property.

This could be interesting.

EDITED TO ADD: Citizen Lab has a research paper on the technology involved in this case. WhatsApp has an op-ed on their actions. And this is a good news article on how the attack worked.

EDITED TO ADD: Facebook is deleting the accounts of NSO Group employees.

Hackers Expose Russian FSB Cyberattack Projects

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/07/hackers_expose_.html

More nation-state activity in cyberspace, this time from Russia:

Per the different reports in Russian media, the files indicate that SyTech had worked since 2009 on a multitude of projects for FSB unit 71330 and for fellow contractor Quantum. Projects include:

  • Nautilus — a project for collecting data about social media users (such as Facebook, MySpace, and LinkedIn).
  • Nautilus-S — a project for deanonymizing Tor traffic with the help of rogue Tor servers.

  • Reward — a project to covertly penetrate P2P networks, like the one used for torrents.

  • Mentor — a project to monitor and search email communications on the servers of Russian companies.

  • Hope — a project to investigate the topology of the Russian internet and how it connects to other countries’ networks.

  • Tax-3 — a project for the creation of a closed intranet to store the information of highly-sensitive state figures, judges, and local administration officials, separate from the rest of the state’s IT networks.

BBC Russia, which received the full trove of documents, claims there were other older projects for researching other network protocols such as Jabber (instant messaging), ED2K (eDonkey), and OpenFT (enterprise file transfer).

Other files posted on the Digital Revolution Twitter account claimed that the FSB was also tracking students and pensioners.

The Human Cost of Cyberattacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/the_human_cost_.html

The International Committee of the Red Cross has just published a report: “The Potential Human Cost of Cyber-Operations.” It’s the result of an “ICRC Expert Meeting” from last year, but was published this week.

Here’s a shorter blog post if you don’t want to read the whole thing. And commentary by one of the authors.

What Happened to Cyber 9/11?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/what_happened_t.html

A recent article in the Atlantic asks why we haven’t seen a “cyber 9/11” in the past fifteen or so years. (I, too, remember the increasingly frantic and fearful warnings of a “cyber Pearl Harbor,” “cyber Katrina” — when that was a thing — or “cyber 9/11.” I made fun of those warnings back then.) The author’s answer:

Three main barriers are likely preventing this. For one, cyberattacks can lack the kind of drama and immediate physical carnage that terrorists seek. Identifying the specific perpetrator of a cyberattack can also be difficult, meaning terrorists might have trouble reaping the propaganda benefits of clear attribution. Finally, and most simply, it’s possible that they just can’t pull it off.

Commenting on the article, Rob Graham adds:

I think there are lots of warnings from so-called “experts” who aren’t qualified to make such warnings, and that the press errs on the side of giving such warnings credibility instead of challenging them.

I think mostly the reason why cyberterrorism doesn’t happen is that what motivates violent people is different from what motivates technical people, pulling apart the groups who would want to commit cyberterrorism from those who can.

These are all good reasons, but I think both authors missed the most important one: there simply aren’t a lot of terrorists out there. Let’s ask the question more generally: why hasn’t there been another 9/11 since 2001? I also remember dire predictions that large-scale terrorism was the new normal, and that we would see 9/11-scale attacks regularly. But since then, nothing. We could credit the fantastic counterterrorism work of the US and other countries, but a more reasonable explanation is that there are very few terrorists and even fewer organized ones. Our fear of terrorism is far greater than the actual risk.

This isn’t to say that cyberterrorism can never happen. Of course it will, sooner or later. But I don’t foresee it becoming a preferred terrorism method anytime soon. Graham again:

In the end, if your goal is to cause major power blackouts, your best bet is to bomb power lines and distribution centers, rather than hack them.

How to Punish Cybercriminals

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/how_to_punish_c.html

Interesting policy paper by Third Way: “To Catch a Hacker: Toward a comprehensive strategy to identify, pursue, and punish malicious cyber actors“:

In this paper, we argue that the United States currently lacks a comprehensive overarching strategic approach to identify, stop and punish cyberattackers. We show that:

  • There is a burgeoning cybercrime wave: A rising and often unseen crime wave is mushrooming in America. There are approximately 300,000 reported malicious cyber incidents per year, including up to 194,000 that could credibly be called individual or system-wide breaches or attempted breaches. This is likely a vast undercount since many victims don’t report break-ins to begin with. Attacks cost the US economy anywhere from $57 billion to $109 billion annually and these costs are increasing.
  • There is a stunning cyber enforcement gap: Our analysis of publicly available data shows that cybercriminals can operate with near impunity compared to their real-world counterparts. We estimate that cyber enforcement efforts are so scattered that less than 1% of malicious cyber incidents see an enforcement action taken against the attackers.

  • There is no comprehensive US cyber enforcement strategy aimed at the human attacker: Despite the recent release of a National Cyber Strategy, the United States still lacks a comprehensive strategic approach to how it identifies, pursues, and punishes malicious human cyberattackers and the organizations and countries often behind them. We believe that the United States is as far from this human attacker strategy as the nation was toward a strategic approach to countering terrorism in the weeks and months before 9/11.

In order to close the cyber enforcement gap, we argue for a comprehensive enforcement strategy that makes a fundamental rebalance in US cybersecurity policies: from a heavy focus on building better cyber defenses against intrusion to also waging a more robust effort at going after human attackers. We call for ten US policy actions that could form the contours of a comprehensive enforcement strategy to better identify, pursue and bring to justice malicious cyber actors that include building up law enforcement, enhancing diplomatic efforts, and developing a measurable strategic plan to do so.