Tag Archives: computer security

A Cyber Insurance Backstop

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/02/a-cyber-insurance-backstop.html

In the first week of January, the pharmaceutical giant Merck quietly settled its years-long lawsuit over whether its property and casualty insurers would cover a $700 million claim filed after the devastating NotPetya cyberattack in 2017. The malware ultimately infected more than 40,000 of Merck’s computers, significantly disrupting the company’s drug and vaccine production. After Merck filed its $700 million claim, its insurers argued that they were not required to cover the damage because the cyberattack was widely attributed to the Russian government and was therefore excluded from standard property and casualty insurance coverage as a “hostile or warlike act.”

At the heart of the lawsuit was a crucial question: Who should pay for massive, state-sponsored cyberattacks that cause billions of dollars’ worth of damage?

One possible solution, touted by former Department of Homeland Security Secretary Michael Chertoff on a recent podcast, would be for the federal government to provide a cyber insurance backstop: a mechanism by which insurers could receive federal financial support in the event of a catastrophic cyberattack that caused more damage than they could afford to cover on their own.

In his discussion of a potential backstop, Chertoff specifically references the Terrorism Risk Insurance Act (TRIA) as a model. Passed in 2002, TRIA provided financial assistance to insurers reeling from covering the costs of the Sept. 11, 2001, terrorist attacks, and it created the Terrorism Risk Insurance Program (TRIP), a public-private system of compensation for some terrorism insurance claims. The 9/11 attacks cost insurers and reinsurers $47 billion, one of the most expensive insured events in history. Many insurers stopped offering terrorism coverage afterward, while others raised premiums so significantly that such policies became prohibitively expensive for many businesses. TRIA’s promise of support in the event of another terrorist attack was meant to make insurers willing to offer terrorism coverage again at reasonable rates. President Biden’s 2023 National Cybersecurity Strategy tasked the Treasury and Homeland Security Departments with investigating possible ways of implementing something similar for large cyberattacks.

There is a growing (and unsurprising) consensus among insurers in favor of creating a federal cyber insurance backstop. Like terrorist attacks, catastrophic cyberattacks are difficult for insurers to predict or model because there is not very good historical data about them—and even if there were, it’s not clear that past patterns of cyberattacks would dictate future ones. What’s more, cyberattacks could cost insurers astronomical sums, especially if all of their policyholders were simultaneously affected by the same attack. But despite this consensus, and despite the fact that the idea of the government acting as the “insurer of last resort” was first floated more than a decade ago, actually developing a sound, thorough proposal for a backstop has proved much more challenging than many insurers and policymakers anticipated.

One major point of contention is determining the threshold for what types of cyberattacks should trigger a backstop. Specific characteristics of cyberattacks—such as who perpetrated the attack, the motive behind it, and the total damage it caused—are often exceedingly difficult to determine. Therefore, even if policymakers could agree on which types of attacks the government should pay for based on these characteristics, they likely won’t be able to establish which incursions actually qualify for assistance.

For instance, NotPetya is estimated to have caused more than $10 billion in damage worldwide, but the precise amount of damage it did will probably never be known. The attack disrupted so many companies across so many industries that much of the damage likely went unreported: companies had no incentive to publicize their security failings and were not required to do so. Observers do, however, have a pretty good idea of who was behind NotPetya, because several governments, including the United States and the United Kingdom, issued coordinated statements blaming the Russian military. As for the motive, the malware was initially spread through Ukrainian accounting software, which suggests that it was intended to target Ukrainian critical infrastructure. But this type of coordinated, consensus-based attribution to a specific government is notably rare for cyberattacks, and future attacks are unlikely to receive the same treatment.

In the absence of a government backstop, the insurance industry has begun to carve out larger and larger exceptions to its standard cyber coverage. For example, in a pair of rulings against Merck’s insurers, judges in New Jersey ruled that insurance exclusions for “hostile or warlike acts” (such as the one in Merck’s property policy that excluded coverage for “loss or damage caused by hostile or warlike action in time of peace or war … by any government or sovereign power”) were not sufficiently specific to encompass a cyberattack such as NotPetya that did not involve the use of traditional force.

Accordingly, insurers such as Lloyd’s have begun to change their policy language to explicitly exclude broad swaths of cyberattacks perpetrated by nation-states. In an August 2022 bulletin, Lloyd’s instructed its underwriters to exclude from all cyber insurance policies not just losses arising from war but also “losses arising from state backed cyber-attacks that (a) significantly impair the ability of a state to function or (b) that significantly impair the security capabilities of a state.” Other insurers, such as Chubb, have tried to avoid tricky questions about attribution by proposing a government response-based exclusion for war that applies only if a government responds to a cyberattack by authorizing the use of force. Chubb has also introduced explicit definitions for cyberattacks that pose a “systemic risk” or impact multiple entities simultaneously. But most of this language has not yet been tested by insurers trying to deny claims. No one, including the companies buying policies with these exclusions written into them, knows exactly which types of cyberattacks they exclude: it’s not clear which attacks courts will recognize as state-sponsored, as posing systemic risk, or as significantly impairing the ability of a state to function. For policyholders whose exclusions feature this sort of language, it matters a great deal how courts adjudicating claim disputes will parse and apply it.

These recent exclusions leave a large hole in companies’ coverage for cyber risks, placing even more pressure on the government to help. One of the reasons Chertoff gives for why a backstop is important is that it would help clarify for organizations which cyber risk-related costs they are and are not responsible for. That clarity will require very specific definitions of what types of cyberattacks the government will and will not pay for. And as the insurers know, it is quite difficult to anticipate what the next catastrophic cyberattack will look like, or to craft a policy that lets the government pay for only a narrow slice of cyberattacks in a varied and unpredictable threat landscape. Get this wrong, and the government will end up writing some very large checks.

And compared to terrorist attacks, large-scale cyberattacks are much more common and affect far more organizations, making them a far more costly risk that no one wants to take on. Organizations don’t want to—that’s why they buy insurance. Insurance companies don’t want to—that’s why they look to the government for assistance. But, so far, the U.S. government doesn’t want to take on the risk, either.

It is safe to assume, however, that regardless of whether a formal backstop is established, the federal government would step in and help pay for a sufficiently catastrophic cyberattack. If the electric grid went down nationwide, for instance, the U.S. government would certainly help cover the resulting costs. It’s possible to imagine any number of catastrophic scenarios in which an ad hoc backstop would be implemented hastily to address massive costs and damage, but that’s not primarily what insurers and their policyholders are looking for. They want reassurance and clarity up front about what types of incidents the government will help pay for. But to provide that kind of promise in advance, the government would likely have to pair it with security requirements, such as implementing multifactor authentication, strong encryption, or intrusion detection systems. Otherwise, it would create a moral hazard problem, in which companies could decide to invest less in security, knowing that the government will bail them out if they are the victims of a really expensive attack.

The U.S. government has been looking into the issue for a while, though, even before the 2023 National Cybersecurity Strategy was released. In 2022, for instance, the Federal Insurance Office in the Treasury Department published a Request for Comment on a “Potential Federal Insurance Response to Catastrophic Cyber Incidents.” The responses recommended a variety of different possible backstop models, ranging from expanding TRIP to encompass certain catastrophic cyber incidents, to creating a new structure similar to the National Flood Insurance Program that helps underwrite flood insurance, to trying a public-private partnership backstop model similar to the United Kingdom’s Pool Re program.

Many of these responses rightly noted that while it might eventually make sense to have some federal backstop, implementing such a program immediately might be premature. University of Edinburgh Professor Daniel Woods, for example, made a compelling case in Lawfare last year for why it was too soon to institute a backstop. Woods wrote:

One might argue similarly that a cyber insurance backstop would subsidize those companies whose security posture creates the potential for cyber catastrophe, such as the NotPetya attack that caused $10 billion in damage. Infection in this instance could have been prevented by basic cyber hygiene. Why should companies that do not employ basic cyber hygiene be subsidized by industry peers? The argument is even less clear for a taxpayer-funded subsidy.

The answer is to ensure that a backstop applies only to companies that follow basic cyber hygiene guidelines, or to insurers who require those hygiene measures of their policyholders. These are the types of controls many are familiar with: complicated passwords, app-based two-factor authentication, antivirus programs, and warning labels on emails. But this is easier said than done. To a surprising extent, it is difficult to know which security controls really work to improve companies’ cybersecurity. Scholars know what they think works: strong encryption, multifactor authentication, regular software updates, and automated backups. But there is not anywhere near as much empirical evidence as there ought to be about how effective these measures are in different implementations, or how much they reduce a company’s exposure to cyber risk.

This is largely due to companies’ reluctance to share detailed, quantitative information about cybersecurity incidents, because any such information may be used to criticize their security posture or, even worse, as evidence in a government investigation or class-action lawsuit. And when insurers and regulators try to gather that data, they often run into legal roadblocks: breach investigations are typically run by lawyers who claim that the results are shielded by attorney-client privilege or the work product doctrine. In some cases, companies don’t write down their findings at all, to avoid the possibility of their being used against them in court. Without this data, it’s difficult for insurers to be confident that the measures they require of their policyholders will actually improve those policyholders’ security and reduce claims under their policies. Similarly, it’s hard for the federal government to be confident that it can impose backstop requirements that will actually raise the level of cybersecurity hygiene nationwide.

The key to managing cyber risks—both large and small—and designing a cyber backstop is determining what security practices can effectively mitigate the impact of these attacks. If there were data showing which controls work, insurers could then require that their policyholders use them, in the same way they require policyholders to install smoke detectors or burglar alarms. Similarly, if the government had better data about which security tools actually work, it could establish a backstop that applied only to victims who have used those tools as safeguards. The goal of this effort, of course, is to improve organizations’ overall cybersecurity in addition to providing financial assistance.

There are a number of ways this data could be collected. Insurers could do it through their claims databases, aggregating data across carriers and sharing it with policymakers; they did something similar for car safety starting in the 1950s, when a group of insurance associations founded the Insurance Institute for Highway Safety. The government could use its expanding reporting authorities, for instance under the Cyber Incident Reporting for Critical Infrastructure Act of 2022, to require that companies report data about cybersecurity incidents, including which countermeasures were in place and the root causes of the incidents. Or the government could establish an entirely new entity, a Bureau for Cyber Statistics, devoted to collecting and analyzing this type of data.

Scholars and policymakers can’t design a cyber backstop until this data is collected and studied to determine what works best for cybersecurity. More broadly, organizations’ cybersecurity cannot improve until more is known about the threat landscape and the most effective tools for managing cyber risk.

If the cybersecurity community doesn’t pause to gather that data first, then it will never be able to meaningfully strengthen companies’ security postures against large-scale cyberattacks, and insurers and government officials will just keep passing the buck back and forth, while the victims are left to pay for those attacks themselves.

This essay was written with Josephine Wolff, and was originally published in Lawfare.

New iPhone Security Features to Protect Stolen Devices

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/12/new-iphone-security-features-to-protect-stolen-devices.html

Apple is rolling out a new “Stolen Device Protection” feature that seems well thought out:

When Stolen Device Protection is turned on, Face ID or Touch ID authentication is required for additional actions, including viewing passwords or passkeys stored in iCloud Keychain, applying for a new Apple Card, turning off Lost Mode, erasing all content and settings, using payment methods saved in Safari, and more. No passcode fallback is available in the event that the user is unable to complete Face ID or Touch ID authentication.

For especially sensitive actions, including changing the password of the Apple ID account associated with the iPhone, the feature adds a security delay on top of biometric authentication. In these cases, the user must authenticate with Face ID or Touch ID, wait one hour, and authenticate with Face ID or Touch ID again. However, Apple said there will be no delay when the iPhone is in familiar locations, such as at home or work.
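Apple hasn’t published the internal logic, but the behavior described above is straightforward to model. Here is a minimal sketch of that delayed-action logic in TypeScript; every name in it (verifyBiometric, isFamiliarLocation, and so on) is a hypothetical stand-in for illustration, not an Apple API.

    // Sketch of the "security delay" described above. All names are
    // hypothetical illustrations, not Apple's implementation or API.
    type BiometricCheck = () => Promise<boolean>; // Face ID / Touch ID result
    const SECURITY_DELAY_MS = 60 * 60 * 1000;     // one hour

    class StolenDeviceProtection {
      // Sensitive actions awaiting their second authentication:
      // action name -> earliest time (epoch ms) the retry may succeed.
      private pending = new Map<string, number>();

      constructor(
        private verifyBiometric: BiometricCheck,
        private isFamiliarLocation: () => boolean, // e.g., home or work
      ) {}

      // Gate a sensitive action such as changing the Apple ID password.
      async requestSensitiveAction(action: string): Promise<'allowed' | 'wait'> {
        if (!(await this.verifyBiometric())) {
          throw new Error('biometric authentication failed'); // no passcode fallback
        }
        if (this.isFamiliarLocation()) return 'allowed'; // no delay at familiar locations

        const readyAt = this.pending.get(action);
        if (readyAt === undefined) {
          // First authentication: start the one-hour security delay.
          this.pending.set(action, Date.now() + SECURITY_DELAY_MS);
          return 'wait';
        }
        if (Date.now() < readyAt) return 'wait'; // still inside the delay window

        // Second authentication after the delay: allow the action.
        this.pending.delete(action);
        return 'allowed';
      }
    }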

More details at the link.

Friday Squid Blogging: Unpatched Vulnerabilities in the Squid Caching Proxy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/11/friday-squid-blogging-unpatched-vulnerabilities-in-the-squid-caching-proxy.html

In a rare squid/security post, here’s an article about unpatched vulnerabilities in the Squid caching proxy.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Decoupling for Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/11/decoupling-for-security.html

This is an excerpt from a longer paper. You can read the whole thing (complete with sidebars and illustrations) here.

Our message is simple: it is possible to get the best of both worlds. We can and should get the benefits of the cloud while taking security back into our own hands. Here we outline a strategy for doing that.

What Is Decoupling?

In the last few years, a slew of ideas old and new have converged to reveal a path out of this morass, but they haven’t been widely recognized, combined, or used. These ideas, which we’ll refer to in the aggregate as “decoupling,” allow us to rethink both security and privacy.

Here’s the gist. The less someone knows, the less they can put you and your data at risk. In security, this is known as the principle of least privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.

To ensure that cloud services do not learn more than they should, and that a breach of one does not pose a fundamental threat to our data, we need two types of decoupling. The first is organizational decoupling: dividing private information among organizations such that none knows the totality of what is going on. The second is functional decoupling: splitting information among layers of software. Identifiers used to authenticate users, for example, should be kept separate from identifiers used to connect their devices to the network.
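To make functional decoupling concrete, here is a short sketch (mine, not from the paper) that derives independent per-layer identifiers from a single root secret with HKDF, so the identifier the authentication layer sees cannot be linked to the one the network layer sees. It assumes Node.js, whose built-in crypto module provides hkdfSync.

    // Sketch of functional decoupling: derive unlinkable identifiers per
    // layer from one root secret. Assumes Node.js 15+ for crypto.hkdfSync.
    import { hkdfSync, randomBytes } from 'node:crypto';

    // Per-user root secret, held only by the user (an assumption of this sketch).
    const rootSecret = randomBytes(32);

    // A distinct HKDF "info" label per layer yields identifiers that are
    // computationally unlinkable to anyone who lacks the root secret.
    function layerId(layer: 'auth' | 'network' | 'storage'): string {
      const okm = hkdfSync('sha256', rootSecret, Buffer.alloc(0), `id:${layer}`, 16);
      return Buffer.from(okm).toString('hex');
    }

    console.log(layerId('auth'));    // seen only by the identity provider
    console.log(layerId('network')); // seen only by the network carrier
    console.log(layerId('storage')); // seen only by the storage service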

In designing decoupled systems, cloud providers should be considered potential threats, whether due to malice, negligence, or greed. To verify that decoupling has been done right, we can learn from how we think about encryption: you’ve encrypted properly if you’re comfortable sending your message with your adversary’s communications system. Similarly, you’ve decoupled properly if you’re comfortable using cloud services that have been split across a noncolluding group of adversaries.
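Organizational decoupling can be made concrete the same way. As an illustration (again mine, not the paper’s): store data across two noncolluding providers as XOR shares, so either provider alone holds only uniform random noise.

    // Sketch of organizational decoupling via 2-of-2 XOR secret sharing:
    // each provider alone sees only uniformly random bytes.
    import { randomBytes } from 'node:crypto';

    function xorBytes(x: Uint8Array, y: Uint8Array): Buffer {
      const out = Buffer.alloc(x.length);
      for (let i = 0; i < x.length; i++) out[i] = x[i] ^ y[i];
      return out;
    }

    function split(secret: Buffer): [Buffer, Buffer] {
      const shareA = randomBytes(secret.length); // give to provider A
      const shareB = xorBytes(secret, shareA);   // give to provider B
      return [shareA, shareB];
    }

    // Only someone holding both shares can reconstruct the data.
    const [a, b] = split(Buffer.from('my private data'));
    console.log(xorBytes(a, b).toString()); // "my private data"

A real system would use a threshold scheme such as Shamir secret sharing rather than a two-way XOR split, but the security argument is the same: each provider on its own learns nothing.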

Read the full essay

This essay was written with Barath Raghavan, and previously appeared in IEEE Spectrum.

New York Increases Cybersecurity Rules for Financial Companies

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/11/new-york-increases-cybersecurity-rules-for-financial-companies.html

Another example of a large and influential state doing things the federal government won’t:

Boards of directors, or other senior committees, are charged with overseeing cybersecurity risk management, and must retain an appropriate level of expertise to understand cyber issues, the rules say. Directors must sign off on cybersecurity programs, and ensure that any security program has “sufficient resources” to function.

In a new addition, companies now face significant requirements related to ransom payments. Regulated firms must now report any payment made to hackers within 24 hours of that payment.

Ethical Problems in Computer Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/ethical-problems-in-computer-security.html

Tadayoshi Kohno, Yasemin Acar, and Wulf Loh wrote an excellent paper on ethical thinking within the computer security community: “Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversation”:

Abstract: The computer security research community regularly tackles ethical questions. The field of ethics / moral philosophy has for centuries considered what it means to be “morally good” or at least “morally allowed / acceptable.” Among philosophy’s contributions are (1) frameworks for evaluating the morality of actions—including the well-established consequentialist and deontological frameworks—and (2) scenarios (like trolley problems) featuring moral dilemmas that can facilitate discussion about and intellectual inquiry into different perspectives on moral reasoning and decision-making. In a classic trolley problem, consequentialist and deontological analyses may render different opinions. In this research, we explicitly make and explore connections between moral questions in computer security research and ethics / moral philosophy through the creation and analysis of trolley problem-like computer security-themed moral dilemmas and, in doing so, we seek to contribute to conversations among security researchers about the morality of security research-related decisions. We explicitly do not seek to define what is morally right or wrong, nor do we argue for one framework over another. Indeed, the consequentialist and deontological frameworks that we center, in addition to coming to different conclusions for our scenarios, have significant limitations. Instead, by offering our scenarios and by comparing two different approaches to ethics, we strive to contribute to how the computer security research field considers and converses about ethical questions, especially when there are different perspectives on what is morally right or acceptable. Our vision is for this work to be broadly useful to the computer security community, including to researchers as they embark on (or choose not to embark on), conduct, and write about their research, to program committees as they evaluate submissions, and to educators as they teach about computer security and ethics.

The paper will be presented at USENIX Security.

Computer Repair Technicians Are Stealing Your Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/computer-repair-technicians-are-stealing-your-data.html

Laptop technicians routinely violate the privacy of the people whose computers they repair:

Researchers at University of Guelph in Ontario, Canada, recovered logs from laptops after receiving overnight repairs from 12 commercial shops. The logs showed that technicians from six of the locations had accessed personal data and that two of those shops also copied data onto a personal device. Devices belonging to females were more likely to be snooped on, and that snooping tended to seek more sensitive data, including both sexually revealing and non-sexual pictures, documents, and financial information.

[…]

In three cases, Windows Quick Access or Recently Accessed Files had been deleted in what the researchers suspect was an attempt by the snooping technician to cover their tracks. As noted earlier, two of the visits resulted in the logs the researchers relied on being unrecoverable. In one, the researcher explained they had installed antivirus software and performed a disk cleanup to “remove multiple viruses on the device.” The researchers received no explanation in the other case.

[…]

The laptops were freshly imaged Windows 10 laptops. All were free of malware and other defects and in perfect working condition with one exception: the audio driver was disabled. The researchers chose that glitch because it required only a simple and inexpensive repair, was easy to create, and didn’t require access to users’ personal files.

Half of the laptops were configured to appear as if they belonged to a male and the other half to a female. All of the laptops were set up with email and gaming accounts and populated with browser history across several weeks. The researchers added documents, both sexually revealing and non-sexual pictures, and a cryptocurrency wallet with credentials.

A few notes. One: this is a very small study—only twelve laptop repairs. Two: some of the results were inconclusive, which indicated—but did not prove—log tampering by the technicians. Three: this study was done in Canada. There would probably be more snooping by American repair technicians.

The moral isn’t a good one: if you bring your laptop in to be repaired, you should expect the technician to snoop through your hard drive, taking what they want.

Research paper.

Recovering Passwords by Measuring Residual Heat

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/recovering-passwords-by-measuring-residual-heat.html

Researchers have used thermal cameras and machine-learning guessing techniques to recover passwords by measuring the residual heat left by fingers on keyboards. From the abstract:

We detail the implementation of ThermoSecure and make a dataset of 1,500 thermal images of keyboards with heat traces resulting from input publicly available. Our first study shows that ThermoSecure successfully attacks 6-symbol, 8-symbol, 12-symbol, and 16-symbol passwords with an average accuracy of 92%, 80%, 71%, and 55% respectively, and even higher accuracy when thermal images are taken within 30 seconds. We found that typing behavior significantly impacts vulnerability to thermal attacks, where hunt-and-peck typists are more vulnerable than fast typists (92% vs 83% thermal attack success if performed within 30 seconds). The second study showed that the keycaps material has a statistically significant effect on the effectiveness of thermal attacks: ABS keycaps retain the thermal trace of users presses for a longer period of time, making them more vulnerable to thermal attacks, with a 52% average attack accuracy compared to 14% for keyboards with PBT keycaps.

“ABS” is Acrylonitrile Butadiene Styrene, which some keys are made of. Others are made of Polybutylene Terephthalate (PBT). PBT keys are less vulnerable.

But, honestly, if someone can train a camera at your keyboard, you have bigger problems.

News article.

Bypassing Two-Factor Authentication

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/04/bypassing-two-factor-authentication.html

These techniques are not new, but they’re increasingly popular:

…some forms of MFA are stronger than others, and recent events show that these weaker forms aren’t much of a hurdle for some hackers to clear. In the past few months, suspected script kiddies like the Lapsus$ data extortion gang and elite Russian-state threat actors (like Cozy Bear, the group behind the SolarWinds hack) have both successfully defeated the protection.

[…]

Methods include:

  • Sending a bunch of MFA requests and hoping the target finally accepts one to make the noise stop.
  • Sending one or two prompts per day. This method often attracts less attention, but “there is still a good chance the target will accept the MFA request.”
  • Calling the target, pretending to be part of the company, and telling the target they need to send an MFA request as part of a company process.

FIDO2 multi-factor authentication systems are not susceptible to these attacks, because they are tied to a physical computer.
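The protection comes from binding credentials to both the device and the website’s origin, and that binding is visible in the WebAuthn API that FIDO2 uses in the browser. Here is a minimal registration sketch in TypeScript; the relying-party and user values are made up for illustration.

    // Browser-side WebAuthn registration sketch. The rp/user values are
    // hypothetical; in practice the challenge comes from your server.
    async function registerSecurityKey(): Promise<Credential | null> {
      const publicKey: PublicKeyCredentialCreationOptions = {
        challenge: crypto.getRandomValues(new Uint8Array(32)),
        rp: { id: 'example.com', name: 'Example Corp' }, // credential is scoped to this domain
        user: {
          id: new TextEncoder().encode('user-1234'),
          name: 'alice@example.com',
          displayName: 'Alice',
        },
        pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // ES256
        authenticatorSelection: { userVerification: 'preferred' },
      };
      // The private key never leaves the authenticator, and the browser
      // will only exercise it for the registered domain, which is why a
      // phishing page or a forwarded MFA prompt can't replay it.
      return navigator.credentials.create({ publicKey });
    }

Because the resulting signature covers the origin the browser actually saw, none of the prompt-bombing or impersonation methods above yields anything an attacker can reuse.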

And even though there are attacks against these two-factor systems, they’re much more secure than not having them at all. If nothing else, they block pretty much all automated attacks.

Mysterious Macintosh Malware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/03/mysterious-macintosh-malware.html

This is weird:

Once an hour, infected Macs check a control server to see if there are any new commands the malware should run or binaries to execute. So far, however, researchers have yet to observe delivery of any payload on any of the infected 30,000 machines, leaving the malware’s ultimate goal unknown. The lack of a final payload suggests that the malware may spring into action once an unknown condition is met.

Also curious, the malware comes with a mechanism to completely remove itself, a capability that’s typically reserved for high-stealth operations. So far, though, there are no signs the self-destruct feature has been used, raising the question of why the mechanism exists.

Besides those questions, the malware is notable for a version that runs natively on the M1 chip that Apple introduced in November, making it only the second known piece of macOS malware to do so. The malicious binary is more mysterious still because it uses the macOS Installer JavaScript API to execute commands. That makes it hard to analyze installation package contents or the way that package uses the JavaScript commands.

The malware has been found in 153 countries with detections concentrated in the US, UK, Canada, France, and Germany. Its use of Amazon Web Services and the Akamai content delivery network ensures the command infrastructure works reliably and also makes blocking the servers harder. Researchers from Red Canary, the security firm that discovered the malware, are calling the malware Silver Sparrow.

Feels government-designed, rather than criminal or hacker.

Another article. And the Red Canary analysis.