
Incident Reporting Regulations Summary and Chart

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2022/08/26/incident-reporting-regulations-summary-and-chart/


A growing number of regulations require organizations to report significant cybersecurity incidents. We’ve created a chart that summarizes 11 proposed and current cyber incident reporting regulations and breaks down their common elements, such as who must report, what cyber incidents must be reported, the deadline for reporting, and more.

Download the chart now

This chart is intended as an educational tool to enhance the security community’s awareness of upcoming public policy actions and provide a big-picture look at how the incident reporting regulatory environment is unfolding. Please note that this chart is not comprehensive (there are even more incident reporting regulations out there!) and is current only as of August 8, 2022. Many of the regulations are subject to change.

This summary is for educational purposes only and nothing in this summary is intended as, or constitutes, legal advice.

Peter Woolverton led the research and initial drafting of this chart.


Get the latest stories, expertise, and news about security today.


Avoiding Smash and Grab Under the SEC’s Proposed Cyber Rule

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2022/08/23/avoiding-smash-and-grab-under-the-secs-proposed-cyber-rule/


The SEC recently proposed a regulation to require all public companies to report cybersecurity incidents within four days of determining that the incident is material. While Rapid7 generally supports the proposed rule, we are concerned that the rule requires companies to publicly disclose a cyber incident before the incident has been contained or mitigated. This post explains why this is a problem and suggests a solution that still enables the SEC to drive companies toward disclosure.

(Terminology note: “Public companies” refers to companies whose stock is traded on public US exchanges, and information is “material” if “there is a substantial likelihood that a reasonable shareholder would consider it important.” “Containment” aims to prevent a cyber incident from spreading. Containment is part of “mitigation,” which includes actions to reduce the severity of an event or the likelihood of a vulnerability being exploited, though may fall short of full remediation.)

In sum: The public disclosure of material cybersecurity incidents prior to containment or mitigation may cause greater harm to investors than a delay in public disclosure. We propose that the SEC provide an exemption to the proposed reporting requirements, enabling a company to delay public disclosure of an uncontained or unmitigated incident if certain conditions are met. Additionally, we explain why we believe other proposed solutions may not meet the SEC’s goals of transparency and avoidance of harm to investors.

Distinguished by default public disclosure

The purpose of the SEC’s proposed rule is to help enable investors to make informed investment decisions. This is a reflection of the growing importance of cybersecurity to corporate governance, risk assessment, and other key factors that stockholders weigh when investing. With the exception of reporting unmitigated incidents, Rapid7 largely supports this perspective.

The SEC’s proposed rule would (among other things) require companies to disclose material cyber incidents on Form 8-K, which is publicly available via the EDGAR system. Crucially, the SEC’s proposed rule makes no distinction between public disclosure of incidents that are contained or mitigated and incidents that are not yet contained or mitigated. While the public-by-default nature of the disclosure aligns with the SEC’s purpose in proposing the rule, it also creates new problems.

In contrast to the SEC’s proposed rule, the purpose of most other incident reporting regulations is to strengthen cybersecurity – a top global policy priority. As such, most other cyber incident reporting regulators (CISA, NERC, FDIC, the Federal Reserve, OCC, NYDFS, and others) do not typically make incident reports public in a way that identifies the affected organization. In fact, some regulations (such as CIRCIA and the 2021 TSA pipeline security directive) classify company incident reports as sensitive information exempt from FOIA.

Beyond regulations, established cyber incident response protocol is to avoid tipping off an attacker until the incident is contained and the risk of further damage has been mitigated. See, for example, CISA’s Incident Response Playbook (especially sections on opsec) and NIST’s Computer Security Incident Handling Guide (especially Section 2.3.4). For similar reasons, it is commonly the goal of coordinated vulnerability disclosure practices to avoid, when possible, public disclosure of a vulnerability until the vulnerability has been mitigated. See, for example, the CERT Guide to Coordinated Disclosure.

While it may be reasonable to require disclosure of a contained or mitigated incident within four days of determining its materiality, a strict requirement for public disclosure of an unmitigated or ongoing incident is likely to expose companies and investors to additional danger. Investors are not the only group that may act on a cyber incident report, and such information may be misused.

Smash and grab harms investors and misprices securities

Cybercriminals often aim to embed themselves in corporate networks without the company knowing. Maintaining a low profile lets attackers steal data over time, quietly moving laterally across networks, steadily gaining greater access – sometimes over a period of years. But when the cover is blown and the company knows about its attacker? Forget secrecy, it’s smash and grab time.

Public disclosure of an unmitigated or uncontained cyber incident will likely lead to attacker behaviors that cause additional harm to investors. Note that such acts would be in reaction to the public disclosure of an unmitigated incident, and not a natural result of the original attack. For example:

  • Smash and grab: A discovered attacker may forgo stealth and accelerate data theft or extortion activities, causing more harm to the company (and therefore its investors). Consider this passage from the MS-ISAC’s 2020 Ransomware Guide: “Be sure [to] avoid tipping off actors that they have been discovered and that mitigation actions are being undertaken. Not doing so could cause actors to move laterally to preserve their access [or] deploy ransomware widely prior to networks being taken offline.”
  • Scorched earth: A discovered attacker may engage in anti-forensic activity (such as deleting logs), hindering post-incident investigations and intelligence sharing that could prevent future attacks that harm investors. From CISA’s Playbook: “Some adversaries may actively monitor defensive response measures and shift their methods to evade detection and containment.”
  • Pile-on: Announcing that a company has an incident may cause other attackers to probe the company and discover the vulnerability or attack vector from the original incident. If the incident is not yet mitigated, the copycat attackers can cause further harm to the company (and therefore its investors). From the CERT Guide to CVD: “Mere knowledge of a vulnerability’s existence in a feature of some product is sufficient for a skillful person to discover it for themselves. Rumor of a vulnerability draws attention from knowledgeable people with vulnerability finding skills — and there’s no guarantee that all those people will have users’ best interests in mind.”
  • Supply chain: Public disclosure of an unmitigated cybersecurity incident may alert attackers to a vulnerability that is present in other companies, the exploitation of which can harm investors in those other companies. Publicly disclosing “the nature and scope” of material incidents within four business days risks exposing enough detail of an otherwise unique zero-day to encourage rediscovery and reimplementation by other criminal and espionage groups against other organizations. For example, fewer than 100 organizations were actually exploited through the SolarWinds supply chain attack, but up to 18,000 organizations were at risk.

In addition, requiring public disclosure of uncontained or unmitigated cyber incidents may result in mispricing the stock of the affected company. By contradicting best practices for cyber incident response and inviting new attacks, the premature public disclosure of an uncontained or unmitigated incident may provide investors with an inaccurate measure of the company’s true ability to respond to cybersecurity incidents. Moreover, a premature disclosure during the incident response process may result in investors receiving inaccurate information about the scope or impact of the incident.

Rapid7 is not opposed to public disclosure of unmitigated vulnerabilities or incidents in all circumstances, and our security researchers publicly disclose vulnerabilities when necessary. However, public disclosure of unmitigated vulnerabilities typically occurs after failure to mitigate (such as due to inability to engage the affected organization), or when users should take defensive measures before mitigation because ongoing exploitation of the vulnerability “in the wild” is actively harming users. By contrast, the SEC’s proposed rule would rely on a public disclosure requirement on a restrictive timeline in nearly all cases, creating the risk of additional harm to investors that can outweigh the benefits of public disclosure.

Proposed solution

Below, we suggest a solution that we believe achieves the SEC’s ultimate goal of investor protection by requiring timely disclosure of cyber incidents while avoiding the unnecessary additional harm to investors that may result from premature public disclosure.

Specifically, we suggest that the proposed rule remain largely the same — i.e., the SEC continues to require that companies determine whether the incident is material as soon as practicable after discovery of the cyber incident, and file a report on Form 8-K within four days of the materiality determination under normal circumstances. However, we propose that the rule be revised to also provide companies with a temporary exemption from public disclosure if each of the below conditions is met:

  • The incident is not yet contained or otherwise mitigated to prevent additional harm to the company and its investors;
  • The company reasonably believes that public disclosure of the uncontained or unmitigated incident may cause substantial additional harm to the company, its investors, or other public companies or their investors;
  • The company reasonably believes the incident can be contained or mitigated in a timely manner; and
  • The company is actively engaged in containing or mitigating the incident in a timely manner.

The determination of whether this exemption applies may be made simultaneously with the determination of materiality. If the exemption applies, the company may delay public disclosure until any of the conditions no longer holds, at which point it must publicly disclose the cyber incident via Form 8-K no later than four days after the date on which the exemption ceases to apply. The 8-K disclosure could note that, prior to filing the 8-K, the company relied on the exemption from disclosure. Existing insider trading restrictions would, of course, continue to apply during the public disclosure delay.
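To make the mechanics concrete, here is a minimal sketch of how the proposed exemption test and the resulting Form 8-K deadline might be expressed in code. All field names are hypothetical labels for the four conditions above, the sketch counts calendar days rather than business days, and it is an aid to understanding the proposal, not legal advice:

```python
from datetime import datetime, timedelta

def disclosure_may_be_delayed(incident):
    """Proposed exemption test: public disclosure may be delayed only
    if ALL four conditions hold at once."""
    return all([
        # 1. The incident is not yet contained or otherwise mitigated.
        not incident["contained_or_mitigated"],
        # 2. The company reasonably believes disclosure would cause
        #    substantial additional harm.
        incident["believes_disclosure_causes_substantial_harm"],
        # 3. The company reasonably believes timely mitigation is possible.
        incident["believes_timely_mitigation_possible"],
        # 4. The company is actively containing or mitigating the incident.
        incident["actively_mitigating"],
    ])

def form_8k_due_date(materiality_date, exemption_lapse_date=None):
    """The 8-K is due four days after the materiality determination, or,
    if the exemption applied, four days after it ceased to apply."""
    start = exemption_lapse_date or materiality_date
    return start + timedelta(days=4)
```

In this framing, an uncontained incident that meets all four conditions may be withheld from public disclosure; once any condition lapses (e.g., containment is achieved), the four-day clock starts from that date instead of from the materiality determination.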

If an open-ended delay in public disclosure for containment or mitigation is unacceptable to the SEC, then we suggest that the exemption only be available for 30 days after the determination of materiality. In our experience, the vast majority of incidents can be contained and mitigated within that time frame. However, cybersecurity incidents can vary greatly, and there may nonetheless be rare outliers where the mitigation process exceeds 30 days.

Drawbacks of other solutions

Rapid7 is aware of other solutions being floated to address the problem of public disclosure of unmitigated cyber incidents. However, these carry drawbacks: they either do not align with the purpose of the SEC rule or may not make sense for cybersecurity. For example:

  • AG delay: The SEC’s proposed rule considers allowing a delay in reporting the incident when the Attorney General (AG) determines the delay is in the interest of national security. This is an appropriate delay, but insufficient on its own. This AG delay would apply to a very small fraction of major cyber incidents and not prevent the potential harms described above in the vast majority of cases.
  • Law enforcement delay: The SEC’s proposed rule considers, and then rejects, a delay when incident reporting would hinder a law enforcement investigation. We believe this too would be an appropriate delay, to ensure law enforcement can help prevent future cyber incidents that would harm investors. However, it is unclear if this delay would be triggered in many cases. First, the SEC’s proposed timeframe (four days after concluding the incident is material) poses a tight turnaround for law enforcement to start a new investigation or add to an existing investigation, determine how disclosure might impact the investigation, and then request delay from the SEC. Second, law enforcement agencies already have investigations opened against many cybercriminal groups, so public disclosure of another incident may not make a significant difference in the investigation, even if public disclosure of the incident would cause harm. Although a law enforcement delay would be used more than the AG delay, we still anticipate it would apply to only a fraction of incidents.
  • Vague disclosures: Another potential solution is to continue to require public companies to disclose unmitigated cyber incidents on the proposed timeline, but to allow the disclosures to be so vague that it is unclear whether the incident has been mitigated. Yet an attacker embedded in a company network is unlikely to be fooled by a vague incident report from the same company, and even a vague report could encourage new attackers to try to get a foothold in. In addition, very vague disclosures are unlikely to be useful for investor decision-making.
  • Materiality after mitigation: Another potential solution is to require a materiality determination only after the incident has been mitigated. However, this risks unnecessary delays in mitigation to avoid triggering the deadline for disclosure, even for incidents that could be mitigated within the SEC’s proposed timeline. Although containment or mitigation of an incident is important prior to public disclosure of the incident, completion of mitigation is not necessarily a prerequisite to determining the seriousness (i.e., materiality) of an incident.

Balancing risks and benefits of transparency

The SEC has an extensive list of material information that it requires companies to disclose publicly on 8-Ks – everything from bankruptcies to mine safety. However, public disclosure of any of these other items is not likely to prompt new criminal actions that bring additional harm to investors. Public disclosure of unmitigated cyber incidents poses unique risks compared with other disclosures and should be considered in that light.

The SEC has long been among the most forward-looking regulators on cybersecurity issues. We thank the Commission for acknowledging the significance of cybersecurity to corporate management and for taking the time to listen to feedback from the community. Rapid7 agrees on the usefulness of disclosure of material cybersecurity incidents, but we encourage the SEC to ensure its public reporting requirement does not undermine its own goals or provide more opportunities for attackers.



Navigating the Evolving Patchwork of Incident Reporting Requirements

Post Syndicated from Peter Woolverton original https://blog.rapid7.com/2022/08/10/navigating-the-evolving-patchwork-of-incident-reporting-requirements/


In March 2022, President Biden signed into law the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), a bipartisan initiative that empowers CISA to require cyber incident reporting from critical infrastructure owners and operators. Rapid7 is supportive of CIRCIA and cyber incident reporting in general, but we also encourage regulators to ensure reporting rules are streamlined and do not impose unnecessary burdens on companies that are actively recovering from cyber intrusions.

Although a landmark legislative change, CIRCIA is just one highly visible example of a broader trend. Incident reporting has emerged as a predominant cybersecurity regulatory strategy across government. Numerous federal and state agencies are implementing their own cyber incident reporting requirements under their respective rulemaking authorities – such as SEC, FTC, the Federal Reserve, OCC, NCUA, NERC, TSA, NYDFS, and others. Several such rules are already in force in US law, with at least three more likely to become effective within the next year.

The trend is not limited to the US. Several international governing bodies have proposed similar cyber incident reporting rules, such as the European Union’s (EU) NIS-2 Directive.

Raising the bar for security transparency through incident reporting is a productive step in a positive direction. Incident reporting requirements can help the government manage sectoral risk, encourage a higher level of private-sector cyber hygiene, and enhance intrusion remediation and prevention capabilities. But the rapid embrace of this new legal paradigm may have created too much of a good thing, and the emerging regulatory environment risks becoming unmanageable.

Current state

Cyber incident reporting rules that enforce overlapping or contradictory requirements can impose undue compliance burdens on organizations that are actively responding to cyberattacks. To illustrate the problem, consider the potential experience of a hypothetical company – let’s call it Energy1. Energy1 is a US-based, publicly traded utility company that owns and operates energy generation plants, electrical transmission systems, and natural gas distribution lines. If Energy1 experiences a significant cyberattack, it may be required to submit the following reports:

  • Within one hour, provide to NERC – under NERC CIP rules – a report with preliminary details about the incident and its functional impact on operations.
  • Within 24 hours, provide to TSA – under the pipeline security directive – a report with a complete description of the incident, its functional impact on business operations, and the details of remediation steps.
  • Within 72 hours, provide to CISA – under CIRCIA – a complete description of the incident, details of remediation steps, and threat intelligence information that may identify the perpetrator.
  • Within 96 hours, provide to SEC – under the SEC’s proposed rule – a complete description of the incident and its impact, including whether customer data was compromised.
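The overlapping clocks in this hypothetical can be sketched as a small lookup table. The deadline figures mirror the bullets above; the regulator labels are shorthand, and in reality each rule’s clock starts from a different trigger event (e.g., discovery for CIRCIA versus the materiality determination for the SEC’s proposed rule), which this sketch simplifies to a single start time:

```python
from datetime import datetime, timedelta

# Hours to report, per regulator, for the hypothetical Energy1 scenario.
# Labels are informal shorthand, not official rule names.
REPORTING_DEADLINES_HOURS = {
    "NERC CIP": 1,
    "TSA pipeline directive": 24,
    "CISA (CIRCIA)": 72,
    "SEC (proposed rule)": 96,
}

def report_due_times(clock_start):
    """Return each regulator's report deadline as an absolute time,
    ordered from soonest to latest."""
    return {
        regulator: clock_start + timedelta(hours=hours)
        for regulator, hours in sorted(
            REPORTING_DEADLINES_HOURS.items(), key=lambda kv: kv[1]
        )
    }
```

Run against one clock-start time, this yields four different absolute deadlines for the same incident, a small illustration of why a unified intake point and deconflicted timelines would reduce compliance friction.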

In our hypothetical scenario, Energy1 may need to rapidly compile the necessary information to comply with each different reporting rule or statute, all while balancing the urgent need to remediate and recover from a cyber intrusion. Furthermore, if Energy1 operates in non-US markets as well, it may be subject to several more reporting requirements, such as those proposed under the draft NIS-2 Directive in the EU or the CERT-IN rule in India. Many of these regulations would also require subsequent status updates after the initial report.

The example above demonstrates the complexity of the emerging patchwork of incident reporting requirements. Legal compliance in this new environment creates a number of challenges for the private sector and the government. For example:

  • Redundant requirements: Unnecessarily duplicative compliance requirements imposed in the wake of a cyber incident can draw critical resources away from incident remediation, potentially leading to lower-quality data submitted in the reports.
  • Public vs. private disclosure: Most reports are held privately by regulators, but the SEC’s proposed rule would require companies to file public reports within 96 hours of determining that an incident is material. Public disclosure before the incident is contained or mitigated may expose the affected company to further risk of cyberattack. In addition, premature public reporting of incidents prior to mitigation may not provide an accurate reflection of the affected company’s cyber incident response capabilities.
  • Inconsistent requirements: The definition of what is reportable is not consistent across agency rules. For example, the SEC requires reporting of cyber incidents that are “material” to a reasonable investor, whereas NERC requires reporting of almost any cyber incident, including failed “attempts” at cyber intrusion. The lack of a uniform definition of reportability adds another layer of complexity to the compliance process.
  • Process inconsistencies: As demonstrated in the Energy1 example, all incident reporting rules and proposed rules have different deadlines. In addition, each rule and proposed rule has different required reporting formats and methods of submission. These process inconsistencies add friction to the compliance process.


The key issues outlined above may be addressed by the Cyber Incident Reporting Council (CIRC), an interagency working group led by the Department of Homeland Security (DHS). This Council was established under CIRCIA and is tasked with harmonizing existing incident reporting requirements into a more unified regulatory regime. A readout of the Council’s first meeting, convened on July 25, stated CIRC’s intent to “reduce [the] burden on industry by advancing common standards for incident reporting.”

In addition to DHS, CIRC includes representatives from across government, including from the Departments of Justice, Commerce, Treasury, and Energy among others. It is not yet clear from the Council’s initial meeting how exactly CIRC will reshape cyber incident reporting regulations, or whether such changes will be achievable through executive action or whether new legislation will be needed. The Council will release a report with recommendations by the end of 2022.

Rapid7 urges CIRC to consider several harmonization strategies intended to streamline compliance while maintaining the benefits of cyber incident reporting, such as:

  • Unified process: When practically possible, develop a single intake point for all incident reporting submissions with a universal format accepted by multiple agencies. This would help eliminate the need for organizations to submit several reports to different agencies with different formats and on different timetables.
  • Deconflicted requirements: Agree on a more unified definition of what constitutes a reportable cyber incident, and build toward more consistent reporting requirements that satisfy the needs of multiple agency rules.
  • Public disclosure delay: Releasing incident reports publicly before affected organizations have time to contain the breach may put the security of the company and its customers at unnecessary risk. Requirements that involve public disclosure, such as proposed rules from the SEC and FTC, should consider delaying and coordinating disclosure timing with the affected company.

Some agencies in the federal government are already designing incident reporting rules with harmonization in mind. Rather than building out three separate rules, the Federal Reserve, FDIC, and OCC designed a single universal incident reporting requirement for all three agencies. The rule requires that only one report be submitted, to whichever of the three agencies is the affected company’s “primary regulator.” The sharing of reports between agencies is handled internally, relieving companies of the burden of submitting multiple reports to multiple agencies. Rapid7 supports this approach and would encourage the CIRC to pursue a similarly streamlined strategy in its harmonization efforts where possible.

Striking the right balance

Rapid7 supports the growing adoption of cyber incident reporting. Greater cybersecurity transparency between government and industry can deliver considerable benefits. However, unnecessarily overlapping or contradictory reporting requirements may cause harm by detracting from the critical work of incident response and recovery. We encourage regulators to streamline and simplify the process in order to capture the full benefits of incident reporting without exposing organizations to unnecessary burden or risk in the process.


New US Law to Require Cyber Incident Reports

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2022/03/10/new-us-laws-to-require-cyber-incident-reports/


On March 9, 2022, the US Congress passed the Cyber Incident Reporting for Critical Infrastructure Act of 2022. Once signed by the President, it will become law. The law will require critical infrastructure owners and operators to report cyber incidents and ransomware payments. The legislation was developed in the wake of the SolarWinds supply chain attack and recently gained additional momentum from the Russia-Ukraine conflict. This post will walk through highlights from the law.

Rapid7 supports efforts to increase transparency and information sharing in order to strengthen awareness of the cybersecurity threat landscape and prepare for cyberattacks. We applaud passage of the Cyber Incident Reporting for Critical Infrastructure Act.

What’s this law about?

The Cyber Incident Reporting for Critical Infrastructure Act will require critical infrastructure owners and operators — such as water and energy utilities, health care organizations, some IT providers, etc. — to submit reports to the Cybersecurity and Infrastructure Security Agency (CISA) for cybersecurity incidents and ransomware payments. The law will provide liability protections for submitting reports to encourage compliance, but noncompliance can result in a civil lawsuit. The law will also require the government to analyze, anonymize, and share information from the reports to provide agencies, Congress, companies, and the public with a better view of the cyber threat landscape.

An important note about the timeline: The requirements do not take effect until CISA issues a clarifying regulation. The law will require CISA to issue this regulation within 42 months (though CISA may take less time), so the requirements may not be imminent. In the meantime, the Cyber Incident Reporting for Critical Infrastructure Act provides information on what CISA’s future rule must address.

We detail these items from the law below.

Requiring reporting of cyber incidents and ransom payments

  • Report requirement. Critical infrastructure owners and operators must report substantial cybersecurity incidents to CISA, as well as any ransom payments. (However, as described below, this requirement does not come into effect until CISA issues a regulation.)
  • Type of incident. The types of cyber incidents that must be reported shall include actual breaches of sensitive information and attacks that disrupt business or operations. Mere threats or failed attacks do not need to be reported.
  • Report timeline. For a cyber incident, the report must be submitted within 72 hours after the affected organization determines the incident is substantial enough that it must be reported. For ransom payments, the report must be submitted within 24 hours after the payment is made.
  • Report contents. The reports must include a list of information, including attacker tactics and techniques. Information related to the incident must be preserved until the incident is fully resolved.
  • Enforcement. If an entity does not comply with reporting requirements, CISA may issue a subpoena to compel entities to produce the required information. The Justice Department may initiate a civil lawsuit to enforce the subpoena. Entities that do not comply with the subpoena may be found in contempt of court.

CISA rule to fill in details

  • Rule requirement. CISA is required to issue a regulation that will establish details on the reporting requirements. The reporting requirements do not take effect until this regulation is final.
  • Rule timeline. CISA has up to 42 months to finalize the rule (but the agency can choose to take less time).
  • Rule contents. The rule will establish the types of cyber incidents that must be reported, the types of critical infrastructure entities that must report, the content to be included in the reports, the mechanism for submitting the reports, and the details for preserving data related to the reports.

Protections for submitting reports

  • Not used for regulation. Reports submitted to CISA cannot be used to regulate the activities of the entity that submitted the report.
  • Privileges preserved. The covered entity may designate the reports as commercial and proprietary information. Submission of a report shall not be considered a waiver of any privilege or legal protection.
  • No liability for submitting. No court may maintain a cause of action against any person or entity on the sole basis of submitting a report in compliance with this law.
  • Cannot be used as evidence. Reports, and material used to prepare the reports, cannot be received as evidence or used in discovery proceedings in any federal or state court or regulatory body.

What the government will do with the report information

  • Authorized purposes. The federal government may use the information in the reports for cybersecurity purposes, responding to safety or serious economic threats, and preventing child exploitation.
  • Rapid response. For reports on ongoing threats, CISA must rapidly disseminate cyber threat indicators and defensive measures to stakeholders.
  • Information sharing. CISA must analyze reports and share information with other federal agencies, Congress, private sector stakeholders, and the public. CISA’s information sharing must include assessment of the effectiveness of security controls, adversary tactics and techniques, and the national cyber threat landscape.

What’s Rapid7’s view of the law?

Rapid7 views the Cyber Incident Reporting for Critical Infrastructure Act as a positive step. Cybersecurity is essential to keeping critical infrastructure safe, and this law will give federal agencies more insight into attack trends and could help provide early warnings of major vulnerabilities or attacks in progress before they spread. The law carefully avoids requiring reports too early in the incident response process and provides protections to encourage companies to be open and transparent in their reports.

Still, the Cyber Incident Reporting for Critical Infrastructure Act does little to ensure critical infrastructure has safeguards that prevent cyber incidents from occurring in the first place. This law is unlikely to change the fact that many critical infrastructure entities are under-resourced and, in some cases, have security maturity that is not commensurate with the risks they face. The law’s enforcement mechanism (a potential contempt of court penalty) is not especially strong, and the final reporting rules may not be implemented for another 3.5 years. Ultimately, the law’s effect may be similar to state breach notification laws, which raised awareness but did not prompt widespread adoption of security safeguards for personal information until states implemented data security laws.

So, the Cyber Incident Reporting for Critical Infrastructure Act is a needed and helpful improvement — but, as always, there is more to be done.


Get the latest stories, expertise, and news about security today.

Prudent Cybersecurity Preparation for the Potential Russia-Ukraine Conflict

Post Syndicated from boB Rudis original https://blog.rapid7.com/2022/02/15/prudent-cybersecurity-preparation-for-the-potential-russia-ukraine-conflict/

Prudent Cybersecurity Preparation for the Potential Russia-Ukraine Conflict

Tensions between Russia and Ukraine remain elevated, with a high degree of uncertainty surrounding the likelihood of military conflict and its aftermath. As the US Cybersecurity and Infrastructure Security Agency (CISA) noted in a recent statement on these circumstances, while “​​there are not currently any specific credible threats to the US homeland,” there is the “potential for the Russian government to consider escalating its destabilizing actions in ways that may impact others outside of Ukraine.”

Heightened risk

There are reports that Russia is leveraging offensive cyber capabilities as the situation between Russia and Ukraine escalates. If the situation draws closer to conflict, these actions may also extend to potential retaliatory cyberattacks, or cyberattack campaigns, against critical physical and cyber infrastructure within countries that provide support to Ukraine. This may seem alarmist, but US and other Western entities have been under considerable attack from Russian-affiliated hacking groups for years. Government officials have long reported that such activities are supported or, at best, overlooked by the Russian government, and commentators and researchers have suggested this helps advance Russia’s political agenda. In June 2021, the sustained high level of these attacks against US critical infrastructure resulted in US President Biden addressing the matter directly with Russia’s President Putin.

Moreover, events like NotPetya and Conficker have shown us that targeting in cyberspace is rarely precise, and collateral damage from cyberattacks can spread far beyond the original target.

Given the increased risk of damage from cyberattacks — whether a direct attack against Ukraine and its supporters or an indirect effect from an attack — it is prudent, as CISA notes, that “all organizations — regardless of size — adopt a heightened posture when it comes to cybersecurity and protecting their most critical assets” and business processes.

The following actions are highly recommended for all organizations. These practices should be taken even in the absence of a geopolitical conflict — threats to organizational cybersecurity were present before the recent Ukraine-Russian tensions and will be present afterwards.

Preparation for direct cyberattack

Fending off an attack from a well-resourced nation state is a nightmare scenario for most cybersecurity teams. However, there are some fundamental steps your organization can take to reduce the likelihood of becoming a target, limit the severity of any damage, and lower an attacker’s chances of success.

CISA’s Shields Up advisory itemizes many steps that are sound practices to defend against any potential cyberattack, and we encourage all organizations to review each of them as part of their preparation plans.

Fundamentally, the guidance comes down to ensuring you have:

  • Safe and resilient configurations in your external internet and cloud asset and application deployments
  • Visibility into processes and network activity across all components of critical business functions
  • A well-tested incident response process in place to respond quickly and effectively to all cyber incursions
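As a small illustration of the visibility point, here is a minimal sketch (the host and port list are placeholders; only probe assets you own or are authorized to test) that compares live port exposure against what your asset inventory says should be open:

```python
import socket

def check_exposure(host, expected_open, timeout=1.0):
    """Return {port: is_open} for ports you intend to expose on a host you own."""
    results = {}
    for port in expected_open:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising, so closed
        # ports simply come back as False
        results[port] = s.connect_ex((host, port)) == 0
        s.close()
    return results

# Compare the live answer against the inventory's expectation
print(check_exposure("127.0.0.1", [22, 443]))
```

Any mismatch between the live answer and the inventory (unexpected open ports, or expected services that are unreachable) is a prompt to investigate.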

If your organization currently works with Ukrainian organizations, we echo CISA’s guidance to take extra care to monitor, inspect, and isolate traffic from those organizations and closely review access controls for that traffic.

US organizations should also keep CISA’s contact information (located at the end of their report) handy in both digital and physical form (in the event you cannot access digital assets) so you can engage them or the FBI in the event you are the victim of a direct cyberattack.

Preparation for cyber critical infrastructure attack

As noted, Russia is a very capable cyber adversary, but they cannot attack every asset/organization individually, all at once. There is a greater likelihood for larger-scale disruption or damage through the targeting of digital resources and Internet services that many people and organizations rely on.

Such attacks could take many forms, such as:

  • Denial of Service (DoS) attacks against central/large DNS providers and other “internet plumbing” services (remember the Dyn DoS attack that brought down Twitter and many other sites back in 2016), which could cut off access to your web- and app-based client-facing resources as well as to any Software-as-a-Service (SaaS) offerings you rely on, such as Salesforce, Concur, Zendesk, DocuSign, and other widely used services.
  • DoS attacks against cloud business suite providers, such as Google Workspace or Microsoft 365, which could disrupt critical business communications for a period of time.
  • Extended DoS and targeted “destructionware” attacks, which could prevent the operation of services and execution of key business processes. Like NotPetya and Olympic Destroyer, destructionware aims to — via encryption or deletion — destroy the capabilities of the machines it infects.
  • Large-scale DoS attacks against critical network routing segments, and mass BGP hijacking campaigns.

Now would be a good time to itemize all the third-party dependencies in your critical business processes and to draft a plan for ensuring continuity of these processes if each of those services became unavailable for some period of time. It would also be prudent to start identifying alternate providers for each service component and drafting rapid migration plans for each.
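One lightweight way to start that itemization is a simple structured inventory; the sketch below (provider names are hypothetical) flags the dependencies that still lack an alternate provider:

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str                    # e.g. "DNS", "email", "payroll SaaS"
    provider: str
    business_processes: list     # critical processes that rely on it
    fallback_provider: str = ""  # empty = no alternative identified yet

def continuity_gaps(deps):
    """Dependencies that would halt a business process with no fallback."""
    return [d.name for d in deps if not d.fallback_provider]

inventory = [
    Dependency("DNS", "ExampleDNS Co.", ["all web properties"], "SecondaryDNS Inc."),
    Dependency("Payroll", "ExamplePayroll SaaS", ["HR"]),
]
print(continuity_gaps(inventory))  # prints ['Payroll']
```

The gaps the report surfaces are where alternate providers and rapid migration plans are needed first.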

For the network-level attacks mentioned above, there isn’t much you can do when it comes to centralized network point DoS, but you can help increase your resilience to BGP hijacking by implementing safe BGP practices and encouraging your business partners and ISPs to do the same.

If tensions become seriously strained, there is a non-zero (but likely very low) chance of direct cyberattacks aimed at causing damage to US and allied physical infrastructure. The US alone has a large number of critical infrastructure facilities — many of which are privately owned — that are still in the process of strengthening their cybersecurity defenses and capabilities. This includes entities that all businesses rely on, such as electricity providers, emergency health services, transportation, and financial institutions.

Acknowledging the relatively low likelihood but high impact of a crippling cyberattack, now would be a good time to review and update (as necessary) your business continuity and disaster recovery (BC/DR) plans and playbooks, and perhaps run an exercise (or two!) that involves loss of one or more critical infrastructure components.

Don’t panic

While there is cause for concern and preparation, there is no cause for panic or overreaction. It is, however, very appropriate to take some time to review your security posture, understand this heightened threat level, and engage stakeholders in an assessment of whether — and how — you should shore up any areas that have gaps.


Get the latest stories, expertise, and news about security today.

Additional reading:

How Ransomware Is Changing US Federal Policy

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2022/01/26/how-ransomware-is-changing-us-federal-policy/

How Ransomware Is Changing US Federal Policy

In past decades, attackers breaching systems and stealing sensitive information prompted a wave of regulations focused on consumer privacy and breach notification. The current surge in ransomware attacks is prompting a new wave of action from policymakers. Unlike the more abstract harms threatened by breaches of personal information, ransomware will grind systems to a halt, suspending business and government operations and potentially threatening health and safety. One indication of the shift in awareness of this form of cybercrime is that President Biden addressed the ransomware threat multiple times in 2021.

The increased stakes of the ransomware threat are pushing regulators to take a harder look at whether regulatory requirements for cybersecurity safeguards are effective or if new requirements are needed to help combat the threat. Federal agencies are also stepping up their coordination on information sharing and incident reporting, and the Administration is growing its collaboration with international partners and the private sector. Let’s look at a few recent and ongoing initiatives.

Cybersecurity requirements for critical infrastructure

In March 2021, Secretary of Homeland Security Mayorkas announced a series of initiatives to strengthen cybersecurity for critical infrastructure, citing ransomware as a national security threat driving the effort. Less than two months later, the Colonial Pipeline ransomware event disrupted the East Coast fuel supply.

Not long after the Colonial attack, the Transportation Security Administration (TSA) exercised its authority to impose security regulations on the pipeline sector. Through two separate rules, TSA required pipeline operators to establish incident response and recovery plans, implement mitigation measures to protect against ransomware attacks, and undergo annual cybersecurity audits and architecture reviews, among other things.

In December 2021, TSA also issued new security regulations for the aviation, freight rail, and passenger rail sectors. The regulations require (among other things) reporting ransomware incidents to CISA and maintaining an incident response plan to detect, mitigate, and recover from ransomware attacks.

Ransomware is a key motivating factor in the sudden tightening of cybersecurity requirements. Previously, the cybersecurity regulations for pipelines were voluntary, with an accommodative relationship between pipeline operators and their regulators. Policymakers are increasingly voicing concern that other critical infrastructure sectors are in a similar position. With basic societal needs at risk when ransomware successfully disrupts critical infrastructure operations, some lawmakers are signaling openness to creating additional cybersecurity regulations for critical sectors.

OFAC sanctions

The federal government is also using its sanctions authority to head off ransomware payments. According to a recent FinCEN report, reported ransomware transactions averaged approximately $100 million per month in 2021. These payments encourage more ransom-based attacks and fund other criminal activities.

The Office of Foreign Assets Control (OFAC) issued guidance warning that paying ransoms to sanctioned persons and organizations is in violation of sanctions regulations. Liability for these violations, OFAC notes, applies even if the person did not know that the ransomware payment was sent to a sanctioned entity.

Critics of this approach warn that applying sanctions to specific attacker groups is ineffective, as the groups can simply rebrand or partner with other criminal elements to take payments. They add that sanctioning payments does nothing but further victimize the organizations and individuals being attacked, removing their recovery options or forcing them underground. Ransomware is already grossly under-reported, and critics warn that sanctions will likely encourage even less transparency.

More recently, OFAC also issued virtual currency guidance — aimed at currency companies, miners, exchanges, and users — emphasizing that the facilitation of ransomware payments to sanctioned entities is illegal. The guidance also describes best practices for assessing the risk of violating sanctions during transactions. In addition, OFAC imposed sanctions on a Russia-based cryptocurrency exchange for allegedly facilitating financial transactions for ransomware actors — the first sanctions of this kind.

OFAC followed up with an advisory providing sanctions guidance for the virtual currency industry, and applied sanctions to a cryptocurrency firm that failed to perform due diligence to prevent the facilitation of payments to ransomware criminal gangs.

Ransomware reporting

Requirements to report ransomware payments and ransomware-related incidents to federal authorities are another area to watch. Incident reporting requirements are in place for federal agencies and contractors via a Biden Administration Executive Order, but Congress is taking steps to expand these requirements to other private-sector entities.

Both the House of Representatives and the Senate have advanced legislation that would require businesses to report ransomware payments within 24 hours. The report would need to include the method of payment, instructions for making the payment, and other details to help federal investigators follow the payment flows and identify ransomware trends over time. The legislation would also require owners and operators of critical infrastructure to report substantial cybersecurity incidents (including a disruptive ransomware attack) within 72 hours. Interestingly, the legislation’s definition of “ransomware” encompasses all extortion-based attacks (such as the threat of DDoS), not just malware that locks system operations until a ransom is paid.

Although the House and Senate legislation cleared several hurdles, it did not pass Congress in 2021. However, we expect a renewed push for incident reporting, or other legislation to address ransomware, in 2022 and beyond.

A more collaborative, whole-of-government approach

The Biden Administration characterized ransomware as an economic and national security concern relatively early on and has detailed numerous federal efforts to counter it. We have also seen a marked increase in both international government and law enforcement cooperation, and public-private collaboration to identify, prosecute, and disrupt ransomware criminals, and address their safe harbors. In addition to the above, recent efforts have included:

  • In April 2021, the Department of Justice (DOJ) created a Ransomware and Digital Extortion Task Force, and in June elevated ransomware to a priority on par with terrorism.
  • In June 2021, the US government attended the G7 Summit and discussed ransomware, making a commitment “to work together to urgently address the escalating shared threat from criminal ransomware networks.” They went on to “call on all states to urgently identify and disrupt ransomware criminal networks operating from within their borders, and hold those networks accountable for their actions.”
  • Also in June 2021, ransomware was discussed during the EU-US Justice and Home Affairs Ministerial Meeting, with commitments made to work together to combat “ransomware including through law enforcement action, raising public awareness on how to protect networks as well as the risk of paying the criminals responsible, and to encourage those states that turn a blind eye to this crime to arrest and extradite or effectively prosecute criminals on their territory.”
  • In August 2021, the Cybersecurity and Infrastructure Security Agency (CISA) announced the formation of the Joint Cyber Defense Collaborative (JCDC) to “integrate unique cyber capabilities across multiple federal agencies, many state and local governments, and countless private sector entities.”
  • In August 2021, the White House announced the voluntary Industrial Control System Cybersecurity Initiative to strengthen the resilience of critical infrastructure against ransomware.
  • In September 2021, NIST issued a ransomware risk management profile for its Cybersecurity Framework.
  • In October 2021, the White House hosted a Counter Ransomware Initiative Meeting, bringing together governments from 30 nations around the world “to discuss the escalating global security threat from ransomware” and identify potential solutions.
  • Also in October 2021, a group of international law enforcement agencies and private sector experts collaborated to force ransomware group REvil offline.
  • In November 2021, the US Department of Justice announced the arrest of three ransomware actors, charges against a fourth, and the “seizure of $6.1 million in funds traceable to alleged ransom payments.” It attributed these successes to “the culmination of close collaboration with our international, US government, and especially our private-sector partners.”
  • Collaboration by multiple federal agencies to produce the StopRansomware site, which provides basic resources on what ransomware is, how to reduce risks, and how to report an incident or request assistance.
  • Ongoing work of senior policymakers such as Deputy Attorney General Lisa Monaco, as well as federal agencies such as CISA and the FBI, to keep up a steady flow of timely alerts about the threat of ransomware and the need for public and private-sector collaboration to fight it.

Ransomware brings security center-stage

For years, it was arguable that most policymakers did not “get” the need for cybersecurity. Now the landscape has changed significantly, with ransomware and nation-state competition driving the renewed sense of urgency. Given the seriousness, persistence, and widespread nature of the ransomware threat, Rapid7 supports new measures to detect and mitigate these attacks. These trends do not seem likely to abate soon, and we expect regulatory activity and information sharing on cybersecurity to be driven by ransomware for some time to come.

Additional reading:


Get the latest stories, expertise, and news about security today.

2022 Planning: Simplifying Complex Cybersecurity Regulations

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/12/09/2022-planning-simplifying-complex-cybersecurity-regulations/

2022 Planning: Simplifying Complex Cybersecurity Regulations

Compliance does not equal security, but it’s also true that a strong cybersecurity program meets many compliance obligations. How can we communicate industry regulatory requirements in a more straightforward way that enhances understanding while saving time and effort? How can we more easily demonstrate that a robust cybersecurity program will typically meet many compliance requirements?

Rapid7’s latest white paper, “Simplifying the Complex: Common Practices Across Cybersecurity Regulations,” is an educational resource aimed at breaking down complicated regulatory text into a set of consistent cybersecurity practices. The paper analyzes 10 major cybersecurity regulations, identifies common practices across the regulations, and provides insight on how to operationalize these practices.

Read the full white paper

Get it here

You can also reserve your spot for the upcoming webinar, “Common Cybersecurity Compliance Requirements.” Register now at our 2022 Planning webinar series page. This talk is designed to help you apply simplification practices across regulations and help your team plan for the year ahead.  

Different regulations, common practices

Cybersecurity regulations are complex. They target a patchwork of industry sectors and are enforced by disparate federal, state, and international government agencies. However, there are patterns: Cybersecurity regulations often require similar baseline security practices, even though the legislation may structure compliance requirements differently.

Identifying these common elements can help regulated entities, regulators, and cybersecurity practitioners communicate how compliance obligations translate to operational practices. For example, an organization’s security leader(s) could use this approach to drive executive support and investment prioritization by demonstrating how a robust security program addresses an array of compliance obligations facing the organization.

This white paper organizes common regulatory requirements into 6 core components of organizational security programs:

  1. Security program: Maintain a comprehensive security program.
  2. Risk assessment: Assess internal and external cybersecurity risks and threats.
  3. Security safeguards: Implement safeguards to control the risks identified in the risk assessment.
  4. Testing and evaluation: Assess the effectiveness of policies, procedures, and safeguards to control risks.
  5. Workforce and personnel: Establish security roles and responsibilities for personnel.
  6. Incident response: Detect, investigate, document, and respond to cybersecurity incidents and events.

The white paper also provides additional background on each regulation, shows how these 6 practices are incorporated into many of them, and includes extensive citations for each requirement so readers can locate the official text directly.

Rapid7 solutions help support compliance

Your organization is different from any other — that’s a fact. You’ll operationalize security practices based on individual risk profile, technology, and structure. Rapid7 helps you approach implementation with the context for each of the cybersecurity practices we outline, including:

  1. Operational overview: How the cybersecurity practice generally operates within an organization’s security program.
  2. Organizational structure: Which teams or functions within an organization implement the practice.
  3. Successful approaches: Proven ways to implement the practice successfully.
  4. Common challenges: Issues that commonly hinder consistent, successful implementation.

Rapid7’s portfolio of solutions can help meet and exceed the cybersecurity practices commonly required by regulations. To illustrate this, the white paper provides extensive product and service mapping intended to help every unique organization achieve its compliance goals. We discuss the key, go-to products and services that help fulfill each practice, as well as those that provide additional support.

For example, when it comes to maintaining a comprehensive security program, it might help to measure the effectiveness of your program’s current state with a cybersecurity maturity assessment. Or, if you’re trying to stay compliant with safeguards to control risk, InsightCloudSec can help govern Identity and Access Management (IAM) and adopt a unified zero-trust security model across your cloud and container environments. And what about testing? From pentesting to managed application security services, simulate real-world attacks at different stages of the software development lifecycle (SDLC) to understand your state of risk and know if your end product is customer-ready.  

Which regulations are discussed in the white paper?


Sector-specific

  1. HIPAA (health)
  2. GLBA (financial)
  3. NYDFS Cybersecurity Regulation (financial)
  4. PCI DSS (retail)
  5. COPPA (retail)
  6. NERC CIP (electrical)

Broadly applicable

  1. State Data Security Laws (CA, FL, MA, NY, TX)
  2. SOX


European Union

  1. GDPR
  2. NIS Directive

When it comes to compliance, the risk isn’t just running afoul of a regulator you may not have known about when entering a new market. Customer trust is difficult to win back once you lose it.

A comprehensive cybersecurity program from a trusted and vetted provider will help ensure you’re well-protected from threats and in compliance with regulations wherever your company does business. Whether it’s monitoring and testing services, risk assessments, or certification and training for personnel, your provider should deliver tailored solutions and products that help you meet your unique compliance goals — and protect your users — now and well into the future.

Learn more at our webinar on Monday, December 13

Sign up today

Note: The white paper discussed should not be used as a compliance guide and is not legal advice.

3 Strategies That Are More Productive Than Hack Back

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/12/07/3-strategies-that-are-more-productive-than-hack-back/

3 Strategies That Are More Productive Than Hack Back

2021 has been a banner year in terms of the frequency and diversity of cybersecurity breaking news events, with ransomware being the clear headline-winner. While the DarkSide group (now, in theory, retired) may have captured the spotlight early in the year due to the Colonial Pipeline attack, REvil — the ransomware-as-a-service group that helped enable the devastating Kaseya mass ransomware attack in July — made recent headlines as they were summarily shuttered by the FBI in conjunction with Cyber Command, the Secret Service, and like-minded countries.

This was a well-executed response by government agencies with the proper tools and authority to accomplish a commendable mission. While private-sector entities may have participated in this effort, they will have done so through direct government engagement and under the same oversight.

More recently, the LockBit and Marketo ransomware groups suffered distributed denial of service (DDoS) attacks, as our colleagues at IntSights reported, in retaliation for their campaigns: one targeting a large US firm, and another impacting a US government entity.

The former of these two DDoS attacks falls into a category known colloquially as “hack back.” Our own Jen Ellis did a deep dive on hacking back earlier this year and defined the practice as “non-government organizations taking intrusive action against a cyber attacker on technical assets or systems not owned or leased by the person taking action or their client.”

The thorny path of hacking back

Hack back, as used by non-government entities, is problematic for many reasons, including:

  • Attribution is hard, and most organizational cybersecurity teams are ill-equipped to conduct research thorough enough to determine with confidence who the attacker really was.
  • Infrastructure used to conduct attacks is often compromised assets of legitimate organizations, and taking direct action against them can cause real harm to other innocent victims.
  • It is very likely illegal in most jurisdictions.

As our IntSights colleagues noted, the LockBit and Marketo DDoS hack-back attacks did take the groups offline for weeks and temporarily halted ransomware campaigns associated with their infrastructure. But both groups are back online, and they — along with other groups — appear to be going after less problematic targets, an unintended but very real consequence of these types of cyber vigilante actions.

Choosing a more productive path

While the temptation may be strong to inflict righteous wrath upon those who have infiltrated and victimized your organization, there are ways to channel your reactions into active defense strategies that can help you regain a sense of control, waste attackers’ time (a precious resource for them), contribute to the greater good, and help change the economics of attacks enough to effect real change. Here are 3 possible alternative routes to consider.

1. Improve infrastructure visibility

You can only effect change in environments that have been instrumented for measurement. While this is true for cybersecurity defense in general, it is paramount if you want to take the step into contributing to the community efforts to reduce the levels and impacts of cybercrime (more on that later).

You have to know what assets are in play, where they are, the state they are in, and the activity happening on and between them. If you aren’t outfitted for that now, thankfully it’s the holiday season, and you still have time to get your shopping list to Santa (a.k.a. your CISO/CFO). If you’re strapped for cash, open-source tools and communities such as MISP provide a great foundation to build upon.

2. Invest in information sharing and analysis

There are times when it feels like we may be helpless in the face of so many adversaries and the daily onslaught of attacks. However, we protectors have communities and resources available that can help us all become safer and more resilient. If your organization isn’t part of at least one information sharing and analysis organization (ISAO), joining one is your first step toward both regaining a sense of control and giving your cybersecurity teams active, positive steps they can take on a daily basis to secure our entire ecosystem. An added incentive to join one or more of these groups is that many of them gain real-time cross-vendor insights via the Cyber Threat Alliance, a nonprofit that is truly leveling up protectors across the globe.

These groups share tools, techniques, and intelligence that enable you to level up your organization’s defenses and can help guide you through the adoption of cutting-edge, science-driven frameworks such as MITRE’s ATT&CK and D3FEND.
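As a toy example of putting ATT&CK to work, a common exercise is mapping existing detection rules to technique IDs to expose coverage gaps. The rule names below are hypothetical; the technique IDs are real ATT&CK identifiers:

```python
# Hypothetical internal detection rules mapped to MITRE ATT&CK technique IDs
detections = {
    "suspicious_powershell_encoded_cmd": "T1059.001",  # Command and Scripting Interpreter: PowerShell
    "lsass_memory_read": "T1003.001",                  # OS Credential Dumping: LSASS Memory
    "new_scheduled_task": "T1053.005",                 # Scheduled Task/Job: Scheduled Task
}

def coverage_report(detections, techniques_of_interest):
    """Split the techniques you care about into covered vs. uncovered."""
    covered = set(detections.values()) & set(techniques_of_interest)
    return sorted(covered), sorted(set(techniques_of_interest) - covered)

# T1486 is Data Encrypted for Impact (ransomware); no rule covers it yet
covered, gaps = coverage_report(detections, ["T1059.001", "T1486"])
print(covered, gaps)  # prints ['T1059.001'] ['T1486']
```

The gap list becomes a concrete backlog for the detection engineering work these communities can help guide.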

3. Consider the benefits of deception technology

“Oh! What a tangled web we weave when first we practice to deceive!”

Sir Walter Scott may not have had us protectors in mind when he penned that line, but it is a vital component of modern organizational cyber defenses. Deception technology can provide rich intelligence on attacker behavior (which you can share with the aforementioned ISAOs!), keep attackers in a playground safe from your real assets, and — perhaps most importantly — waste their time.

While we have some thoughts and offerings in the cyber deception space — and have tapped into the knowledge of other experts in the field — there are plenty of open-source solutions you can dig into, or create your own! Some of the best innovations come from organizations’ security teams.
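To make the idea concrete, here is a toy low-interaction honeypot sketch: it listens on an unused port, presents a fake service banner (the hostname is hypothetical), and logs every connection attempt for later intelligence sharing. Real deployments require network isolation and careful hardening; this only illustrates the mechanics:

```python
import socket
import threading
from datetime import datetime, timezone

class MiniHoneypot:
    """Toy low-interaction honeypot: logs connection attempts to a decoy port."""

    def __init__(self, host="127.0.0.1", port=0):  # port 0 = pick any free port
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((host, port))
        self.sock.listen(5)
        self.port = self.sock.getsockname()[1]
        self.log = []  # each probe becomes a shareable intelligence record

    def serve_once(self):
        conn, addr = self.sock.accept()
        self.log.append({"peer": addr[0],
                         "time": datetime.now(timezone.utc).isoformat()})
        conn.sendall(b"220 mail.example.internal ESMTP ready\r\n")  # decoy banner
        conn.close()

pot = MiniHoneypot()
t = threading.Thread(target=pot.serve_once, daemon=True)
t.start()

# Simulate an attacker's probe against the decoy service
c = socket.create_connection(("127.0.0.1", pot.port))
banner = c.recv(64)
c.close()
t.join(timeout=2)
print(len(pot.log), banner)
```

Records like `pot.log` are exactly the kind of attacker telemetry worth feeding back to an ISAO.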

Remember: You are not alone

Being a victim of a cyberattack of any magnitude is something we all hope to help our organizations avoid. Even if you’re a single-person “team” overseeing cybersecurity for your entire organization, you don’t have to go it alone, and you definitely do not have to give in to the thought of hacking back to “get even” with your adversaries.

As we’ve noted, there are many organizations who are there to help you channel your energies into building solid defenses and intelligence gathering practices to help you and the rest of us be safer and more resilient. Let’s leave the hacking back to the professionals we’ve tasked with legal enforcement and focus on protecting what’s in our purview.


Get the latest stories, expertise, and news about security today.

Thawing Out the Chilling Effect Of DMCA Section 1201

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/11/15/thawing-out-the-chilling-effect-of-dmca-section-1201/

Thawing Out the Chilling Effect Of DMCA Section 1201

The Copyright Office has issued the latest rules on exemptions to Section 1201 of the Digital Millennium Copyright Act (DMCA). Great news: Legal protections for independent security research have once again been meaningfully strengthened. On the whole, these protections are now significantly greater than they were just a few years ago.

Some quick background: DMCA Section 1201 restricts security research on software without authorization of the owner of the copyright of the software, even for software on devices the researcher owns. This has long been criticized as having an adverse chilling effect on legitimate security research that would otherwise benefit consumers. However, the Librarian of Congress (acting through the Copyright Office) can establish exceptions to DMCA Section 1201, which must be renewed every three years. A three-year cycle has just concluded, with the Copyright Office issuing an updated exception for security research.

[For additional background information on why DMCA is important for security research, please see this earlier post.]

Most recent change: “All other laws” requirement removed

Prior to this most recent update, the security researcher exemption provided legal protection from Section 1201 only if the researcher was compliant with every other law or regulation in the world, no matter how obscure. If that sounds ridiculous and burdensome, that’s because it is.

We made this “obey all other laws” issue the focus of our advocacy on Section 1201 throughout 2020 and 2021. As we argued extensively before the Copyright Office, the “all other laws” limitation meant security researchers could lose liability protection under Section 1201 for inadvertently violating laws with significant gray area (like CFAA), minor laws unrelated to security (like the electrical code), or sweepingly restrictive foreign laws (such as China’s rules on vulnerability disclosure).

Rapid7 proposed specific language to address this problem. And thankfully, the Department of Justice (DOJ) formally weighed in with the Copyright Office and supported our proposed language. Without the DOJ’s action to support good-faith security researchers, this effort would likely not have succeeded. The Department of Commerce’s National Telecommunications and Information Administration (NTIA) then joined in support of the language as well.

In October 2021, the Copyright Office handed down an updated exemption for security research that adopted our proposed language and removed the “all other laws” requirement. This effectively killed the most harmful aspect of the previous rule, representing major progress in legal protection for security researchers under DMCA Section 1201. The DOJ, NTIA, and Copyright Office were united in expanding protection — a sign of the growing consensus on the importance of this activity.

The change to the language essentially turns the requirement of compliance with all other laws into a helpful reminder that other laws may still apply. The language looks like this:

Striking:  “and does not violate any applicable law, including without limitation the Computer Fraud and Abuse Act of 1986, as amended and codified in title 18, United States Code.”

Inserting: Good-faith security research that qualifies for the exemption under paragraph (b)(16)(i) of this section may nevertheless incur liability under other applicable laws, including without limitation the Computer Fraud and Abuse Act of 1986, as amended and codified in title 18, United States Code, and eligibility for that exemption is not a safe harbor from, or defense to, liability under other applicable laws.

Long-haul advocacy

Many hands — too many to adequately thank here — worked tirelessly to ensure security researchers were protected from DMCA Section 1201. It has truly been a community effort.

For our part, Rapid7 has been engaged in advocacy to protect security research under DMCA Section 1201 for the better part of a decade. Testimony from Rapid7 researchers helped establish the first security research exemption in 2015. But this 2015 exemption was limited, and in 2016 we repeatedly pressed the Copyright Office to support expanded protections and reform the rulemaking process, and the Copyright Office implemented many of our recommendations. During the Copyright Office’s 2018 exemption cycle, we argued against holding security researchers liable for what third parties do with research results, to which the Copyright Office agreed. And now, in 2021, we worked with DOJ to convince the Copyright Office to remove the “any other law” restriction.

Taken together, this is a lot of progress. Rapid7 has put real time and effort into living up to its values in support of independent cybersecurity research, and it has borne fruit.

Though improved, Section 1201 remains flawed

These gains are significant and welcome. Still, it is astonishing just how much time and effort were required to wade through the sea of FUD and bureaucracy to achieve this progress. It is a testament to the danger of regulatory inertia.

And there is still more to be done on DMCA. While the security researcher protections are now greatly strengthened under DMCA Section 1201, which helps address its chilling effect on research, the law still has many flaws. As Rapid7 has noted, DMCA Section 1201 continues to be a legal risk for the use of security tools — something the researcher exemption does not address. And outside of security, DMCA continues to affect the right to repair, accessibility for the disabled, education, and much more. This is a law in acute need of a ruthless overhaul.

As far as US computer crime laws go, DMCA Section 1201 is surely among the most unsound and anachronistic. If DMCA Section 1201 were introduced in Congress today, it would be derided as toxic and never advance far enough to receive a vote. DMCA Section 1201’s most beneficial use now is as a smoldering example of how a sweeping restriction on widely used technology can become an absurd burden as technology matures. Other than that, the law is at best a waste of everyone’s time, and we should celebrate its erosion even as we lament that this erosion is gradual.

Our respect and gratitude go to all the advocates who spent their time and resources working with the Copyright Office to drive this progress.

Update to GLBA Security Requirements for Financial Institutions

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/11/10/update-to-glba-security-requirements-for-financial-institutions/

Update to GLBA Security Requirements for Financial Institutions

Heads up, financial institutions: The Federal Trade Commission (FTC) has announced the first cybersecurity updates to the Gramm-Leach-Bliley Act (GLBA) Safeguards Rule since 2003. The new rule strengthens the required security safeguards for customer information, including formal risk assessments, access controls, regular penetration testing and vulnerability scanning, and incident response capabilities, among other things.

Several of these changes go into effect in November 2022, to provide organizations time to prepare for compliance. Below, we’ll detail the changes in comparison to the previous rule.

Background on the Safeguards Rule

GLBA requires, among other things, a wide range of “financial institutions” to protect customer information. Enforcement of GLBA is split among several federal agencies; the FTC’s jurisdiction over non-banking financial institutions is exercised through the Safeguards Rule. Previously, the Safeguards Rule left the implementation details of several aspects of the information security program up to the financial institution, based on its risk assessment.

The Safeguards Rule’s broad definition of “financial institution” includes non-bank businesses that offer financial products or services — such as retailers, automobile dealers, mortgage brokers, non-bank lenders, property appraisers, tax preparers, and others. The definition of “customer information” is also broad, covering any record containing non-public personally identifiable information about a customer that is handled or maintained by or on behalf of a financial institution.

Updates to the Safeguards Rule

Most of the updates strengthen requirements for how financial institutions must implement aspects of their security programs. Below is a short summary of the changes. Where applicable, we include citations to both the updated rule (starting at page 123) and the previous rule (at 16 CFR Part 314) for easy comparison.

Overall security program

  • Current rule: Financial institutions must maintain a comprehensive, written information security program with administrative, technical, and physical safeguards to ensure the security, confidentiality, and integrity of customer information. 16 CFR 314.3(a)-(b).
  • Updated rule: The updated rule now requires the information security program to include the processes and safeguards listed below (i.e., risk assessment, security safeguards, etc.). 16 CFR 314.3(a).
  • Approx. effective date: November 2022

Risk assessment

  • Current rule: Financial institutions are required to identify internal and external risks to the security, confidentiality, and integrity of customer information. The risk assessment must address employee training, risks to information systems, and detecting and responding to security incidents and events. 16 CFR 314.4(b).
  • Updated rule: The update includes more specific criteria for what the risk assessment must include, such as criteria for evaluating and categorizing security risks and threats, and criteria for assessing the adequacy of security safeguards. The risk assessment must describe how identified risks will be mitigated or accepted, and it must be in writing. 16 CFR 314.4(b).
  • Approx. effective date: November 2022
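The rule does not prescribe any particular assessment methodology, but the “evaluating and categorizing” criteria are often met in practice with a simple likelihood-times-impact scoring matrix. Purely as an illustrative sketch — the risk names, ratings, and category thresholds below are invented for the example, not drawn from the rule:

```python
# Illustrative risk-scoring sketch. The FTC rule mandates no specific
# methodology; these thresholds and example risks are invented.

def score_risk(likelihood: int, impact: int) -> str:
    """Categorize a risk from 1-5 likelihood and impact ratings."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be between 1 and 5")
    score = likelihood * impact
    if score >= 15:
        return "high"    # mitigate now and document the plan in writing
    if score >= 6:
        return "medium"  # mitigate or formally accept, in writing
    return "low"         # accept and revisit at the next assessment

# A tiny written risk register with an explicit decision per row
register = [
    ("Unpatched internet-facing server", 4, 5),
    ("Lost unencrypted laptop", 2, 4),
    ("Tailgating into the office", 2, 2),
]
for name, likelihood, impact in register:
    print(f"{name}: {score_risk(likelihood, impact)}")
```

A written register like this, with an explicit mitigate-or-accept decision recorded for each row, is one straightforward way to line up with the new requirement that the assessment be in writing.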

Security safeguards

  • Current rule: Financial institutions must implement safeguards to control the risks identified through the risk assessment. 16 CFR 314.4(c). Financial institutions must also require service providers to maintain safeguards to protect customer information. 16 CFR 314.4(d).
  • Updated rule: The updated rule requires that the safeguards must include
    – Access controls, including providing the least privilege;
    – Inventory and classification of data, devices, and systems;
    – Encryption of customer information at rest and in transit over external networks;
    – Secure development practices for in-house software and applications;
    – Multi-factor authentication;
    – Secure data disposal;
    – Change management procedures; and
    – Monitoring and logging the activity of authorized users, and detecting unauthorized access or use of customer information. 16 CFR 314.4(c)(1)-(8).
  • Approx. effective date: November 2022
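To make the first item concrete: access control under least privilege means each role is granted only the permissions it needs, with everything else denied by default. A minimal sketch — the role and permission names here are invented for illustration, and a real program would tie this to an identity provider rather than a hard-coded table:

```python
# Deny-by-default, role-based access check illustrating least privilege.
# Role and permission names are invented for this example.

ROLE_PERMISSIONS = {
    "teller":  {"customer:read"},
    "analyst": {"customer:read", "report:run"},
    "admin":   {"customer:read", "customer:write", "report:run", "user:manage"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant only permissions explicitly assigned to the role."""
    # An unknown role resolves to an empty set, so nothing is granted.
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default shape — an unrecognized role resolves to an empty permission set rather than an error or a fallback grant — is the property least privilege depends on.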

Testing and evaluation

  • Current rule: Financial institutions must regularly test or monitor the effectiveness of the security safeguards and make adjustments based on the testing. 16 CFR 314.4(c), (e).
  • Updated rule: Regular testing of safeguards must now include either continuous monitoring or periodic penetration testing (annually) and vulnerability assessments (at least semi-annually). 16 CFR 314.4(d).
  • Approx. effective date: November 2022
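Vulnerability assessments in practice rely on dedicated scanners, but one of their basic building blocks is simply checking whether services are listening where they shouldn’t be. A toy sketch of that single check — the host, ports, and approved baseline are placeholders, and this stands in for (rather than replaces) a real assessment tool:

```python
# Toy building block of a vulnerability assessment: flag listening TCP
# ports that are not on an approved baseline. Real assessments use
# dedicated scanners; the host/port values here are placeholders.
import socket

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unexpected_open_ports(host, candidate_ports, approved):
    """List candidate ports that are open but not on the approved baseline."""
    return [p for p in candidate_ports
            if p not in approved and port_is_open(host, p)]

# Example: on a host where only 443 is approved, any other open port is a finding.
# findings = unexpected_open_ports("10.0.0.5", range(1, 1025), approved={443})
```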

Incident response

  • Current rule: Financial institutions must include cybersecurity incident detection and response in their risk assessments and have safeguards to address those risks. 16 CFR 314.4(b)(3)-(c).
  • Updated rule: Financial institutions are required to establish a written plan for responding to any security event materially affecting the confidentiality, integrity, or availability of customer information. 16 CFR 314.4(h).
  • Approx. effective date: November 2022

Workforce and personnel

  • Current rule: Financial institutions must designate an employee to coordinate the information security program. 16 CFR 314.4(a). Financial institutions must select service providers that can maintain security and require service providers to implement the safeguards. 16 CFR 314.4(d).
  • Updated rule: The rule now requires designation of a single “qualified individual” to be responsible for the security program; this can be a third-party contractor. 16 CFR 314.4(a). Financial institutions must now provide security awareness training and updates to personnel. 16 CFR 314.4(e). The rule also requires periodic reports to a board of directors or governing body regarding all material matters related to the information security program. 16 CFR 314.4(i).
  • Approx. effective date: November 2022

Scope of coverage

  • Updated rule: The FTC update expands the definition of “financial institution” to require “finders” — companies that bring together buyers and sellers — to follow the Safeguards Rule. 16 CFR 314.2(h)(1). However, financial institutions that maintain information on fewer than 5,000 consumers are exempt from the requirements for a written risk assessment, continuous monitoring or periodic penetration testing and vulnerability scans, an incident response plan, and annual reporting to the board. 16 CFR 314.6.
  • Approx. effective date: November 2021 (unlike many of the other updates, this item is not delayed for a year)

Incident reporting next?

In addition to the above, the FTC is also considering requirements that financial institutions report cybersecurity incidents and events to the FTC. Similar requirements are in place under the Cybersecurity Regulation at the New York Department of Financial Services. If the FTC moves forward with these incident reporting requirements, financial institutions could expect the requirements to be implemented later in 2022 or early 2023.

Financial institutions with robust security programs will already be performing many of these practices. For them, the updated Safeguards Rule will not represent a sea change in internal security operations. However, by making these security practices a formal regulatory requirement, the updated Safeguards will make accountability and compliance even more important.


Ransomware: Is Critical Infrastructure in the Clear?

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/09/24/ransomware-is-critical-infrastructure-in-the-clear/

Ransomware: Is Critical Infrastructure in the Clear?

Recently I’ve been getting asked whether I believe ransomware is on the decline, particularly for critical infrastructure. Part of the reason for this question seems to be a recent security briefing from White House deputy national security adviser Anne Neuberger, suggesting that language on the site of a new-but-already-high-profile ransomware gang, BlackMatter, could indicate that President Biden’s comments to President Putin regarding consequences for attacks against US critical infrastructure may have hit their mark. Yet just this week, this same gang demanded a ransom of $5.9 million for an attack on Iowa-based feed and grain cooperative, NEW Cooperative.

So the question remains: Is critical infrastructure in the clear, is it a specific target of ransomware attackers, or is it simply on the same footing as any other organization? As we’ll see — and as current developments confirm — it’s clear that critical infrastructure is indeed at risk from ransomware attacks.

Before I get into the nuances of this, I want to quickly note upfront that much of this is going to be opinion or theories based on discussion with — and anecdotal evidence from — various security experts, ransomware victims, and news stories. I’m not a ransomware attacker, nor am I directly in touch with any, so I can only speculate on their motivations, interests, and plans. Where possible, I provide reference to further reading to provide context, but in general, it’s important to note that broad under-reporting and inconsistent handling of ransomware incident data means that any predictions, projections, or summaries of ransom activity (on this blog or elsewhere) are likely somewhat incomplete.

The BlackMatter at hand

The BlackMatter website indicates that the group is somewhat selective in which organizations it will target for attacks. According to The Record, BlackMatter is particularly interested in organizations with a revenue of over $100 million a year, with networks of 500 to 1,500 hosts located in the US, the UK, Canada, or Australia. They state they are specifically not planning to attack organizations in the following sectors and would in fact decrypt data for free should they infect any organizations in them:

  • Hospitals
  • Critical infrastructure facilities (nuclear power plants, power plants, water treatment facilities)
  • Oil and gas industry (pipelines, oil refineries)
  • Defense industry
  • Nonprofit companies
  • Government sector

Interestingly, they do not include the food and agriculture sector in this list, though it is included in the US government’s list of 16 critical infrastructure sectors. When NEW Cooperative’s representatives pointed this out to BlackMatter, the ransomware group’s response was:

You do not fall under the rules, everyone will only incur losses, everything is tied to the commerce, the critical ones mean the vital needs of a person.

On the surface, it’s funny to think they are saying food isn’t a vital need for people. The JBS attack at the start of June highlighted the importance of the food supply chain. The cost of basic meat staples is still higher in the US as a result of that attack, which can make a huge difference to those living on or under the poverty line. BlackMatter explains the distinction in terms of the impact — it views loss of money for the company itself as the only real impact of the NEW Cooperative attack.

This may be because NEW Cooperative is a fairly small, regional entity, nowhere near the scale of JBS, and therefore disruption for them is not going to create anywhere close to the same level of impact on the US food supply chain. This leads to the question of whether these types of organizations really count as critical infrastructure on an individual level. That’s a question for the US government to answer as it determines whether to respond to this attack and others like it. If you want to get into this more, Joseph Marks has a great write-up on the different aspects in his coverage for The Washington Post’s Cybersecurity 202.

In the meantime, it is interesting to see BlackMatter communicate so proactively on the topic of critical infrastructure and what they consider to be in scope. This could, as Anne Neuberger suggested, reflect a heeding of the President’s warning. It could also be somewhat influenced by the lessons learned from DarkSide’s experiences following the Colonial Pipeline attack back in May. BlackMatter states, “The project has incorporated in itself the best features of DarkSide, REvil, and LockBit,” so it’s entirely possible their communications strategy is informed by the blowback DarkSide experienced in the wake of Colonial.

After coming under intense scrutiny and focus following the Colonial Pipeline attack, the DarkSide group published a statement describing themselves as “apolitical” and asserting, “Our goal is to make money and not creating problems for society.” When its infrastructure was then compromised and their bitcoin drained, the group decided it was time to shut up shop and lay low. This prompted a great deal of speculation from security commentators over whether they would reappear under a different name after sufficient time had passed. It didn’t take long after the appearance of BlackMatter for security researchers to start pointing to indicators that the new ransomware group may be the phoenix rising from DarkSide’s ashes.

Hackers with hearts of gold?

DarkSide and BlackMatter are not the only ransomware gangs to draw a line around healthcare and other targets that can impact public safety.

In March 2020, as the pandemic ramped up in ferocity, Bleeping Computer reached out to a number of high-profile ransomware groups and asked if they would lay off healthcare organizations in light of COVID-19. The group behind the CLOP ransomware stated that they have “never attacked hospitals, orphanages, nursing homes, charitable foundations, and we won’t.” They went on to state, “We are not enemies of humanity… our goal is money, not harm,” and they indicated that if a healthcare organization was encrypted by accident, they would provide the decryptor for free.

Four other ransomware groups responded to Bleeping Computer with similar assertions that hospitals are never targets or would not be during the duration of the pandemic. Some even sounded offended by the suggestion that hospitals could ever be considered fair game for attacks.

Critical infrastructure attacks abound

Yet, despite this, attacks against the healthcare sector were prolific throughout 2020. According to the 2021 Unit 42 Ransomware Threat Report, “the healthcare sector… was the most targeted vertical for ransomware in 2020. Ransomware operators were brazen in their attacks in an attempt to make as much money as possible, knowing that healthcare organizations – which needed to continue operating to treat COVID-19 patients and help save lives – couldn’t afford to have their systems locked out and would be more likely to pay a ransom.”

We see the same trend continuing in 2021. The fantastic Black Fog site tracks publicly disclosed ransomware attacks in The State of Ransomware in 2021. Their stats highlight that 2021 continues to be a busy year for ransomware attackers and their victims, with more attacks in every month of 2021 than in the corresponding month of 2020. They break down the attacks they track by industry sector, and the top nine sectors all fall within the US government’s description of its 16 sectors of critical infrastructure. Healthcare is the fourth most impacted sector according to their analysis, with government and education taking the first and second spots.

So does this mean that these sectors are in fact being highly targeted for attack? The answer is complicated, and there are a number of factors at play.

It’s worth calling out again that ransomware and other cybercrime remains terribly under-reported. It’s possible that one of the reasons we “see” most attacks in the sectors mentioned earlier is because they are very public-facing in nature. Thus, disruptive attacks against their systems may be more visible to the public — and hence more easily tracked and reported. Other sectors may be better able to avoid public disclosure, possibly in the hopes of avoiding a loss of customer confidence or regulatory or legal implications.

This does not mean that these sectors are not also appealing targets for some cybercriminals. Healthcare, government, and educational organizations are often highly vulnerable to attack due to a number of factors including a deficit of resources, reliance on legacy systems, complexity of technical ecosystems and user behavior models, and lack of tolerance for downtime due to the consequences to the public of a halt in operations. This latter point may also mean these sectors are more likely to pay a ransom demand: If an entity can’t tolerate downtime enough to patch their systems, an attacker may speculate that they will also likely want to resolve a ransomware incident as quickly as possible, resulting in a paid ransom.

So, the question comes down to whether attackers think this way and specifically target these sectors.

Targets locked and loaded?

One of the things that most caught my attention about the BlackMatter website information, the responses to Bleeping Computer, and Unit 42’s research was that they all seem to reflect the notion that ransom attacks are targeted. Indeed, in its response to Bleeping Computer, the Nefilim ransomware group stated, “We work very diligently in choosing our targets.”

Yet the BlackMatter site and a couple of the other responses also alluded to organizations being infected by accident. In its response to Bleeping Computer, the Netwalker group stated:

Hospitals and medical facilities? do you think someone has a goal to attack hospitals? we don’t have that goal -it never was. it coincidence. no one will purposefully hack into the hospital. [sic]

But they then went on to add:

If someone is encrypted, then he must pay for the decryption.

The implication here is that while they may not go out of their way to target hospitals or any other organization, their attacks are opportunistic and whoever is hit is fair game and expected to pay.

So how do these things relate to each other? How can an attack be both targeted and run the risk of accidentally infecting unintended organizations?

First, consider the nature of profit-motivated attacks of this type. While there are profit-driven attacks that are extremely targeted — for example, corporate espionage attacks — in the case of ransomware attacks, it is more likely that groups of organized cybercriminals are going to try to maximize their potential profits by orchestrating attacks at scale. By casting their nets wide, they are able to get more bang for their buck/ruble, making the most of their upfront investment to increase the odds of hitting organizations that are willing to pay. They may have an ideal target profile as indicated on BlackMatter’s site, but that doesn’t mean they won’t take a spray-and-pray approach to see what they can hit. Even with a focus on a specific demographic, they are still likely to take a fairly broad approach to maximize the potential for profit.

This is consistent with the most common attack methodologies for extortion-based attacks. According to Digital Defense, phishing, RDP, and vulnerable systems are the top three attack vectors for ransomware attacks. While any of these can be leveraged in highly targeted attacks, it’s more common for them to be used at scale. Phishing emails are sent out to vast lists of potential recipients, and malware to exploit RDP or other exposed systems is automated and set loose on the internet. With this in mind, it’s not surprising that organizations that weren’t being directly targeted will be impacted.
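On the phishing side of those vectors, one small defensive building block is flagging sender domains that sit within a short edit distance of a brand’s real domain — a lookalike trick common in bulk phishing campaigns. A toy sketch, with the protected domain list and the distance threshold invented for illustration:

```python
# Toy lookalike-domain check of the kind used in phishing triage.
# The protected domains and the threshold below are invented examples.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like(domain: str, protected: list[str], threshold: int = 2) -> bool:
    """Flag domains close to, but not exactly matching, a protected domain."""
    return any(0 < edit_distance(domain, p) <= threshold for p in protected)
```

Exact matches are deliberately not flagged; only near misses, such as a one-character swap, trip the check.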

While it’s important to note that the opportunistic nature of these attack methodologies means any organization can fall victim to a ransomware attack, that does not mean that specific sectors or geographies are not more likely to be hit. The majority of profit-motivated attackers may not be targeting specific organizations (unless there is another motivation at play), but that doesn’t mean they can’t target groups or classes of organization, as we see with BlackMatter’s website. The sheer volume of attacks hitting the US indicates that whatever the chosen attack vector, it is often pointed towards specific geographical regions. Likewise, it’s possible, or in some cases likely, that attackers develop phishing target lists with data specific to certain sectors that they believe will be more easily compromised or more likely to pay. As already noted, critical infrastructure is viewed by many as sitting firmly in this category.

Critical infrastructure not in the clear

So what does all this mean? The incomplete data we have clearly shows that ransomware attacks are not in decline and critical infrastructure is certainly not in the clear.

We need more consistent reporting of ransom incidents to get a clearer picture of what’s really happening, but we can confidently say healthcare providers, governments, and education are regularly being hit and need greater support to help them tackle the security issues I mentioned earlier.

The good news is that this is a problem that many are scrutinizing, and we’re starting to see more resources and assistance for critical infrastructure. If you work in one of the US critical infrastructure sectors, check out the free tools and services CISA provides to help you protect yourself. If you are working for a government entity (including public education and healthcare providers), you may also qualify for free services from the MS-ISAC.

In addition, the US Senate recently passed infrastructure legislation that would provide federal grants and funding to several critical infrastructure sectors — such as state and local governments, energy, and water — to help them strengthen their cybersecurity postures. We hope this support will be extended and that, as Congress considers large spending bills, the healthcare sector will be given access to federal funding and other resources dedicated to cybersecurity.

The US government has also announced a number of other measures both to address ransomware and to shore up cybersecurity in critical infrastructure. We hope that, over time, we will see these efforts bearing fruit in the form of less successful attacks against critical infrastructure.

The Ransomware Task Force also identified a number of recommendations for governments to better support critical infrastructure, from grant funding (pages 40 and 41) to mandated adoption of cyber hygiene measures (page 39) and provision of emergency response authorities in the event of a successful attack (page 42). The US government is already taking action on some of these priorities, such as requiring greater cyber hygiene for federal agencies and contractors, and including a response and recovery fund for victims of cyberattack in the pending infrastructure legislation.

Although all public data sources agree that far more ransomware attacks are being reported in the US than in any other country, this is not only a US issue. Many other countries are impacted, and we see critical infrastructure being hit around the world. Governments in other affected countries are likely taking or investigating similar measures, though to date they have mostly been less vocal about them in public.

In an ideal world, governments will work together to amplify the impact of their actions and proactively deter and disrupt the global ransomware market. To that end, I look forward to seeing what will come from the Extraordinary Senior Officials Forum on ransomware that the G7 has committed to holding before the end of 2021.


Rapid7 Statement on the New Standard Contractual Clauses for International Transfers of Personal Data

Post Syndicated from Chelsea Portney original https://blog.rapid7.com/2021/09/21/rapid7-statement-on-the-new-standard-contractual-clauses-for-international-transfers-of-personal-data/

Rapid7 Statement on the New Standard Contractual Clauses for International Transfers of Personal Data

Context: On June 4, 2021, the European Commission published new standard contractual clauses (“New SCCs”). Under the General Data Protection Regulation (“GDPR”), transfers of personal data to countries outside of the European Economic Area (EEA) must meet certain conditions. The New SCCs are an approved mechanism that enables companies transferring personal data outside of the EEA to meet those conditions, and they replace the previous set of standard contractual clauses (“Old SCCs”), which were deemed inadequate by the Court of Justice of the European Union (“CJEU”). The New SCCs make a number of improvements over the previous version, including but not limited to (i) a modular design that allows parties to choose the module applicable to the personal data being transferred, (ii) use by non-EEA data exporters, and (iii) strengthened data subjects’ rights and protections.

Rapid7 Action: In light of the European Commission’s adoption of the New SCCs, Rapid7 performed a thorough assessment of its personal data transfers which involved reviewing the technical, contractual, and organizational measures we have in place, evaluating local laws where the personal data will be transferred, and analyzing the necessity for the transfers in accordance with the type and scope of the personal data being transferred. Rapid7 will be updating our Data Processing Addendum on September 27, 2021, to incorporate the New SCCs, where required, for the transfer of personal data outside of the EEA. Rapid7’s adoption of the New SCCs helps ensure we are able to continue to serve all our clients in compliance with GDPR data transfer rules.

Ongoing Commitments: Rapid7 is committed to upholding high standards of privacy and security for our customers, and we are pleased to be able to offer the New SCCs which provide enhanced protections that better take account of the rapidly evolving data environment. We will continue to monitor ongoing changes in order to comply with applicable law and will regularly assess our technical, contractual, and organizational measures in an effort to improve our data protection safeguards. For information on how Rapid7 collects, uses, and discloses personal data, as well as the choices available regarding personal data collected by Rapid7, please see the Rapid7 Privacy Policy. Additionally, Rapid7 remains dedicated to maintaining and enhancing our robust security and privacy program which is outlined in detail on our Trust page.

For more information about our security and privacy program, please email [email protected].



Cybersecurity in the Infrastructure Bill

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/08/31/cybersecurity-in-the-infrastructure-bill/

Cybersecurity in the Infrastructure Bill

On August 10, 2021, the U.S. Senate passed the Infrastructure Investment and Jobs Act of 2021 (H.R.3684). The bill comes in at 2,700+ pages, provides for $1.2T in spending, and includes several cybersecurity items. We expect this legislation to become law around late September and do not expect significant changes to the content. This post provides highlights on cybersecurity from the legislation.

(Check out our joint letter calling for cybersecurity in infrastructure legislation here.)

Cybersecurity is a priority — that’s progress

Cybersecurity is essential to ensure modern infrastructure is safe, and Rapid7 commends Congress and the Administration for including cybersecurity in the Infrastructure Investment and Jobs Act. Rapid7 led industry calls to include cybersecurity in the bill, and we are encouraged that several priorities identified by industry are reflected in the text, such as cybersecurity-specific funding for state and local governments and the electrical grid.

On the other hand, cybersecurity will be competing with natural disasters and extreme weather for funding in many (not all) grants created under the bill. In addition, not all critical infrastructure sectors receive cybersecurity resources through the legislation, with healthcare being a notable exclusion. Congress should address these gaps in the upcoming budget reconciliation package.

What’s in the bill for infrastructure cybersecurity

Below is a brief-ish summary of cybersecurity-related items in the bill. The infrastructure sectors with the most allocations appear to be energy, water, transportation, and state and local governments. Many of these funding opportunities take the form of federal grants for infrastructure resilience, which includes cybersecurity as well as natural hazards. Other funds are dedicated solely to cybersecurity.

Please note that this list aims to include major infrastructure cybersecurity funding items, but is not comprehensive. (For example, the bill also provides funding for the National Cyber Director.) Citations to the Senate-passed legislation are included.

  1. State and local governments: $1B over four years for the State, Local, Tribal, and Territorial (SLTT) Grant Program. This new grant program will help SLTT governments to develop or implement cybersecurity plans. FEMA will administer the program. This is also known as the State and Local Cybersecurity Improvement Act. [Sec. 70611]

  2. Energy: $250M over five years for the Rural and Municipal Utility Advanced Cybersecurity Grant and Technological Assistance Program. The Department of Energy (DOE) must create a new program to provide grants and technical assistance to improve electric utilities’ ability to detect, respond to, and recover from cybersecurity threats. [Sec. 40124]

  3. Energy: Enhanced grid security. The DOE must create a program to develop advanced cybersecurity applications and technologies for the energy sector, among other things. Over a period of five years, this section authorizes $250M for the Cybersecurity for the Energy Sector RD&D program, $50M for the Energy Sector Operational Support for Cyberresilience Program, and $50M for Modeling and Assessing Energy Infrastructure Risk. [Sec. 40125]

  4. Energy: State energy security plans. This creates federal financial and technical assistance for states to develop or implement an energy security plan that secures state energy infrastructure against cybersecurity threats, among other things. [Sec. 40108]

  5. Water: $250M over five years for the Midsize and Large Drinking Water System Infrastructure Resilience and Sustainability Program. This creates a new grant program to assist midsize and large drinking water systems with increasing resilience to cybersecurity vulnerabilities, as well as natural hazards. [Sec. 50107]

  6. Water: $175M over five years for technical assistance and grants for emergencies affecting public water systems. This extends an expired fund to help mitigate threats and emergencies to drinking water. This includes, among other things, emergency situations caused by a cybersecurity incident. [Sec. 50101]

  7. Water: $25M over five years for the Clean Water Infrastructure Resiliency and Sustainability Program. This creates a new program providing grants to owners/operators of publicly owned treatment works to increase the resiliency of water systems against cybersecurity vulnerabilities, as well as natural hazards. [Sec. 50205]

  8. Transportation: Cybersecurity eligible for National Highway Performance Program (NHPP). This expands on the existing NHPP grant program to allow states to use funds for resiliency of the National Highway System. "Resiliency" includes cybersecurity, as well as natural hazards. [Sec. 11105]

  9. Transportation: Cybersecurity eligible for Surface Transportation Block Grant Program. This expands the existing grant program to allow funding measures to protect transportation facilities from cybersecurity threats, among other things. [Sec. 11109]

  10. General: $100M over five years for the Cyber Response and Recovery Fund. This creates a fund for CISA to provide direct support to public or private entities that respond to and recover from cyberattacks and breaches designated as a “significant incident.” The support can include technical assistance and response activities, such as vulnerability assessment, threat detection, network protection, and more. The program ends in 2028. [Sec. 70602, Div. J]

Other sectors next?

These cybersecurity items are significant down payments to safeguard the nation’s investment in infrastructure modernization. Combined with the recent Executive Order and memorandum on industrial control systems security, the Biden Administration is demonstrating that cybersecurity is a high priority.

However, more work must be done to address cybersecurity weaknesses in critical infrastructure. While the Infrastructure Investment and Jobs Act provides cybersecurity resources for some sectors, most of the 16 critical infrastructure sectors are excluded. Healthcare is an especially notable example, as the sector faces a serious ransomware problem in the middle of a deadly pandemic.

Congress is now preparing a larger budget reconciliation bill, to be advanced at roughly the same time as the infrastructure legislation. We encourage Congress and the Administration to take this opportunity to boost cybersecurity for other sectors, especially healthcare. As with the infrastructure bill, we suggest providing grants dedicated to cybersecurity, and requiring that grant funds be used to adopt or implement standards-based security safeguards and risk management practices.

Congress’ activity during the COVID-19 crisis continues to be punctuated by large, ambitious bills. To secure the modern economy and essential services, we hope the Infrastructure Investment and Jobs Act sets a precedent that sound cybersecurity policies will be integrated into transformative legislation to come.

Reforming the UK’s Computer Misuse Act

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/08/12/reforming-the-uks-computer-misuse-act/

Reforming the UK’s Computer Misuse Act

The UK Home Office recently ran a Call for Information to investigate the Computer Misuse Act 1990 (CMA). The CMA is the UK’s anti-hacking law, and as Rapid7 is active in the UK and highly engaged in public policy efforts to advance security, we provided feedback on the issues we see with the legislation.

We have some concerns with the CMA in its current form, as well as recommendations for how to address them. Additionally, because Rapid7 has addressed similar issues relating to U.S. laws — particularly as relates to the U.S. equivalent of the CMA, the Computer Fraud and Abuse Act (CFAA) — for each section below, we’ve included a short comparison with U.S. law for those who are interested.

Restrictions on security testing tools and proof-of-concept code

One of the most concerning issues with the CMA is that it imperils dual-use open-source security testing tools and the sharing of proof-of-concept code.

Section 3A(2) of the CMA states:

(2) A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence under section 1, 3 or 3ZA.

Security professionals rely on open-source and other widely available security testing tools that let them emulate the activity of attackers, and on proof-of-concept exploit code that helps organizations test whether their assets are vulnerable. These highly valued parts of robust security testing enable organizations to build defenses and understand the impacts of attacks.

Making these products open source helps keep them up-to-date with the latest attacker methodologies and ensures a broad range of organizations (not just well-resourced organizations) have access to tools to defend themselves. However, because they’re open source and widely available, these defensive tools could still be used by malicious actors for nefarious purposes.

The same issue applies to proof-of-concept exploit code. While the intent of the development and sharing of the code is defensive, there’s always a risk that malicious actors could access exploit code. But this makes the wide availability of testing tools all the more important, so organizations can identify and mitigate their exposure.

Rapid7’s recommendation

Interestingly, this is not an unknown issue — the Crown Prosecution Service (CPS) acknowledges it on their website. We’ve drawn from their guidance, as well as their Fraud Act guidelines, in drafting our recommended response, proposing that the Home Office consider modifying section 3A(2) of the CMA to exempt “articles” that are:

  • Capable of being used for legitimate purposes; and
  • Intended by the creator or supplier of the article to be used for a legitimate purpose; and
  • Widely available; unless
  • The article is deliberately developed or supplied for the sole purpose of committing a CMA offense.

If you’re concerned about creating a loophole in the law that can be exploited by malicious actors, rest assured the CMA would still retain 3A(1) as a means to prosecute those who supply articles with intent to commit CMA offenses.

Comparison with the CFAA

This issue doesn’t arise in the CFAA; however, the U.S. is subject to various export control rules that also restrict the sharing of dual-use security testing tools and proof-of-concept code.

Chilling security research

This is a topic Rapid7 has commented on many times in reference to the CFAA and the Digital Millennium Copyright Act, which is the U.S. equivalent of the UK’s Copyright, Designs and Patents Act 1988.

Independent security research aims to reveal vulnerabilities in technical systems so organizations can deploy better defenses and mitigations. This offers a significant benefit to society, but the CMA makes no provision for legitimate, good-faith testing. While Section 1(1) acknowledges that you must have intent to access the computer without authorization, it doesn’t mention that the motive to do so must be malicious, only that the actor intended to gain access without authorization. The CMA states:

(1) A person is guilty of an offence if—

a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer, or to enable any such access to be secured;

b) the access he intends to secure, or to enable to be secured, is unauthorised; and

c) he knows at the time when he causes the computer to perform the function that that is the case.

Many types of independent security research, including port scanning and vulnerability investigations, could meet that description. As frequently noted in the context of the CFAA, it’s often not clear what qualifies as authorization to access assets connected to the internet, and independent security researchers often aren’t given explicit authorization to access a system.
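To make concrete the kind of activity at stake, the sketch below shows what a basic, non-intrusive port check looks like, using only Python's standard library. The host and ports here are illustrative placeholders; a simple TCP connection attempt like this is the building block of the internet-wide scanning described above, and it is exactly the sort of action whose "authorization" the CMA leaves undefined.

```python
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True if the port accepts connections.

    This performs no exploitation -- it only observes whether a service
    is reachable, the same signal a researcher's scan would collect.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative only: check a few common service ports on a host
# you own or are explicitly authorized to test.
for port in (22, 80, 443):
    status = "open" if check_port("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {status}")
```

Even this trivial probe "causes a computer to perform a function," which is why researchers want clarity on where the legal line sits.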

It’s worth noting that neither the National Crime Agency (NCA) nor the CPS seems to be recklessly pursuing frivolous investigations or prosecutions of good-faith security research. Nonetheless, the current legal language does expose researchers to legal risk and uncertainty, and it would be good to see some clarity on the topic.

Rapid7’s recommendation

Creating effective legal protections for good-faith, legitimate security research is challenging. We must avoid inadvertently creating a backdoor in the law that provides a defense for malicious actors or permits activities that can create unintended harm. As legislators consider options on this, we strongly recommend considering the following questions:

  • How do you determine whether research is legitimate and justified? Some considerations include whether sensitive information was accessed, and if so, how much – is there a threshold for what might be acceptable? Was any damage or disruption caused by the action? Did the researcher demand financial compensation from the technology manufacturer or operator?

For example, in our work on the CFAA, Rapid7 has proposed the following legal language to indicate what is understood by “good-faith security research.”

The term “good faith security research” means good faith testing or investigation to detect one or more security flaws or vulnerabilities in software, hardware, or firmware of a protected computer for the purpose of promoting the security or safety of the software, hardware, or firmware.

(A) The person carrying out such activity shall

(i) carry out such activity in a manner reasonably designed to minimize and avoid unnecessary damage or loss to property or persons;

(ii) take reasonable steps, with regard to any information obtained without authorization, to minimize the information the person obtains, retains, and discloses to only that information which the person reasonably believes is directly necessary to test, investigate, or mitigate a security flaw or vulnerability;

(iii) take reasonable steps to disclose any security vulnerability derived from such activity to the owner of the protected computer or the Cybersecurity and Infrastructure Security Agency prior to disclosure to any other party;

(iv) wait a reasonable amount of time before publicly disclosing any security flaw or vulnerability derived from such activity, taking into consideration the following:

(I) the severity of the vulnerability,

(II) the difficulty of mitigating the vulnerability,

(III) industry best practices, and

(IV) the willingness and ability of the owner of the protected computer to mitigate the vulnerability;

(v) not publicly disclose information obtained without authorization that is

(I) a trade secret without the permission of the owner of the trade secret; or

(II) the personally identifiable information of another individual, without the permission of that individual; and

(vi) does not use a nonpublic security flaw or vulnerability derived from such activity for any primarily commercial purpose prior to disclosing the flaw or vulnerability to the owner of the protected computer or the [government vulnerability coordination body].

(B) For purposes of subsection (A), it is not a public disclosure to disclose a vulnerability or other information derived from good faith security research to the [government vulnerability coordination body].

  • What happens if a researcher does not find anything to report? Some proposals for reforming the CMA have suggested requiring coordinated disclosure as a predicate for a research carve-out. This only works if the researcher actually finds something worth reporting. What happens if they do not? Is the research then not defensible?
  • Are we balancing the rights and safety of others with the need for security? For example, easing restrictions for threat intel investigators and security researchers may create a misalignment with existing privacy legislation. This may require balancing controls to protect the rights and safety of others.

The line between legitimate research and hack back

In discussions on CMA reform, we often hear the chilling effect on security research being lumped in with arguments for expanding authorities for threat intelligence gathering and operations. The latter sound alarmingly like requests for private-sector hack back (despite assertions otherwise). We believe it is critical that policymakers understand the distinction between acknowledging the importance of good-faith security research on the one hand and authorizing private-sector hack back on the other.

We understand private-sector hack back to mean an organization taking intrusive action against a cyber-attacker on technical assets or systems not owned or leased by the entity taking action or their client. While threat intel campaigners may disclaim hack back, in asking for authorization to take intrusive action on third-party systems — whether to better understand attacks, disrupt them, or even recapture lost data — they’re certainly satisfying the description of hack back and raising a number of concerns.

Rapid7 is strongly opposed to private-sector hack back. While we view both independent, good-faith security research and threat intelligence investigations as critical for security, we believe the two categories of activity need separate and distinct legal restrictions.

Good-faith security research is typically performed independently of manufacturers and operators in order to identify flaws or exposures in systems that provide opportunities for attackers. The goal is to remediate or mitigate these issues so that we reduce opportunities for attackers and decrease the risk for technology users. These activities often need to be undertaken without authorization to avoid blowback from manufacturers or operators that prioritize their reputation or profit above the security of their customers.

This activity is about protecting the safety and privacy of the many, and while researchers may take actions without authorization, they only do so on the technology of those ultimately responsible for both creating and mitigating the exposure. Without becoming aware of the issue, the technology provider and their users would continue to be exposed to risk.

In contrast, threat intel activities that involve interrogating or interacting with third-party assets prioritize the interests of a specific entity over those of other potential victims, whose compromised assets may have been leveraged in the attack. While threat intelligence can be very valuable in helping us understand how attackers behave — which can help others identify or prepare for attacks — data gathering and operations should be limited to assessing threats to assets that are owned or operated by the authorizing entity, or to non-invasive activities such as port scanning. More invasive activities can result in unintended consequences, including escalation of aggression, disruption or destruction for innocent third parties, and a quagmire of legal liability.

Because cyber attacks are criminal activity, if more investigation is needed, it should be undertaken with appropriate law enforcement involvement and oversight. We see no practical way to provide appropriate oversight or standards for the private sector to engage in this kind of activity.

Comparison to the CFAA

This issue also arises in the CFAA. In fact, it’s exacerbated by the CFAA enabling private entities to pursue civil causes of action, which mean technology manufacturers and operators can seek to apply the CFAA in private cases against researchers. This is often done to protect corporate reputations, likely at the expense of technology users who are being exposed to risk. These private civil actions chill security research and account for the vast majority of CFAA cases and lawsuit threats focused on research. One of Rapid7’s recommendations to the UK Home Office was that the CMA should not be updated to include civil liability.

Washington State has helped protect good-faith security research in its Cybercrime Act (Chapter 9A.90 RCW), which both addresses the issue of defining authorization and exempts white-hat security research.

It’s also worth noting that the U.S. has an exemption for security research in Section 1201 of the Digital Millennium Copyright Act (DMCA). It would be good to see the UK government consider something similar for the Copyright, Designs and Patents Act 1988.

Clarifying authorization

At its core, the CMA effectively operates as a law prohibiting digital trespass and hinges on the concept of authorization. Four of the five classes of offenses laid out in the CMA involve “unauthorized” activities:

1. Unauthorised access to computer material.

2. Unauthorised access with intent to commit or facilitate commission of further offences.

3. Unauthorised acts with intent to impair, or with recklessness as to impairing, operation of computer, etc.

3ZA. Unauthorised acts causing, or creating risk of, serious damage

Unfortunately, the CMA does not define authorization (or the lack thereof), nor detail what authorization should look like. As a result, it can be hard to know with certainty where the legal line is truly being drawn in the context of the internet, where many users don’t read or understand lengthy terms of service, and data and services are often publicly accessible for a wide variety of novel uses.

Many people take the view that if something is accessible in public spaces on the internet, authorization to access it is inherently granted. In this view, the responsibility lies with the owner or operator to ensure that if they don’t want to grant access to something, they don’t make it publicly available.

That being the case, the question becomes how systems owners and operators can indicate a lack of authorization for accessing systems or information in a way that scales, while still enabling broad access and innovative use of online services. In the physical world, we have an expectation that both public and private spaces exist. If a space is private and the owners don’t want others to access it, they can indicate this through signage or physical barriers (walls, fences, or gates). Currently, there is no accepted, standard way for owners and operators to set out a “No Trespassing” sign for publicly accessible data or systems on the internet that truly serves the intended purpose.

Rapid7’s recommendation

While a website’s Terms of Service (TOS) can be legally enforceable in some contexts, in our opinion the Home Office should not take the position that violations of TOS alone qualify as “unauthorized acts.” TOS are almost always ignored by the vast majority of internet users, and ordinary internet behavior may routinely violate TOS (such as using a pseudonym where a real name is required).

Reading TOS also does not scale for internet-wide scanning, as in the case of automated port scanning and other services that analyze the status of millions of publicly accessible websites and online assets. In addition, if TOS is “authorization” for the purposes of the CMA, it gives the author of the TOS the power to define what is and isn’t a hacking crime under CMA section 1.

To address this lack of clarity, the CMA needs a clearer explanation of what constitutes authorization for accessing technical systems or information through the internet and other forms of connected communications.

Comparison with the CFAA

This issue absolutely exists with the CFAA and is at the core of many of the criticisms of the law. Multiple U.S. cases have rejected the notion that TOS violations alone qualify as “exceeding authorization” under the CFAA, creating a split in the courts. The U.S. Supreme Court’s recent decision in Van Buren v. United States confirmed TOS is an insufficient standard, noting that if TOS violations alone qualify as unauthorized acts for computer crime purposes, “then millions of otherwise law-abiding citizens are criminals.”

Next steps

We hope the Home Office will take these concerns into consideration, both in terms of ensuring the necessary support for security testing tools and security research, and also in being cautious not to go so far with authorities that we open the door to abuses. We’ll continue to engage on these topics wherever possible to help policymakers navigate the nuances and keep advancing security.

You can read Rapid7’s full response to the Home Office’s Call for Information or our detailed CMA position.



Hack Back Is Still Wack

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/08/10/hack-back-is-still-wack/

Hack Back Is Still Wack

Every year or two, we see a policy proposal around authorizing private-sector hack back. The latest of these is legislation from two U.S. Senators, Daines and Whitehouse, and it would require the U.S. Department of Homeland Security (DHS) to “conduct a study on the potential benefits and risks of amending section 1030 of title 18, United States Code (commonly known as the ‘Computer Fraud and Abuse Act’), to allow private entities to take proportional actions in response to an unlawful network breach, subject to oversight and regulation by a designated Federal agency.”

While we believe the bill would be harmful and do not support it in any way, we do acknowledge that at least this legislation attempts to address how hack back could work in practice and to identify the potential risks. This gets at the heart of one of the main issues with policy proposals for hack back — they rarely address how it would actually work in reality, and how opportunities for abuse or unintended harms would be handled.

Rapid7 does not believe it’s possible to provide sufficient oversight or accountability to make private-sector hack back viable without negative consequences. Further, the very fact that we’re once again discussing private-sector hack back as a possibility is extremely troubling.

Here, we’ll outline why Rapid7 is against the authorization of private-sector hack back.

What is hack back?

When we say “hack back,” we’re referring to non-government organizations taking intrusive action against a cyber attacker on technical assets or systems not owned or leased by the person taking action or their client. This is generally illegal in countries that have anti-hacking laws.

The appeal of hack back is easy to understand. Organizations are subject to more frequent, varied, and costly attacks, often from cybercriminals who have no fear of reprisal or prosecution due to the existence of safe-haven nations that either can’t or won’t crack down on their activities. The scales feel firmly stacked in the favor of these cybercriminals, and it’s understandable that organizations want to shift that balance and give attackers reason to think again before targeting them.

Along these lines, arguments for hack back justify it in a number of ways, citing a desire to recapture lost data, better understand the nature of the attacks, neutralize threats, or use the method as a tit for tat. Hack back activities may be conflated with threat hunting, threat intelligence, or detection and response activities. Confusingly, some proponents of these activities are quick to decry hack back while simultaneously advocating for authority to take intrusive action on third-party assets without consent from their owners.

Hack back is also sometimes referred to as Active Defense or Active Cyber Defense. This can cause confusion, as these terms can also refer to other defensive measures that are not intrusive or conducted without consent from the technology owner. For example, active defense can also describe intrusion prevention systems or deception technologies designed to confuse attackers and gain greater intelligence on them, such as honeypots. Rapid7 encourages organizations to employ active defense techniques within their own environments.
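To illustrate the in-environment deception techniques mentioned above, here is a minimal sketch of a honeypot: a listener on an otherwise-unused port that logs every connection attempt. Everything in it (the port choice, the event format) is an illustrative assumption, not a production design or any particular vendor's implementation; the point is that this kind of active defense operates entirely on assets the defender owns.

```python
import socket
from datetime import datetime, timezone

def run_honeypot(host: str = "0.0.0.0", port: int = 2222, max_conns: int = 1):
    """Listen on an otherwise-unused port and log connection attempts.

    No legitimate service lives on this port, so any connection is
    suspicious by construction -- useful intelligence gathered without
    ever touching a third party's systems.
    """
    events = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                ts = datetime.now(timezone.utc).isoformat()
                events.append({"time": ts, "source": addr[0], "port": addr[1]})
    return events
```

Contrast this with hack back: the honeypot waits passively inside the defender's own perimeter, whereas hack back reaches out to systems the defender neither owns nor controls.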

Rapid7’s criticisms of hack back

While the reasons for advocating for private-sector hack back are easy to understand and empathize with, that doesn’t make the idea workable in practice. There’s a wealth of reasons why hack back is a bad idea.

Impracticalities of attribution and application

One of the most widely stated and agreed-upon tenets in security is that attribution is hard. In fact, in many cases, it’s essentially impossible to know for certain that we’ve accurately attributed an attack. Even when we find indications that point in a certain direction, it’s very difficult to ensure they’re not red herrings intentionally planted by the attacker, either to throw suspicion off themselves or specifically to incriminate another party.

We like to talk about digital fingerprints, but the reality is that there’s no such thing: In the digital world, pretty much anything can be spoofed or obfuscated with enough time, patience, skill, and resources. Attackers are constantly evolving their techniques to stay one step ahead of defenders and law enforcement, and the emergence of deception capabilities is just one example of this. So being certain we have the right actor before we take action is extremely difficult.

In addition, where do we draw the line in determining whether an actor or computing entity could be considered a viable target? For example, if someone is under attack from devices that are being controlled as part of a botnet, those devices – and their owners – are as much victims of the attacker as the target of the attack.

Rapid7’s Project Heisenberg observes exactly this phenomenon: The honeypots often pick up traffic from legitimate organizations whose systems have been compromised and leveraged in malicious activity. Should one of these compromised systems be used to attack an organization, and that organization then take action against those affected systems to neutralize the threat against themselves, that would mean the organization defending itself was revictimizing the entity whose systems were already compromised. Depending on the action taken, this could end up being catastrophic and costly for both organizations.  

We must also take motivations into account, even though they’re often unclear or easy to misunderstand. For example, research projects that scan ports on the public-facing internet do so in order to help others understand the attack surface and reduce exposure and opportunities for attackers. This activity is benign and often results in security disclosures that have helped security professionals reduce their organization’s risk. However, it’s not unusual for these scans to encounter a perimeter monitoring tool, throwing up an alert to the security team. If an organization saw the alerts and, in their urgency to defend themselves, took a “shoot first, ask questions later” approach, they could end up attacking the researcher.

Impracticalities of limiting reach and impact

Many people have likened hack back to homeowners defending their property against intruders. They evoke images of malicious, armed criminals breaking into your home to do you and your loved ones harm. They call to you to arm yourself and stand bravely in defense, refusing to be a victim in your own home.

It’s an appealing idea — however, the reality is more akin to standing by your fence and spraying bullets out into the street, hoping to get lucky and stop an attacker as they flee the scene of the crime. With such an approach, even if you do manage to reach your attacker, you’re risking terrible collateral damage, too.

This is because the internet doesn’t operate in neatly defined and clearly demarcated boundaries. If we take action targeted at a specific actor or group of actors, it would be extremely challenging to ensure that action won’t unintentionally negatively impact innocent others. Not only should this concern lawmakers, it should also disincentivize participation. The potential negative consequences of a hack back gone awry could be far-reaching. We frequently discuss damage to equipment or systems, or loss of data, but in the age of the Internet of Things, negative consequences could include physical harm to individuals. And let’s not forget that cyberattacks can be considered acts of war.

Organizations that believe they can avoid negative outcomes in the majority of cases need to understand that even just one or two errors could be extremely costly. Imagine, for example, that a high-value target organization, such as a bank, undertakes 100 hack backs per year and makes a negatively impactful error on two occasions. A 2% fail rate may not seem that terrible — but if either or both of those errors resulted in compromise of another company or harm to a group of individuals, the hack-backer could see themselves tied up in expensive legal proceedings, reputational damage, and loss of trust. Attempts to make organizations exempt from this kind of legal action are problematic, as they raise the question of how we can spot and stop abuses.

Impracticalities of providing appropriate oversight

To date, proposals to legalize hack back have been overly broad and non-specific about how such activities should be managed, and what oversight would be required to ensure there are no abuses of the system. The Daines/Whitehouse bill tries to address this and alludes to a framework for oversight that would determine “which entities would be allowed to take such actions and under what circumstances.”

This seems to refer to an approach commonly advocated by proponents of hack back whereby a license or special authorization to conduct hack back activities is granted to vetted and approved entities. Some advocates have pointed to the example of how privateers were issued Letters of Marque to capture enemy ships — and their associated spoils. Putting aside fundamental concerns about taking as our standard a 200-year-old law passed during a time of prolonged kinetic war and effectively legalizing piracy, there are a number of pragmatic issues with how this would work in practice.  

Indeed, creating a framework and system for such oversight is highly impractical and costly, raising many issues. The government would need to determine basic administrative issues, such as who would run it and how it would be funded. It would also need to identify a path to address far more complex issues around accountability and oversight to avoid abuses. For example, who will determine which activities are acceptable and where the line should be drawn? How would an authorizing agent ensure standards are met and maintained within approved organizations? Existing cybersecurity certification and accreditation schemes have long raised concerns, and these will only worsen when certification results in increased authorities for activities that can result in harm and escalation of aggressions on the internet.

When a government entity itself takes action against attackers, it does so with a high degree of oversight and accountability. It must meet evidentiary standards to prove the action is appropriate, and even then, there are parameters determining the types of targets it can pursue and the kinds of actions it can take. Applying the same level of oversight to the private sector is impractical. At the same time, authorizing the private sector to participate in these activities without this same level of oversight would undermine the checks and balances in place for the government and likely lead to unintended harms.

An authorizing agent cannot have eyes everywhere and at all times, so it would be highly impractical to create a system for oversight that would enable the governing authority to spot and stop accidental or intentional abuses of the system in real time. If the Daines/Whitehouse bill does pass (and we have no indication of that at present), I very much hope that DHS’s resulting report will reflect these issues or, if possible, provide adequate responses to address these concerns.

These issues of practical execution also raise questions around who will bear the responsibility and liability if something goes wrong. For example, if a company hacks back and accidentally harms another organization or individual, the entity that undertook the hacking may incur expensive legal proceedings, reputational damage, and loss of trust. They could become embroiled in complicated and expensive multi-jurisdiction legal action, even if the company has a license to hack back in its home jurisdiction. In scenarios where hack back activities are undertaken by an organization or individual on behalf of a third party, both the agent and their client may bear these negative consequences. There may also be an argument that any licensing authority could also bear some of the liability.  

Making organizations exempt from legal action around unintended consequences would be problematic and likely to result in more recklessness, as well as infringing on the rights of the victim organization. While the internet is a borderless space accessed from every country in the world, each of those countries has its own legal system and expects its citizens to abide by it. It would be very difficult for companies and individuals who hack back to avoid running afoul of the laws of other countries or international bodies. When national governments take this kind of action, it tends to occur within existing international legal frameworks and under some regulatory oversight, but this may not apply in the private sector, again raising the question of where the liability rests.

It’s also worth noting that once one major power authorizes private-sector hack back, other governments will likely follow, and legal expectations or boundaries may vary. This raises questions of how governments will respond when their citizens are being attacked as part of a private-sector hack back gone wrong, and whether it will likely lead to escalation of political tensions.

Inequalities of applicability

Should a viable system be developed and hack back authorized, effective participation would likely be costly, as it would require specialist skills. Not every organization would be able to participate. If the authorization framework isn’t stringent, many organizations might try to participate with insufficient expertise, which would likely be ineffective, damaging, or both. At the same time, other organizations won’t have the maturity or budget to participate even in this way.

These are the same organizations that sit below the “cybersecurity poverty line” and can’t afford a great deal of in-house security expertise and technologies to protect themselves – in other words, these organizations are already highly vulnerable. As organizations that do have sufficient resources start to hack back, the cost of attacking these organizations will increase. Profit-motivated attackers will eventually shift toward targeting the less-resourced organizations that reside below the cybersecurity poverty line. Rather than authorizing a measure as fraught with risk as hack back, we should instead be thinking about how to better protect these vulnerable organizations — for example, by subsidizing or incentivizing security hygiene.

The line between legitimate research and hack back

Those who follow Rapid7’s policy work will know that we’re big proponents of security research and have worked for many years to see greater recognition of its value and importance in public policy. It may come as a surprise to see us advocate so enthusiastically against hack back as, at first glance, they have some things in common. In both cases, we’re talking about activity undertaken in the name of cybersecurity, which may be intrusive in nature and involve third-party assets without consent of the owner.

While independent, good-faith security research and threat intelligence investigations are both very valuable for security, they’re not the same thing, and we don’t believe we should view related legal restrictions in the same way for both.

Good-faith security research is typically performed independently of manufacturers and operators in order to identify flaws or exposures in systems that provide opportunities for attackers. The goal is to remediate or mitigate these issues so we can reduce opportunities for attackers and thus decrease the risk for technology users. This kind of research is generally about protecting the safety and privacy of the many, and while researchers may take actions without authorization, they only perform those actions on the technology of those ultimately responsible for both creating and mitigating the exposure. Without becoming aware of the issue, the technology provider and their users would continue to be exposed to risk.

Research may bypass authorization to sidestep issues arising from manufacturers and operators prioritizing their reputation or profit above the security of their customers. In contrast, threat intel investigations or operations that involve interrogating or interacting with third-party assets prioritize the interests of the specific entity undertaking or commissioning the activity, rather than other potential victims whose compromised assets may have been leveraged in the attack.

While threat intelligence can help us understand attacker behavior and identify or prepare for attacks, data gathering and operations should be limited only to assessing risks and threats to assets that are owned or operated by the entity authorizing the work, or to non-invasive activities such as port scanning. Because cyber attacks are criminal activity, if more investigation is needed, it should be undertaken with appropriate law enforcement involvement and oversight.

The path forward

It seems likely that the hack back debate will continue to come up as organizations strive to find new ways to repel attacks. I could make a snarky comment here about how organizations should perhaps focus instead on user awareness training, reducing their attack exposure, managing supply chain risk, proper segmentation, patching, Identity and Access Management (IAM), and all the other things that make up a robust defense-in-depth program and that we frequently see fail, but I shall refrain. Cough cough.

We shall wait to see what happens with Senators Daines’ and Whitehouse’s “Study on Cyber-Attack Response Options Act” bill and hope that, if it passes, DHS will consider the concerns raised in this blog. The same is true for other policymakers as cybercrime is an international blight and governments around the world are subject to lobbying from entities looking to take a more active role in their defense. While we understand and sympathize with the desire to do more, take more control, and fight back, we urge policymakers to be mindful of the potential for catastrophe.



Rapid7 Joins Statement On DMCA Lawsuits Against Security Tools

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/06/23/rapid7-joins-statement/

Rapid7 Joins Statement On DMCA Lawsuits Against Security Tools

Rapid7 has joined a statement from members of the cybersecurity community cautioning against using Section 1201 of the Digital Millennium Copyright Act (DMCA) to suppress beneficial security tools.

In the past, Rapid7 has written extensively about DMCA Sec. 1201’s impact on performing independent research to improve the security and transparency of devices and systems that consumers and businesses rely on. We have called for better protections for good faith security researchers from DMCA Sec. 1201’s prohibition on circumventing technological protection measures (such as encryption or authentication) that control access to software. While there is still work to be done in this area, protections for security testing under DMCA have improved since 2015 as policymakers have increasingly recognized researchers’ helpful role in strengthening security.

However, DMCA Sec. 1201 also gives software owners the ability to file lawsuits against organizations and individuals that provide the technologies and tools that security researchers and practitioners use. See 17 USC 1201(a)(2) and 1201(b). This is separate from the act of performing security testing, and the law affords fewer protections for providing security technologies than for testing. Yet, as the joint statement notes, security practitioners often must depend on third-party tools to test software security, both for research and as part of an organizational security program. It would be risky and burdensome to require researchers and practitioners to create their own testing tools, and limiting the use of security tools to those approved by the software owner would undermine the effectiveness of testing and introduce conflicts of interest.

With that in mind, the below statement urges prosecutors and private entities to refrain from using DMCA Sec. 1201 to unnecessarily target security testing tools and technologies. A PDF copy of the statement is also available here.


We the undersigned write to caution against use of Section 1201 of the Digital Millennium Copyright Act (DMCA) to suppress software and tools used for good faith cybersecurity research. Security and encryption researchers help build a safer future for all of us by identifying vulnerabilities in digital technologies and raising awareness so those vulnerabilities can be mitigated. Indeed, some of the most critical cybersecurity flaws of the last decade, like Heartbleed, Shellshock, and DROWN, have been discovered by independent security researchers.

However, too many legitimate researchers face serious legal challenges that prevent or inhibit their work. One of these critical legal challenges comes from provisions of the DMCA that prohibit providing technologies, tools, or services to the public that circumvent technological protection measures (such as bypassing shared default credentials, weak encryption, etc.) to access copyrighted software without the permission of the software owner. 17 USC 1201(a)(2), (b). This creates a risk of private lawsuits and criminal penalties for independent organizations that provide technologies to researchers that can help strengthen software security and protect users. Security research on devices, which is vital to increasing the safety and security of people around the world, often requires these technologies to be effective.

Good faith security researchers depend on these tools to test security flaws and vulnerabilities in software, not to infringe on copyright. While Sec. 1201(j) purports to provide an exemption for good faith security testing, including using technological means, the exemption is both too narrow and too vague. Most critically, 1201(j)’s accommodation for using, developing, or sharing security testing tools is similarly confined; the tool must be for the “sole purpose” of security testing, and not otherwise violate the DMCA’s prohibition against providing circumvention tools.

If security researchers must obtain permission from the software vendor to use third-party security tools, this significantly hinders the independence and ability of researchers to test the security of software without any conflict of interest. In addition, it would be unrealistic, burdensome, and risky to require each security researcher to create their own bespoke security testing technologies.

We, the undersigned, believe that legal threats against the creation of tools that let people conduct security research actively harm our cybersecurity. DMCA Section 1201 should be used in such circumstances with great caution and in consideration of broader security concerns, not just for competitive economic advantage. We urge policymakers and legislators to reform Section 1201 to allow security research tools to be provided and used for good faith security research. In addition, we urge companies and prosecutors to refrain from using Section 1201 to unnecessarily target tools used for security research.

Bishop Fox
Black Hills Information Security
Cybersecurity Coalition
Electronic Frontier Foundation
Luta Security
NCC Group
Red Siege
SANS Technology Institute
Social Exploits LLC

Principles for personal information security legislation

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/01/21/principles-for-personal-information-security-legislation/

Principles for personal information security legislation

It goes without saying that the 117th US Congress has a lot to get done and many legitimate priorities are competing for finite legislative attention. Cybersecurity will be in this mix. In the wake of the SolarWinds attack, President-elect Biden issued a statement emphasizing that his Administration will make cybersecurity “a top priority at every level of government.” Vice President-elect Harris was a leader on consumer privacy protection during her time as California’s Attorney General. Given the Democrat-controlled Congress, the multiple privacy/security bills filed in many past legislative sessions, and continued action by states such as California and Washington, businesses should anticipate another push for federal private sector privacy and security legislation in the upcoming Congress. (Whether such legislation actually passes, given the mix of other priorities, is another matter!)

[Check out our past blog post on updating state data security laws.]

Rapid7 welcomes this effort. Congress should pass legislation requiring security of personal information nationwide, independent of breach notification, either as a standalone or as part of comprehensive privacy legislation. We believe consumers and businesses will benefit from uniform rules on data protection, rather than the growing patchwork of state and international privacy regulations. Although security of personal information is only one slice of the broader issue of cybersecurity, it is one that directly affects many individuals, and the ripple effect of requirements to secure personal information will help raise the overall bar on the security of other systems and entities.

A simple model of the overlap. (See also NIST Privacy Framework, pg. 3.)

While any federal privacy and security legislation must accommodate a broad diversity of organizations, it must also be effective and not take a significant step back from protections consumers have today. To strike that balance, we suggest the below high-level principles for personal information security legislation.

What we look for in personal information security legislation

1. Effective security requirements.
Require private sector entities to implement reasonable security processes for personal information they collect, store, maintain, or transmit (AKA “personal information security”) to protect that information against unauthorized acquisition and access. The requirements should be:

  • Flexible. To avoid a disproportionate burden, the security program should be flexible to accommodate a diverse range of organizations. These considerations can include the size and complexity of the covered entity, the sensitivity of the personal information, the state of the art in security protection, and material cybersecurity risks and threats.

  • Built around key security program elements. To keep pace with evolving threats for a diversity of organizations, required security elements should be risk-based. However, legislation should specify that the security program must include high level elements based on standards, best practices, and existing regulations such as GLBA and HIPAA. We believe those elements should include:

    • Designate staff. Designate employees to coordinate the security program, and ensure they have adequate resources and training.
    • Risk assessment. Assess material internal and external risks, threats, and vulnerabilities to security of personal information. Periodically reassess.
    • Security controls. Implement physical, technical, and administrative safeguards reasonably designed to control risks identified through the risk assessment. Inclusion of “reasonably” is helpful to ensure the controls are objectively commensurate with the risks.
    • Monitoring and testing. Regularly test and monitor the effectiveness of safeguards.
    • Service providers. Oversee service providers’ implementation of security safeguards.
    • Incident detection and response. Develop and implement written plans to detect, respond to, and recover from cybersecurity intrusions, events, and incidents.
  • Not limited to financial harm. Security requirements should not be limited to protecting against financial, physical, or “substantial” harm. This common loophole would be severely limiting and out of step with user expectations, the wide array of threats organizations face, and many state laws (see, e.g., CA, FL, NY, TX).

2. Security exemption from privacy restrictions.
Privacy law and legislation tend to include a series of exemptions for publicly beneficial purposes, such as public safety (see, e.g., GDPR). If security is folded into privacy legislation, it is important to avoid impeding cybersecurity by exempting activities necessary for security from privacy restrictions. For example, a privacy requirement to obtain consent to share personal information should not apply in a situation where a covered entity detects a phishing attempt and shares the phishing email with other companies or an ISAC to help prevent successful phishing.

  • Scope. The security exemption should cover activities for prevention, detection, and response of cybersecurity incidents and threats. Security should be exempt from a broad range of privacy requirements, rather than just a limited subset, to be most effective.

3. Preemption without undermining cybersecurity.
The issue of whether federal rules should preempt state laws is hotly contested, and a major challenge to the progress of past privacy and security legislation. (Rapid7 supports preemption of state security laws with a strong national standard to provide consistent protections to consumers and because complexity hinders compliance.) For purposes of these principles, we are focusing here only on ensuring that any federal preemption avoids undermining other cybersecurity laws, rules, and standards that are not directly related to personal information security. Examples of state rules that should not be preempted by a federal personal information security law include system security requirements, IoT and critical infrastructure security standards, government procurement requirements, computer crime and wiretapping laws, etc.

  • Scope. Rapid7’s recommendation is to preempt state laws, regulations, and rules having the force of law (not standards without the force of law) that require covered entities (as defined in the legislation) to secure personal information (as defined in the legislation). An example of legislation with an overly broad preemption provision that does not meet this framework, and may undermine cybersecurity, is the SAFE DATA Act of 2020 (S.4626, Sec. 405).

In an effort to be relatively brief, we won’t attempt to cover all the nuance and context for information security legislation (like the much-contested definition of “personal information”). Instead, the above focuses on a few security-specific sticking points that we believe are worth emphasizing based on our experience in working on past legislative efforts.

Worth consistent protections

Federal personal information security legislation is past due, as every year of delay tends to see uncoordinated growth in state and international privacy and data security regulations. Nonetheless, we should temper our expectations about the chances of federal privacy or security legislation crossing the Congressional finish line in the next two years amid competing priorities. Still, we are confident personal information security requirements will be revisited and worked on again during the 117th Congress, and the above principles will help guide Rapid7’s response.