Tag Archives: cybersecurity

Ransomware Gang Files SEC Complaint

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/11/ransomware-gang-files-sec-complaint.html

A ransomware gang, annoyed at not being paid, filed an SEC complaint against its victim for not disclosing its security breach within the required four days.

This is over the top, but is just another example of the extreme pressure ransomware gangs put on companies after seizing their data. Gangs are now going through the data, looking for particularly important or embarrassing pieces of data to threaten executives with exposing. I have heard stories of executives’ families being threatened, of consensual porn being identified (people regularly mix work and personal email) and exposed, and of victims’ customers and partners being directly contacted. Ransoms are in the millions, and gangs do their best to ensure that the pressure to pay is intense.

New York Increases Cybersecurity Rules for Financial Companies

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/11/new-york-increases-cybersecurity-rules-for-financial-companies.html

Another example of a large and influential state doing things the federal government won’t:

Boards of directors, or other senior committees, are charged with overseeing cybersecurity risk management, and must retain an appropriate level of expertise to understand cyber issues, the rules say. Directors must sign off on cybersecurity programs, and ensure that any security program has “sufficient resources” to function.

In a new addition, companies now face significant requirements related to ransom payments. Regulated firms must now report any payment made to hackers within 24 hours of that payment.

Prepare your AWS workloads for the “Operational risks and resilience – banks” FINMA Circular

Post Syndicated from Margo Cronin original https://aws.amazon.com/blogs/security/prepare-your-aws-workloads-for-the-operational-risks-and-resilience-banks-finma-circular/

In December 2022, FINMA, the Swiss Financial Market Supervisory Authority, announced a fully revised circular called Operational risks and resilience – banks that will take effect on January 1, 2024. The circular will replace the Swiss Bankers Association’s Recommendations for Business Continuity Management (BCM), which is currently recognized as a minimum standard. The new circular also adopts the revised principles for managing operational risks, and the new principles on operational resilience, that the Basel Committee on Banking Supervision published in March 2021.

In this blog post, we share key considerations for AWS customers and regulated financial institutions to help them prepare for, and align to, the new circular.

AWS previously announced the publication of the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland. The guide refers to certain rules applicable to financial institutions in Switzerland, including banks, insurance companies, stock exchanges, securities dealers, portfolio managers, trustees, and other financial entities that FINMA oversees (directly or indirectly).

FINMA has previously issued the following circulars to help regulated financial institutions understand approaches to due diligence, third party management, and key technical and organizational controls to be implemented in cloud outsourcing arrangements, particularly for material workloads:

  • 2018/03 FINMA Circular Outsourcing – banks and insurers (31.10.2019)
  • 2008/21 FINMA Circular Operational Risks – Banks (31.10.2019) – Principle 4 Technology Infrastructure
  • 2008/21 FINMA Circular Operational Risks – Banks (31.10.2019) – Appendix 3 Handling of electronic Client Identifying Data
  • 2013/03 Auditing (04.11.2020) – Information Technology (21.04.2020)
  • BCM minimum standards proposed by the Swiss Insurance Association (01.06.2015) and Swiss Bankers Association (29.08.2013)

Operational risk management: Critical data

The circular defines critical data as follows:

“Critical data are data that, in view of the institution’s size, complexity, structure, risk profile and business model, are of such crucial significance that they require increased security measures. These are data that are crucial for the successful and sustainable provision of the institution’s services or for regulatory purposes. When assessing and determining the criticality of data, the confidentiality as well as the integrity and availability must be taken into account. Each of these three aspects can determine whether data is classified as critical.”
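
To make the last sentence of that definition concrete, here is a minimal sketch (our own illustration, not FINMA or AWS wording) of the classification rule it implies: data counts as critical as soon as any single one of the three aspects requires increased protection.

```python
# A minimal sketch of the rule in the definition above: any one of the three
# aspects, assessed on its own, is enough to classify data as critical.
def is_critical(confidentiality_high: bool, integrity_high: bool,
                availability_high: bool) -> bool:
    return confidentiality_high or integrity_high or availability_high

# Hypothetical example: payment instructions may need high integrity and
# availability even where confidentiality requirements are moderate.
print(is_critical(confidentiality_high=False, integrity_high=True,
                  availability_high=True))  # True
```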

This definition is consistent with the AWS approach to privacy and security. We believe that for AWS to realize its full potential, customers must have control over their data. This includes the following commitments:

  • Control over the location of your data
  • Verifiable control over data access
  • Ability to encrypt everything everywhere
  • Resilience of AWS

These commitments further demonstrate our dedication to securing your data: it’s our highest priority. We implement rigorous contractual, technical, and organizational measures to help protect the confidentiality, integrity, and availability of your content regardless of which AWS Region you select. You have complete control over your content through powerful AWS services and tools that you can use to determine where to store your data, how to secure it, and who can access it.

You also have control over the location of your content on AWS. For example, in Europe, at the time of publication of this blog post, customers can deploy their data into any of eight Regions (for an up-to-date list of Regions, see AWS Global Infrastructure). One of these Regions is the Europe (Zurich) Region, also known by its API name ‘eu-central-2’, which customers can use to store data in Switzerland. Additionally, Swiss customers can rely on the terms of the AWS Swiss Addendum to the AWS Data Processing Addendum (DPA), which applies automatically when Swiss customers use AWS services to process personal data under the new Federal Act on Data Protection (nFADP).
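
As a concrete illustration of that control (a minimal sketch under our own assumptions, not official AWS guidance), the following snippet uses boto3 to create an S3 bucket pinned to the Europe (Zurich) Region and to enforce default encryption with a customer-managed AWS KMS key. The bucket name and key ARN are placeholders.

```python
# Sketch: pin content to the Europe (Zurich) Region and enforce default
# encryption with a customer-managed KMS key. Identifiers are placeholders.
import boto3

REGION = "eu-central-2"  # Europe (Zurich)
BUCKET = "example-swiss-critical-data"                                  # hypothetical
KMS_KEY_ARN = "arn:aws:kms:eu-central-2:111122223333:key/EXAMPLE"       # placeholder

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket in the Zurich Region so the content stays in Switzerland.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Require server-side encryption with the customer-managed KMS key by default.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```

Pinning the Region at creation time and turning on default encryption addresses two of the commitments listed above: control over the location of your data and the ability to encrypt everything everywhere.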

AWS continually monitors the evolving privacy, regulatory, and legislative landscape to help identify changes and determine what tools our customers might need to meet their compliance requirements. Maintaining customer trust is an ongoing commitment. We strive to inform you of the privacy and security policies, practices, and technologies that we’ve put in place. Our commitments, as described in the Data Privacy FAQ, include the following:

  • Access – As a customer, you maintain full control of your content that you upload to the AWS services under your AWS account, and responsibility for configuring access to AWS services and resources. We provide an advanced set of access, encryption, and logging features to help you do this effectively (for example, AWS Identity and Access Management (IAM), AWS Organizations, and AWS CloudTrail). We provide APIs that you can use to configure access control permissions for the services that you develop or deploy in an AWS environment. We never use your content or derive information from it for marketing or advertising purposes.
  • Storage – You choose the AWS Regions in which your content is stored. You can replicate and back up your content in more than one Region. We will not move or replicate your content outside of your chosen AWS Regions except as agreed with you.
  • Security – You choose how your content is secured. We offer you industry-leading encryption features to protect your content in transit and at rest, and we provide you with the option to manage your own encryption keys.
  • Disclosure of customer content – We will not disclose customer content unless we’re required to do so to comply with the law or a binding order of a government body. If a governmental body sends AWS a demand for your customer content, we will attempt to redirect the governmental body to request that data directly from you. If compelled to disclose your customer content to a governmental body, we will give you reasonable notice of the demand to allow you to seek a protective order or other appropriate remedy, unless AWS is legally prohibited from doing so.
  • Security assurance – We have developed a security assurance program that uses current recommendations for global privacy and data protection to help you operate securely on AWS, and to make the best use of our security control environment. These security protections and control processes are validated by multiple independent third-party assessments, including the FINMA International Standard on Assurance Engagements (ISAE) 3000 Type II attestation report.

Additionally, FINMA guidelines lay out requirements for the written agreement between a Swiss financial institution and its service provider, including access and audit rights. For Swiss financial institutions that run regulated workloads on AWS, we offer the Swiss Financial Services Addendum to address the contractual and audit requirements of the FINMA guidelines. We also provide these institutions the ability to comply with the audit requirements in the FINMA guidelines through the AWS Security & Audit Series, including participation in an Audit Symposium, to facilitate customer audits. To help align with regulatory requirements and expectations, our FINMA addendum and audit program incorporate feedback that we’ve received from a variety of financial supervisory authorities across EU member states. To learn more about the Swiss Financial Services addendum or about the audit engagements offered by AWS, reach out to your AWS account team.

Resilience

Customers need control over their workloads and high availability to help prepare for events such as supply chain disruptions, network interruptions, and natural disasters. Each AWS Region is composed of multiple Availability Zones (AZs). An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. To better isolate issues and achieve high availability, you can partition applications across multiple AZs in the same Region. If you are running workloads on premises or in intermittently connected or remote use cases, you can use our services that provide specific capabilities for offline data and remote compute and storage. We will continue to enhance our range of sovereign and resilient options, to help you sustain operations through disruption or disconnection.
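
As a small example of putting that into practice (a sketch assuming boto3 and configured credentials), the snippet below lists the Availability Zones that are available in the Europe (Zurich) Region, which is the first step in spreading an application across at least two of them.

```python
# Sketch: enumerate the Availability Zones in the Zurich Region so workloads
# can be partitioned across more than one of them for higher availability.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-2")  # Europe (Zurich)

zones = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)["AvailabilityZones"]

for az in zones:
    print(az["ZoneName"], az["ZoneId"])
```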

FINMA incorporates the principles of operational resilience in the newest circular 2023/01. In line with the efforts of the European Commission’s proposal for the Digital Operational Resilience Act (DORA), FINMA outlines requirements for regulated institutions to identify critical functions and their tolerance for disruption. Continuity of service, especially for critical economic functions, is a key prerequisite for financial stability. AWS recognizes that financial institutions need to comply with sector-specific regulatory obligations and requirements regarding operational resilience. AWS has published the whitepaper Amazon Web Services’ Approach to Operational Resilience in the Financial Sector and Beyond, in which we discuss how AWS and customers build for resiliency on the AWS Cloud. AWS provides resilient infrastructure and services, which financial institution customers can rely on as they design their applications to align with FINMA regulatory and compliance obligations.

AWS previously announced the third issuance of the FINMA ISAE 3000 Type II attestation report. Customers can access the entire report in AWS Artifact. To learn more about the list of certified services and Regions, see the FINMA ISAE 3000 Type 2 Report and AWS Services in Scope for FINMA.

AWS is committed to adding new services into our future FINMA program scope based on your architectural and regulatory needs. If you have questions about the FINMA report, or how your workloads on AWS align to the FINMA obligations, contact your AWS account team. We will also help support customers as they look for new ways to experiment, remain competitive, meet consumer expectations, and develop new products and services on AWS that align with the new regulatory framework.

To learn more about our compliance, security programs and common privacy and data protection considerations, see AWS Compliance Programs and the dedicated AWS Compliance Center for Switzerland. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Security, Identity, & Compliance re:Post or contact AWS Support.

Margo Cronin

Margo is an EMEA Principal Solutions Architect specializing in security and compliance. She is based out of Zurich, Switzerland. Her interests include security, privacy, cryptography, and compliance. She is passionate about unblocking security challenges for AWS customers and enabling their successful cloud journeys. She is an author of the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland.

Raphael Fuchs

Raphael is a Senior Security Solutions Architect based in Zürich, Switzerland, who helps AWS Financial Services customers meet their security and compliance objectives in the AWS Cloud. Raphael has a background as a Chief Information Security Officer in the Swiss FSI sector and is an author of the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland.

EPA Won’t Force Water Utilities to Audit Their Cybersecurity

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/epa-wont-force-water-utilities-to-audit-their-cybersecurity.html

The industry pushed back:

Despite the EPA’s willingness to provide training and technical support to help states and public water system organizations implement cybersecurity surveys, the move garnered opposition from both GOP state attorneys and trade groups.

Republican state attorneys general who opposed the newly proposed policies said that the call for new inspections could overwhelm state regulators. The attorneys general of Arkansas, Iowa, and Missouri all sued the EPA—claiming the agency had no authority to set these requirements. This led to the EPA’s proposal being temporarily blocked back in June.

So now we have a piece of our critical infrastructure with substandard cybersecurity. This seems like a really bad outcome.

Security Vulnerability of Switzerland’s E-Voting System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/security-vulnerability-of-switzerlands-e-voting-system.html

Online voting is insecure, period. This doesn’t stop organizations and governments from using it. (And for low-stakes elections, it’s probably fine.) Switzerland—not low stakes—uses online voting for national elections. Andrew Appel explains why it’s a bad idea:

Last year, I published a 5-part series about Switzerland’s e-voting system. Like any internet voting system, it has inherent security vulnerabilities: if there are malicious insiders, they can corrupt the vote count; and if thousands of voters’ computers are hacked by malware, the malware can change votes as they are transmitted. Switzerland “solves” the problem of malicious insiders in their printing office by officially declaring that they won’t consider that threat model in their cybersecurity assessment.

But it also has an interesting new vulnerability:

The Swiss Post e-voting system aims to protect your vote against vote manipulation and interference. The goal is to achieve this even if your own computer is infected by undetected malware that manipulates a user vote. This protection is implemented by special return codes (Prüfcode), printed on the sheet of paper you receive by physical mail. Your computer doesn’t know these codes, so even if it’s infected by malware, it can’t successfully cheat you as long as you follow the protocol.

Unfortunately, the protocol isn’t explained to you on the piece of paper you get by mail. It’s only explained to you online, when you visit the e-voting website. And of course, that’s part of the problem! If your computer is infected by malware, then it can already present to you a bogus website that instructs you to follow a different protocol, one that is cheatable. To demonstrate this, I built a proof-of-concept demonstration.

Appel again:

Kuster’s fake protocol is not exactly what I imagined; it’s better. He explains it all in his blog post. Basically, in his malware-manipulated website, instead of displaying the verification codes for the voter to compare with what’s on the paper, the website asks the voter to enter the verification codes into a web form. Since the website doesn’t know what’s on the paper, that web-form entry is just for show. Of course, Kuster did not employ a botnet virus to distribute his malware to real voters! He keeps it contained on his own system and demonstrates it in a video.
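
To make the attack easier to follow, here is a deliberately simplified toy model written for this summary. It is not the Swiss Post protocol; the options, codes, and flow are invented. It contrasts the honest compare-the-code flow with the fake enter-the-code flow described above.

```python
# A deliberately simplified toy model of the return-code idea and of the
# demonstrated attack. It is NOT the Swiss Post protocol; the codes, options,
# and flow are invented purely to illustrate the point above.

# Setup (printing office): each voting option gets a return code that is
# printed on the voter's paper sheet and is never sent to the voter's computer.
PAPER_CODES = {"yes": "4821", "no": "7305"}   # hypothetical codes

def server_cast(vote):
    """The voting server records the vote and returns the matching code."""
    return PAPER_CODES[vote]

def honest_client(choice):
    # Honest flow: display the server's code; the voter compares it with paper.
    code = server_cast(choice)
    print(f"Compare this code with your paper sheet: {code}")

def malicious_client(choice):
    # Attack flow: silently cast a different vote, then ask the voter to TYPE
    # the code from the paper into a form. The typed value is discarded; it is
    # "just for show", because the malware never needed to know it.
    flipped = "no" if choice == "yes" else "yes"
    server_cast(flipped)                       # the manipulated vote is recorded
    input("Enter the verification code printed on your sheet: ")
    print("Thank you, your vote has been verified.")   # false reassurance

honest_client("yes")        # the voter can detect manipulation by comparing codes
# malicious_client("yes")   # the voter sees no mismatch, because nothing is compared
```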

Again, the solution is paper. (Here I am saying that in 2004.) And, no, blockchain does not help—it makes security worse.

NSA AI Security Center

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/nsa-ai-security-center.html

The NSA is starting a new artificial intelligence security center:

The AI security center’s establishment follows an NSA study that identified securing AI models from theft and sabotage as a major national security challenge, especially as generative AI technologies emerge with immense transformative potential for both good and evil.

Nakasone said it would become “NSA’s focal point for leveraging foreign intelligence insights, contributing to the development of best practices guidelines, principles, evaluation, methodology and risk frameworks” for both AI security and the goal of promoting the secure development and adoption of AI within “our national security systems and our defense industrial base.”

He said it would work closely with U.S. industry, national labs, academia and the Department of Defense as well as international partners.

On the Cybersecurity Jobs Shortage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/09/on-the-cybersecurity-jobs-shortage.html

In April, Cybersecurity Ventures reported on the extreme cybersecurity job shortage:

Global cybersecurity job vacancies grew by 350 percent, from one million openings in 2013 to 3.5 million in 2021, according to Cybersecurity Ventures. The number of unfilled jobs leveled off in 2022, and remains at 3.5 million in 2023, with more than 750,000 of those positions in the U.S. Industry efforts to source new talent and tackle burnout continues, but we predict that the disparity between demand and supply will remain through at least 2025.

The numbers never made sense to me, and Ben Rothke has dug in and explained the reality:

…there is not a shortage of security generalists, middle managers, and people who claim to be competent CISOs. Nor is there a shortage of thought leaders, advisors, or self-proclaimed cyber subject matter experts. What there is a shortage of are computer scientists, developers, engineers, and information security professionals who can code, understand technical security architecture, product security and application security specialists, analysts with threat hunting and incident response skills. And this is nothing that can be fixed by a newbie taking a six-month information security boot camp.

[…]

Most entry-level roles tend to be quite specific, focused on one part of the profession, and are not generalist roles. For example, hiring managers will want a network security engineer with knowledge of networks or an identity management analyst with experience in identity systems. They are not looking for someone interested in security.

In fact, security roles are often not considered entry-level at all. Hiring managers assume you have some other background, usually technical before you are ready for an entry-level security job. Without those specific skills, it is difficult for a candidate to break into the profession. Job seekers learn that entry-level often means at least two to three years of work experience in a related field.

That makes a lot more sense, and matches what I experience.

Remotely Stopping Polish Trains

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/remotely-stopping-polish-trains.html

Turns out that it’s easy to broadcast radio commands that force Polish trains to stop:

…the saboteurs appear to have sent simple so-called “radio-stop” commands via radio frequency to the trains they targeted. Because the trains use a radio system that lacks encryption or authentication for those commands, Olejnik says, anyone with as little as $30 of off-the-shelf radio equipment can broadcast the command to a Polish train­—sending a series of three acoustic tones at a 150.100 megahertz frequency­—and trigger their emergency stop function.

“It is three tonal messages sent consecutively. Once the radio equipment receives it, the locomotive goes to a halt,” Olejnik says, pointing to a document outlining trains’ different technical standards in the European Union that describes the “radio-stop” command used in the Polish system. In fact, Olejnik says that the ability to send the command has been described in Polish radio and train forums and on YouTube for years. “Everybody could do this. Even teenagers trolling. The frequencies are known. The tones are known. The equipment is cheap.”
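
For readers wondering what authentication of such commands would even look like, here is a generic sketch assuming a pre-shared key and a standard HMAC; it is an illustration of the missing control, not a description of any real railway radio standard.

```python
# Sketch of command authentication with a pre-shared key; illustration only.
import hashlib
import hmac

SHARED_KEY = b"example-pre-shared-key"   # hypothetical key known to both ends

def send(command: bytes) -> bytes:
    tag = hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest().encode()
    return command + b"|" + tag

def receive(message: bytes):
    command, _, tag = message.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest().encode()
    # Without a check like this, any transmitter on the right frequency can
    # issue a command the receiver will act on.
    return command if hmac.compare_digest(tag, expected) else None

print(receive(send(b"RADIO-STOP")))             # accepted -> b'RADIO-STOP'
print(receive(b"RADIO-STOP|forged-signature"))  # rejected -> None
```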

Even so, this is being described as a cyberattack.

White House Announces AI Cybersecurity Challenge

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/white-house-announces-ai-cybersecurity-challenge.html

At Black Hat last week, the White House announced an AI Cyber Challenge. Gizmodo reports:

The new AI cyber challenge (which is being abbreviated “AIxCC”) will have a number of different phases. Interested would-be competitors can now submit their proposals to the Small Business Innovation Research program for evaluation and, eventually, selected teams will participate in a 2024 “qualifying event.” During that event, the top 20 teams will be invited to a semifinal competition at that year’s DEF CON, another large cybersecurity conference, where the field will be further whittled down.

[…]

To secure the top spot in DARPA’s new competition, participants will have to develop security solutions that do some seriously novel stuff. “To win first-place, and a top prize of $4 million, finalists must build a system that can rapidly defend critical infrastructure code from attack,” said Perri Adams, program manager for DARPA’s Information Innovation Office, during a Zoom call with reporters Tuesday. In other words: the government wants software that is capable of identifying and mitigating risks by itself.

This is a great idea. I was a big fan of DARPA’s AI capture-the-flag event in 2016, and am happy to see that DARPA is again inciting research in this area. (China has been doing this every year since 2017.)

Methodology for Return on Security Investment

Post Syndicated from Bozho original https://techblog.bozho.net/methodology-for-return-on-security-investment/

Measuring return on investment for security (information security/cybersecurity) has always been hard. This is a problem for cybersecurity vendors and service providers as well as for CISOs, who find it hard to convince the budget stakeholders why they need another pile of money for tool X.

Return on Security Investment (ROSI) has been discussed, including academically, for a while. But we haven’t yet found a sound methodology for it. I’m not proposing one either, but I wanted to mark some points for such a methodology that I think are important. Otherwise, decisions are often taken by “auditor said we need X” or “regulation says we need Y”. Which are decent reasons to buy something, but it makes security look like a black hole cost center. It’s certainly no profit center, but the more tangibility we add, the more likely investments are going to work.

I think the leading metric is “likelihood of critical incident”. Businesses are (rightly) concerned with this. They don’t care about the number of reconnaissance attempts, false-positive ratios, MTTRs, and other technical things. This likelihood, if properly calculated, can be translated into a sum of money lost due to the incident (through lack of availability, data loss, reputational cost, administrative fines, etc.). The problem is that we can’t go to company X and say “you are 20% likely to get hit because that’s the number for SMEs”. It’s likely that a number from a vendor presentation won’t ring true. So I think the following should be factored into the methodology:

  • Likelihood of incident per type – ransomware, DDoS, data breach, insider data manipulation, are all differently likely.
  • Likelihood of incident per industry – industries vary greatly in terms of hacker incentive. Apart from generic ransomware, other attacks are more likely to be targeted at the financial industry, for example, than at the forestry industry. That’s why EU directives NIS and NIS2 prioritize some industries as more critical.
  • Likelihood of incident per organization size or revenue – not all SMEs and not all large enterprises are the same – the number of employees and their qualifications may mean increased or decreased risk; company revenue may make it stand out on top of the target list (or at the bottom).
  • Likelihood of incident per team size and skill – if you have one IT guy doing printers and security, you are more likely to get hit by a critical incident than if you have a SOC team. Sounds obvious, but it’s a spectrum, and probably one with diminishing returns, especially for SMEs.
  • Likelihood of incident per available security products – if you have nothing installed, you are more likely to get hit. If you have a simple antivirus, you can keep the basic attacks out. If you have a firewall, a SIEM/XDR, SOAR, and threat intel subscriptions, things are different. Having them, of course, doesn’t mean they are properly deployed, but the types of tools matter in the ballpark calculations (a rough sketch of how these factors might combine follows this list).
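
Here is a rough sketch of how factors like these might be combined into an expected annual loss figure. Every multiplier and cost is an invented placeholder; the point is the shape of the calculation, not the values.

```python
# A rough, hypothetical sketch of combining the factors above into an expected
# annual loss. All numbers are made-up placeholders.

BASE_LIKELIHOOD = {            # per-incident-type baseline likelihood, per year
    "ransomware": 0.15,
    "ddos": 0.10,
    "data_breach": 0.08,
    "insider_manipulation": 0.03,
}

# Multipliers > 1 increase likelihood, < 1 decrease it (all invented).
INDUSTRY_FACTOR = {"finance": 1.6, "forestry": 0.6, "other": 1.0}
SIZE_FACTOR = {"sme": 1.2, "enterprise": 1.0}
TEAM_FACTOR = {"one_it_guy": 1.5, "dedicated_team": 1.0, "soc": 0.7}
TOOLING_FACTOR = {"nothing": 1.6, "basic_av": 1.2, "siem_soar_ti": 0.8}

def expected_annual_loss(industry, size, team, tooling, loss_per_incident):
    """Sum over incident types of (adjusted likelihood) x (estimated loss)."""
    total = 0.0
    for incident_type, base in BASE_LIKELIHOOD.items():
        likelihood = (base
                      * INDUSTRY_FACTOR[industry]
                      * SIZE_FACTOR[size]
                      * TEAM_FACTOR[team]
                      * TOOLING_FACTOR[tooling])
        total += min(likelihood, 1.0) * loss_per_incident[incident_type]
    return total

# Example: a financial SME with one overloaded IT person and only basic AV.
losses = {"ransomware": 500_000, "ddos": 100_000,
          "data_breach": 800_000, "insider_manipulation": 300_000}
print(round(expected_annual_loss("finance", "sme", "one_it_guy", "basic_av", losses)))
```

A ROSI estimate would then compare the reduction in this expected loss after deploying a given control with the cost of that control.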

How to get that data – I’m sure someone collects it. If nobody does, governments should. Such metrics are important for security decisions and therefore for the overall security of the ecosystem.

The post Methodology for Return on Security Investment appeared first on Bozho's tech blog.

China Hacked Japan’s Military Networks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/china-hacked-japans-military-networks.html

The NSA discovered the intrusion in 2020—we don’t know how—and alerted the Japanese. The Washington Post has the story:

The hackers had deep, persistent access and appeared to be after anything they could get their hands on—plans, capabilities, assessments of military shortcomings, according to three former senior U.S. officials, who were among a dozen current and former U.S. and Japanese officials interviewed, who spoke on the condition of anonymity because of the matter’s sensitivity.

[…]

The 2020 penetration was so disturbing that Gen. Paul Nakasone, the head of the NSA and U.S. Cyber Command, and Matthew Pottinger, who was White House deputy national security adviser at the time, raced to Tokyo. They briefed the defense minister, who was so concerned that he arranged for them to alert the prime minister himself.

Beijing, they told the Japanese officials, had breached Tokyo’s defense networks, making it one of the most damaging hacks in that country’s modern history.

More analysis.

Microsoft Signing Key Stolen by Chinese

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/microsoft-signing-key-stolen-by-chinese.html

A bunch of networks, including US Government networks, have been hacked by the Chinese. The hackers used forged authentication tokens to access user email, using a stolen Microsoft Azure account consumer signing key. Congress wants answers. The phrase “negligent security practices” is being tossed about—and with good reason. Master signing keys are not supposed to be left around, waiting to be stolen.

Actually, two things went badly wrong here. The first is that Azure accepted an expired signing key, implying a vulnerability in whatever is supposed to check key validity. The second is that this key was supposed to remain in the system’s Hardware Security Module—and not be in software. This implies a really serious breach of good security practice. The fact that Microsoft has not been forthcoming about the details of what happened tells me that the details are really bad.
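
To illustrate those two failures in the abstract, here is a toy model, entirely invented and unrelated to Microsoft's actual implementation, of a token check that verifies only the signature, contrasted with one that also rejects expired keys and enforces key scope.

```python
# A toy model (invented, not Microsoft's code) of the two validation gaps:
# trusting a signature from an expired key, and not checking which token
# scope (consumer vs. enterprise) a key is allowed to sign for.
import hashlib
import hmac
import time

KEYS = {
    # key_id: (secret, expiry as a Unix timestamp, scope the key may sign for)
    "msa-2016": (b"old-consumer-secret", 1_500_000_000, "consumer"),    # long expired
    "aad-2023": (b"enterprise-secret", 2_000_000_000, "enterprise"),
}

def sign(key_id, claims):
    secret = KEYS[key_id][0]
    return hmac.new(secret, claims.encode(), hashlib.sha256).hexdigest()

def flawed_verify(key_id, claims, signature, required_scope):
    # Checks only that the signature matches a known key -- nothing else.
    return hmac.compare_digest(sign(key_id, claims), signature)

def stricter_verify(key_id, claims, signature, required_scope):
    _secret, expiry, scope = KEYS[key_id]
    if time.time() > expiry:        # reject tokens signed with expired keys
        return False
    if scope != required_scope:     # a consumer key must not mint enterprise tokens
        return False
    return hmac.compare_digest(sign(key_id, claims), signature)

forged = sign("msa-2016", "user=enterprise-admin")
print(flawed_verify("msa-2016", "user=enterprise-admin", forged, "enterprise"))    # True
print(stricter_verify("msa-2016", "user=enterprise-admin", forged, "enterprise"))  # False
```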

I believe this all traces back to SolarWinds. In addition to Russia inserting malware into a SolarWinds update, China used a different SolarWinds vulnerability to break into networks. We know that Russia accessed Microsoft source code in that attack. I have heard from informed government officials that China used their SolarWinds vulnerability to break into Microsoft and access source code, including Azure’s.

I think we are grossly underestimating the long-term results of the SolarWinds attacks. That backdoored update was downloaded by over 14,000 networks worldwide. Organizations patched their networks, but not before Russia—and others—used the vulnerability to enter those networks. And once someone is in a network, it’s really hard to be sure that you’ve kicked them out.

Sophisticated threat actors are realizing that stealing source code of infrastructure providers, and then combing that code for vulnerabilities, is an excellent way to break into organizations who use those infrastructure providers. Attackers like Russia and China—and presumably the US as well—are prioritizing going after those providers.

News articles.

EDITED TO ADD: Commentary:

This is from Microsoft’s explanation. The China attackers “acquired an inactive MSA consumer signing key and used it to forge authentication tokens for Azure AD enterprise and MSA consumer to access OWA and Outlook.com. All MSA keys active prior to the incident—including the actor-acquired MSA signing key—have been invalidated. Azure AD keys were not impacted. Though the key was intended only for MSA accounts, a validation issue allowed this key to be trusted for signing Azure AD tokens. The actor was able to obtain new access tokens by presenting one previously issued from this API due to a design flaw. This flaw in the GetAccessTokenForResourceAPI has since been fixed to only accept tokens issued from Azure AD or MSA respectively. The actor used these tokens to retrieve mail messages from the OWA API.”

New SEC Rules around Cybersecurity Incident Disclosures

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/new-sec-rules-around-cybersecurity-incident-disclosures.html

The US Securities and Exchange Commission adopted final rules around the disclosure of cybersecurity incidents. There are two basic rules:

  1. Public companies must “disclose any cybersecurity incident they determine to be material” within four days, with potential delays if there is a national security risk.
  2. Public companies must “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats” in their annual filings.

The rules go into effect this December.

In an email newsletter, Melissa Hathaway wrote:

Now that the rule is final, companies have approximately six months to one year to document and operationalize the policies and procedures for the identification and management of cybersecurity (information security/privacy) risks. Continuous assessment of the risk reduction activities should be elevated within an enterprise risk management framework and process. Good governance mechanisms delineate the accountability and responsibility for ensuring successful execution, while actionable, repeatable, meaningful, and time-dependent metrics or key performance indicators (KPI) should be used to reinforce realistic objectives and timelines. Management should assess the competency of the personnel responsible for implementing these policies and be ready to identify these people (by name) in their annual filing.

News article.

How To Present SecOps Metrics (The Right Way)

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/08/01/how-to-present-secops-metrics-the-right-way/

How To Present SecOps Metrics (The Right Way)

SecOps metrics can be a gold mine of potential for informing better business decisions, but 78% of CEOs say they don’t have adequate data on risk exposure to make good decisions. Even when they do see the right data, 82% are inclined to “trust their gut” anyway.

Here lies the disconnect between data and decisions for C-level executives: a lack of effective presentation. Ultimately, the responsibility of communicating that SecOps metrics matter falls on today’s security teams. They must transform numbers into narratives that illustrate the challenges in today’s attack landscape to decision-makers — and, most importantly, make stakeholders care about those challenges.

But metrics presentations can get boring. So, how can security professionals present SecOps metrics in an engaging way?

Stories inspire empathy and action

While facts and figures play a role in communication, humans respond differently to stories. With narratives, we understand meaning more deeply, remember events longer, and factor what each story taught us into future decisions. Storytelling is also an effective way for security teams to inspire empathy — and therefore, action — in today’s decision-makers.

It’s critical for security professionals to identify the narrative thread in the metrics they’re analyzing. Here’s what we mean by that, step by step and metric by metric.

Establish how hungry the bad guys are

Home in on the frequency of security incidents. This metric directly correlates to the power and reach threat actors have. Dive into the causes behind incidents, how much impact incidents had, and what can be done to stop them.

This information gives executives direct insight into the potential risks your organization faces and the negative outcomes associated with them. When executives can see the cold hard number of times their organizations have suffered from a breach, attack, or leak, it can highlight where security strategies are still lacking — and where they’re losing out to malicious actors.

Show how the villains keep winning

MTTD (mean time to detection) is a measure of how fast security teams can detect incidents. While it might not be a flashy metric in and of itself, it can pack a powerful punch when illustrating the damage bad actors can do before they’re suspected of even breaking in.

MTTD provides insight into the efficacy of an organization’s current cybersecurity tools and data coverage. It can also be a helpful indicator of how well current security processes are working — and how overworked or resource-strained a security team might be.

Tell the underdog’s story

Here’s where you leverage MTTR (mean time to respond). This metric shows how quickly the security team can spring into action. More often than not, security teams have a litany of other important tasks at hand that can make MTTR less than ideal. This demonstrates why resource-strained and overworked security professionals are set up to fail if they don’t have the right tools, strategies, and support.

With MTTR, security teams can add an extra layer of context to the data shown by MTTD. This metric highlights how quickly security teams respond to incidents — which can be another indicator of how well tools and processes match up to current threats.

Describe the loot you stand to lose

Finally, communicate the potential cost per incident. Money speaks volumes when you’re crafting a narrative out of SecOps metrics — so it’s best to close out your stories with this powerful data point. This metric provides insight into the efficiency of a cybersecurity program’s processes, tools, and resource allocation.

This is perhaps the most effective metric security professionals can use with executives because it speaks directly to one of their critical concerns: the bottom line.
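
As a simple illustration of where these four numbers come from before they are woven into a narrative, the sketch below computes incident frequency, MTTD, MTTR, and average cost per incident from a small, entirely hypothetical incident log.

```python
# A small, hypothetical sketch of computing the four data points in this story
# from an incident log. All values are invented examples.
from datetime import datetime
from statistics import mean

# Each record: when the incident started, when it was detected, when it was
# resolved, and an estimated cost.
incidents = [
    {"start": datetime(2023, 5, 2, 8, 0), "detected": datetime(2023, 5, 3, 9, 30),
     "resolved": datetime(2023, 5, 4, 17, 0), "cost": 42_000},
    {"start": datetime(2023, 6, 11, 22, 0), "detected": datetime(2023, 6, 12, 6, 15),
     "resolved": datetime(2023, 6, 12, 20, 0), "cost": 18_500},
]

frequency = len(incidents)  # how hungry the bad guys are
mttd = mean((i["detected"] - i["start"]).total_seconds() for i in incidents) / 3600
mttr = mean((i["resolved"] - i["detected"]).total_seconds() for i in incidents) / 3600
avg_cost = mean(i["cost"] for i in incidents)  # the loot you stand to lose

print(f"Incidents: {frequency}, MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h, "
      f"average cost: ${avg_cost:,.0f}")
```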

Putting it all together

While many additional SecOps metrics matter, those four data points can come together most effectively to weave a story that speaks to C-level execs.

However, executives will have their own set of questions and concerns at SecOps briefs. So, it’s important to supplement even the strongest SecOps stories with additional answers, such as:

  • How efficiently your organization is addressing risks compared to other similar companies.
  • Where budget spend works and where it doesn’t in terms of ROI.
  • Where opportunities for increased efficiencies (namely, breaking down disparate silos and cutting costs with consolidation) can come into play.

It all comes down to communication

By focusing on crafting data narratives, security teams can turn SecOps metrics into actionable decisions for stakeholders. Telling the right story to the right people can help procure backing from the top — which means getting the resources, people, and budget security leaders need to stay ahead of threats.

Effectively communicating with C-levels helps build a rapport between stakeholders and boots-on-the-ground security professionals. By presenting metrics as parts of a larger story, organizations can unlock better collaboration, better relationships, and better business outcomes.

All while keeping threat actors in check.

Want to learn more about creating SecOps narratives that pack a punch? Download Presenting Upward: How to Showcase SecOps Metrics That Matter now.

Commentary on the Implementation Plan for the 2023 US National Cybersecurity Strategy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/commentary-on-the-implementation-plan-for-the-2023-us-national-cybersecurity-strategy.html

The Atlantic Council released a detailed commentary on the White House’s new “Implementation Plan for the 2023 US National Cybersecurity Strategy.” Lots of interesting bits.

So far, at least three trends emerge:

First, the plan contains a (somewhat) more concrete list of actions than its parent strategy, with useful delineation of lead and supporting agencies, as well as timelines aplenty. By assigning each action a designated lead and timeline, and by including a new nominal section (6) focused entirely on assessing effectiveness and continued iteration, the ONCD suggests that this is not so much a standalone text as the framework for an annual, crucially iterative policy process. That many of the milestones are still hazy might be less important than the commitment the administration has made to revisit this plan annually, allowing the ONCD team to leverage their unique combination of topical depth and budgetary review authority.

Second, there are clear wins. Open-source software (OSS) and support for energy-sector cybersecurity receive considerable focus, and there is a greater budgetary push on both technology modernization and cybersecurity research. But there are missed opportunities as well. Many of the strategy’s most difficult and revolutionary goals—­holding data stewards accountable through privacy legislation, finally implementing a working digital identity solution, patching gaps in regulatory frameworks for cloud risk, and implementing a regime for software cybersecurity liability—­have been pared down or omitted entirely. There is an unnerving absence of “incentive-shifting-focused” actions, one of the most significant overarching objectives from the initial strategy. This backpedaling may be the result of a new appreciation for a deadlocked Congress and the precarious present for the administrative state, but it falls short of the original strategy’s vision and risks making no progress against its most ambitious goals.

Third, many of the implementation plan’s goals have timelines stretching into 2025. The disruption of a transition, be it to a second term for the current administration or the first term of another, will be difficult to manage under the best of circumstances. This leaves still more of the boldest ideas in this plan in jeopardy and raises questions about how best to prioritize, or accelerate, among those listed here.

[Lost Bots] S03 E04 A Security Leader’s Playbook for the C-suite

Post Syndicated from Amy Hunt original https://blog.rapid7.com/2023/07/17/lost-bots-s03-e04-a-security-leaders-playbook-for-the-c-suite/

[Lost Bots] S03 E04 A Security Leader’s Playbook for the C-suite

In a special two-part “Lost Bots,” hosts Jeffrey Gardner and Stephen Davis talk about presenting cybersecurity results up the org chart. Both have handled C-suite and board communications and have lots of lessons learned.

Part 1 is about the style of a presentation: the point, the delivery, the storytelling. Gardner believes anyone can be great because he’s “an extreme introvert” himself. He shares a ton of wisdom about how to structure your presentation and really own the room with confidence. About halfway through, the ideas start coming fast and furious.

Part 2 brings it together with a deep dive into metrics (and an extraordinary bowtie on Mr. Davis, seriously). Metrics aren’t your story, but they do prove it true. The episode closes with one thing you must take away and remember: you’re not there to sell more security, you’re there to help stakeholders make well-informed business decisions. When that purpose is clear, some things get simpler.

Four Signs You Need to Consolidate Your Tech Stack

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/06/29/four-signs-you-need-to-consolidate-your-tech-stack/

Four Signs You Need to Consolidate Your Tech Stack

Recently, Gartner surveyed security professionals and found that over 50% of the respondents were looking to consolidate their security tech stack. Why? These professionals recognized that consolidation is key to achieving their goals of improving productivity, visibility, and reporting as well as bridging staff resourcing gaps.

Additionally, threat actors are leveraging artificial intelligence (AI) and machine learning tools to launch more sophisticated, high-impact attacks. Defending against AI-assisted attacks requires greater network visibility and operational efficiency—not to mention the automated detection and response capabilities in most consolidation offerings. As the threat landscape evolves, streamlining your tech stack can also improve your organization’s security posture and protect against financial losses. This is an important consideration, as the cost of the average data breach has reached $9.44 million in the U.S.  

While the benefits of consolidation are clear, organizations often miss the tell-tale signs that it is time to consolidate their tech stack. Recognizing these signs can help your organization identify the areas where it’s most needed and develop a seamless implementation strategy that minimizes disruption.

Four tell-tale signs it’s time to consolidate your security tools

Sign #1: You can’t track (or visualize) your tech stack

When was the last time you cataloged your resources? This may seem a little on the nose, but one of the best ways to tell if your organization is in need of consolidation is that you’re unable to track or visualize your tech stack.

In 2021, IBM found that 45% of security teams used more than 20 tools when investigating and responding to a cybersecurity incident. These tools are a drain on your budget and can even present security risks. Excess tech is less likely to be monitored for compliance and needlessly broadens your network’s attack surface.

Visibility into your tech stack is just as important as visibility across your network. The inability to track and visualize your tech stack can indicate that your organization is working with tools that are obsolete, underutilized, or ignored.

Sign #2: Your mean time to resolve (MTTR) is high

Did you know it takes the average company a staggering 277 days to identify and contain a breach? Finding and resolving breaches quickly is key to protecting your systems and data. When your MTTR is high, it’s indicative of operational inefficiencies in your security responses.

Working with too many vendors and tools can make it difficult to prioritize and respond to threats. For example, if you’re working with redundant tools, event data from one tool may conflict with another, and your team is forced to spend precious time confirming which dataset is correct before it can respond to the security incident.

Siloed tools from a variety of vendors are another common pain point. Even if you’re using “best of breed” tools, a cobbled-together security solution from multiple vendors can create issues. Tools from different vendors may not integrate well (if at all). Consequently, your team may be missing crucial alerts and experiencing a breakdown in workflows as data is transferred from one tool to another.

Sign #3: Your processes are manual

If your team is wasting valuable time manually investigating false positives, prioritizing risks, and drawing context from massive datasets, consolidation could be the solution. Manual investigation is also error-prone, and teams often find that important security events are missed entirely or slip through the cracks until they become pervasive, system-wide concerns. As a result, you may be able to track your team’s elevated MTTR rate back to manual resolution workflows.

Consolidated security platforms offer the crucial automation features that companies need to close skill and staffing resource gaps, as well. Consolidating with automation in mind can simplify and improve your team’s workflows, ensuring that your team is able to respond to threats faster and reduce overall risk across your infrastructure—even if your organization is understaffed. Finally, removing the burden of manual investigation can increase your team’s productivity, free up resources, and create space for senior staff to work on other projects.

Sign #4: Compliance is a struggle

If you’re working with a variety of vendors and security tools, compliance can be problematic. You may find that each vendor’s approach to compliance varies widely, and it’s nearly impossible to impose a consistent standard of compliance across your entire network.

Network applications are difficult to update and secure if your organization struggles to maintain visibility in its tech stack. Also, gathering data across your infrastructure for compliance audits is complicated when you have redundant tools, disagreement between the datasets, and no single source of truth.

Whether your organization is in a highly regulated industry or not, maintaining a compliant network is important. Organizations that maintain compliant networks can resolve configuration-related vulnerabilities faster, creating a baseline for security practices and IT operations.

Following governmental compliance regulations can help your organization enhance its data management capabilities. There are also serious drawbacks to a non-compliant network. Depending on your industry, if your network is non-compliant, you may be required to pay hefty fines. Additionally, non-compliant networks are less secure; they’re prone to configuration vulnerabilities and a host of other issues.

When it comes to consolidation, don’t ignore the signs

Knowing the signs of a tech stack in need of consolidation can save your organization a considerable amount of time, money, and frustration. Some companies worry about giving up “best of breed” security options. However, consolidation is increasingly considered more secure than “best of breed.”

For many organizations, the security advantage of narrowing your attack surface, automating processes, and streamlining data far outweighs the individual benefits of separate solutions and multiple vendors. As the threat landscape evolves, it’s increasingly important to have a streamlined tech stack that can deliver the security support needed to effectively mitigate risk.

Want to learn more about consolidation and where to get started? Check out our eBook, “The Case for Security Vendor Consolidation.”

Security and Human Behavior (SHB) 2023

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/security-and-human-behavior-shb-2023.html

I’m just back from the sixteenth Workshop on Security and Human Behavior, hosted by Alessandro Acquisti at Carnegie Mellon University in Pittsburgh.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The fifty or so attendees include psychologists, economists, computer security researchers, criminologists, sociologists, political scientists, designers, lawyers, philosophers, anthropologists, geographers, neuroscientists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. Short talks limit presenters’ ability to get into the boring details of their work, and the interdisciplinary audience discourages jargon.

For the past decade and a half, this workshop has been the most intellectually stimulating two days of my professional year. It influences my thinking in different and sometimes surprising ways, and has resulted in some unexpected collaborations.

And that’s what’s valuable. One of the most important outcomes of the event is new collaborations. Over the years, we have seen new interdisciplinary research between people who met at the workshop, and ideas and methodologies move from one field into another based on connections made at the workshop. This is why some of us have been coming back every year for over a decade.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is live blogging the talks. We are back 100% in person after two years of fully remote and one year of hybrid.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, fourteenth, and fifteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the sessions. Ross also maintains a good webpage of psychology and security resources.

It’s actually hard to believe that the workshop has been going on for this long, and that it’s still vibrant. We rotate between organizers, so next year is my turn in Cambridge (the Massachusetts one).

Our commitment to shared cybersecurity goals

Post Syndicated from Mark Ryland original https://aws.amazon.com/blogs/security/our-commitment-to-shared-cybersecurity-goals/

The United States Government recently launched its National Cybersecurity Strategy. The Strategy outlines the administration’s ambitious vision for building a more resilient future, both in the United States and around the world, and it affirms the key role cloud computing plays in realizing this vision.

Amazon Web Services (AWS) is broadly committed to working with customers, partners, and governments such as the United States to improve cybersecurity. That longstanding commitment aligns with the goals of the National Cybersecurity Strategy. In this blog post, we will summarize the Strategy and explain how AWS will help to realize its vision.

The Strategy identifies two fundamental shifts in how the United States allocates cybersecurity roles, responsibilities, and resources. First, the Strategy calls for a shift in cybersecurity responsibility away from individuals and organizations with fewer resources, toward larger technology providers that are the most capable and best-positioned to be successful. At AWS, we recognize that our success and scale bring broad responsibility. We are committed to improving cybersecurity outcomes for our customers, our partners, and the world at large.

Second, the Strategy calls for realigning incentives to favor long-term investments in a resilient future. As part of our culture of ownership, we are long-term thinkers, and we don’t sacrifice long-term value for short-term results. For more than fifteen years, AWS has delivered security, identity, and compliance services for millions of active customers around the world. We recognize that we operate in a complicated global landscape and dynamic threat environment that necessitates a dynamic approach to security. Innovation and long-term investments have been and will continue to be at the core of our approach, and we continue to innovate to build and improve on our security and the services we offer customers.

AWS is working to enhance cybersecurity outcomes in ways that align with each of the Strategy’s five pillars:

  1. Defend Critical Infrastructure — Customers, partners, and governments need confidence that they are migrating to and building on a secure cloud foundation. AWS is architected to have the most flexible and secure cloud infrastructure available today, and our customers benefit from the data centers, networks, custom hardware, and secure software layers that we have built to satisfy the requirements of the most security-sensitive organizations. Our cloud infrastructure is secure by design and secure by default, and our infrastructure and services meet the high bar that the United States Government and other customers set for security.
  2. Disrupt and Dismantle Threat Actors — At AWS, our paramount focus on security leads us to implement important measures to prevent abuse of our services and products. Some of the measures we undertake to deter, detect, mitigate, and prevent abuse of AWS products include examining new registrations for potential fraud or identity falsification, employing service-level containment strategies when we detect unusual behavior, and helping protect critical systems and sensitive data against ransomware. Amazon is also working with government to address these threats, including by serving as one of the first members of the Joint Cyber Defense Collaborative (JCDC). Amazon is also co-leading a study with the President’s National Security Telecommunications Advisory Committee on addressing the abuse of domestic infrastructure by foreign malicious actors.
  3. Shape Market Forces to Drive Security and Resilience — At AWS, security is our top priority. We continuously innovate based on customer feedback, which helps customer organizations to accelerate their pace of innovation while integrating top-tier security architecture into the core of their business and operations. For example, AWS co-founded the Open Cybersecurity Schema Framework (OCSF) project, which facilitates interoperability and data normalization between security products. We are contributing to the quality and safety of open-source software both by direct contributions to open-source projects and also by an initial commitment of $10 million in a variety of open-source security improvement projects in and through the Open Source Security Foundation (OpenSSF).
  4. Invest in a Resilient Future — Cybersecurity skills training, workforce development, and education on next-generation technologies are essential to addressing cybersecurity challenges. That’s why we are making significant investments to help make it simpler for people to gain the skills they need to grow their careers, including in cybersecurity. Amazon is committing more than $1.2 billion to provide no-cost education and skills training opportunities to more than 300,000 of our own employees in the United States, to help them secure new, high-growth jobs. Amazon is also investing hundreds of millions of dollars to provide no-cost cloud computing skills training to 29 million people around the world. We will continue to partner with the Cybersecurity and Infrastructure Security Agency (CISA) and others in government to develop the cybersecurity workforce.
  5. Forge International Partnerships to Pursue Shared Goals — AWS is working with governments around the world to provide innovative solutions that advance shared goals such as bolstering cyberdefenses and combating security risks. For example, we are supporting international forums such as the Organization of American States to build security capacity across the hemisphere. We encourage the administration to look toward internationally recognized, risk-based cybersecurity frameworks and standards to strengthen consistency and continuity of security among interconnected sectors and throughout global supply chains. Increased adoption of these standards and frameworks, both domestically and internationally, will mitigate cyber risk while facilitating economic growth.

AWS shares the Biden administration’s cybersecurity goals and is committed to partnering with regulators and customers to achieve them. Collaboration between the public sector and industry has been central to US cybersecurity efforts, fueled by the recognition that the owners and operators of technology must play a key role. As the United States Government moves forward with implementation of the National Cybersecurity Strategy, we look forward to redoubling our efforts and welcome continued engagement with all stakeholders—including the United States Government, our international partners, and industry collaborators. Together, we can address the most difficult cybersecurity challenges, enhance security outcomes, and build toward a more secure and resilient future.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Mark Ryland

Mark is the director of the Office of the CISO for AWS. He has over 30 years of experience in the technology industry, and has served in leadership roles in cybersecurity, software engineering, distributed systems, technology standardization, and public policy. Previously, he served as the Director of Solution Architecture and Professional Services for the AWS World Public Sector team.

How Attorneys Are Harming Cybersecurity Incident Response

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/how-attorneys-are-harming-cybersecurity-incident-response.html

New paper: “Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys”:

Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or to produce formal reports; and restrict access to any documents produced.

So, we’re not able to learn from these breaches because the attorneys are limiting what information becomes public. This is where we think about shielding companies from liability in exchange for making breach data public. It’s the sort of thing we do for airplane disasters.

EDITED TO ADD (6/13): A podcast interview with two of the authors.