In April, Cybersecurity Ventures reported on the extreme cybersecurity job shortage:
Global cybersecurity job vacancies grew by 350 percent, from one million openings in 2013 to 3.5 million in 2021, according to Cybersecurity Ventures. The number of unfilled jobs leveled off in 2022, and remains at 3.5 million in 2023, with more than 750,000 of those positions in the U.S. Industry efforts to source new talent and tackle burnout continues, but we predict that the disparity between demand and supply will remain through at least 2025.
The numbers never made sense to me, and Ben Rothke has dug in and explained the reality:
…there is not a shortage of security generalists, middle managers, and people who claim to be competent CISOs. Nor is there a shortage of thought leaders, advisors, or self-proclaimed cyber subject matter experts. What there is a shortage of are computer scientists, developers, engineers, and information security professionals who can code, understand technical security architecture, product security and application security specialists, analysts with threat hunting and incident response skills. And this is nothing that can be fixed by a newbie taking a six-month information security boot camp.
[…]
Most entry-level roles tend to be quite specific, focused on one part of the profession, and are not generalist roles. For example, hiring managers will want a network security engineer with knowledge of networks or an identity management analyst with experience in identity systems. They are not looking for someone interested in security.
In fact, security roles are often not considered entry-level at all. Hiring managers assume you have some other background, usually technical before you are ready for an entry-level security job. Without those specific skills, it is difficult for a candidate to break into the profession. Job seekers learn that entry-level often means at least two to three years of work experience in a related field.
That makes a lot more sense, and matches my experience.
Turns out that it’s easy to broadcast radio commands that force Polish trains to stop:
…the saboteurs appear to have sent simple so-called “radio-stop” commands via radio frequency to the trains they targeted. Because the trains use a radio system that lacks encryption or authentication for those commands, Olejnik says, anyone with as little as $30 of off-the-shelf radio equipment can broadcast the command to a Polish train—sending a series of three acoustic tones at a 150.100 megahertz frequency—and trigger their emergency stop function.
“It is three tonal messages sent consecutively. Once the radio equipment receives it, the locomotive goes to a halt,” Olejnik says, pointing to a document outlining trains’ different technical standards in the European Union that describes the “radio-stop” command used in the Polish system. In fact, Olejnik says that the ability to send the command has been described in Polish radio and train forums and on YouTube for years. “Everybody could do this. Even teenagers trolling. The frequencies are known. The tones are known. The equipment is cheap.”
Even so, this is being described as a cyberattack.
The new AI cyber challenge (which is being abbreviated “AIxCC”) will have a number of different phases. Interested would-be competitors can now submit their proposals to the Small Business Innovation Research program for evaluation and, eventually, selected teams will participate in a 2024 “qualifying event.” During that event, the top 20 teams will be invited to a semifinal competition at that year’s DEF CON, another large cybersecurity conference, where the field will be further whittled down.
[…]
To secure the top spot in DARPA’s new competition, participants will have to develop security solutions that do some seriously novel stuff. “To win first-place, and a top prize of $4 million, finalists must build a system that can rapidly defend critical infrastructure code from attack,” said Perri Adams, program manager for DARPA’s Information Innovation Office, during a Zoom call with reporters Tuesday. In other words: the government wants software that is capable of identifying and mitigating risks by itself.
This is a great idea. I was a big fan of DARPA’s AI capture-the-flag event in 2016, and am happy to see that DARPA is again spurring research in this area. (China has been doing this every year since 2017.)
Measuring return on investment for security (information security/cybersecurity) has always been hard. This is a problem for cybersecurity vendors and service providers as well as for CISOs, who find it hard to convince budget stakeholders why they need another pile of money for tool X.
Return on Security Investment (ROSI) has been discussed, including academically, for a while, but we haven’t yet found a sound methodology for it. I’m not proposing one either; I just want to mark some points that I think are important for such a methodology. Otherwise, decisions tend to be driven by “the auditor said we need X” or “the regulation says we need Y.” Those are decent reasons to buy something, but they make security look like a black-hole cost center. Security is certainly no profit center, but the more tangible we make it, the more likely investments are to pay off.
I think the leading metric is “likelihood of critical incident”. Businesses are (rightly) concerned with this. They don’t care about the number of reconnaissance attempts, false-positive ratios, MTTRs, and other technical things. This likelihood, if properly calculated, can be translated into a sum of money lost due to the incident (lack of availability, data loss, reputational cost, administrative fines, etc.). The problem is that we can’t go to company X and say “you are 20% likely to get hit because that’s the number for SMEs”. A number lifted from a vendor presentation is unlikely to ring true. So I think the following should be factored into the methodology (a rough sketch of how these factors might combine follows the list):
Likelihood of incident per type – ransomware, DDoS, data breach, insider data manipulation, are all differently likely.
Likelihood of incident per industry – industries vary greatly in terms of hacker incentive. Apart from generic ransomware, other attacks are more likely to be targeted at the financial industry, for example, than the forestry industry. That’s why EU directives NIS and NIS2 prioritize some industries as more critical
Likelihood of incident per organization size or revenue – not all SMEs and not all large enterprises are the same – the number of employees and their qualifications may mean increased or decreased risk; company revenue may put it at the top of the target list (or at the bottom)
Likelihood of incident per team size and skill – if you have one IT guy doing printers and security, it’s more likely to get hit by a critical incident than if you have a SOC team. Sounds obvious, but it’s a spectrum, and probably one with diminishing returns, especially for SMEs
Likelihood of incident per available security products – if you have nothing installed, you are more likely to get hit. If you have a simple AV, you can keep the basic attacks out. If you have a firewall, a SIEM/XDR, SOAR, and threat intel subscriptions, things are different. Having them, of course, doesn’t mean they are properly deployed, but the types of tools matter in ballpark calculations
How to get that data – I’m sure someone collects it. If nobody does, governments should. Such metrics are important for security decisions and therefore for the overall security of the ecosystem.
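To make this concrete, here is a minimal sketch of how such factors might combine into an expected annual loss figure. Every number and category name below is an invented placeholder rather than real incident data; the point is only the shape of the calculation.

```python
# Illustrative only: all factor values and category names are invented placeholders,
# not vendor statistics or government data.
BASELINE = {"ransomware": 0.20, "ddos": 0.10, "data_breach": 0.15}   # per-type annual likelihood
INDUSTRY = {"finance": 1.5, "forestry": 0.6, "other": 1.0}           # attacker incentive by industry
SIZE = {"sme": 1.2, "enterprise": 0.9}                               # organization size / revenue profile
TEAM = {"one_it_generalist": 1.4, "dedicated_soc": 0.7}              # security team size and skill
TOOLING = {"none": 1.5, "basic_av": 1.2, "siem_soar_ti": 0.8}        # deployed security products


def expected_annual_loss(incident_type, impact_cost, industry, size, team, tooling):
    """Rough expected annual loss: adjusted likelihood times estimated incident cost."""
    likelihood = (BASELINE[incident_type] * INDUSTRY[industry] * SIZE[size]
                  * TEAM[team] * TOOLING[tooling])
    return min(likelihood, 1.0) * impact_cost


# Example: a finance-sector SME with one IT generalist and only basic AV,
# assuming a 2,000,000 loss from a ransomware incident.
print(expected_annual_loss("ransomware", 2_000_000, "finance", "sme",
                           "one_it_generalist", "basic_av"))
```

Running the same calculation with and without a given control (say, “basic_av” versus “siem_soar_ti”) and comparing the reduction in expected loss against the control’s price is one crude way to put a number on ROSI.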
The NSA discovered a Chinese intrusion into Japan’s defense networks in 2020—we don’t know how—and alerted the Japanese. The Washington Post has the story:
The hackers had deep, persistent access and appeared to be after anything they could get their hands on—plans, capabilities, assessments of military shortcomings, according to three former senior U.S. officials, who were among a dozen current and former U.S. and Japanese officials interviewed, who spoke on the condition of anonymity because of the matter’s sensitivity.
[…]
The 2020 penetration was so disturbing that Gen. Paul Nakasone, the head of the NSA and U.S. Cyber Command, and Matthew Pottinger, who was White House deputy national security adviser at the time, raced to Tokyo. They briefed the defense minister, who was so concerned that he arranged for them to alert the prime minister himself.
Beijing, they told the Japanese officials, had breached Tokyo’s defense networks, making it one of the most damaging hacks in that country’s modern history.
A bunch of networks, including US Government networks, have been hacked by the Chinese. The hackers used forged authentication tokens to access user email, using a stolen Microsoft Azure account consumer signing key. Congress wants answers. The phrase “negligent security practices” is being tossed about—and with good reason. Master signing keys are not supposed to be left around, waiting to be stolen.
Actually, two things went badly wrong here. The first is that Azure accepted an expired signing key, implying a vulnerability in whatever is supposed to check key validity. The second is that this key was supposed to remain in the system’s Hardware Security Module—and not be in software. This implies a really serious breach of good security practice. The fact that Microsoft has not been forthcoming about the details of what happened tells me that the details are really bad.
I believe this all traces back to SolarWinds. In addition to Russia inserting malware into a SolarWinds update, China used a different SolarWinds vulnerability to break into networks. We know that Russia accessed Microsoft source code in that attack. I have heard from informed government officials that China used their SolarWinds vulnerability to break into Microsoft and access source code, including Azure’s.
I think we are grossly underestimating the long-term results of the SolarWinds attacks. That backdoored update was downloaded by over 14,000 networks worldwide. Organizations patched their networks, but not before Russia—and others—used the vulnerability to enter those networks. And once someone is in a network, it’s really hard to be sure that you’ve kicked them out.
Sophisticated threat actors are realizing that stealing source code of infrastructure providers, and then combing that code for vulnerabilities, is an excellent way to break into organizations who use those infrastructure providers. Attackers like Russia and China—and presumably the US as well—are prioritizing going after those providers.
This is from Microsoft’s explanation. The China attackers “acquired an inactive MSA consumer signing key and used it to forge authentication tokens for Azure AD enterprise and MSA consumer to access OWA and Outlook.com. All MSA keys active prior to the incident—including the actor-acquired MSA signing key—have been invalidated. Azure AD keys were not impacted. Though the key was intended only for MSA accounts, a validation issue allowed this key to be trusted for signing Azure AD tokens. The actor was able to obtain new access tokens by presenting one previously issued from this API due to a design flaw. This flaw in the GetAccessTokenForResourceAPI has since been fixed to only accept tokens issued from Azure AD or MSA respectively. The actor used these tokens to retrieve mail messages from the OWA API.”
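Based only on that description, the failure seems to involve two missing checks: rejecting an inactive (expired) key, and refusing to trust a consumer-scoped key for enterprise tokens. The sketch below is hypothetical and is not Microsoft’s code; the key identifiers and metadata structure are invented purely to illustrate those two checks.

```python
# Hypothetical sketch of the two checks that reportedly failed: key expiry and key scope.
# Not Microsoft's code; key IDs and metadata are invented for illustration.
from datetime import datetime, timezone

TRUSTED_KEYS = {
    # key_id: what the key is allowed to sign, and until when
    "msa-consumer-2016": {"scope": "msa_consumer",
                          "expires": datetime(2021, 4, 1, tzinfo=timezone.utc)},
    "aad-enterprise-2023": {"scope": "aad_enterprise",
                            "expires": datetime(2024, 1, 1, tzinfo=timezone.utc)},
}


def key_is_valid_for(token_audience: str, key_id: str) -> bool:
    meta = TRUSTED_KEYS.get(key_id)
    if meta is None:
        return False                          # unknown key
    if datetime.now(timezone.utc) >= meta["expires"]:
        return False                          # inactive/expired keys must be rejected
    return meta["scope"] == token_audience    # a consumer key must not validate enterprise tokens


# An Azure AD enterprise token signed with the stolen consumer key should fail both checks:
assert not key_is_valid_for("aad_enterprise", "msa-consumer-2016")
```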
The US Securities and Exchange Commission adopted final rules around the disclosure of cybersecurity incidents. There are two basic rules:
Public companies must “disclose any cybersecurity incident they determine to be material” within four days, with potential delays if there is a national security risk.
Public companies must “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats” in their annual filings.
The rules go into effect this December.
In an email newsletter, Melissa Hathaway wrote:
Now that the rule is final, companies have approximately six months to one year to document and operationalize the policies and procedures for the identification and management of cybersecurity (information security/privacy) risks. Continuous assessment of the risk reduction activities should be elevated within an enterprise risk management framework and process. Good governance mechanisms delineate the accountability and responsibility for ensuring successful execution, while actionable, repeatable, meaningful, and time-dependent metrics or key performance indicators (KPI) should be used to reinforce realistic objectives and timelines. Management should assess the competency of the personnel responsible for implementing these policies and be ready to identify these people (by name) in their annual filing.
SecOps metrics can be a gold mine for informing better business decisions, but 78% of CEOs say they don’t have adequate data on risk exposure to make good decisions. Even when they do see the right data, 82% are inclined to “trust their gut” anyway.
Here lies the disconnect between data and decisions for C-level executives: a lack of effective presentation. Ultimately, the responsibility of communicating that SecOps metrics matter falls on today’s security teams. They must transform numbers into narratives that illustrate the challenges in today’s attack landscape to decision-makers — and, most importantly, make stakeholders care about those challenges.
But metrics presentations can get boring. So, how can security professionals present SecOps metrics in an engaging way?
Stories inspire empathy and action
While facts and figures play a role in communication, humans respond differently to stories. With narratives, we understand meaning more deeply, remember events longer, and factor what each story taught us into future decisions. Storytelling is also an effective way for security teams to inspire empathy — and therefore, action — in today’s decision-makers.
It’s critical for security professionals to identify the narrative thread in the metrics they’re analyzing. Here’s what we mean by that, step by step and metric by metric.
Establish how hungry the bad guys are
Home in on the frequency of security incidents. This metric directly correlates to the power and reach threat actors have. Dive into the causes behind incidents, how much impact incidents had, and what can be done to stop them.
This information gives executives direct insight into the potential risks your organization faces and the negative outcomes associated with them. When executives can see the cold hard number of times their organizations have suffered from a breach, attack, or leak, it can highlight where security strategies are still lacking — and where they’re losing out to malicious actors.
Show how the villains keep winning
MTTD (mean time to detection) is a measure of how fast security teams can detect incidents. While it might not be a flashy metric in and of itself, it can pack a powerful punch when illustrating the damage bad actors can do before they’re suspected of even breaking in.
MTTD provides insight into the efficacy of an organization’s current cybersecurity tools and data coverage. It can also be a helpful indicator of how well current security processes are working — and how overworked or resource-strained a security team might be.
Tell the underdog’s story
Here’s where you leverage MTTR (mean time to respond). This metric shows how quickly the security team can spring into action. More often than not, security teams have a litany of other important tasks at hand that can make MTTR less than ideal. This demonstrates why resource-strained and overworked security professionals are set up to fail if they don’t have the right tools, strategies, and support.
With MTTR, security teams can add an extra layer of context to the data shown by MTTD. This metric highlights how quickly security teams respond to incidents — which can be another indicator of how well tools and processes match up to current threats.
Describe the loot you stand to lose
Finally, communicate the potential cost per incident. Money speaks volumes when you’re crafting a narrative out of SecOps metrics — so it’s best to close out your stories with this powerful data point. This metric provides insight into the efficiency of a cybersecurity program’s processes, tools, and resource allocation.
This is perhaps the most effective metric security professionals can use with executives because it speaks directly to one of their critical concerns: the bottom line.
Putting it all together
While many additional SecOps metrics matter, those four data points can come together most effectively to weave a story that speaks to C-level execs.
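As a simplified illustration of where those four data points come from, here is a short sketch that derives them from an incident log. The field names and sample records are invented, and MTTR is computed here as detection-to-resolution time, which is one common interpretation of “mean time to respond.”

```python
# Minimal sketch: deriving the four story metrics from an incident log.
# Field names and sample records are invented, not taken from any particular SIEM.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": datetime(2023, 5, 1, 2, 0), "detected": datetime(2023, 5, 3, 9, 0),
     "resolved": datetime(2023, 5, 4, 17, 0), "cost": 120_000},
    {"occurred": datetime(2023, 6, 10, 11, 0), "detected": datetime(2023, 6, 10, 15, 0),
     "resolved": datetime(2023, 6, 12, 10, 0), "cost": 45_000},
]

frequency = len(incidents)  # how hungry the bad guys are
mttd_hours = mean((i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incidents)
mttr_hours = mean((i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents)
avg_cost = mean(i["cost"] for i in incidents)  # the loot you stand to lose

print(f"{frequency} incidents, MTTD {mttd_hours:.1f} h, MTTR {mttr_hours:.1f} h, "
      f"average cost ${avg_cost:,.0f}")
```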
However, executives will have their own set of questions and concerns at SecOps briefs. So, it’s important to supplement even the strongest SecOps stories with additional answers, such as:
How efficiently your organization is addressing risks compared to other similar companies.
Where budget spend works and where it doesn’t in terms of ROI.
Where opportunities for increased efficiencies (namely, breaking down disparate silos and cutting costs with consolidation) can come into play.
It all comes down to communication
By focusing on crafting data narratives, security teams can turn SecOps metrics into actionable decisions for stakeholders. Telling the right story to the right people can help procure backing from the top — which means getting the resources, people, and budget security leaders need to stay ahead of threats.
Effectively communicating with C-levels helps build a rapport between stakeholders and boots-on-the-ground security professionals. By presenting metrics as parts of a larger story, organizations can unlock better collaboration, better relationships, and better business outcomes.
The Atlantic Council released a detailed commentary on the White House’s new “Implementation Plan for the 2023 US National Cybersecurity Strategy.” Lots of interesting bits.
So far, at least three trends emerge:
First, the plan contains a (somewhat) more concrete list of actions than its parent strategy, with useful delineation of lead and supporting agencies, as well as timelines aplenty. By assigning each action a designated lead and timeline, and by including a new nominal section (6) focused entirely on assessing effectiveness and continued iteration, the ONCD suggests that this is not so much a standalone text as the framework for an annual, crucially iterative policy process. That many of the milestones are still hazy might be less important than the commitment the administration has made to revisit this plan annually, allowing the ONCD team to leverage their unique combination of topical depth and budgetary review authority.
Second, there are clear wins. Open-source software (OSS) and support for energy-sector cybersecurity receive considerable focus, and there is a greater budgetary push on both technology modernization and cybersecurity research. But there are missed opportunities as well. Many of the strategy’s most difficult and revolutionary goals—holding data stewards accountable through privacy legislation, finally implementing a working digital identity solution, patching gaps in regulatory frameworks for cloud risk, and implementing a regime for software cybersecurity liability—have been pared down or omitted entirely. There is an unnerving absence of “incentive-shifting-focused” actions, one of the most significant overarching objectives from the initial strategy. This backpedaling may be the result of a new appreciation for a deadlocked Congress and the precarious present for the administrative state, but it falls short of the original strategy’s vision and risks making no progress against its most ambitious goals.
Third, many of the implementation plan’s goals have timelines stretching into 2025. The disruption of a transition, be it to a second term for the current administration or the first term of another, will be difficult to manage under the best of circumstances. This leaves still more of the boldest ideas in this plan in jeopardy and raises questions about how best to prioritize, or accelerate, among those listed here.
In a special two-part “Lost Bots,” hosts Jeffrey Gardner and Stephen Davis talk about presenting cybersecurity results up the org chart. Both have handled C-suite and board communications and have lots of lessons learned.
Part 1 is about the style of a presentation: the point, the delivery, the storytelling. Gardner believes anyone can be great because he’s “an extreme introvert” himself. He shares a ton of wisdom about how to structure your presentation and really own the room with confidence. About halfway through, the ideas start coming fast and furious.
Part 2 brings it together with a deep dive into metrics (and an extraordinary bowtie on Mr. Davis, seriously). Metrics aren’t your story, but they do prove it true. The episode closes with one thing you must take away and remember: you’re not there to sell more security, you’re there to help stakeholders make well-informed business decisions. When that purpose is clear, some things get simpler.
Recently, Gartner surveyed security professionals and found that over 50% of the respondents were looking to consolidate their security tech stack. Why? These professionals recognized that consolidation is key to achieving their goals of improving productivity, visibility, and reporting as well as bridging staff resourcing gaps.
Additionally, threat actors are leveraging artificial intelligence (AI) and machine learning tools to launch more sophisticated, high-impact attacks. Defending against AI-assisted attacks requires greater network visibility and operational efficiency—not to mention the automated detection and response capabilities in most consolidation offerings. As the threat landscape evolves, streamlining your tech stack can also improve your organization’s security posture and protect against financial losses. This is an important consideration, as the cost of the average data breach has reached $9.44 million in the U.S.
While the benefits of consolidation are clear, organizations often miss the tell-tale signs that it is time to consolidate their tech stack. Recognizing these signs can help your organization identify the areas where it’s most needed and develop a seamless implementation strategy that minimizes disruption.
Four tell-tale signs it’s time to consolidate your security tools
Sign #1: You can’t track (or visualize) your tech stack
When was the last time you cataloged your resources? This may seem a little on the nose, but one of the best ways to tell if your organization is in need of consolidation is that you’re unable to track or visualize your tech stack.
In 2021, IBM found that 45% of security teams used more than 20 tools when investigating and responding to a cybersecurity incident. These tools are a drain on your budget and can even present security risks. Excess tech is less likely to be monitored for compliance and needlessly broadens your network’s attack surface.
Visibility into your tech stack is just as important as visibility across your network. The inability to track and visualize your tech stack can indicate that your organization is working with tools that are obsolete, underutilized, or ignored.
Sign #2: Your mean time to resolve (MTTR) is high
Did you know it takes the average company a staggering 277 days to identify and contain a breach? Finding and resolving breaches quickly is key to protecting your systems and data. When your MTTR is high, it’s indicative of operational inefficiencies in your security responses.
Working with too many vendors and tools can make it difficult to prioritize and respond to threats. For example, if you’re working with redundant tools, event data from one tool may conflict with another, and your team is forced to spend precious time confirming which dataset is correct before it can respond to the security incident.
Siloed tools from a variety of vendors are another common pain point. Even if you’re using “best of breed” tools, a cobbled together security solution of multiple vendors can create issues. Tools from different vendors may not integrate well (if at all). Consequently, your team may be missing crucial alerts and experiencing a breakdown in workflows as data is transferred from one tool to another.
Sign #3: Your processes are manual
If your team is wasting valuable time manually investigating false positives, prioritizing risks, and drawing context from massive datasets, consolidation could be the solution. Manual investigation is also error-prone, and teams often find that important security events are missed entirely or slip through the cracks until they become pervasive, system-wide concerns. As a result, you may be able to track your team’s elevated MTTR rate back to manual resolution workflows.
Consolidated security platforms offer the crucial automation features that companies need to close skill and staffing resource gaps, as well. Consolidating with automation in mind can simplify and improve your team’s workflows, ensuring that your team is able to respond to threats faster and reduce overall risk across your infrastructure—even if your organization is understaffed. Finally, removing the burden of manual investigation can increase your team’s productivity, free up resources, and create space for senior staff to work on other projects.
Sign #4: Compliance is a struggle
If you’re working with a variety of vendors and security tools, compliance can be problematic. You may find that each vendor’s approach to compliance varies widely, and it’s nearly impossible to impose a consistent standard of compliance across your entire network.
Network applications are difficult to update and secure if your organization struggles to maintain visibility in its tech stack. Also, gathering data across your infrastructure for compliance audits is complicated when you have redundant tools, disagreement between the datasets, and no single source of truth.
Whether your organization is in a highly regulated industry or not, maintaining a compliant network is important. Organizations that maintain compliant networks can resolve configuration-related vulnerabilities faster, creating a baseline for security practices and IT operations.
Following governmental compliance regulations can help your organization enhance its data management capabilities. There are also serious drawbacks to a non-compliant network. Depending on your industry, if your network is non-compliant, you may be required to pay hefty fines. Additionally, non-compliant networks are less secure; they’re prone to configuration vulnerabilities and a host of other issues.
When it comes to consolidation, don’t ignore the signs
Knowing the signs of a tech stack in need of consolidation can save your organization a considerable amount of time, money, and frustration. Some companies worry about giving up “best of breed” security options. However, consolidation is increasingly considered more secure than “best of breed.”
For many organizations, the security advantage of narrowing your attack surface, automating processes, and streamlining data far outweighs the individual benefits of separate solutions and multiple vendors. As the threat landscape evolves, it’s increasingly important to have a streamlined tech stack that can deliver the security support needed to effectively mitigate risk.
SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The fifty or so attendees include psychologists, economists, computer security researchers, criminologists, sociologists, political scientists, designers, lawyers, philosophers, anthropologists, geographers, neuroscientists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.
Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. Short talks limit presenters’ ability to get into the boring details of their work, and the interdisciplinary audience discourages jargon.
For the past decade and a half, this workshop has been the most intellectually stimulating two days of my professional year. It influences my thinking in different and sometimes surprising ways, and has resulted in some unexpected collaborations.
And that’s what’s valuable. One of the most important outcomes of the event is new collaborations. Over the years, we have seen new interdisciplinary research between people who met at the workshop, and ideas and methodologies move from one field into another based on connections made at the workshop. This is why some of us have been coming back every year for over a decade.
This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is live blogging the talks. We are back 100% in person after two years of fully remote and one year of hybrid.
It’s actually hard to believe that the workshop has been going on for this long, and that it’s still vibrant. We rotate between organizers, so next year is my turn in Cambridge (the Massachusetts one).
The United States Government recently launched its National Cybersecurity Strategy. The Strategy outlines the administration’s ambitious vision for building a more resilient future, both in the United States and around the world, and it affirms the key role cloud computing plays in realizing this vision.
Amazon Web Services (AWS) is broadly committed to working with customers, partners, and governments such as the United States to improve cybersecurity. That longstanding commitment aligns with the goals of the National Cybersecurity Strategy. In this blog post, we will summarize the Strategy and explain how AWS will help to realize its vision.
The Strategy identifies two fundamental shifts in how the United States allocates cybersecurity roles, responsibilities, and resources. First, the Strategy calls for a shift in cybersecurity responsibility away from individuals and organizations with fewer resources, toward larger technology providers that are the most capable and best-positioned to be successful. At AWS, we recognize that our success and scale bring broad responsibility. We are committed to improving cybersecurity outcomes for our customers, our partners, and the world at large.
Second, the Strategy calls for realigning incentives to favor long-term investments in a resilient future. As part of our culture of ownership, we are long-term thinkers, and we don’t sacrifice long-term value for short-term results. For more than fifteen years, AWS has delivered security, identity, and compliance services for millions of active customers around the world. We recognize that we operate in a complicated global landscape and dynamic threat environment that necessitates a dynamic approach to security. Innovation and long-term investments have been and will continue to be at the core of our approach, and we continue to innovate to build and improve on our security and the services we offer customers.
AWS is working to enhance cybersecurity outcomes in ways that align with each of the Strategy’s five pillars:
Defend Critical Infrastructure — Customers, partners, and governments need confidence that they are migrating to and building on a secure cloud foundation. AWS is architected to have the most flexible and secure cloud infrastructure available today, and our customers benefit from the data centers, networks, custom hardware, and secure software layers that we have built to satisfy the requirements of the most security-sensitive organizations. Our cloud infrastructure is secure by design and secure by default, and our infrastructure and services meet the high bar that the United States Government and other customers set for security.
Disrupt and Dismantle Threat Actors — At AWS, our paramount focus on security leads us to implement important measures to prevent abuse of our services and products. Some of the measures we undertake to deter, detect, mitigate, and prevent abuse of AWS products include examining new registrations for potential fraud or identity falsification, employing service-level containment strategies when we detect unusual behavior, and helping protect critical systems and sensitive data against ransomware. Amazon is also working with government to address these threats, including by serving as one of the first members of the Joint Cyber Defense Collaborative (JCDC). Amazon is also co-leading a study with the President’s National Security Telecommunications Advisory Committee on addressing the abuse of domestic infrastructure by foreign malicious actors.
Shape Market Forces to Drive Security and Resilience — At AWS, security is our top priority. We continuously innovate based on customer feedback, which helps customer organizations to accelerate their pace of innovation while integrating top-tier security architecture into the core of their business and operations. For example, AWS co-founded the Open Cybersecurity Schema Framework (OCSF) project, which facilitates interoperability and data normalization between security products. We are contributing to the quality and safety of open-source software both by direct contributions to open-source projects and also by an initial commitment of $10 million in a variety of open-source security improvement projects in and through the Open Source Security Foundation (OpenSSF).
Invest in a Resilient Future — Cybersecurity skills training, workforce development, and education on next-generation technologies are essential to addressing cybersecurity challenges. That’s why we are making significant investments to help make it simpler for people to gain the skills they need to grow their careers, including in cybersecurity. Amazon is committing more than $1.2 billion to provide no-cost education and skills training opportunities to more than 300,000 of our own employees in the United States, to help them secure new, high-growth jobs. Amazon is also investing hundreds of millions of dollars to provide no-cost cloud computing skills training to 29 million people around the world. We will continue to partner with the Cybersecurity and Infrastructure Security Agency (CISA) and others in government to develop the cybersecurity workforce.
Forge International Partnerships to Pursue Shared Goals — AWS is working with governments around the world to provide innovative solutions that advance shared goals such as bolstering cyberdefenses and combating security risks. For example, we are supporting international forums such as the Organization of American States to build security capacity across the hemisphere. We encourage the administration to look toward internationally recognized, risk-based cybersecurity frameworks and standards to strengthen consistency and continuity of security among interconnected sectors and throughout global supply chains. Increased adoption of these standards and frameworks, both domestically and internationally, will mitigate cyber risk while facilitating economic growth.
AWS shares the Biden administration’s cybersecurity goals and is committed to partnering with regulators and customers to achieve them. Collaboration between the public sector and industry has been central to US cybersecurity efforts, fueled by the recognition that the owners and operators of technology must play a key role. As the United States Government moves forward with implementation of the National Cybersecurity Strategy, we look forward to redoubling our efforts and welcome continued engagement with all stakeholders—including the United States Government, our international partners, and industry collaborators. Together, we can address the most difficult cybersecurity challenges, enhance security outcomes, and build toward a more secure and resilient future.
Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or to produce formal reports; and restrict access to any documents produced.
So, we’re not able to learn from these breaches because the attorneys are limiting what information becomes public. This is where we think about shielding companies from liability in exchange for making breach data public. It’s the sort of thing we do for airplane disasters.
EDITED TO ADD (6/13): A podcast interview with two of the authors.
Developers are starting to talk about the software-defined car.
For decades, features have accumulated like cruft in new vehicles: a box here to control the antilock brakes, a module there to run the cruise control radar, and so on. Now engineers and designers are rationalizing the way they go about building new models, taking advantage of much more powerful hardware to consolidate all those discrete functions into a small number of domain controllers.
The behavior of new cars is increasingly defined by software, too. This is merely the progression of a trend that began at the end of the 1970s with the introduction of the first electronic engine control units; today, code controls a car’s engine and transmission (or its electric motors and battery pack), the steering, brakes, suspension, interior and exterior lighting, and more, depending on how new (and how expensive) it is. And those systems are being leveraged for convenience or safety features like adaptive cruise control, lane keeping, remote parking, and so on.
And security?
Another advantage of the move away from legacy designs is that digital security can be baked in from the start rather than patched onto components (like a car’s controller area network) that were never designed with the Internet in mind. “If you design it from scratch, it’s security by design, everything is in by design; you have it there. But keep in mind that, of course, the more software there is in the car, the more risk is there for vulnerabilities, no question about this,” Anhalt said.
“At the same time, they’re a great software system. They’re highly secure. They’re much more secure than a hardware system with a little bit of software. It depends how the whole thing has been designed. And there are so many regulations and EU standards that have been released in the last year, year and a half, that force OEMs to comply with these standards and get security inside,” she said.
I suppose it could end up that way. It could also be a much bigger attack surface, with a lot more hacking possibilities.
The following is background on ransomware, CIS, and the initiatives that led to the publication of this new blueprint.
The Ransomware Task Force
In April of 2021, the Institute for Security and Technology (IST) launched the Ransomware Task Force (RTF), which has the mission of uniting key stakeholders across industry, government, and civil society to create new solutions, break down silos, and find effective new methods of countering the ransomware threat. The RTF has since launched several progress reports with specific recommendations, including the development of the RTF Blueprint for Ransomware Defense, which provides a framework with practical steps to mitigate, respond to, and recover from ransomware. AWS is a member of the RTF, and we have taken action to create our own AWS Blueprint for Ransomware Defense that maps actionable and foundational security controls to AWS services and features that customers can use to implement those controls. The AWS Blueprint for Ransomware Defense is based on the CIS Controls framework.
Center for Internet Security
The Center for Internet Security (CIS) is a community-driven nonprofit, globally recognized for establishing best practices for securing IT systems and data. To help establish foundational defense mechanisms, a subset of the CIS Critical Security Controls (CIS Controls) has been identified as important first steps in the implementation of a robust program to prevent, respond to, and recover from ransomware events. This list of controls was established to provide safeguards against the most impactful and well-known internet security issues. The controls have been further prioritized into three implementation groups (IGs), to help guide their implementation. IG1, considered “essential cyber hygiene,” provides foundational safeguards. IG2 builds on IG1 by including the controls in IG1 plus a number of additional considerations. Finally, IG3 includes the controls in IG1 and IG2, with an additional layer of controls that protect against more sophisticated security issues.
CIS recommends that organizations use the CIS IG1 controls as basic preventative steps against ransomware events. We’ve produced a mapping of AWS services that can help you implement aspects of these controls in your AWS environment. Ransomware is a complex event, and the best course of action to mitigate risk is to apply a thoughtful strategy of defense in depth. The mitigations and controls outlined in this mapping document are general security best practices, but are a non-exhaustive list.
Because data is often vital to the operation of mission-critical services, ransomware can severely disrupt business processes and applications that depend on this data. For this reason, many organizations are looking for effective security controls that will improve their security posture against these types of events. We hope you find the information in the AWS Blueprint for Ransomware Defense helpful and incorporate it as a tool to provide additional layers of security to help keep your data safe.
Let us know if you have any feedback through the AWS Security Contact Us page. Please reach out if there is anything we can do to add to the usefulness of the blueprint or if you have any additional questions on security and compliance. You can find more information from the IST (Institute for Security and Technology) describing ransomware and how to protect yourself on the IST website.
RSA Conference 2023 brought thousands of cybersecurity professionals to the Moscone Center in San Francisco, California from April 24 through 27.
The keynote lineup was eclectic, with more than 30 presentations across two stages featuring speakers ranging from renowned theoretical physicist and futurist Dr. Michio Kaku to Grammy-winning musician Chris Stapleton. Topics aligned with this year’s conference theme, “Stronger Together,” and focused on actions that can be taken by everyone, from the C-suite to those of us on the front lines of security, to strengthen collaboration, establish new best practices, and make our defenses more diverse and effective.
With over 400 sessions and 500 exhibitors discussing the latest trends and technologies, it’s impossible to recap every highlight. Now that the dust has settled and we’ve had time to reflect, here’s a glimpse of what caught our attention.
We announced three new capabilities for our Amazon GuardDuty threat detection service to help customers secure container, database, and serverless workloads. These include GuardDuty Elastic Kubernetes Service (EKS) Runtime Monitoring, GuardDuty RDS Protection for data stored in Amazon Aurora, and GuardDuty Lambda Protection for serverless applications. The new capabilities are designed to provide actionable, contextual, and timely security findings with resource-specific details.
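For readers who want to try these, a minimal boto3 sketch for turning the three new protections on for an existing detector might look like the following. The feature names reflect the GuardDuty API as we understand it at the time of writing; treat them as assumptions and confirm against the current documentation.

```python
# Hedged sketch: enable the newly announced GuardDuty protections on an existing detector.
# Feature names are assumptions based on the GuardDuty API; verify before use.
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]  # assumes a detector already exists

guardduty.update_detector(
    DetectorId=detector_id,
    Features=[
        {"Name": "EKS_RUNTIME_MONITORING", "Status": "ENABLED"},  # container workloads
        {"Name": "RDS_LOGIN_EVENTS", "Status": "ENABLED"},        # Aurora login activity
        {"Name": "LAMBDA_NETWORK_LOGS", "Status": "ENABLED"},     # serverless applications
    ],
)
```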
Artificial intelligence
It was hard to find a single keynote, session, or conversation that didn’t touch on the impact of artificial intelligence (AI).
In “AI: Law, Policy and Common Sense Suggestions on How to Stay Out of Trouble,” privacy and gaming attorney Behnam Dayanim highlighted ambiguity around the definition of AI. Referencing a quote from University of Washington School of Law’s Ryan Calo, Dayanim pointed out that AI may be best described as “…a set of techniques aimed at approximating some aspect of cognition,” and should therefore be thought of differently than a discrete “thing” or industry sector.
Dayanim noted examples of skepticism around the benefits of AI. A recent Monmouth University poll, for example, found that 73% of Americans believe AI will make jobs less available and harm the economy, and a surprising 55% believe AI may one day threaten humanity’s existence.
Equally skeptical, he noted, is a joint statement made by the Federal Trade Commission (FTC) and three other federal agencies during the conference reminding the public that enforcement authority applies to AI. The statement takes a pessimistic view, saying that AI is “…often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” but has the potential to produce negative outcomes.
Dayanim covered existing and upcoming legal frameworks around the world that are aimed at addressing AI-related risks related to intellectual property (IP), misinformation, and bias, and how organizations can design AI governance mechanisms to promote fairness, competence, transparency, and accountability.
Many other discussions focused on the immense potential of AI to automate and improve security practices. RSA Security CEO Rohit Ghai explored the intersection of progress in AI with human identity in his keynote. “Access management and identity management are now table stakes features,” he said. In the AI era, we need an identity security solution that will secure the entire identity lifecycle—not just access. To be successful, he believes, the next generation of identity technology needs to be powered by AI, open and integrated at the data layer, and pursue a security-first approach. “Without good AI,” he said, “zero trust has zero chance.”
Mark Ryland, director at the Office of the CISO at AWS, spoke with Infosecurity about improving threat detection with generative AI.
“We’re very focused on meaningful data and minimizing false positives. And the only way to do that effectively is with machine learning (ML), so that’s been a core part of our security services,” he noted.
“Machine learning and artificial intelligence will add a critical layer of automation to cloud security. AI/ML will help augment developers’ workstreams, helping them create more reliable code and drive continuous security improvement.” — CJ Moses, CISO and VP of security engineering at AWS
The human element
Dozens of sessions focused on the human element of security, with topics ranging from the psychology of DevSecOps to the NIST Phish Scale. In “How to Create a Breach-Deterrent Culture of Cybersecurity, from Board Down,” Andrzej Cetnarski, founder, chairman, and CEO of Cyber Nation Central and Marcus Sachs, deputy director for research at Auburn University, made a data-driven case for CEOs, boards, and business leaders to set a tone of security in their organizations, so they can address “cyber insecure behaviors that lead to social engineering” and keep up with the pace of cybercrime.
Lisa Plaggemier, executive director of the National Cybersecurity Alliance, and Jenny Brinkley, director of Amazon Security, stressed the importance of compelling security awareness training in “Engagement Through Entertainment: How To Make Security Behaviors Stick.” Education is critical to building a strong security posture, but as Plaggemier and Brinkley pointed out, we’re “living through an epidemic of boringness” in cybersecurity training.
According to a recent report, just 28% of employees say security awareness training is engaging, and only 36% say they pay full attention during such training.
Citing a United Airlines preflight safety video and Amazon’s Protect and Connect public service announcement (PSA) as examples, they emphasized the need to make emotional connections with users through humor and unexpected elements in order to create memorable training that drives behavioral change.
Plaggemier and Brinkley detailed five actionable steps for security teams to improve their awareness training:
Brainstorm with staff throughout the company (not just the security people)
Find ideas and inspiration from everywhere else (TV episodes, movies… anywhere but existing security training)
Be relatable, and include insights that are relevant to your company and teams
Start small; you don’t need a large budget to add interest to your training
Don’t let naysayers deter you — change often prompts resistance
“You’ve got to make people care. And so you’ve got to find out what their personal motivators are, and how to develop the type of content that can make them care to click through the training and…remember things as they’re walking through an office.” — Jenny Brinkley, director of Amazon Security
Cloud security
Cloud security was another popular topic. In “Architecting Security for Regulated Workloads in Hybrid Cloud,” Mark Buckwell, cloud security architect at IBM, discussed the architectural thinking practices—including zero trust—required to integrate security and compliance into regulated workloads in a hybrid cloud environment.
Maor highlighted common tactics focused on identity theft, including MFA push fatigue, phishing, business email compromise, and adversary-in-the-middle attacks. After detailing techniques that are used to establish persistence in SaaS environments and deliver ransomware, Maor emphasized the importance of forensic investigation and threat hunting in gaining the knowledge needed to reduce the impact of SaaS security incidents.
Sarah Currey, security practice manager, and Anna McAbee, senior solutions architect at AWS, provided complementary guidance in “Top 10 Ways to Evolve Cloud Native Incident Response Maturity.” Currey and McAbee highlighted best practices for addressing incident response (IR) challenges in the cloud — no matter who your provider is:
Define roles and responsibilities in your IR plan
Train staff on AWS (or your provider)
Develop cloud incident response playbooks
Develop account structure and tagging strategy
Run simulations (red team, purple team, tabletop)
Prepare access
Select and set up logs
Enable managed detection services in all available AWS Regions (see the sketch after this list)
Determine containment strategy for resource types
Develop cloud forensics capabilities
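As a sketch of the “enable managed detection services in all available AWS Regions” item above, the following boto3 snippet enables Amazon GuardDuty in every Region the SDK knows about. It assumes credentials with permission to list and create GuardDuty detectors, and it treats GuardDuty as just one example of a managed detection service.

```python
# Sketch: enable Amazon GuardDuty in every available Region.
# Assumes credentials with guardduty:ListDetectors and guardduty:CreateDetector permissions.
import boto3

session = boto3.session.Session()
for region in session.get_available_regions("guardduty"):
    gd = session.client("guardduty", region_name=region)
    try:
        if gd.list_detectors().get("DetectorIds"):
            print(f"{region}: GuardDuty already enabled")
        else:
            detector_id = gd.create_detector(Enable=True)["DetectorId"]
            print(f"{region}: enabled detector {detector_id}")
    except Exception as exc:  # e.g., opt-in Regions not activated for this account
        print(f"{region}: skipped ({exc})")
```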
Speaking to BizTech, Clarke Rodgers, director of enterprise strategy at AWS, noted that tools and services such as Amazon GuardDuty and AWS Key Management Service (AWS KMS) are available to help advance security in the cloud. When organizations take advantage of these services and use partners to augment security programs, they can gain the confidence they need to take more risks, and accelerate digital transformation and product development.
Security takes a village
There are more highlights than we can mention on a variety of other topics, including post-quantum cryptography, data privacy, and diversity, equity, and inclusion. We’ve barely scratched the surface of RSA Conference 2023. If there is one key takeaway, it is that no single organization or individual can address cybersecurity challenges alone. By working together and sharing best practices as an industry, we can develop more effective security solutions and stay ahead of emerging threats.
Researchers are worried about Google’s new .zip and .mov top-level domains, because they double as common file extensions and are therefore confusing. Mistaking a URL for a filename could be a security vulnerability.
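A hypothetical illustration of the ambiguity: a naive auto-linking routine (the regex below is invented for this example and is not taken from any real product) turns a plain mention of a filename into a clickable link to a domain an attacker could register.

```python
# Hypothetical example: a naive auto-linker turns a filename into a link.
# The regex is invented for illustration, not taken from any real product.
import re

AUTOLINK = re.compile(r"\b[\w.-]+\.(zip|mov)\b", re.IGNORECASE)

message = "I've attached backup-2023.zip to this email."
linked = AUTOLINK.sub(lambda m: f'<a href="https://{m.group(0)}">{m.group(0)}</a>', message)
print(linked)
# What the writer meant as a local file is now a link to the registrable
# domain backup-2023.zip, which could be owned by an attacker.
```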
Cyber Essentials Plus is a UK Government-backed, industry-supported certification scheme intended to help organizations demonstrate organizational cyber security against common cyber attacks. An independent third-party auditor certified by the Information Assurance for Small and Medium Enterprises (IASME) completed the audit. The scope of our Cyber Essentials Plus certificate covers AWS Europe (London), AWS Europe (Ireland), and AWS Europe (Frankfurt) Regions.
The NHS DSPT is a self-assessment that organizations use to measure their performance against data security and information governance requirements. The UK Department of Health and Social Care sets these requirements.
When customers move to the AWS Cloud, AWS is responsible for protecting the global infrastructure that runs our services offered in the AWS Cloud. AWS customers are the data controllers for patient health and care data, and are responsible for anything they put in the cloud or connect to the cloud. For more information, see the AWS Shared Security Responsibility Model.
In the early stages of the war in Ukraine in 2022, a known malware called PIPEDREAM was quietly on the brink of wiping out a handful of critical U.S. electric and liquefied natural gas sites. PIPEDREAM is an attack toolkit with unmatched and unprecedented capabilities, developed for use against industrial control systems (ICSs).
The malware was built to manipulate the network communication protocols of programmable logic controllers (PLCs) made by Schneider Electric and OMRON, two critical producers of PLCs for ICSs in the critical infrastructure sector.