Tag Archives: cloud security

Securing the Next Level: Automated Cloud Defense in Game Development with InsightCloudSec

Post Syndicated from Miklos Csecsi original https://blog.rapid7.com/2024/03/07/securing-the-next-level-automated-cloud-defense-in-game-development-with-insightcloudsec/


Imagine the following scenario: You’re about to enjoy a strategic duel on chess.com or dive into an intense battle in Fortnite, but as you log in, you find your hard-earned achievements, ranks, and reputation have vanished into thin air. This is not just a hypothetical scenario but a real possibility in today’s cloud gaming landscape, where a single security breach can undo years of dedication and achievement.

Cloud gaming, powered by giants like AWS, is transforming the gaming industry, offering unparalleled accessibility and dynamic gaming experiences. Yet, with this technological leap forward comes an increase in cyber threats. The gaming world has already witnessed significant security breaches, such as the GTA5 source code theft and Activision’s recurring data breaches, highlighting the lurking dangers in this digital arena.

In such a scenario, securing cloud-based games isn’t just an additional feature; it’s an absolute necessity. As we examine the intricate world of cloud gaming, the role of comprehensive security solutions becomes increasingly vital. In the subsequent sections, we will explore how Rapid7’s InsightCloudSec can be instrumental in securing cloud infrastructure and CI/CD processes in game development, thereby safeguarding the integrity and continuity of our virtual gaming experiences.

Challenges in Cloud-Based Game Development

Picture this: You’re a game developer, immersed in creating the next big title in cloud gaming. Your team is buzzing with creativity, coding, and testing. But then, out of the blue, you’re hit by a cyberattack, much like the one that rocked CD Projekt Red in 2021. Imagine the chaos – months of hard work (e.g. Cyberpunk 2077 or The Witcher 3) locked up by ransomware, with all sorts of confidential data floating in the wrong hands. This scenario is far from fiction in today’s digital gaming landscape.

What Does This Kind of Attack Really Mean for a Game Development Team?

The Network Weak Spot: It’s like leaving the back door open while you focus on the front; hackers can sneak in through network gaps we never knew existed. That’s what might have happened with CD Projekt Red. A more fortified network could have been their digital moat.

When Data Gets Held Hostage: It’s one thing to secure your castle, but what about safeguarding the treasures inside? The CD Projekt Red incident showed us how vital it is to keep our game codes and internal documents under lock and key, digitally speaking.

A Safety Net Missing: Imagine if CD Projekt Red had a robust backup system. Even after the attack, they could have bounced back quicker, minimizing the damage. It’s like having a safety net when you’re walking a tightrope. You hope you won’t need it, but you’ll be glad it’s there if you do.

This is where a solution like Rapid7’s InsightCloudSec comes into play. It’s not just about building higher walls; it’s about smarter, more responsive defense systems. Think of it as having a digital watchdog that’s always on guard, sniffing out threats, and barking alarms at the first sign of trouble.

With tools that watch over your cloud infrastructure, monitor every digital move in real time, and keep your compliance game strong, you’re not just creating games; you’re also playing the ultimate game of digital security – and winning.

In the realm of cloud-based game development, mastering AWS services’ security nuances transcends mere technical skill – it’s akin to painting a masterpiece. Let’s embark on a journey through the essential AWS services like EC2, S3, Lambda, CloudFront, and RDS, with a keen focus on their security features – our guardians in the digital expanse.

Consider Amazon EC2 as the infrastructure’s backbone, hosting the very servers that breathe life into games. Here, Security Groups act as discerning gatekeepers, meticulously managing who gets in and out. They’re not just gatekeepers but wise ones, remembering allowed visitors and ensuring a seamless yet secure flow of traffic.
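
For a concrete illustration, here is a minimal boto3 sketch of that gatekeeping; the VPC ID and port are hypothetical, and the single ingress rule admits only game-client traffic:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical values; substitute your own VPC and game port.
    VPC_ID = "vpc-0123456789abcdef0"
    GAME_PORT = 7777

    sg = ec2.create_security_group(
        GroupName="game-servers",
        Description="Ingress restricted to the game port only",
        VpcId=VPC_ID,
    )

    # Security groups are stateful: return traffic for an allowed connection
    # is permitted automatically, so no matching egress rule is required.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "udp",
            "FromPort": GAME_PORT,
            "ToPort": GAME_PORT,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Game clients"}],
        }],
    )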

Amazon S3 stands as our digital vault, safeguarding data with precision-crafted bucket policies. These policies aren’t just rules; they’re declarations of trust, dictating who can glimpse or alter the stored treasures. History is littered with tales of those who faltered, so precision here is paramount.
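
One way to phrase such a declaration of trust, sketched with boto3 under the assumption of a hypothetical bucket name, is a baseline policy that denies any request arriving over unencrypted transport:

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-game-assets"  # hypothetical bucket name

    # Deny all access that does not arrive over TLS, keeping stored game
    # data private in transit regardless of who makes the request.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

    s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))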

Lambda functions emerge as the silent virtuosos of serverless architecture, empowering game backends with their scalable might. Yet, their power is wielded judiciously, guided by the principle of least privilege through meticulously assigned roles and permissions, minimizing the shadow of vulnerability.
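
The principle of least privilege can be sketched briefly with boto3; the role, table, and account details below are hypothetical. The execution role can be assumed only by Lambda, and it can only read one DynamoDB table:

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy: only the Lambda service may assume this role.
    trust = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="game-backend-leaderboard",  # hypothetical role name
        AssumeRolePolicyDocument=json.dumps(trust),
    )

    # Least privilege: read a single table; no writes, no wildcards.
    iam.put_role_policy(
        RoleName="game-backend-leaderboard",
        PolicyName="read-leaderboard-only",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:Query"],
                "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/leaderboard",
            }],
        }),
    )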

Amazon CloudFront, our swift courier, ensures game content flies across the globe securely and at breakneck speed. Coupled with AWS Shield (Advanced), it stands as a bulwark against DDoS onslaughts, guaranteeing that game delivery remains both rapid and impregnable.

Amazon RDS, the fortress for player data, automates the mundane yet crucial tasks – backups, patches, scaling – freeing developers to craft experiences. It whispers secrets only to those meant to hear, guarding data with robust encryption, both at rest and in transit.

Visibility and vigilance form the bedrock of our security ethos. With tools like AWS CloudTrail and CloudWatch, our gaze extends across every corner of our domain, ever watchful for anomalies, ready to act with precision and alacrity.

Encryption serves as our silent sentinel, a protective veil over data, whether nestled in S3’s embrace or traversing the vastness to and from EC2 and RDS. It’s our unwavering shield against the curious and the malevolent alike.
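
That veil can be enforced by default rather than remembered case by case. A minimal sketch with boto3, assuming a hypothetical bucket and KMS key alias, makes every new S3 object encrypted at rest automatically:

    import boto3

    s3 = boto3.client("s3")

    # Every object written to the bucket is now encrypted with the KMS key
    # by default; no upload can land unencrypted.
    s3.put_bucket_encryption(
        Bucket="my-game-assets",  # hypothetical bucket name
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/game-data",  # hypothetical key alias
                },
                "BucketKeyEnabled": True,
            }]
        },
    )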

In weaving the security measures of these AWS services into the fabric of game development, we engage not in mere procedure but in the creation of a secure tapestry that envelops every facet of the development journey. In the vibrant, ever-evolving landscape of game creation, fortifying our cloud infrastructure with a security-first mindset is not just a technical endeavor – it’s a strategic masterpiece, ensuring our games are not only a source of joy but bastions of privacy and security in the cloud.

Automated Cloud Security with InsightCloudSec

When it comes to deploying a game in the cloud, understanding and implementing automated security is paramount. This is where Rapid7’s InsightCloudSec takes center stage, revolutionizing how game developers secure their cloud environments with a focus on automation and real-time monitoring.

Data Harvesting Strategies

InsightCloudSec differentiates itself through its innovative approach to data collection and analysis, employing two primary methods: API harvesting and Event Driven Harvesting (EDH). Initially, InsightCloudSec utilizes the API method, where it directly calls cloud provider APIs to gather essential platform information. This process enables InsightCloudSec to populate its platform with critical data, which is then unified into a cohesive dataset. For example, disparate storage solutions from AWS, Azure, and GCP are consolidated under a generic “Storage” category, while compute instances are unified as “Instances.” This normalization allows for the creation of universal compliance packs that can be applied across multiple cloud environments, enhancing the platform’s efficiency and coverage.

However, the real game-changer is Rapid7’s implementation of EDH. Unlike the traditional API pull method, EDH leverages serverless functions within the customer’s cloud environment to ingest security event data and configuration changes in real-time. This data is then pushed to the InsightCloudSec platform, significantly reducing costs and increasing the speed of data acquisition. For AWS environments, this means event information can be updated in near real-time, within 60 seconds, and within 2-3 minutes for Azure and GCP. This rapid update capability is a stark contrast to the hourly or daily updates provided by other cloud security platforms, setting InsightCloudSec apart as a leader in real-time cloud security monitoring.

Automated Remediation with InsightCloudSec Bots

The integration of near real-time event information through Event Driven Harvesting (EDH) with InsightCloudSec’s advanced bot automation features equips developers with a formidable toolset for safeguarding cloud environments. This unique combination not only flags vulnerable configurations but also facilitates automatic remediation within minutes, a critical capability for maintaining a pristine cloud ecosystem. InsightCloudSec’s bots go beyond mere detection; they proactively manage misconfigurations and vulnerabilities across virtual machines and containers, ensuring the cloud space is both secure and compliant.

The versatility of these bots is remarkable. Developers have the flexibility to define the scope of the bot’s actions, allowing changes to be applied across one or multiple accounts. This granular control ensures that automated security measures are aligned with the specific needs and architecture of the cloud environment.

Moreover, the timing of these interventions can be finely tuned. Whether responding to a set schedule or reacting to specific events – such as the creation, modification, or deletion of resources – the bots are adept at addressing security concerns at the most opportune moments. This responsiveness is especially beneficial in dynamic cloud environments where changes are frequent and the security landscape is constantly evolving.

The actions undertaken by InsightCloudSec’s bots are diverse and impactful. According to the extensive list of sample bots provided by Rapid7, these automated guardians can, for example:

  • Automatically tag resources lacking proper identification, ensuring that all elements within the cloud are categorized and easily manageable
  • Enforce compliance by identifying and rectifying resources that do not adhere to established security policies, such as unencrypted databases or improperly configured networks
  • Remediate exposed resources by adjusting security group settings to prevent unauthorized access, a crucial step in safeguarding sensitive data
  • Monitor and manage excessive permissions, scaling back unnecessary access rights to adhere to the principle of least privilege, thereby reducing the risk of internal and external threats
  • And much more…

This automation, powered by InsightCloudSec, transforms cloud security from a reactive task to a proactive, streamlined process.
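
Bots are configured inside the InsightCloudSec platform itself, but the underlying pattern (react to an event, evaluate a condition, remediate) is easy to sketch. The following is purely illustrative boto3 code, not InsightCloudSec’s bot syntax, and the event shape is an assumption: on a security group create or modify event, it revokes any SSH rule left open to the world:

    import boto3

    ec2 = boto3.client("ec2")

    def remediate_open_ssh(event):
        """Close SSH ingress rules that are open to 0.0.0.0/0.

        'event' is assumed to carry the ID of a security group that was
        just created or modified, mimicking a bot's resource triggers.
        """
        group_id = event["security_group_id"]
        group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]
        for perm in group.get("IpPermissions", []):
            world_open = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
            )
            if perm.get("FromPort") == 22 and world_open:
                # Revoke only the offending CIDR; narrower rules survive.
                ec2.revoke_security_group_ingress(
                    GroupId=group_id,
                    IpPermissions=[{
                        "IpProtocol": perm["IpProtocol"],
                        "FromPort": perm["FromPort"],
                        "ToPort": perm["ToPort"],
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
                    }],
                )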

By harnessing the power of EDH for real-time data harvesting and leveraging the sophisticated capabilities of bots for immediate action, developers can ensure that their cloud environments are not just reactively protected but are also preemptively fortified against potential vulnerabilities and misconfigurations. This shift towards automated, intelligent cloud security management empowers developers to focus on innovation and development, confident in the knowledge that their infrastructure is secure, compliant, and optimized for the challenges of modern cloud computing.

Infrastructure as Code (IaC) Enhanced: Introducing mimInsightCloudSec

In the dynamic arena of cloud security, particularly in the bustling sphere of game development, the wisdom of “an ounce of prevention is worth a pound of cure” holds unprecedented significance. This is where the role of Infrastructure as Code (IaC) becomes pivotal, and Rapid7’s innovative tool, mimInsightCloudSec, elevates this approach to new heights.

mimInsightCloudSec, a cutting-edge component of the InsightCloudSec platform, is specifically designed to integrate seamlessly into any development pipeline, whether you prefer working with executable binaries or thrive in a containerized ecosystem. Its versatility allows it to be a perfect fit for various deployment strategies, making it an indispensable tool for developers aiming to embed security directly into their infrastructure deployment process.

The primary goal of mimInsightCloudSec is to identify vulnerabilities before the infrastructure is even created, thus embodying the proactive stance on security. This foresight is crucial in the realm of game development, where the stakes are high, and the digital landscape is constantly shifting. By catching vulnerabilities at this nascent stage, developers can ensure that their games are built on a foundation of security, devoid of the common pitfalls that could jeopardize their work in the future.

Figure 2: Shift Left – Infrastructure as Code (IaC) Security

Upon completion of its analysis, mimInsightCloudSec presents its findings in a variety of formats suitable for any team’s needs, including HTML, SARIF, and XML. This flexibility ensures that the results are not just comprehensive but also accessible, allowing teams to swiftly understand and address any identified issues. Moreover, these results are pushed to the InsightCloudSec platform, where they contribute to a broader overview of the security posture, offering actionable insights into potential misconfigurations.

But the capabilities of the InsightCloudSec platform extend even further. Within this sophisticated environment, developers can craft custom configurations, tailoring the security checks to fit the unique requirements of their projects. This feature is particularly valuable for teams looking to go beyond standard security measures, aiming instead for a level of infrastructure hardening that is both rigorous and bespoke. These custom configurations empower developers to establish a static level of security that is robust, nuanced, and perfectly aligned with the specific needs of their game-development projects.

By leveraging mimInsightCloudSec within the InsightCloudSec ecosystem, game developers not only can anticipate and mitigate vulnerabilities before they manifest but also refine their cloud infrastructure with precision-tailored security measures. This proactive and customized approach ensures that the gaming experiences they craft are not only immersive and engaging but also built on a secure, resilient digital foundation.

Figure 3: Misconfigurations and recommended remediations in the InsightCloudSec platform

In summary, Rapid7’s InsightCloudSec offers a comprehensive and automated approach to cloud security, crucial for the dynamic environment of game development. By leveraging both API harvesting and innovative Event Driven Harvesting – along with robust support for Infrastructure as Code – InsightCloudSec ensures that game developers can focus on what they do best: creating engaging and immersive gaming experiences with the knowledge that their cloud infrastructure is secure, compliant, and optimized for performance.

In a forthcoming blog post, we’ll explore the unique security challenges that arise when operating a game in the cloud. We’ll also demonstrate how InsightCloudSec can offer automated solutions to effortlessly maintain a robust security posture.

How to develop an Amazon Security Lake POC

Post Syndicated from Anna McAbee original https://aws.amazon.com/blogs/security/how-to-develop-an-amazon-security-lake-poc/

You can use Amazon Security Lake to simplify log data collection and retention for Amazon Web Services (AWS) and non-AWS data sources. Getting the most out of your implementation requires proper planning.

In this post, we will show you how to plan and implement a proof of concept (POC) for Security Lake to help you determine the functionality and value of Security Lake in your environment, so that your team can confidently design and implement in production. We will walk you through the following steps:

  1. Understand the functionality and value of Security Lake
  2. Determine success criteria for the POC
  3. Define your Security Lake configuration
  4. Prepare for deployment
  5. Enable Security Lake
  6. Validate deployment

Understand the functionality of Security Lake

Figure 1 summarizes the main features of Security Lake and the context of how to use it:

Figure 1: Overview of Security Lake functionality

As shown in the figure, Security Lake ingests and normalizes logs from data sources such as AWS services, AWS Partner sources, and custom sources. Security Lake also manages the lifecycle, orchestration, and subscribers. Subscribers can be AWS services, such as Amazon Athena, or AWS Partner subscribers.

There are four primary functions that Security Lake provides:

  • Centralize visibility to your data from AWS environments, SaaS providers, on-premises, and other cloud data sources — You can collect log sources from AWS services such as AWS CloudTrail management events, Amazon Simple Storage Service (Amazon S3) data events, AWS Lambda data events, Amazon Route 53 Resolver logs, VPC Flow Logs, and AWS Security Hub findings, in addition to log sources from on-premises, other cloud services, SaaS applications, and custom sources. Security Lake automatically aggregates the security data across AWS Regions and accounts.
  • Normalize your security data to an open standard — Security Lake normalizes log sources into a common schema, the Open Cybersecurity Schema Framework (OCSF), and stores them in compressed Parquet files.
  • Use your preferred analytics tools to analyze your security data — You can use AWS tools, such as Athena and Amazon OpenSearch Service, or you can utilize external security tools to analyze the data in Security Lake.
  • Optimize and manage your security data for more efficient storage and query — Security Lake manages the lifecycle of your data with customizable retention settings with automated storage tiering to help provide more cost-effective storage.

Determine success criteria

By establishing success criteria, you can assess whether Security Lake has helped address the challenges that you are facing. Some example success criteria include:

  • I need to centrally set up and store AWS logs across my organization in AWS Organizations for multiple log sources.
  • I need to more efficiently collect VPC Flow Logs in my organization and analyze them in my security information and event management (SIEM) solution.
  • I want to use OpenSearch Service to replace my on-premises SIEM.
  • I want to collect AWS log sources and custom sources for machine learning with Amazon SageMaker.
  • I need to establish a dashboard in Amazon QuickSight to visualize my Security Hub findings and custom log source data.

Review your success criteria to make sure that your goals are realistic given your timeframe and potential constraints that are specific to your organization. For example, do you have full control over the creation of AWS services that are deployed in an organization? Do you have resources that can dedicate time to implement and test? Is this time convenient for relevant stakeholders to evaluate the service?

The timeframe of your POC will depend on your answers to these questions.

Important: Security Lake has a 15-day free trial per account, starting from the time that you enable Security Lake. The trial is the best way to estimate what Security Lake will cost in each Region, which is an important consideration when you configure your POC.

Define your Security Lake configuration

After you establish your success criteria, you should define your desired Security Lake configuration. Some important decisions include the following:

  • Determine AWS log sources — Decide which AWS log sources to collect. For information about the available options, see Collecting data from AWS services.
  • Determine third-party log sources — Decide if you want to include non-AWS service logs as sources in your POC. For more information about your options, see Third-party integrations with Security Lake; the integrations listed as “Source” can send logs to Security Lake.

    Note: You can add third-party integrations after the POC or in a second phase of the POC. Pre-planning will be required to make sure that you can get these set up during the 15-day free trial. Third-party integrations usually take more time to set up than AWS service logs.

  • Select a delegated administrator – Identify which account will serve as the delegated administrator. Make sure that you have the appropriate permissions from the organization admin account to identify and enable the account that will be your Security Lake delegated administrator. This account will be the location for the S3 buckets with your security data and where you centrally configure Security Lake. The AWS Security Reference Architecture (AWS SRA) recommends that you use the AWS logging account for this purpose. In addition, make sure to review Important considerations for delegated Security Lake administrators.
  • Select accounts in scope — Define which accounts to collect data from. To get the most realistic estimate of the cost of Security Lake, enable all accounts across your organization during the free trial.
  • Determine analytics tool — Determine if you want to use native AWS analytics tools, such as Athena and OpenSearch Service, or an existing SIEM, where the SIEM is a subscriber to Security Lake.
  • Define log retention and Regions — Define your log retention requirements and Regional restrictions or considerations.

Prepare for deployment

After you determine your success criteria and your Security Lake configuration, you should have an idea of your stakeholders, desired state, and timeframe. Now you need to prepare for deployment. In this step, you should complete as much as possible before you deploy Security Lake. The following are some steps to take:

  • Create a project plan and timeline so that everyone involved understands what success looks like and what the scope and timeline are.
  • Define the relevant stakeholders and consumers of the Security Lake data. Some common stakeholders include security operations center (SOC) analysts, incident responders, security engineers, cloud engineers, finance, and others.
  • Define who is responsible, accountable, consulted, and informed during the deployment. Make sure that team members understand their roles.
  • Make sure that you have access in your management account to designate the delegated administrator. For further details, see IAM permissions required to designate the delegated administrator.
  • Consider other technical prerequisites that you need to accomplish. For example, if you need roles in addition to what Security Lake creates for custom extract, transform, and load (ETL) pipelines for custom sources, can you work with the team in charge of that process before the POC?

Enable Security Lake

The next step is to enable Security Lake in your environment and configure your sources and subscribers.

  1. Deploy Security Lake across the Regions, accounts, and AWS log sources that you previously defined.
  2. Configure custom sources that are in scope for your POC.
  3. Configure analytics tools in scope for your POC.

Validate deployment

The final step is to confirm that you have configured Security Lake and additional components, validate that everything is working as intended, and evaluate the solution against your success criteria.

  • Validate log collection — Verify that you are collecting the log sources that you configured. To do this, check the S3 buckets in the delegated administrator account for the logs.
  • Validate analytics tool — Verify that you can analyze the log sources in your analytics tool of choice. If you don’t want to configure additional analytics tooling, you can use Athena, which is configured when you set up Security Lake. For sample Athena queries, see Amazon Security Lake Example Queries on GitHub and Security Lake queries in the documentation; a minimal programmatic sketch follows this list.
  • Obtain a cost estimate — In the Security Lake console, you can review a usage page to verify that the cost of Security Lake in your environment aligns with your expectations and budgets.
  • Assess success criteria — Determine if you achieved the success criteria that you defined at the beginning of the project.
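
As a quick programmatic smoke test of the analytics path, you can submit a query through the Athena API. This is a minimal sketch with boto3; the database and table names are placeholders (copy the real names that Security Lake created from the Athena console), the selected columns assume an OCSF CloudTrail table, and the output location must be a bucket you own:

    import time
    import boto3

    athena = boto3.client("athena")

    # Placeholder names to replace with your own.
    DATABASE = "amazon_security_lake_glue_db_us_east_1"
    TABLE = "amazon_security_lake_table_us_east_1_cloud_trail_mgmt"
    OUTPUT = "s3://my-athena-results-bucket/security-lake-poc/"

    query = athena.start_query_execution(
        QueryString=f'SELECT "time", api.operation FROM {TABLE} LIMIT 10',
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT},
    )
    qid = query["QueryExecutionId"]

    # Poll until the query finishes, then print the first page of results.
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])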

Next steps

Next steps will largely depend on whether you decide to move forward with Security Lake.

  • Determine if you have the approval and budget to use Security Lake.
  • Expand to other data sources that can help you provide more security outcomes for your business.
  • Configure S3 lifecycle policies to efficiently store logs long term based on your requirements (see the sketch after this list).
  • Let other teams know that they can subscribe to Security Lake to use the log data for their own purposes. For example, a development team that gets access to CloudTrail through Security Lake can analyze the logs to understand the permissions needed for an application.
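
A lifecycle rule of that kind can be sketched with boto3 as follows; the bucket name follows the Security Lake naming pattern but is a placeholder, and the timings are examples to adapt to your own retention requirements:

    import boto3

    s3 = boto3.client("s3")

    # Placeholder: Security Lake buckets live in the delegated
    # administrator account.
    BUCKET = "aws-security-data-lake-us-east-1-example"

    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [{
                "ID": "tier-then-expire-security-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # Example timings only: tier to Glacier after 90 days,
                # delete after one year.
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }]
        },
    )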

Conclusion

In this blog post, we showed you how to plan and implement a Security Lake POC. You learned how to do so through phases, including defining success criteria, configuring Security Lake, and validating that Security Lake meets your business needs.

This guide will help you, as a customer, run a successful proof of value (POV) with Security Lake. It walks you through assessing the value of Security Lake and the factors to consider when deciding whether to implement its current features.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Anna McAbee

Anna is a Security Specialist Solutions Architect focused on threat detection and incident response at AWS. Before AWS, she worked as an AWS customer in financial services on both the offensive and defensive sides of security. Outside of work, Anna enjoys cheering on the Florida Gators football team, wine tasting, and traveling the world.

Marshall Jones

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Marc Luescher

Marc is a Senior Solutions Architect helping enterprise customers be successful, focusing strongly on threat detection, incident response, and data protection. His background is in networking, security, and observability. Previously, he worked in technical architecture and security hands-on positions within the healthcare sector as an AWS customer. Outside of work, Marc enjoys his 3 dogs, 4 cats, and 20+ chickens.

Expanded Coverage and New Attack Path Visualizations Help Security Teams Prioritize Cloud Risk and Understand Blast Radius

Post Syndicated from Pauline Logan original https://blog.rapid7.com/2023/12/19/expanded-coverage-and-new-attack-path-visualizations-help-security-teams-prioritize-cloud-risk-and-understand-blast-radius/


Cloud environments differ in a number of ways from more traditional on-prem environments. From the immense scale and compounding complexity to the rate of change, the cloud creates a host of challenges for security teams to navigate and grapple with. By definition, anything running in the cloud has the potential to be made publicly available, either directly or indirectly. The interconnected nature of these environments is such that when one account, resource, or service is compromised, it can be fairly easy for a bad actor to move laterally across your environment and/or grant themselves the permissions to wreak havoc. These avenues for lateral movement or privilege escalation are often referred to as attack paths.

Having a solution in place that can clearly and dynamically detect and depict these attack paths is critical to helping teams not only understand where risks exist across their environment but arguably more importantly how they are most likely to be exploited and what that means for an organization – particularly with respect to protecting high-value assets.

Detect and Remediate Attack Paths With InsightCloudSec

Attack Path Analysis in InsightCloudSec enables Rapid7 customers to see their cloud environments from the perspective of an attacker. It visualizes the various ways an attacker could gain access, move between resources, and compromise the cloud environment. Attack Paths are high-fidelity signals in our risk prioritization model, which focuses on identifying toxic combinations that lead to real business impact.

Since Rapid7 initially launched Attack Path Analysis, we’ve continued to roll out incremental updates to the feature, primarily in the form of expanded attack path coverage across each of the major cloud service providers (CSPs). In our most recent InsightCloudSec release (12.12.2023), we’ve continued this momentum, announcing additional attack paths as well as some exciting updates around how we visualize risk across paths and the potential blast radius should a compromised resource within an existing attack path be exploited. In this post, we’ll dive into an example of one of our recently added attack paths for Microsoft Azure along with a bit more detail about the new risk visualizations. So with that, let’s jump right in.

Expanding Coverage With New Attack Paths

First, on the coverage side of things, we’ve added seven new paths in recent releases across AWS and Azure. Our AWS coverage was extended to support ECS across all of our AWS Attack Paths, and we also introduced three new Azure Attack Paths. In the interest of brevity, we won’t cover each of them, but we do have an ever-developing list of supported attack paths you can access here on the docs page. As an example, however, let’s dive into one of the new paths we released for Azure, which identifies the presence of attack paths targeting publicly exposed instances that also have attached privileged roles.


This type of attack path is concerning for a couple of reasons: First and foremost, an attacker could use the publicly exposed instance as an inroad to your cloud environment due to the fact that it’s publicly accessible, gaining access to sensitive data on the resource itself or accessing data the resource in question has indirect access to. Secondly, since the attached role is capable of escalating privileges, an attacker could then leverage the resource to assign themselves admin permissions which could in turn be used to open up new attack vectors.

Because this could have wide-reaching ramifications should it be exploited, we’ve assigned this a critical severity. That means we’ll want to work to resolve this as fast as possible any time this path shows up across our cloud environments, maybe even automating the process of closing down public access or adjusting the resource permissions to limit the potential for lateral movement or privilege escalation. Speaking of paths with widespread impact should they be exploited, that brings me to some other exciting updates we’ve rolled out to Attack Path Analysis.

Clearly Visualizing Risk Severity and Potential Blast Radius

As I mentioned earlier, along with expanded coverage, we’ve also updated Attack Path Analysis to make it clearer for users where your riskiest assets lie across a given attack path and to clearly show the potential blast radius of an exploitation.

To make it easier to understand the overall riskiness of an attack path and where its choke points are, we’ve added a new security view that visualizes the risk of each resource along a given path. This new view makes it very easy for security teams to immediately understand which specific resources present the highest risk and where they should be focusing their remediation efforts to block potential attackers in their tracks.


In addition to this new security-focused view, we’ve also extended Attack Path Analysis to show a potential blast radius by displaying a graph-based topology map that helps clearly outline the various ways resources across your environment – and specifically within an attack path – interconnect with one another.

This topology map not only makes it easier for security teams to quickly home in on what needs their attention first during an investigation, but also shows where a bad actor could move next. Additionally, this view helps security teams and leaders communicate risk across the organization, particularly when engaging with non-technical stakeholders who find it difficult to understand why exactly a compromised resource presents a potentially larger risk to the business.

We will continue to expand on our existing Attack Path Analysis capabilities in the future, so be sure to keep an eye out for additional paths being added in the coming months as well as a continued effort to enable security teams to more quickly analyze cloud risk with the context needed to effectively detect, communicate, prioritize, and respond.

NIST SP 800-53 Rev. 5 Updates: What You Need to Know About The Most Recent Patch Release (5.1.1)

Post Syndicated from Lara Sunday original https://blog.rapid7.com/2023/12/14/nist-sp-800-53-rev-5-updates-what-you-need-to-know-about-the-most-recent-patch-release-5-1-1/


On November 6th, the National Institute of Standards and Technology (NIST) issued an update to SP 800-53, a NIST-curated catalog of controls that organizations can implement to effectively manage security and privacy risk. In this blog we’ll cover the new and updated controls within patch release 5.1.1, as well as review how Rapid7 InsightCloudSec helps security teams implement and continuously enforce them across their organizations. Let’s dive right in.

Updates to NIST SP 800-53 Compliance Pack: What You Need to Know About Revision 5.1.1

Unlike the large revision that occurred a few years back when Revision 5 was released – which brought with it nearly 270 control updates in aggregate – this update doesn’t have quite such far-reaching implications. That said, there are a few changes to be aware of. Release 5.1.1 added one new control with three supporting control enhancements, along with some minor grammar and formatting changes to other existing controls. Organizations are not mandated to implement the new control and have the option to defer implementation until SP 800-53 Release 6.0.0 is issued; however, there is no defined timeline for when 6.0.0 will be released.

While there is no mandate at this time, the team here at Rapid7 generally advises our customers to adopt new patch releases immediately to ensure alignment with the most up-to-date best practices and that your team is covered for emerging attack vectors. In this case, we recommend adopting 5.1.1 primarily to ensure you’re effectively implementing encryption and authentication controls across your environment.

The newly-added control is Identification and Authentication (or IA-13) which states that organizations should “Employ identity providers and authorization servers to manage user, device, and non-person entity (NPE) identities, attributes, and access rights supporting authentication and authorization decisions.”

IA-13 has been broken down by NIST into three supporting control enhancements:

  • IA-13 (01) – Cryptographic keys that protect access tokens are generated, managed, and protected from disclosure and misuse.
  • IA-13 (02) – The source and integrity of identity assertions and access tokens are verified before granting access to system and information resources.
  • IA-13 (03) – Assertions and access tokens are continuously refreshed, time-restricted, audience-restricted, and revoked when necessary and after a defined period of non-use.

So, what does all that mean? Put simply, organizations should implement controls to effectively track and manage user and system entity permissions to ensure only authorized users are permitted access to corporate systems or data. This includes the proper use of encryption, hygiene and lifecycle management for access tokens.
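
To make the (02) and (03) enhancements concrete, here is a hedged sketch using the PyJWT library; the identity provider endpoints and audience are hypothetical. It verifies an access token’s source and integrity, and enforces its time and audience restrictions, before the token is honored:

    import jwt  # PyJWT
    from jwt import PyJWKClient

    # Hypothetical identity provider and audience values.
    JWKS_URL = "https://idp.example.com/.well-known/jwks.json"
    EXPECTED_ISSUER = "https://idp.example.com/"
    EXPECTED_AUDIENCE = "https://api.example.com/"

    def verify_access_token(token: str) -> dict:
        # Fetch the signing key matching the token's key ID from the IdP,
        # verifying the token's source (IA-13 (02)).
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)

        # decode() raises an exception for a bad signature, wrong issuer or
        # audience, or an expired 'exp' claim (IA-13 (02) and (03)).
        return jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            issuer=EXPECTED_ISSUER,
            audience=EXPECTED_AUDIENCE,
        )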

This is, of course, a much needed and community-requested addition that speaks to the growing awareness and criticality of implementing checks and guardrails to mitigate identity-related risk. A key component of this equation is implementing a solution that can help you detect areas of your cloud environment that haven’t fully implemented these controls. This can be a particularly challenging thing to manage in a cloud environment, given its democratized nature, the sheer volume of identities and permissions that need to be managed and the ease with which improper allocation of permissions and privileges can occur.

Implement and Continuously Enforce NIST SP 800-53 Rev. 5 with InsightCloudSec

InsightCloudSec allows security teams to establish and continuously measure compliance against organizational policies, whether they’re based on service provider best practices, a common industry framework, or a custom pack tailored to specific business needs.

A compliance pack within InsightCloudSec is a set of checks that can be used to continuously assess your cloud environments for compliance with a given regulatory framework, or industry or provider best practices. The platform comes out of the box with 40+ compliance packs, including a dedicated pack for NIST SP 800-53 Rev. 5.1.1, which now provides an additional 14 insights that align to the newly-added IA-13.

The dedicated pack provides 367 Insights checking against 128 NIST SP 800-53 Rev. 5.1.1 requirements that assess your multi-cloud environment for compliance with the controls outlined by NIST. With extensive support for various resource types across all major cloud service providers (CSPs), security teams can confidently implement and continuously enforce compliance with SP 800-53 Rev. 5.1.1.


InsightCloudSec continuously assesses your entire multi-cloud environment for compliance with one or more compliance packs and detects noncompliant resources within minutes after they are created or an unapproved change is made. If you so choose, you can make use of the platform’s native, no-code automation to contact the resource owner, or even remediate the issue—either via deletion or by adjusting the configuration or permissions—without any human intervention.

For more information about how to use InsightCloudSec to implement and enforce compliance standards like those outlined in NIST SP 800-53 Rev. 5.1.1, be sure to check out the docs page! For more on our cloud identity and access management capabilities, we’ve got some additional information on that here.

Rapid7 Takes Next Step in AI Innovation with New AI-Powered Threat Detections

Post Syndicated from Laura Ellis original https://blog.rapid7.com/2023/11/29/rapid7-takes-next-step-in-ai-innovation-with-new-ai-powered-threat-detections/


Digital transformation has created immense opportunity to generate new revenue streams, better engage with customers and drive operational efficiency. A decades-long transition to cloud as the de-facto delivery model of choice has delivered undeniable value to the business landscape. But any change in operating model brings new challenges too.

The speed, scale, and complexity of modern IT environments result in security teams being tasked with analyzing mountains of data to keep pace with the ever-expanding threat landscape. This dynamic puts security analysts on their heels, constantly reacting to incoming threat signals from tools that weren’t purpose-built for hybrid environments, creating coverage gaps and a need to swivel-chair between a multitude of point solutions. Making matters worse? Attackers have increasingly looked to weaponize AI technologies to launch sophisticated attacks, benefiting from increased scale, easy access to AI-generated malware packages, and more effective social engineering and phishing using generative AI.

To combat these challenges, we need to equip our security teams with modern solutions leveraging AI to cut through the noise and boost the signals that matter. Our AI algorithms alleviate alert and action fatigue by delivering visibility across your IT environment and intelligently prioritizing the most important risk signals.

Rapid7 Has Been an AI Innovator for Decades

There has been a groundswell of development and corresponding interest in generative AI, particularly in the last few years as mainstream adoption of large language models (LLMs) grows. One model in particular, OpenAI’s ChatGPT, has brought AI to the forefront of people’s minds. This buzz has resulted in a number of vendors in the security space launching their own intelligent assistants and working to incorporate AI/ML into their respective solutions to keep pace.

From our perspective, this is great news and a huge step forward in the data & AI space. Rapid7 is also accelerating investment, but we’re certainly not starting from scratch. In fact, Rapid7 was a pioneer in AI development for security use cases, dating all the way back to our earliest days with our VM Expert System in the early 2000s.


Built on decades of risk analysis and continuously trained by our expert SOC team, Rapid7 AI enables your team to focus on what matters most by proactively shrinking your attack surface and intelligently re-balancing the scales between signal and noise.

With visibility across your hybrid attack surface, the Insight Platform enables proactive prevention, leveraging a proprietary AI-based detection engine to spot threats faster than ever and automatically prioritize the signals that matter most based on likelihood of exploitation and potential business impact. Based on learnings from your own environment and security operations over time, the platform will intelligently recommend updates to detection rule settings in an effort to reduce excess noise and eliminate false positives.

By integrating our AI capabilities into the Rapid7 platform, customers benefit from:

  • World-class threat efficacy, with AI-driven detection of anomalous activity. With a vast amount of legitimate activity occurring across customer environments, our AI algorithms validate whether activity is actually malicious, allowing teams to spot unknown threats faster than ever.
  • Help to cut through the noise, by identifying the signals that matter most. Our AI algorithms automatically prioritize risks and threats, intelligently suppressing benign alerts to eliminate the noise so analysts can focus on what matters most.
  • The confidence that they’re taking action on AI-generated insights they can trust, built on decades of risk and threat analysis and trained by a team of recognized innovators in AI-driven security.

Recent Innovations in AI-driven Threat Detection

We’ve recently announced two new AI/ML-powered threat detection capabilities aimed at enabling teams to detect unknown threats across a customer’s environment faster than ever before without introducing excess noise.

  • Cloud Anomaly Detection
    Cloud Anomaly Detection is an AI-powered, agentless detection capability designed to detect and prioritize anomalous activity within an organization’s own cloud environment. The proprietary AI engine goes beyond simply detecting suspicious behavior; it automatically suppresses benign signals to reduce noise, eliminate false positives, and enable teams to focus on investigating highly-probable active threats. For more information on Cloud Anomaly Detection, check out the launch blog here.
  • Intelligent Kerberoasting Detection
    We’ve expanded existing AI-driven detections for attack types such as data exfiltration, phishing and credential theft to include intelligent detection, validation and prioritization of Kerberoasting attacks. The platform goes beyond traditional tactics for detecting Kerberoasting by applying a deep understanding of typical user activity across the organization. With this context, SOC teams can respond with confidence knowing the signals they are receiving are actually indicative of a Kerberoasting attack.

Rapid7 continues to explore and invest in ways we can leverage AI/ML to better-equip our customers to defend their organizations against the ever-expanding threat landscape. Keep an eye out in the near-future for additional innovations to come out in this space.

For now, be sure to stop by the Rapid7 booth (#1270) at AWS re:Invent, where we’ll be showcasing Cloud Anomaly Detection; come talk to us about how your team is thinking about utilizing AI.

Updates to Layered Context Enable Teams to Quickly Understand Which Risk Signals Are Most Pressing

Post Syndicated from Pauline Logan original https://blog.rapid7.com/2023/11/28/updates-to-layered-context-enable-teams-to-quickly-understand-which-risk-signals-are-most-pressing/


Layered Context introduced a consolidated view of all the security risks InsightCloudSec collects from the various layers of a cloud environment. This enabled our customers to go from visibility into individual security risks on a resource to understanding all of the risks that impact that resource and, in turn, its overall risk.

For example: let’s take a cloud resource that has a port left open to the public.

With this level of detail, it is pretty challenging to identify the risk level, because we don’t know enough about the resource in question, or even whether it was supposed to be open to the public. It’s not that this isn’t risky; we just need to know more to evaluate just how risky it is. As we add more context, we start to get a clearer picture: the environment the resource is running in, whether it is connected to a business-critical application, whether it has any known vulnerabilities, whether there are identities with elevated permissions associated with the resource, and so on.


By layering together all of this context, customers are able to effectively understand the actual risk associated with each and every one of their resources – in real-time. This is of course helpful information to have in one consolidated view, but even still it can be difficult to sift through potentially thousands of resources and prioritize the work that needs to be done to secure each one. To that end, we are excited to introduce a new risk score in Layered Context, which analyzes all the signals and context we know about a given cloud resource and automatically assigns a score and a severity, making it easy for our customers to understand the riskiest resources they should focus on.

Prioritizing Risk By Focusing on Toxic Combinations

Much like Layered Context itself, the new risk score combines a variety of risk signals, assigning a higher risk score to resources that suffer from toxic combinations, or multiple risk vectors that compound to present an increased likelihood or impact of compromise.

The risk score takes into account:

  • Business Criticality, with an understanding of what applications the resource is associated with such as a crown-jewel or revenue generating app
  • Public Accessibility, both from a network perspective as well as via user permissions (more on that in a second)
  • Potential Attack Paths, to understand how a bad actor could move laterally across your inter-connected environment
  • Identity-related risk, including excessive and/or unused permissions and privileges
  • Misconfigurations, including whether or not the resource is in compliance with organizational standards
  • Threats to factor in any malicious behavior that has been detected
  • And of course, Vulnerabilities, using Rapid7’s Active Risk model which consumes data on exploitability and active exploitation in the wild

By identifying these toxic combinations, we can ensure the riskiest resources are given the highest priority. Each resource is assigned a score and a severity, making it easy for our customers to see where the riskiest resources exist in their environment and where to focus.
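
The scoring model itself is proprietary, but the intuition behind weighting toxic combinations can be sketched in a few lines of Python. The signals, weights, and multipliers below are purely illustrative, not InsightCloudSec’s actual model:

    # Illustrative only: co-occurring signals multiply rather than merely add.
    SIGNAL_WEIGHTS = {
        "business_critical": 20,
        "publicly_accessible": 25,
        "on_attack_path": 20,
        "excessive_permissions": 15,
        "misconfigured": 10,
        "active_threat": 25,
        "exploitable_vulnerability": 20,
    }

    def risk_score(signals: set) -> int:
        score = sum(SIGNAL_WEIGHTS[s] for s in signals)
        # A public resource with an exploitable vulnerability is far riskier
        # than either signal alone, so the combination is amplified.
        if {"publicly_accessible", "exploitable_vulnerability"} <= signals:
            score *= 1.5
        if {"publicly_accessible", "excessive_permissions"} <= signals:
            score *= 1.25
        return min(int(score), 100)

    # A public, exploitable, business-critical resource scores as critical.
    print(risk_score({"business_critical", "publicly_accessible",
                      "exploitable_vulnerability"}))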

A Clear Understanding of How We Calculate Risk

Alongside our risk score, we are introducing a new view that breaks down all of the reasons why a resource has been scored the way it has. It gives an overview of the most important information our customers need to know and clearly summarizes the factors that influenced the risk scoring, reducing the time required to understand why a resource is risky so that security teams can focus on remediating the risks.


A Bit More on How We Determine Public Accessibility

As mentioned previously, the basis of much of our risk calculation in cloud resources stems from a simple question: “is this resource publicly accessible?” This is a critical detail in determining relative risk, but can be very difficult to ascertain given the complex and ephemeral nature of cloud environments. To address this, we’ve invested significant time and effort to ensure we’re assessing public accessibility as accurately as possible but also explaining why we’ve determined it that way, so it’s much easier to take remediation action. This determination can easily be viewed on a per resource basis from the Layered Context page.

We have lots of exciting releases coming up in the next few months. Alongside risk scoring, we are also extending our Attack Path Analysis feature to show the blast radius of an attack with improved topology visualizations. This will give our customers visibility not only into how an attacker could exploit a given resource but also into the potential for lateral movement between interconnected resources. Additionally, we’ll be updating the way we validate and show proof of public accessibility. Should a resource be publicly accessible, you will be able to easily view the proof details, which will show exactly which combination of configurations is resulting in the resource being publicly accessible.

The new risk scoring capabilities in Layered Context will be on display at AWS re:Invent next week. Be sure to stop by booth #1270 to see them in action!

How to use the BatchGetSecretsValue API to improve your client-side applications with AWS Secrets Manager

Post Syndicated from Brendan Paul original https://aws.amazon.com/blogs/security/how-to-use-the-batchgetsecretsvalue-api-to-improve-your-client-side-applications-with-aws-secrets-manager/

AWS Secrets Manager is a service that helps you manage, retrieve, and rotate database credentials, application credentials, OAuth tokens, API keys, and other secrets throughout their lifecycles. You can use Secrets Manager to help remove hard-coded credentials in application source code. Storing the credentials in Secrets Manager helps avoid unintended or inadvertent access by anyone who can inspect your application’s source code, configuration, or components. You can replace hard-coded credentials with a runtime call to the Secrets Manager service to retrieve credentials dynamically when you need them.

In this blog post, we introduce a new Secrets Manager API call, BatchGetSecretValue, and walk you through how you can use it to retrieve multiple Secrets Manager secrets.

New API — BatchGetSecretValue

Previously, if you had an application that used Secrets Manager and needed to retrieve multiple secrets, you had to write custom code to first identify the list of needed secrets by making a ListSecrets call, and then call GetSecretValue on each individual secret. Now, you don’t need to run ListSecrets and loop. The new BatchGetSecretValue API reduces code complexity when retrieving secrets, reduces latency by running bulk retrievals, and reduces the risk of reaching Secrets Manager service quotas.
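
To illustrate the difference, here is a hedged before-and-after sketch in boto3; the tag filter values are placeholders, and larger result sets are paged via NextToken:

    import boto3

    client = boto3.client("secretsmanager")
    filters = [{"Key": "tag-key", "Values": ["app"]},
               {"Key": "tag-value", "Values": ["app1"]}]

    # Before: list the matching secrets, then fetch each one individually.
    secrets = {}
    paginator = client.get_paginator("list_secrets")
    for page in paginator.paginate(Filters=filters):
        for secret in page["SecretList"]:
            value = client.get_secret_value(SecretId=secret["ARN"])
            secrets[value["Name"]] = value["SecretString"]

    # After: one bulk call performs the listing and retrieval together.
    response = client.batch_get_secret_value(Filters=filters)
    secrets = {s["Name"]: s["SecretString"] for s in response["SecretValues"]}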

Security considerations

Though you can use this feature to retrieve multiple secrets in one API call, the access controls for Secrets Manager secrets remain unchanged. This means AWS Identity and Access Management (IAM) principals need the same permissions as if they were to retrieve each of the secrets individually. If secrets are retrieved using filters, principals must have both permissions for list-secrets and get-secret-value on secrets that are applicable. This helps protect secret metadata from inadvertently being exposed. Resource policies on secrets serve as another access control mechanism, and AWS principals must be explicitly granted permissions to access individual secrets if they’re accessing secrets from a different AWS account (see Cross-account access for more information). Later in this post, we provide some examples of how you can restrict permissions of this API call through an IAM policy or a resource policy.

Solution overview

In the following sections, you will configure an AWS Lambda function to use the BatchGetSecretValue API to retrieve multiple secrets at once. You will also implement attribute-based access control (ABAC) for Secrets Manager secrets and demonstrate the access control mechanisms of Secrets Manager. In following along with this example, you will incur costs for the Secrets Manager secrets that you create and the Lambda function invocations that are made. See the Secrets Manager Pricing and Lambda Pricing pages for more details.

Prerequisites

To follow along with this walk-through, you need:

  1. Five resources that require an application secret to interact with them, such as databases or third-party APIs.
  2. Access to an IAM principal that can:
    • Create Secrets Manager secrets through the AWS Command Line Interface (AWS CLI) or AWS Management Console.
    • Create an IAM role to be used as a Lambda execution role.
    • Create a Lambda function.

Step 1: Create secrets

First, create multiple secrets with the same resource tag key-value pair using the AWS CLI. The resource tag will be used for ABAC. These secrets might look different depending on the resources that you decide to use in your environment. You can also manually create these secrets in the Secrets Manager console if you prefer.

Run the following commands in the AWS CLI, replacing the secret-string values with the credentials of the resources that you will be accessing:

  1.  
    aws secretsmanager create-secret --name MyTestSecret1 --description "My first test secret created with the CLI for resource 1." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-1\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
  2.  
    aws secretsmanager create-secret --name MyTestSecret2 --description "My second test secret created with the CLI for resource 2." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-2\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
  3.  
    aws secretsmanager create-secret --name MyTestSecret3 --description "My third test secret created with the CLI for resource 3." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-3\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
  4.  
    aws secretsmanager create-secret --name MyTestSecret4 --description "My fourth test secret created with the CLI for resource 4." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-4\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
  5.  
    aws secretsmanager create-secret --name MyTestSecret5 --description "My fifth test secret created with the CLI for resource 5." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-5\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app1\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"

Next, create a secret with a different resource tag value for the app key, but the same environment key-value pair. This will allow you to demonstrate that the BatchGetSecretValue call will fail when an IAM principal doesn’t have permissions to both list and retrieve all of the secrets that match a given filter.

Create a secret with a different tag, replacing the secret-string values with credentials of the resources that you will be accessing.

  1.  
    aws secretsmanager create-secret --name MyTestSecret6 --description "My test secret created with the CLI." --secret-string "{\"user\":\"username\",\"password\":\"EXAMPLE-PASSWORD-6\"}" --tags "[{\"Key\":\"app\",\"Value\":\"app2\"},{\"Key\":\"environment\",\"Value\":\"production\"}]"
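Before moving on, you can optionally confirm that the six secrets and their tags were created as expected. A minimal boto3 sketch that lists every secret carrying the app tag key used above:

import boto3

client = boto3.client("secretsmanager")

# Page through all secrets that have the "app" tag key and print their tags
paginator = client.get_paginator("list_secrets")
for page in paginator.paginate(Filters=[{"Key": "tag-key", "Values": ["app"]}]):
    for secret in page["SecretList"]:
        tags = {tag["Key"]: tag["Value"] for tag in secret.get("Tags", [])}
        print(secret["Name"], tags)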

Step 2: Create an execution role for your Lambda function

In this example, create a Lambda execution role that only has permissions to retrieve secrets that are tagged with the app:app1 resource tag.

Create the policy to attach to the role

  1. Navigate to the IAM console.
  2. Select Policies from the navigation pane.
  3. Choose Create policy in the top right corner of the console.
  4. In Specify Permissions, select JSON to switch to the JSON editor view.
  5. Copy and paste the following policy into the JSON text editor.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Statement1",
                "Effect": "Allow",
                "Action": [
                    "secretsmanager:ListSecretVersionIds",
                    "secretsmanager:GetSecretValue",
                    "secretsmanager:GetResourcePolicy",
                    "secretsmanager:DescribeSecret"
                ],
                "Resource": [
                    "*"
                ],
                "Condition": {
                    "StringEquals": {
                        "aws:ResourceTag/app": [
                            "${aws:PrincipalTag/app}"
                        ]
                    }
                }
            },
            {
                "Sid": "Statement2",
                "Effect": "Allow",
                "Action": [
                    "secretsmanager:ListSecrets"
                ],
                "Resource": ["*"]
            }
        ]
    }

  6. Choose Next.
  7. Enter LambdaABACPolicy for the name.
  8. Choose Create policy.

Create the IAM role and attach the policy

  1. Select Roles from the navigation pane.
  2. Choose Create role.
  3. Under Select Trusted Identity, leave AWS Service selected.
  4. Select the dropdown menu under Service or use case and select Lambda.
  5. Choose Next.
  6. Select the checkbox next to the LambdaABACPolicy policy you just created and choose Next.
  7. Enter a name for the role.
  8. Select Add tags and enter app for the tag key and app1 for the tag value on the role.
  9. Choose Create Role.
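
If you’d rather script these console steps, a rough boto3 equivalent is sketched below; the role name and account ID are placeholders:

import json

import boto3

iam = boto3.client("iam")

# Trust policy that lets Lambda assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="LambdaBatchSecretsRole",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Tags=[{"Key": "app", "Value": "app1"}],  # the principal tag used for ABAC
)

# Attach the LambdaABACPolicy created above (the account ID is a placeholder)
iam.attach_role_policy(
    RoleName="LambdaBatchSecretsRole",
    PolicyArn="arn:aws:iam::123456789012:policy/LambdaABACPolicy",
)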

Step 3: Create a Lambda function to access secrets

  1. Navigate to the Lambda console.
  2. Choose Create Function.
  3. Enter a name for your function.
  4. Select the Python 3.10 runtime.
  5. Expand Change default execution role, select Use an existing role, and choose the execution role you just created.
  6. Choose Create Function.
    Figure 1: Create a Lambda function to access secrets

  7. In the Code tab, copy and paste the following code:
    import json

    import boto3

    session = boto3.session.Session()
    # Create a Secrets Manager client
    client = session.client(service_name='secretsmanager')


    def lambda_handler(event, context):
        # Retrieve every secret that matches the tag key and value passed in the event
        application_secrets = client.batch_get_secret_value(Filters=[
            {
                'Key': 'tag-key',
                'Values': [event["TagKey"]]
            },
            {
                'Key': 'tag-value',
                'Values': [event["TagValue"]]
            }
        ])

        ### RESOURCE 1 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 1")
            resource_1_secret = application_secrets["SecretValues"][0]
            ## IMPLEMENT RESOURCE CONNECTION HERE

            print("SUCCESSFULLY CONNECTED TO RESOURCE 1")

        except Exception as e:
            print("Failed to connect to resource 1")
            return e

        ### RESOURCE 2 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 2")
            resource_2_secret = application_secrets["SecretValues"][1]
            ## IMPLEMENT RESOURCE CONNECTION HERE

            print("SUCCESSFULLY CONNECTED TO RESOURCE 2")

        except Exception as e:
            print("Failed to connect to resource 2")
            return e

        ### RESOURCE 3 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 3")
            resource_3_secret = application_secrets["SecretValues"][2]
            ## IMPLEMENT RESOURCE CONNECTION HERE

            print("SUCCESSFULLY CONNECTED TO RESOURCE 3")

        except Exception as e:
            print("Failed to connect to resource 3")
            return e

        ### RESOURCE 4 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 4")
            resource_4_secret = application_secrets["SecretValues"][3]
            ## IMPLEMENT RESOURCE CONNECTION HERE

            print("SUCCESSFULLY CONNECTED TO RESOURCE 4")

        except Exception as e:
            print("Failed to connect to resource 4")
            return e

        ### RESOURCE 5 CONNECTION ###
        try:
            print("TESTING CONNECTION TO RESOURCE 5")
            resource_5_secret = application_secrets["SecretValues"][4]
            ## IMPLEMENT RESOURCE CONNECTION HERE

            print("SUCCESSFULLY CONNECTED TO RESOURCE 5")

        except Exception as e:
            print("Failed to connect to resource 5")
            return e

        return {
            'statusCode': 200,
            'body': json.dumps('Successfully Completed all Connections!')
        }

  8. You need to configure connections to the resources that you’re using for this example. The code in this example doesn’t create database or resource connections, to keep things flexible for readers. Add code to connect to your resources after the “## IMPLEMENT RESOURCE CONNECTION HERE” comments; one hedged example of such a connection is sketched after this list.
  9. Choose Deploy.
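
As one illustration of what could replace an ## IMPLEMENT RESOURCE CONNECTION HERE comment, here is a hedged sketch that assumes resource 1 is a PostgreSQL database and that the psycopg2 driver is packaged with your function (it isn’t included in the Lambda runtime by default); the hostname and database name are hypothetical:

import json

import psycopg2  # assumption: bundled in a layer or the deployment package


def connect_resource_1(resource_1_secret):
    # Each entry in SecretValues mirrors a GetSecretValue response (ARN, Name, SecretString, ...)
    credentials = json.loads(resource_1_secret["SecretString"])
    return psycopg2.connect(
        host="db1.example.internal",  # hypothetical endpoint
        dbname="app1",                # hypothetical database name
        user=credentials["user"],
        password=credentials["password"],
        connect_timeout=5,
    )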

Step 4: Configure the test event to initiate your Lambda function

  1. Above the code source, choose Test and then Configure test event.
  2. In the Event JSON, replace the JSON with the following:
    {
        "TagKey": "app",
        "TagValue": "app1"
    }

  3. Enter a Name for your event.
  4. Choose Save.

Step 5: Invoke the Lambda function

  1. Invoke the Lambda by choosing Test. If you prefer to invoke the function from code, see the sketch below.
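
A rough boto3 equivalent of choosing Test, using the same event payload (the function name is a placeholder):

import json

import boto3

lambda_client = boto3.client("lambda")

# Synchronously invoke the function with the same payload as the console test event
response = lambda_client.invoke(
    FunctionName="batch-get-secrets-demo",  # hypothetical function name
    Payload=json.dumps({"TagKey": "app", "TagValue": "app1"}).encode("utf-8"),
)
print(json.loads(response["Payload"].read()))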

Step 6: Review the function output

  1. Review the response and function logs to see the new feature in action. Your function logs should show successful connections to the five resources that you specified earlier, as shown in Figure 2.
    Figure 2: Review the function output

Step 7: Test a different input to validate IAM controls

  1. In the Event JSON window, replace the JSON with the following:
    {
        "TagKey": "environment",
        "TagValue": "production"
    }

  2. You should now see an error message from Secrets Manager in the logs similar to the following:
    User: arn:aws:iam::123456789012:user/JohnDoe is not authorized to perform: 
    secretsmanager:GetSecretValue because no resource-based policy allows the secretsmanager:GetSecretValue action

As you can see, you were able to retrieve the appropriate secrets based on the resource tag. You will also note that when the Lambda function tried to retrieve secrets for a resource tag that it didn’t have access to, Secrets Manager denied the request.
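Filters aren’t the only way to drive the API. If you already know exactly which secrets you need, BatchGetSecretValue also accepts a list of secret names or ARNs, and it reports per-secret access failures in an Errors list rather than failing the whole call. A minimal sketch using the secrets created in Step 1:

import boto3

client = boto3.client("secretsmanager")

# Retrieve specific secrets by name instead of by tag filter
response = client.batch_get_secret_value(
    SecretIdList=["MyTestSecret1", "MyTestSecret2", "MyTestSecret3"]
)

for secret in response["SecretValues"]:
    print(secret["Name"])

# Secrets the caller couldn't access show up here instead of raising an exception
for error in response.get("Errors", []):
    print(error["SecretId"], error["ErrorCode"])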

How to restrict use of BatchGetSecretValue for certain IAM principals

When dealing with sensitive resources such as secrets, it’s recommended that you adhere to the principle of least privilege. Service control policies, IAM policies, and resource policies can help you do this. Below, we discuss three policies that illustrate this:

Policy 1: IAM ABAC policy for Secrets Manager

This policy denies requests to get a secret if the principal doesn’t share the same project tag as the secret that the principal is trying to retrieve. Note that the effectiveness of this policy is dependent on correctly applied resource tags and principal tags. If you want to take a deeper dive into ABAC with Secrets Manager, see Scale your authorization needs for Secrets Manager using ABAC with IAM Identity Center.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Deny",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:BatchGetSecretValue"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceTag/project": [
            "${aws:PrincipalTag/project}"
          ]
        }
      }
    }
  ]
}

Policy 2: Deny BatchGetSecretValue calls unless from a privileged role

This policy example denies use of the BatchGetSecretValue API on the specified secret unless the call is made by a privileged workload role.

"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "Statement1",
			"Effect": "Deny",
			"Action": [
				"secretsmanager:BatchGetSecretValue",
			],
			"Resource": [
				"arn:aws:secretsmanager:us-west-2:12345678910:secret:testsecret"
			],
			"Condition": {
				"StringNotLike": {
					"aws:PrincipalArn": [
						"arn:aws:iam::123456789011:role/prod-workload-role"
					]
				}
			}
		}]
}

Policy 3: Restrict actions to specified principals

Finally, let’s take a look at an example resource policy from our data perimeters policy examples. This resource policy restricts Secrets Manager actions to the principals that are in the organization that this secret is a part of, except for AWS service accounts.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceIdentityPerimeter",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "secretsmanager:*",
      "Resource": "*",
      "Condition": {
        "StringNotEqualsIfExists": {
          "aws:PrincipalOrgID": "<my-org-id>"
        },
        "BoolIfExists": {
          "aws:PrincipalIsAWSService": "false"
        }
      }
    }
  ]
}

Conclusion

In this blog post, we introduced the BatchGetSecretValue API, which you can use to improve operational excellence and performance efficiency, and to reduce costs, when using Secrets Manager. We looked at how you can use the API call in a Lambda function to retrieve multiple secrets that have the same resource tag, and showed examples of IAM and resource policies that restrict access to this API.

To learn more about Secrets Manager, see the AWS Secrets Manager documentation or the AWS Security Blog.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Brendan Paul

Brendan is a Senior Solutions Architect at Amazon Web Services supporting media and entertainment companies. He has a passion for data protection and has been working at AWS since 2019. In 2024, he will start to pursue his Master’s Degree in Data Science at UC Berkeley. In his free time, he enjoys watching sports and running.

2023 Canadian Centre for Cyber Security Assessment Summary report available with 20 additional services

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/2023-canadian-centre-for-cyber-security-assessment-summary-report-available-with-20-additional-services/

At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of current and new AWS services and features. We are pleased to announce the availability of the 2023 Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS. With this assessment, a total of 150 AWS services and features are assessed in the Canada (Central) Region, including 20 additional AWS services and features. The assessment report is available for review and download on demand through AWS Artifact.

The full list of services in scope for the CCCS assessment, including the 20 new services and features, is available on the Services in Scope page.

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decision to use CSP services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.  

The CCCS cloud service provider information technology security assessment process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium cloud security profile (previously referred to as GC’s PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT security risk management: A lifecycle approach, Annex 3 – Security control catalogue). As of November 2023, 150 AWS services in the Canada (Central) Region have been assessed by CCCS and meet the requirements for the Medium cloud security profile. Meeting the Medium cloud security profile is required to host workloads that are classified up to and including Medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and re-assesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope for CCCS Assessment page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Naranjan Goklani

Naranjan is an Audit Lead for Canada. He has experience leading audits, attestations, certifications, and assessments across the Americas. Naranjan has more than 13 years of experience in risk management, security assurance, and performing technology audits. He previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, and utilities industries.

Manage Enterprise Risk at Scale with a Unified, Holistic Approach

Post Syndicated from KC Higgins original https://blog.rapid7.com/2023/11/16/manage-enterprise-risk-at-scale-with-a-unified-holistic-approach/

Manage Enterprise Risk at Scale with a Unified, Holistic Approach

The rapid pace of technological change and the attendant rise of cyber threats in both speed and number leave most organizations at a disadvantage.

Historically, many firms faced this challenge simply by purchasing more technology in the hopes that the latest threat protection software would keep their data safe. But those days have come to an end. Not only have budgets come under increased scrutiny, but the sheer number of tools in most environments has become a handicap as well: tools don’t always work well together, and the expertise required to manage them remains in short supply. According to some analysts, the current complexity and diversity of tech environments also hamper visibility into vulnerability risks, at least in part because data must be obtained from disparate systems or laboriously exported into spreadsheets and data analytics platforms to fine-tune and understand relevant risks.

Organizations looking for a unified perspective of risk across their cloud and on-prem environments can meet cyber threats with speed and success by prioritizing risk, eliminating repetitive manual work, maintaining complete risk visibility, and consolidating point solutions.

Not All Threats Are Created Equal

You’ll hear companies claim to stop all threats everywhere all the time, but such claims are neither true nor practical – anyone who follows the news even casually knows that the threats keep coming. The key is to understand which threats pose the largest risks and mitigate those first. Tools like Rapid7’s InsightVM analyze enterprise-wide asset and vulnerability data to identify the actions that will have the largest impact on risk reduction in a given organization. Instead of thousand-page lists of individual patches to apply, organizations can make informed, up-to-the-minute decisions on how to allocate resources for maximum risk reduction.

InsightVM also offers live dashboards that update whenever new data is discovered, allowing teams to track the attack surface and risk as they change. Dashboard views can even be customized for different technical teams or stakeholders as organizational perimeters expand into the cloud and beyond. Other approaches, such as cloud risk management, allow organizations to manage, prioritize, and act on risks within the large scale of modern multi-cloud environments and on-prem footprints by helping them understand the potential impact of a particular risk and its likelihood of exploitation.

You Can’t Protect What You Can’t See

In addition to trying to tackle every imaginable risk, maintaining maximum visibility into attack-surface risk is the only way organizations can hope to minimize security gaps while managing the many containers, cloud services, and virtual devices that are often spun up and down without direct involvement from the security team.

While InsightVM integrates directly with dynamic infrastructure to give full visibility into the risks posed by these assets, solutions like Executive Risk View provide complete visibility into hybrid-environment risk by ingesting data with purpose-built collection mechanisms – regardless of whether work is running on-premises or in the cloud.

Executive Risk View also aggregates and normalizes disparate risk assessments from on-premises and cloud environments for a unified, interactive dashboard that brings clarity to discovered vulnerabilities and the risks each represents so that security teams can prioritize remediation actions and share insights cross functionally. Insight into how vulnerabilities translate into business risk – and which of them are most likely to be targeted by attackers – means teams can quickly and effectively address the risks that pose the most significant danger.

Simplify the Stack

Prioritization, automation, and visibility are all foundational elements of unified risk protection, but if organizations rely on multiple vendors to manage them, they will continue to lose efficiencies and battle tool proliferation. Cloud Risk Complete offers all of these solutions from one comprehensive platform and single subscription model. This means organizations can secure hybrid environments from development to production; detect and address risk across endpoints, cloud workloads, and traditional infrastructure; and perform dynamic application security testing to remediate application risk – all with a single subscription.

Learn more about the ways Rapid7 can help increase your security posture.

Be Empathetic and Hug Your CISO More!

Post Syndicated from Owen Holland original https://blog.rapid7.com/2023/11/10/be-empathetic-and-hug-your-ciso-more/

Be Empathetic and Hug Your CISO More!

In the rapidly evolving landscape of cloud computing, the adoption of multi-cloud environments has become a prevailing trend. Organizations increasingly turn to multiple cloud providers to harness diverse features, prevent vendor lock-in, and optimize costs. The multi-cloud approach offers unparalleled agility, scalability, and flexibility, but it has its complexities and CISOs need your support.

In the final episode of the Cloud Security Webinar Series, Rapid7’s Chief Security Officer Jaya Baloo and other experts share their thoughts on the cloud strategies to support security leaders as they move into 2024 and beyond.

These webinars can now be viewed on-demand, giving security professionals greater insight into how to safeguard their cloud environments and set themselves up for success. A summary of the key discussion points is provided below.

Nurturing Comprehension and Collaboration

Multi-cloud environments present a complex tapestry woven with equal parts opportunity and complexity. Governance, security, and cost optimization are paramount concerns often exacerbated by the absence of centralized visibility and with the threat of misconfigurations and potential compliance issues looming in the background.

So, in the face of these challenges, collaborative unity among security teams becomes not just a nicety but a necessity. It is through the sharing of knowledge and experiences that the security community effectively grapples with these evolving challenges.

Striving for Collective Success

There are several simple strategies security teams can adopt to support a more robust defense:

  1. Centralized visibility: Embrace cloud management tools to unveil a comprehensive view of the multi-cloud landscapes. In doing so, we foster collaboration and unity. This provides a single pane of glass for security teams to gain comprehensive insights into their digital assets, compliance status, and ongoing security threats.
  2. Automation: Leveraging automation is key to efficiently managing multi-cloud landscapes. Automate asset discovery, security policy enforcement, and threat response. Automation not only streamlines these processes but also reduces the risk of human error.
  3. Security governance framework: Develop a comprehensive security governance framework that encompasses all aspects of multi-cloud security, including identity and access management, data protection, and threat detection. This framework should be flexible enough to accommodate the nuances of each cloud platform.
  4. Resource optimization: Regularly evaluate resource utilization across different cloud providers. Ensure that resources are allocated efficiently to minimize costs. Implement scaling and resource allocation strategies to adapt to changing workload requirements.
  5. Enhanced staff training: Invest in the skills and knowledge of security and IT teams, along with opportunities for cross-training and knowledge sharing.

As organizations continue to embrace multi-cloud environments, mastering the complexities of diverse cloud platforms is crucial for enhanced security, governance, and cost optimization. By gaining a deep understanding of the multi-cloud landscape, addressing key challenges head-on, and implementing efficient management strategies, security professionals can navigate the intricate web of multi-cloud and ensure seamless operations in the cloud-native era.

Cultivating Unity for a More Resilient Future

The evolving nature of cybersecurity demands organizations stand together to share experiences, strategies, and best practices. By cultivating unity and empathy across the security community and the wider business, organizations can collectively navigate the shifting threat landscape more easily.

Ultimately, uniting the cybersecurity community is not merely a virtue but an imperative. To find out more, watch the on-demand cloud security series now.

Aggregating, searching, and visualizing log data from distributed sources with Amazon Athena and Amazon QuickSight

Post Syndicated from Pratima Singh original https://aws.amazon.com/blogs/security/aggregating-searching-and-visualizing-log-data-from-distributed-sources-with-amazon-athena-and-amazon-quicksight/

Customers using Amazon Web Services (AWS) can use a range of native and third-party tools to build workloads based on their specific use cases. Logs and metrics are foundational components in building effective insights into the health of your IT environment. In a distributed and agile AWS environment, customers need a centralized and holistic solution to visualize the health and security posture of their infrastructure.

You can effectively categorize the members of the teams involved using the following roles:

  1. Executive stakeholder: Owns and operates with their support staff and has total financial and risk accountability.
  2. Data custodian: Aggregates related data sources while managing cost, access, and compliance.
  3. Operator or analyst: Uses security tooling to monitor, assess, and respond to related events such as service disruptions.

In this blog post, we focus on the data custodian role. We show you how you can visualize metrics and logs centrally with Amazon QuickSight irrespective of the service or tool generating them. We use Amazon Simple Storage Service (Amazon S3) for storage, AWS Glue for cataloguing, and Amazon Athena for querying the data and creating structured query language (SQL) views for QuickSight to consume.

Target architecture

This post guides you towards building a target architecture in line with the AWS Well-Architected Framework. The tiered and multi-account target architecture, shown in Figure 1, uses account-level isolation to separate responsibilities across the various roles identified above and makes access management more defined and specific to those roles. The workload accounts generate the telemetry around the applications and infrastructure. The data custodian account is where the data lake is deployed and collects the telemetry. The operator account is where the queries and visualizations are created.

Throughout the post, I mention AWS services that reduce the operational overhead in one or more stages of the architecture.

Figure 1: Data visualization architecture

Ingestion

Irrespective of the technology choices, applications and infrastructure configurations should generate metrics and logs that report on resource health and security. The format of the logs depends on which tool and which part of the stack is generating the logs. For example, the format of log data generated by application code can capture bespoke and additional metadata deemed useful from a workload perspective as compared to access logs generated by proxies or load balancers. For more information on types of logs and effective logging strategies, see Logging strategies for security incident response.

Amazon S3 is a scalable, highly available, durable, and secure object storage that you will use as the storage layer. To build a solution that captures events agnostic of the source, you must forward data as a stream to the S3 bucket. Based on the architecture, there are multiple tools you can use to capture and stream data into S3 buckets. Some tools support integration with S3 and directly stream data to S3. Resources like servers and virtual machines need forwarding agents such as Amazon Kinesis Agent, Amazon CloudWatch agent, or Fluent Bit.

Amazon Kinesis Data Streams provides a scalable data streaming environment. Using on-demand capacity mode eliminates the need for capacity provisioning and capacity management for streaming workloads. For log data and metric collection, you should use on-demand capacity mode, because log data generation can be unpredictable depending on the requests that are being handled by the environment. Amazon Kinesis Data Firehose can convert the format of your input data from JSON to Apache Parquet before storing the data in Amazon S3. Parquet is naturally compressed, and using Parquet native partitioning and compression allows for faster queries compared to JSON formatted objects.
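
To make the forwarding step concrete, here is a minimal sketch of an application pushing a JSON log event into a Kinesis Data Firehose delivery stream with boto3. The stream name is a placeholder, and the stream is assumed to already be configured with JSON-to-Parquet record format conversion and an S3 destination:

import json

import boto3

firehose = boto3.client("firehose")

# A single structured log event; Firehose buffers records before delivery to S3
log_event = {"source": "app-server-1", "level": "ERROR", "message": "connection refused"}

firehose.put_record(
    DeliveryStreamName="security-log-stream",  # hypothetical stream name
    Record={"Data": (json.dumps(log_event) + "\n").encode("utf-8")},
)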

Scalable data lake

Use AWS Lake Formation to build, secure, and manage the data lake to store log and metric data in S3 buckets. We recommend using tag-based access control and named resources to share the data in your data store across accounts to build visualizations. Data custodians should configure access for relevant datasets to the operators who can use Athena to perform complex queries and build compelling data visualizations with QuickSight, as shown in Figure 2. For cross-account permissions, see Use Amazon Athena and Amazon QuickSight in a cross-account environment. You can also use Amazon DataZone to build additional governance and share data at scale within your organization. Note that the data lake is different from, and separate from, the Log Archive bucket and account described in Organizing Your AWS Environment Using Multiple Accounts.

Figure 2: Account structure

Amazon Security Lake

Amazon Security Lake is a fully managed security data lake service. You can use Security Lake to automatically centralize security data from AWS environments, SaaS providers, on-premises, and third-party sources into a purpose-built data lake that’s stored in your AWS account. Using Security Lake reduces the operational effort involved in building a scalable data lake, as the service automates the configuration and orchestration for the data lake with Lake Formation. Security Lake automatically transforms logs into a standard schema, the Open Cybersecurity Schema Framework (OCSF), and parses them into a standard directory structure, which allows for faster queries. For more information, see How to visualize Amazon Security Lake findings with Amazon QuickSight.

Querying and visualization

Figure 3: Data sharing overview

After you’ve configured cross-account permissions, you can use Athena as the data source to create a dataset in QuickSight, as shown in Figure 3. You start by signing up for a QuickSight subscription. There are multiple ways to sign in to QuickSight; this post uses AWS Identity and Access Management (IAM) for access. To use QuickSight with Athena and Lake Formation, you first must authorize connections through Lake Formation. After permissions are in place, you can add datasets. You should verify that you’re using QuickSight in the same AWS Region as the Region where Lake Formation is sharing the data. You can do this by checking the Region in the QuickSight URL.

You can start with basic queries and visualizations as described in Query logs in S3 with Athena and Create a QuickSight visualization. Depending on the nature and origin of the logs and metrics that you want to query, you can use the examples published in Running SQL queries using Amazon Athena. To build custom analytics, you can create views with Athena. Views in Athena are logical tables that you can use to query a subset of data. Views help you to hide complexity and minimize maintenance when querying large tables. Use views as a source for new datasets to build specific health analytics and dashboards.
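
As a rough sketch of that pattern, the snippet below creates a view with boto3; the Glue database, table, and results bucket names are hypothetical, and the resulting view can then become the source for a QuickSight dataset:

import boto3

athena = boto3.client("athena")

# A view that narrows a large log table down to rejected VPC flow records
create_view = """
CREATE OR REPLACE VIEW vpc_flow_rejects AS
SELECT srcaddr, dstaddr, dstport, action
FROM vpc_flow_logs
WHERE action = 'REJECT'
"""

response = athena.start_query_execution(
    QueryString=create_view,
    QueryExecutionContext={"Database": "security_logs"},  # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical bucket
)
print(response["QueryExecutionId"])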

You can also use Amazon QuickSight Q to get started on your analytics journey. Powered by machine learning, Q uses natural language processing to provide insights into the datasets. After the dataset is configured, you can use Q to give you suggestions for questions to ask about the data. Q understands business language and generates results based on relevant phrases detected in the questions. For more information, see Working with Amazon QuickSight Q topics.

Conclusion

Logs and metrics offer insights into the health of your applications and infrastructure. It’s essential to build visibility into the health of your IT environment so that you can understand what good health looks like and identify outliers in your data. These outliers can be used to identify thresholds and feed into your incident response workflow to help identify security issues. This post helps you build out a scalable centralized visualization environment irrespective of the source of log and metric data.

This post is part 1 of a series that helps you dive deeper into the security analytics use case. In part 2, How to visualize Amazon Security Lake findings with Amazon QuickSight, you will learn how you can use Security Lake to reduce the operational overhead involved in building a scalable data lake and centralizing log data from SaaS providers, on-premises, AWS, and third-party sources into a purpose-built data lake. You will also learn how you can integrate Athena with Security Lake and create visualizations with QuickSight of the data and events captured by Security Lake.

Part 3, How to share security telemetry per Organizational Unit using Amazon Security Lake and AWS Lake Formation, dives deeper into how you can query security posture using AWS Security Hub findings integrated with Security Lake. You will also use the capabilities of Athena and QuickSight to visualize security posture in a distributed environment.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Pratima Singh

Pratima is a Security Specialist Solutions Architect with Amazon Web Services based out of Sydney, Australia. She is a security enthusiast who enjoys helping customers find innovative solutions to complex business challenges. Outside of work, Pratima enjoys going on long drives and spending time with her family at the beach.

Cloud Webinar Series Part 1: Commanding Cloud Strategies

Post Syndicated from Owen Holland original https://blog.rapid7.com/2023/10/17/cloud-webinar-series-part-1-commanding-cloud-strategies/

Cloud Webinar Series Part 1: Commanding Cloud Strategies

Over the past decade, cloud computing has evolved into a cornerstone of modern business operations. Its flexibility, scalability, and efficiency have reshaped industries and brought unprecedented opportunities.

However, this transformation has come with challenges—most notably those associated with cloud security. Our new cloud security webinar series will explore the dynamic landscape of cloud security, unveiling key trends, pinpointing critical challenges, and providing actionable insights tailored to security professionals.

In Commanding Cloud Strategies, the first webinar of the series, Rapid7’s Chief Security Officer Jaya Baloo and other experts will share their thoughts on the cloud challenges that security leaders face and offer insights on how to overcome them.

Please register for the first episode of our Cloud Security Series here to find out what our security experts think are the top strategies to overcome these challenges and considerations.

Armed with the knowledge and insights provided in part one, security professionals will be better equipped to safeguard their cloud environments and data assets in the modern digital landscape.

To learn more, check out the webinar abstract below.

Commanding Cloud Strategies Webinar Abstract

In the ever-evolving world of cloud security, staying ahead of the curve is paramount. Over the past ten years, several trends have emerged, shaping how organizations safeguard their digital assets.

The shift towards a shared responsibility model, greater emphasis on automation and orchestration, and a growing focus on identity and access management (IAM) are among the defining trends.

Cloud Security Challenges

  • Data Privacy and Compliance: Ensuring data protection and regulatory compliance within cloud environments is a persistent challenge. As data becomes more mobile and diverse, maintaining compliance becomes increasingly complex.
  • Evolving Threat Landscape: The threat landscape is in constant flux, with cyberattacks targeting cloud infrastructure and applications growing in sophistication. Security professionals must adapt to this ever-changing landscape to keep their organizations safe.

Considerations in Cloud Security

  • Scalable Security Architecture: Large enterprises must design security architectures that are both scalable and flexible to adapt to evolving cloud infrastructure and workload needs. The ability to scale security measures efficiently is crucial.
  • Identity and Access Management (IAM): Given the intricate web of user roles and permissions in large organizations, effective IAM is essential. Organizations should prioritize IAM solutions that streamline access while maintaining security.

Understanding Risk

Understanding cybersecurity risk is at the heart of cloud security. Effective risk assessment and mitigation involve evaluating internal and external tactics that could compromise an organization’s digital assets and information security. Our security experts will delve into this critical domain’s core challenges and considerations in the session.

Challenges in Understanding Risk

  • Complexity of Cloud Ecosystems: Successful organizations often operate intricate cloud ecosystems with numerous interconnected services and platforms. Navigating this complexity while assessing risk can be daunting.
  • Lack of Skilled Cybersecurity Personnel: The need for more skilled cybersecurity professionals capable of analyzing and managing cloud security risks is a widespread challenge. Organizations must find and retain the right talent to stay secure.

Considerations for Understanding Risk

  • Risk Assessment and Prioritization: Organizations should prioritize the identification and assessment of cloud security risks based on their potential impact and likelihood. Effective risk assessment tools and threat modelling can help in this regard.
  • Continuous Monitoring and Response: Establishing a robust, real-time monitoring system is essential. It allows organizations to continuously assess cloud environments for security incidents and respond promptly to emerging threats. Integrating Security Information and Event Management (SIEM) and DevSecOps practices can enhance this capability.

Threat Intelligence

In cloud security, threat intelligence is pivotal in staying one step ahead of potential threats and vulnerabilities. Effective threat intelligence involves collecting, analyzing, and disseminating timely information to protect cloud environments and data assets proactively.

Challenges in Threat Intelligence

  • Data Overload and False Positives: Organizations generate vast amounts of security data, including threat intelligence feeds. Managing this data can lead to data overload and false positives, causing alert fatigue.
  • Integration and Compatibility: Integrating threat intelligence feeds into existing security infrastructure can be complex, as different sources may use varying formats and standards.

Considerations in Threat Intelligence

  • Customization and Contextualization: To make threat intelligence actionable, organizations should customize it to their specific cloud environments, industry, and business context. Tailored alerting rules and threat-hunting workflows can enhance effectiveness.
  • Sharing and Collaboration: Collaborating with industry peers, Information Sharing and Analysis Centers (ISACs), and government agencies for threat intelligence sharing can provide valuable insights into emerging threats specific to the industry.

Security Capabilities

Cloud security capabilities encompass the ability to comprehend evolving risks, establish benchmark standards, and take immediate, informed actions to safeguard cloud environments and data assets effectively. The final topic in the webinar will explore the core challenges and considerations in building robust security capabilities.

Challenges in Security Capabilities

  • Resource Allocation and Prioritization: Allocating resources effectively across vast cloud environments can be challenging, leading to difficulties prioritizing security efforts and ensuring critical areas receive the necessary attention and investment.
  • Complexity of Hybrid and Multi-Cloud Environments: Managing security capabilities becomes particularly challenging when organizations operate in hybrid or multi-cloud environments. Ensuring consistent security practices and policies across different platforms and providers requires specialized expertise.

Considerations in Security Capabilities

  • Integrated Security Ecosystem: Organizations should strive to create an integrated security ecosystem that combines various security tools, technologies, and processes to provide a comprehensive view of their cloud environment.
  • Scalability and Elasticity: Cloud security capabilities should be designed to scale and adapt to the organization’s evolving cloud infrastructure and workloads. This includes automated resource scaling and continuous security testing.

Now available: Building a scalable vulnerability management program on AWS

Post Syndicated from Anna McAbee original https://aws.amazon.com/blogs/security/now-available-how-to-build-a-scalable-vulnerability-management-program-on-aws/

Vulnerability findings in a cloud environment can come from a variety of tools and scans depending on the underlying technology you’re using. Without processes in place to handle these findings, they can begin to mount, often leading to thousands to tens of thousands of findings in a short amount of time. We’re excited to announce the Building a scalable vulnerability management program on AWS guide, which includes how you can build a structured vulnerability management program, operationalize tooling, and scale your processes to handle a large number of findings from diverse sources.

Building a scalable vulnerability management program on AWS focuses on the fundamentals of building a cloud vulnerability management program, including traditional software and network vulnerabilities and cloud configuration risks. The guide covers how to build a successful and scalable vulnerability management program on AWS through preparation, enabling and configuring tools, triaging findings, and reporting.

Targeted outcomes

This guide can help you and your organization with the following:

  • Develop policies to streamline vulnerability management and maintain accountability.
  • Establish mechanisms to extend the responsibility of security to your application teams.
  • Configure relevant AWS services according to best practices for scalable vulnerability management.
  • Identify patterns for routing security findings to support a shared responsibility model.
  • Establish mechanisms to report on and iterate on your vulnerability management program.
  • Improve security finding visibility and help improve overall security posture.

Using the new guide

We encourage you to read the entire guide before taking action or building a list of changes to implement. After you read the guide, assess your current state compared to the action items and check off the items that you’ve already completed in the Next steps table. This will help you assess the current state of your AWS vulnerability management program. Then, plan short-term and long-term roadmaps based on your gaps, desired state, resources, and business needs. Building a cloud vulnerability management program often involves iteration, so you should prioritize key items and regularly revisit your backlog to keep up with technology changes and your business requirements.

Further information

For more information and to get started, see the Building a scalable vulnerability management program on AWS guide.

We greatly value feedback and contributions from our community. To share your thoughts and insights about the guide, your experience using it, and what you want to see in future versions, select Provide feedback at the bottom of any page in the guide and complete the form.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Anna McAbee

Anna is a Security Specialist Solutions Architect focused on threat detection and incident response at AWS. Before AWS, she worked as an AWS customer in financial services on both the offensive and defensive sides of security. Outside of work, Anna enjoys cheering on the Florida Gators football team, wine tasting, and traveling the world.

Author

Megan O’Neil

Megan is a Principal Security Specialist Solutions Architect focused on Threat Detection and Incident Response. Megan and her team enable AWS customers to implement sophisticated, scalable, and secure solutions that solve their business challenges. Outside of work, Megan loves to explore Colorado, including mountain biking, skiing, and hiking.

A Look at Our Development Process of the Cloud Resource Enrichment API

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/09/07/a-look-at-our-development-process-of-the-cloud-resource-enrichment-api/

A Look at Our Development Process of the Cloud Resource Enrichment API

In today’s ever-evolving cybersecurity landscape, detecting and responding to cyber threats is paramount for organizations in cloud environments. At the same time, investigating cyber threat alerts can be arduous due to the time-consuming and complex process of data collection. To tackle this pain point, Rapid7 developed a new Cloud Resource Enrichment API that streamlines data retrieval from various cloud resources. The API empowers security analysts to swiftly respond to cyber threats and improve incident response time.

Identifying the Need for a Unified API

Protecting cloud resources from cyber attacks is a growing challenge. Security analysts must grapple with gathering relevant data spread across multiple systems and APIs, leading to incident response inefficiencies. Presented with this challenge, we recognized a pressing need for a unified API that collects all relevant data types related to a cloud resource during a cyber threat action. This API streamlines data access, enabling analysts to piece together a comprehensive view of incidents rapidly, enhancing cybersecurity operations.

Defining the Vision and Scope

Our development team worked closely with security analysts to tailor the API’s functionalities to meet real-world needs. Defining the API’s scope involved meticulous prioritization of features, striking the right balance between usability and data abundance. By involving analysts from the outset, we laid a solid foundation for the API’s success.

The Development Journey

Adopting agile methodologies, our team iteratively developed the API, adapting and fine-tuning as we progressed. The iterative development process played a vital role in ensuring the API’s success. By breaking down the project into smaller, manageable tasks, we could focus on specific features, implement them efficiently, and gather feedback from early prototypes. With a comprehensive design phase, we defined the API’s architecture and capabilities based on insights from security analysts. Regular meetings and feedback gathering facilitated continuous improvements, streamlining the data retrieval process.

The API utilizes RESTful API design principles for data integration and communication between cloud systems. It collects the following types of data:

  • Harvested cloud resource properties (image, IP, network interfaces, region, cloud organization and account, security groups, and much, much more)
  • Permissions data (permissions on the resource, permissions of the resource)
  • Security insights (risks, misconfigurations, vulnerabilities)
  • Security alerts (“threat finding”)
  • First level cloud related resources
  • Application context (tagging made by the client in the cloud environment)

Each data type required collaboration with a different team responsible for collecting and processing the data. This resulted in a feature that involved developers from six different teams! Regular meetings and continuous communication with the development team and the product manager allowed us to incorporate suggestions and make iterative improvements to the API’s design and functionality.

Conclusion

The development journey of our Cloud Resource Enrichment API has been both challenging and rewarding. With a user-centric approach, we have crafted a powerful tool that empowers security teams to respond effectively to cyber threats. As we continue to enhance the API, we remain committed to fortifying organizations’ cyber defenses and elevating incident response capabilities. Together, we can better equip security analysts to face the ever-changing cyber war with confidence.

Why Your AWS Cloud Container Needs Client-Side Security

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/08/24/why-your-aws-cloud-container-needs-client-side-security/

Why Your AWS Cloud Container Needs Client-Side Security

With increasingly complicated network infrastructure and organizations needing to deploy applications across various environments, cloud containers are necessary for companies to stay agile and innovative. Containers are packages of software that hold all of the necessary components for an app to run in any environment. One of the biggest benefits of cloud containers? They virtualize an operating system, enabling users to access from private data centers, public clouds, and even laptops.

According to recent research by Faction, 92% of organizations have a multi-cloud strategy in place or are in the process of adopting one. In addition to the ubiquity of cloud computing, there are a variety of cloud container providers, including Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. Nearly 80% of all containers on the cloud, however, run on AWS, which is known for its security, reliability, and scalability.

When it comes to cloud container security, AWS works on a shared responsibility model. This means that security and compliance is shared between AWS and the client. AWS protects the infrastructure running the services offered in the cloud — the hardware, software, networking, and facilities.

Unfortunately, many AWS users stop here. They believe that the security provided by AWS is sufficient to protect their cloud containers. While it is true that the level of customer responsibility for security differs depending on the AWS product, each product does require the customer to assume some level of security responsibility.

To avoid this mistake, let’s examine why your AWS cloud container needs additional client-side security and how Rapid7 can help.

Top reasons why your AWS container needs client-side security

Visibility and monitoring

Some of the same qualities that make containers ideal for agility and innovation also create difficulty in visibility and monitoring. Cloud containers are ephemeral, which means they’re easy to establish and destroy. This is convenient for quickly moving workloads and applications, but it also makes it difficult to track changes. Many AWS containers share memory and CPU resources with a variety of hosts (physical and cloud) in your ecosystem. Consequently, monitoring resource consumption and assessing container performance and application health can be difficult — after all, how can you know how much memory is being utilized by the container or the physical host?

Traditional monitoring tools and solutions also fail to collect the necessary metrics or provide the crucial insights needed for monitoring and troubleshooting container health and performance. While AWS offers protection for the cloud container structure, visualizing and monitoring what happens within the container is the responsibility of your organization.

Alert contextualization and remediation

As your company grows and you scale your cloud infrastructure, your DevOps teams will continue to create containers. For example, Google runs everything in containers and launches an epic number of containers (several billion per week!) to keep up with their developer and client needs. While you might not be launching quite as many containers, it’s still easy to lose track of them all. Organizations utilize alerts to keep track of container performance and health to resolve problems quickly. While alerting policies differ, most companies use metric- or log-based alerting.

It can be overwhelming to manage and remediate all of your organization’s container alerts. Not only do these alerts need to be routed to the proper developer or resource owner, but they also need to be remediated quickly to ensure the security and continued good performance of the container.

Cybersecurity standards

While AWS provides security for your foundational services in containerized applications — computing, storage, databases, and networking — it’s your responsibility to develop sufficient security protocols to protect your data, applications, operating system, and firewall. In the same way that your organization follows external cybersecurity standards for security and compliance across the rest of your digital ecosystem, it’s best to align your client-side AWS container security with a well-known industry framework.

Adopting a standardized cybersecurity framework will work in concert with AWS’s security measures by providing guidelines and best practices — preventing your organization from a haphazard security application that creates coverage gaps.

How Rapid7 can help with AWS container security

Now that you know why your organization needs client-side security, here’s how Rapid7 can help.

  • Visibility and monitoring: Rapid7’s InsightCloudSec continuously scans your cloud’s infrastructure, orchestration platforms, and workloads to provide a real-time assessment of health, performance, and risk. With the ability to scan containers in less than 60 seconds, your team will be able to quickly and accurately track changes in your containers and view the data in a single, convenient platform, perfect for collaborating across teams and quickly remediating issues.
  • Alert contextualization and remediation: Client-side security measures are key to processing and remediating system alerts in your AWS containers, but it can’t be accomplished manually. Automation is key for alert contextualization and remediation. InsightCloudSec integrates with AWS services like Amazon GuardDuty to analyze logs for malicious activity. The tool also integrates with your larger enterprise security systems to automate the remediation of critical risks in real time — often within 60 seconds.
  • Cybersecurity standards: While aligning your cloud containers with an industry-standard cybersecurity framework is a necessity, it’s often a struggle. Maintaining security and compliance requirements requires specialized knowledge and expertise. With record staff shortages, this often falls by the wayside. InsightCloudSec automates cloud compliance for well-known industry standards like the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework (CSF) with out-of-the-box policies that map back to specific NIST directives.

Secure your container (and its contents)

AWS’s shared responsibility model of security helps relieve operational burdens for organizations operating cloud containers. AWS clients don’t have to worry about the infrastructure security of their cloud containers. The contents in the cloud containers, however, are the owner’s responsibility and require additional security considerations.

Client-side security is necessary for proper monitoring and visibility, reduction in alert fatigue and real-time troubleshooting, and the application of external cybersecurity frameworks. The right tools, like Rapid7’s InsightCloudSec, can provide crucial support in each of these areas and beyond, filling crucial expertise and staffing gaps on your team and empowering your organization to confidently (and securely) utilize cloud containers.

Want to learn more about AWS container security? Download Fortify Your Containerized Apps With Rapid7 on AWS.

New InsightCloudSec Compliance Pack for CIS AWS Benchmark 2.0.0

Post Syndicated from James Alaniz original https://blog.rapid7.com/2023/08/01/new-insightcloudsec-compliance-pack-for-cis-aws-benchmark-2-0-0/

New InsightCloudSec Compliance Pack for CIS AWS Benchmark 2.0.0

The Center for Internet Security (CIS) recently released version two of their AWS Benchmark. CIS AWS Benchmark 2.0.0 brings two new recommendations and eliminates one from the previous version. The update also includes some minor formatting changes to certain recommendation descriptions.

In this post, we’ll talk a little bit about the “why” behind these changes. We’ll also look at using InsightCloudSec’s new, out-of-the-box compliance pack to implement and enforce the benchmark’s recommendations.

What’s new, what’s changed, and why

Version 2.0.0 of the CIS AWS Benchmark included two new recommendations:

  • Ensure access to AWSCloudShellFullAccess is restricted
    An important addition from CIS, this recommendation focuses on restricting access to the AWSCloudShellFullAccess policy, which presents a potential path for data exfiltration by malicious cloud admins who are given full permissions to the service. AWS documentation describes how to create a more restrictive IAM policy that denies file transfer permissions.
  • Ensure that EC2 Metadata Service only allows IMDSv2
    Use IMDSv2 to avoid leaving your EC2 instances susceptible to Server-Side Request Forgery (SSRF) attacks, a critical weakness of IMDSv1 (a remediation sketch covering both recommendations follows this list).
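
As a rough illustration of what remediating these two recommendations can look like with boto3, here is a sketch. The policy name is hypothetical, and the CloudShell action names should be verified against the current AWS documentation before use.

```python
# A minimal remediation sketch, assuming boto3 and configured credentials.
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# 1. Restrict CloudShell file transfer with an explicit deny policy
#    (attach it to principals that hold AWSCloudShellFullAccess).
deny_file_transfer = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "cloudshell:GetFileDownloadUrls",  # verify action names
            "cloudshell:GetFileUploadUrls",    # against current AWS docs
        ],
        "Resource": "*",
    }],
}
iam.create_policy(
    PolicyName="DenyCloudShellFileTransfer",  # hypothetical name
    PolicyDocument=json.dumps(deny_file_transfer),
)

# 2. Require IMDSv2 on every instance; HttpTokens="required" rejects
#    the IMDSv1 request pattern abused in SSRF attacks.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            ec2.modify_instance_metadata_options(
                InstanceId=instance["InstanceId"],
                HttpTokens="required",
                HttpEndpoint="enabled",
            )
```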

The update also removed one previous recommendation:

  • Ensure all S3 buckets employ encryption-at-rest
    This recommendation was removed because AWS now encrypts all new objects by default as of January 2023. It’s important to note that this only applies to newly created S3 buckets. So, if you’ve got some buckets that have been kicking around for a while, make sure they are employing encryption-at-rest and that it cannot be inadvertently turned off at some point down the line (the audit sketch after this list is one quick way to check).
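
If you want a quick way to spot buckets without a default encryption configuration, a sketch like the following (assuming boto3 and configured credentials) can help; buckets that raise the not-found error below have no default encryption set.

```python
# A minimal audit sketch: list buckets lacking default encryption.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_bucket_encryption(Bucket=bucket["Name"])
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"No default encryption: {bucket['Name']}")
        else:
            raise
```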

Along with these changes, CIS also made a few minor changes related to the wording in some of the benchmark titles and descriptions.

How does ICS help me implement this in my environment?

The CIS AWS Benchmark 2.0.0 pack is available within InsightCloudSec right out of the box, making it easy for teams to scan their AWS environments for compliance against the benchmark's recommendations and controls. If you’re not yet using InsightCloudSec today, be sure to check out the docs pages here, which will guide you through getting started with the platform.

Once you’re up and running, scoping your compliance assessment to a specific pack is as easy as four clicks. First, from the Compliance Summary page, you’ll want to select your desired benchmark. In this case, of course, CIS AWS Benchmark 2.0.0.

From there, we can select the specific cloud or clouds we want to scan.

And because we’ve got our badging and tagging strategies in order (right…….RIGHT?!), we can home in even further. For this example, let’s focus on the production environment.

You’ll get trending insights that show how your organization as a whole is doing, how specific teams and accounts compare, and whether or not you’re seeing improvement over time.

Finally, if you’ve got a number of cloud accounts and/or clusters running across your environment, you can even scope down to that level. In this example, we’ll select all.

Once you’ve got your filters set, you can apply and get real-time insight into how well your organization is adhering to the CIS AWS Benchmark. As with any pack, you can see your current compliance score overall along with a breakdown of the risk level associated with each instance of non-compliance.

So as you can see, it’s fairly simple to assess your cloud environment for compliance with the CIS AWS Benchmark using a cloud security tool like InsightCloudSec. If you’re just starting your cloud security journey or aren’t sure where to begin, an out-of-the-box compliance pack is a great way to set a foundation to build on.

In fact, Rapid7 recently partnered with AWS to help organizations in that very situation. Using a combination of market-leading technology and hands-on expertise, our AWS Cloud Risk Assessment provides a point-in-time understanding of your entire AWS cloud footprint and its security posture.

During the assessment, our experts will inspect your cloud environment for more than 100 distinct risks and misconfigurations, including publicly exposed resources, lack of encryption, and user accounts not utilizing multi-factor authentication. At the end of this assessment, your team will receive an executive-level report aligned to the AWS Foundational Security Best Practices, participate in a read-out call, and discuss next steps for executing your cloud risk mitigation program alongside experts from Rapid7 and our services partners.

If you’re interested, be sure to reach out about the AWS Cloud Risk Assessment with this quick request form!

Managing Risk Across Hybrid Environments with Executive Risk View

Post Syndicated from Pauline Logan original https://blog.rapid7.com/2023/07/18/managing-risk-across-hybrid-environments-with-executive-risk-view/

Managing Risk Across Hybrid Environments with Executive Risk View

Over the last decade or so, organizations of all shapes and sizes across all industries have been going through a seismic shift in the way they engage with their customers and deliver their solutions to the market. These new delivery models are often underpinned by cloud services, which can change the composition of an organization’s IT environment drastically.

As part of this digital transformation, and in turn cloud adoption, many administrators have moved from maintaining a few hundred physical servers in their on-premises environment to running many thousands of cloud instances spread across hundreds of cloud accounts, resources that are far more complex and ephemeral in nature.

The Modern Attack Surface is Expanding

Whether the impetus for this transformation is maintaining or gaining a competitive advantage, or the result of mergers and acquisitions, security teams are forced to play catch-up to harden a rapidly expanding attack surface. This expanding attack surface means security teams need to evolve the scope and approach of their vulnerability management programs, and because they’re already playing catch-up, they’re often asked to adapt those programs on the fly.

Making matters worse, many of the tools and processes used by teams to manage and secure those workloads aren’t able to keep up with the pace of innovation. Plus, many organizations have given DevOps teams self-service access to the underlying infrastructure that their teams need to innovate quickly, making it even more difficult for the security team to keep up with the ever-changing environment.

Adapting Your Vulnerability Management Program to the Cloud Requires a Different Approach

Assessing and reducing risk across on-premises and cloud environments can be complex and cumbersome, often requiring significant time and manual effort to aggregate, analyze and prioritize a plethora of risk signals. Practitioners are often forced to context switch between multiple tools and exert manual effort to normalize data and translate security findings into meaningful risk metrics that the business can understand. As a result, many teams struggle with blind spots resulting from gaps in data, or too much noise being surfaced without the ability to effectively prioritize remediation efforts and drive accountability across the organization. To effectively manage risk across complex and dynamic hybrid environments, security teams must adapt their programs and take a fundamentally different approach.

As is the case with traditional on-premises environments, you need to first achieve and maintain full visibility of your environment. You also need to keep track of how the environment changes over time, and how that change impacts your risk posture. Doing this in an ephemeral environment can be tricky, because in the cloud things can (and will) change on a minute-to-minute basis. Traditional agent-based vulnerability management tools are too cumbersome to manage and simply won’t scale in the way modern environments require. Agentless solutions deliver the real-time visibility and change management capabilities that today’s cloud and hybrid environments demand.

Once you establish real-time, continuous visibility, you need to assess your environment for risk and understand your organization’s current risk posture. You’ll also need a way to effectively prioritize risk, making sure your teams focus on the most pressing and impactful issues based on exploitability as well as potential impact to your business and customers.

Finally, once you can identify which risk signals need your attention first, you’ll want to remediate them as quickly and comprehensively as possible. When you’re operating at the speed of the cloud, that likely means relying on some form of automation, whether that’s automating repetitive processes or having a security solution remediate vulnerabilities on your behalf. Of course, you’ll need to measure and track progress throughout, and you’ll need a way to communicate the progress you and your team are making toward a better risk posture, with trending analysis over time.

So, as you can see, it’s not that the “what” of what security teams need to do is significantly different; it’s the “how” that has to change, because traditional approaches just won’t work. The challenge is that this isn’t an either/or scenario. Organizations operating in a hybrid environment need to adapt their programs to manage and report on risk across on-premises and cloud environments simultaneously and holistically. Otherwise, security leaders will struggle to make informed decisions about budgets and resource allocation, and to ensure that cloud migration doesn’t negatively impact the organization’s risk posture.

Manage Risk in Hybrid Environments with Executive Risk View

Executive Risk View, now generally available in Rapid7’s Cloud Risk Complete offering, provides security leaders with the comprehensive visibility and context needed to track total risk across both cloud and on-premises assets to better understand organizational risk posture and trends.

With Executive Risk View, customers can:

  • Achieve a complete view of risk across their hybrid environments to effectively communicate risk across the organization and track progress.
  • Establish a consistent definition of risk across their organization, aggregating insights and normalizing scores from on-premises and cloud assessments.
  • Take a data-driven approach to decision making and capacity planning, and drive accountability for risk reduction across the entire business.

Sounds too good to be true? You can see it in action for yourself in a new guided product tour we recently posted on our website! In addition to taking the tour, you can find more information on Executive Risk View on the docs page.

Standardizing SaaS Data to Drive Greater Cloud Security Efficacy

Post Syndicated from Dina Durutlic original https://blog.rapid7.com/2023/06/27/standardizing-saas-data-to-drive-greater-cloud-security-efficacy/

Standardizing SaaS Data to Drive Greater Cloud Security Efficacy

The way we do business has fundamentally changed, and as a result, so must security. Whether it’s legacy modernization initiatives, process improvements, or bridging the gap between physical and digital—most organizational strategies and initiatives involve embracing the cloud. However, investing in the cloud doesn’t come without its complexities.

When organizations adopt new technologies and applications, they inadvertently introduce new opportunities for attackers through vulnerabilities and points of entry. To stay ahead of potential security concerns, teams need to rely on data to get an overview of their environment and keep it protected.

This challenge is twofold:

  1. Security professionals need to secure SaaS applications, but each app has its own methodology for generating and storing vital security and usage data.
  2. Even if a security team puts in the work to centralize all this data, it must be normalized and standardized in order to be usable, which creates more work and visibility gaps.

Elevating Security Posture Around SaaS Applications

As part of our continued commitment to ensuring customers stay future-ready and secure through their cloud adoption, we’re excited to announce our work with AWS on a new service that continues the push toward data standardization. AWS AppFabric quickly connects SaaS applications across the organization, so IT and security teams can easily manage and secure applications using a standard schema.

By using AppFabric to natively connect SaaS productivity and security applications to each other, security teams can automatically normalize application data into the Open Cybersecurity Schema Framework (OCSF) format, allowing administrators to set common policies, standardize security alerts, and easily manage user access across multiple applications.
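
To give a feel for what that normalization means in practice, here is an illustrative sketch that maps a hypothetical SaaS login event into a few OCSF-style fields. The input shape and the field mapping are assumptions for illustration only, not AppFabric's actual output; consult the OCSF specification for the full schema.

```python
# Illustrative only: map a hypothetical SaaS login event into a few
# OCSF-style Authentication fields. Simplified field coverage.
def to_ocsf(raw: dict) -> dict:
    return {
        "class_name": "Authentication",
        "activity_name": "Logon",
        "time": raw["timestamp"],  # epoch milliseconds from the source app
        "actor": {"user": {"email_addr": raw["user_email"]}},
        "src_endpoint": {"ip": raw["source_ip"]},
        "status": "Success" if raw["ok"] else "Failure",
        "metadata": {"product": {"name": raw["app_name"]}},
    }

event = to_ocsf({
    "timestamp": 1687898400000,
    "user_email": "dev@example.com",
    "source_ip": "203.0.113.7",
    "ok": True,
    "app_name": "ExampleSaaS",
})
print(event["status"])  # "Success"
```

The payoff of a shared shape like this is that a single detection rule or access policy can cover every connected application, instead of one parser per app.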

For Rapid7 customers, InsightIDR will be able to ingest logs from AppFabric so security teams have access to that data—stay tuned for more! This is just one in a series of investments we are making to help secure your cloud infrastructure.

To learn more about how customers are leveraging Rapid7’s elite security expertise and practitioner first platform to elevate their security program, check out our Managed Threat Complete offer.

Uncover and Remediate Toxic Combinations with Attack Path Analysis

Post Syndicated from James Alaniz original https://blog.rapid7.com/2023/06/27/uncover-and-remediate-toxic-combinations-with-attack-path-analysis/

Uncover and Remediate Toxic Combinations with Attack Path Analysis

Particularly at enterprise scale, it’s not uncommon to have hundreds of thousands of resources running across your cloud environments at any given time. Of course, these resources aren’t running independently. In modern environments, these resources are all interconnected and in many cases interdependent. This interconnectivity means that if one resource or account is compromised, the whole system is at risk. Should a bad actor gain access to your systems via an open port, there are a number of avenues for them to move laterally across your environment, and even across environments, if your cloud environment is connected to your on-premises network.

Because of this, security teams need to understand how resources deployed across their environment relate to and interact with each other to effectively assess and prioritize risk remediation efforts.

For example, it’s helpful to know that a resource is publicly accessible when it shouldn’t be, but what if that’s not the whole story? Perhaps that resource also provides an avenue to a database housing sensitive customer data, or is assigned a role that enables it to escalate privileges and wreak havoc across your environment. These types of toxic combinations compound risk and widen the potential blast radius should a resource or account be compromised.

Introducing Attack Path Analysis in InsightCloudSec

Attack Path Analysis provides a graph-based visualization that enables users to quickly identify the potential avenues that bad actors could use to navigate your cloud environment to exploit a vulnerable resource and/or access sensitive information.

With Attack Path Analysis, you can:

  • Visualize risk across your cloud environments in real time, mapping relationships between compromised resources and the rest of your environment.
  • Prioritize remediation efforts by understanding the toxic risk combinations present in your environment that provide bad actors avenues to access business-critical resources and sensitive data.
  • Clearly communicate risk and the potential impact of an exploit to non-technical stakeholders with easy-to-consume attack path visualizations.

Identifying Toxic Combinations that Compound Risk and Widen the Blast Radius of an Attack

To effectively prioritize remediation efforts for the various risk signals across your environment, you need to take into account exploitability—whether or not a vulnerable account or resource can actually be accessed by a bad actor—and the potential impact should that vulnerable resource be compromised.

As an example, let’s dive into an attack path that highlights a publicly exposed compute instance with an attached privileged role. This can be exceedingly difficult to identify, because there are a variety of reasons that a compute instance might be assigned a set of permissions. When that instance is also publicly accessible, even if not directly, this can quickly become a major issue.

In this scenario, the environment would be susceptible to account takeover attacks, in which an attacker gains control of the instance and uses its assigned privileges to steal sensitive data such as login credentials, customer data, financial information, or intellectual property. Perhaps even worse, the instance could be weaponized to launch attacks on other systems, cause a denial of service (DoS), or distribute malware across your network.

To remediate this issue, you’ll want to perform an audit to understand whether the compute instance needs to have the permissions and privileges it’s been granted and if it needs to be publicly accessible. Chances are, the answer to one or both will be “no”, and you’ll want to close off public access and/or adjust the privileges assigned to the resource in question.
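
As a rough sketch of what that remediation can look like with boto3 (the security group ID, role name, and policy ARN below are hypothetical placeholders):

```python
# A minimal remediation sketch, assuming boto3 and configured credentials.
# All resource identifiers are hypothetical.
import boto3

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

# Close off public access: drop the 0.0.0.0/0 ingress rule from the
# instance's security group.
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Trim the instance's privileges: detach the overly broad managed policy
# from the role behind the instance profile.
iam.detach_role_policy(
    RoleName="prod-app-instance-role",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```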

There are a variety of attack paths that can be detected and investigated in InsightCloudSec upon launch, and we’ll continue to add more in the coming quarters. If you’re interested in learning more about Attack Path Analysis in InsightCloudSec, be sure to check out the dedicated docs page!