Tag Archives: cloud security

Announcing the AWS Blueprint for Ransomware Defense

Post Syndicated from Jeremy Ware original https://aws.amazon.com/blogs/security/announcing-the-aws-blueprint-for-ransomware-defense/

In this post, Amazon Web Services (AWS) introduces the AWS Blueprint for Ransomware Defense, a new resource that both enterprise and public sector organizations can use to implement preventative measures to protect data from ransomware events. The AWS Blueprint for Ransomware Defense provides a mapping of AWS services and features as they align to aspects of the Center for Internet Security (CIS) Critical Security Controls (CIS Controls). This information can be used to help customers assess and protect their data from ransomware events.

The following is background on ransomware, CIS, and the initiatives that led to the publication of this new blueprint.

The Ransomware Task Force

In April of 2021, the Institute for Security and Technology (IST) launched the Ransomware Task Force (RTF), which has the mission of uniting key stakeholders across industry, government, and civil society to create new solutions, break down silos, and find effective new methods of countering the ransomware threat. The RTF has since published several progress reports with specific recommendations, including the development of the RTF Blueprint for Ransomware Defense, which provides a framework with practical steps to mitigate, respond to, and recover from ransomware. AWS is a member of the RTF, and we have taken action to create our own AWS Blueprint for Ransomware Defense that maps actionable and foundational security controls to AWS services and features that customers can use to implement those controls. The AWS Blueprint for Ransomware Defense is based on the CIS Controls framework.

Center for Internet Security

The Center for Internet Security (CIS) is a community-driven nonprofit, globally recognized for establishing best practices for securing IT systems and data. To help establish foundational defense mechanisms, a subset of the CIS Critical Security Controls (CIS Controls) has been identified as an important first step in the implementation of a robust program to prevent, respond to, and recover from ransomware events. This list of controls was established to provide safeguards against the most impactful and well-known internet security issues. The controls have been further prioritized into three implementation groups (IGs) to help guide their implementation. IG1, considered “essential cyber hygiene,” provides foundational safeguards. IG2 builds on IG1 by including the controls in IG1 plus a number of additional considerations. Finally, IG3 includes the controls in IG1 and IG2, with an additional layer of controls that protect against more sophisticated security issues.

CIS recommends that organizations use the CIS IG1 controls as basic preventative steps against ransomware events. We’ve produced a mapping of AWS services that can help you implement aspects of these controls in your AWS environment. Ransomware is a complex event, and the best course of action to mitigate risk is to apply a thoughtful strategy of defense in depth. The mitigations and controls outlined in this mapping document are general security best practices and are not an exhaustive list.

Because data is often vital to the operation of mission-critical services, ransomware can severely disrupt business processes and applications that depend on this data. For this reason, many organizations are looking for effective security controls that will improve their security posture against these types of events. We hope you find the information in the AWS Blueprint for Ransomware Defense helpful and incorporate it as a tool to provide additional layers of security to help keep your data safe.

Let us know if you have any feedback through the AWS Security Contact Us page. Please reach out if there is anything we can do to add to the usefulness of the blueprint or if you have any additional questions on security and compliance. You can find more information from the IST (Institute for Security and Technology) describing ransomware and how to protect yourself on the IST website.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Jeremy Ware

Jeremy is a Security Specialist Solutions Architect focused on Identity and Access Management. Jeremy and his team enable AWS customers to implement sophisticated, scalable, and secure IAM architectures and authentication workflows to solve business challenges. With a background in security engineering, Jeremy has spent many years working to close the security maturity gap at numerous global enterprises. Outside of work, Jeremy loves to explore the mountainous outdoors and participate in sports such as snowboarding, wakeboarding, and dirt bike riding.

Megan O’Neil

Megan is a Principal Security Specialist Solutions Architect focused on Threat Detection and Incident Response. Megan and her team enable AWS customers to implement sophisticated, scalable, and secure solutions that solve their business challenges. Outside of work, Megan loves to explore Colorado, including mountain biking, skiing, and hiking.

Luis Pastor

Luis is a Senior Security Solutions Architect focused on infrastructure security at AWS. Before AWS, he worked with both large and boutique system integrators, helping clients in an array of industries to improve their security posture and reach and maintain compliance in hybrid environments. Luis enjoys keeping active, cooking, and eating spicy food—especially Mexican cuisine.

Casting a Light on Shadow IT in Cloud Environments

Post Syndicated from Ryan Blanchard original https://blog.rapid7.com/2023/05/23/casting-a-light-on-shadow-it-in-cloud-environments/

The term “Shadow IT” refers to the use of systems, devices, software, applications, and services without explicit IT approval. This typically occurs when employees adopt consumer products to increase productivity or just make their lives easier. This type of Shadow IT can be easily addressed by implementing policies that limit use of consumer products and services. However, Shadow IT can also occur at a cloud infrastructure level. This can be exceedingly hard for organizations to get a handle on.

Historically, when teams needed to provision infrastructure resources, this required review and approval by a centralized IT team—who ultimately had final say on whether or not something could be provisioned. Nowadays, the cloud has democratized ownership of resources to teams across the organization, and most organizations no longer require their development teams to request resources in the same manner. Instead, developers are empowered to provision the resources that they need to get their jobs done and ship code efficiently.

This dynamic is critical to achieving the promise of speed and efficiency that cloud, and more specifically DevOps methodologies, offer. The tradeoff here, however, is control. This paradigm shift means that development teams are spinning up resources without the security team’s knowledge. Obviously, the adage “you can’t secure what you can’t see” comes into play here, and you’re now running blind to the potential risk that this could pose to your organization in the event a resource is configured improperly.

Cloud Shadow IT risks

Blind spots: As noted above, since security teams are unaware of Shadow IT assets, security vulnerabilities inevitably go unaddressed. Dev teams may not understand (or may simply ignore) the importance of cloud security updates, patching, and other maintenance for these assets.

Unprotected data: Unmitigated vulnerabilities in these assets can put businesses at risk of data breaches or leaks, if cloud resources are accessed by unauthorized users. Additionally, this data will not be protected with centralized backups, making it difficult, if not impossible, to recover.

Compliance problems: Most compliance regulations include requirements for processing, storing, and securing customers’ data. Since businesses have no oversight of data stored on Shadow IT assets, this can be an issue.

Addressing Cloud Shadow IT

One way to address Shadow IT in cloud environments is to implement a cloud risk and compliance management platform like Rapid7’s InsightCloudSec.

InsightCloudSec continuously assesses your entire cloud environment, whether in a single cloud or across multiple clouds, and can detect changes to your environment—such as the creation of a new resource—in less than 60 seconds with event-driven harvesting.

The platform doesn’t just stop at visibility, however. Out of the box, users get access to 30+ compliance packs aligned to common industry standards like NIST and CIS Benchmarks, as well as regulatory frameworks like HIPAA, PCI DSS, and GDPR. Teams also have the ability to tailor their compliance policies to their specific business needs with custom packs that allow them to set exceptions and/or add policies that aren’t included in the compliance frameworks they choose or are required to adhere to.

When a resource is spun up, the platform detects it in real time and automatically identifies whether or not it is in compliance with organizational policies. Because InsightCloudSec offers native, no-code automation, teams are able to build bots that take immediate action whenever Shadow IT creeps into their environment, either by adjusting configurations and permissions to regain compliance or by deleting the resource altogether if you so choose.

To learn more, check out our on-demand demo.

Stronger together: Highlights from RSA Conference 2023

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/stronger-together-highlights-from-rsa-conference-2023/

RSA Conference 2023 brought thousands of cybersecurity professionals to the Moscone Center in San Francisco, California from April 24 through 27.

The keynote lineup was eclectic, with more than 30 presentations across two stages featuring speakers ranging from renowned theoretical physicist and futurist Dr. Michio Kaku to Grammy-winning musician Chris Stapleton. Topics aligned with this year’s conference theme, “Stronger Together,” and focused on actions that can be taken by everyone, from the C-suite to those of us on the front lines of security, to strengthen collaboration, establish new best practices, and make our defenses more diverse and effective.

With over 400 sessions and 500 exhibitors discussing the latest trends and technologies, it’s impossible to recap every highlight. Now that the dust has settled and we’ve had time to reflect, here’s a glimpse of what caught our attention.

Noteworthy announcements

Hundreds of companies — including Amazon Web Services (AWS) — made new product and service announcements during the conference.

We announced three new capabilities for our Amazon GuardDuty threat detection service to help customers secure container, database, and serverless workloads. These include GuardDuty Elastic Kubernetes Service (EKS) Runtime Monitoring, GuardDuty RDS Protection for data stored in Amazon Aurora, and GuardDuty Lambda Protection for serverless applications. The new capabilities are designed to provide actionable, contextual, and timely security findings with resource-specific details.

Artificial intelligence

It was hard to find a single keynote, session, or conversation that didn’t touch on the impact of artificial intelligence (AI).

In “AI: Law, Policy and Common Sense Suggestions on How to Stay Out of Trouble,” privacy and gaming attorney Behnam Dayanim highlighted ambiguity around the definition of AI. Referencing a quote from University of Washington School of Law’s Ryan Calo, Dayanim pointed out that AI may be best described as “…a set of techniques aimed at approximating some aspect of cognition,” and should therefore be thought of differently than a discrete “thing” or industry sector.

Dayanim noted examples of skepticism around the benefits of AI. A recent Monmouth University poll, for example, found that 73% of Americans believe AI will make jobs less available and harm the economy, and a surprising 55% believe AI may one day threaten humanity’s existence.

Equally skeptical, he noted, is a joint statement made by the Federal Trade Commission (FTC) and three other federal agencies during the conference reminding the public that enforcement authority applies to AI. The statement takes a pessimistic view, saying that AI is “…often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” but has the potential to produce negative outcomes.

Dayanim covered existing and upcoming legal frameworks around the world that are aimed at addressing AI-related risks related to intellectual property (IP), misinformation, and bias, and how organizations can design AI governance mechanisms to promote fairness, competence, transparency, and accountability.

Many other discussions focused on the immense potential of AI to automate and improve security practices. RSA Security CEO Rohit Ghai explored the intersection of progress in AI with human identity in his keynote. “Access management and identity management are now table stakes features,” he said. In the AI era, we need an identity security solution that will secure the entire identity lifecycle—not just access. To be successful, he believes, the next generation of identity technology needs to be powered by AI, open and integrated at the data layer, and pursue a security-first approach. “Without good AI,” he said, “zero trust has zero chance.”

Mark Ryland, director at the Office of the CISO at AWS, spoke with Infosecurity about improving threat detection with generative AI.

“We’re very focused on meaningful data and minimizing false positives. And the only way to do that effectively is with machine learning (ML), so that’s been a core part of our security services,” he noted.

We recently announced several new innovations—including Amazon Bedrock, the Amazon Titan foundation model, the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn1n instances powered by AWS Trainium, Amazon EC2 Inf2 instances powered by AWS Inferentia2, and the general availability of Amazon CodeWhisperer—that will make it practical for customers to use generative AI in their businesses.

“Machine learning and artificial intelligence will add a critical layer of automation to cloud security. AI/ML will help augment developers’ workstreams, helping them create more reliable code and drive continuous security improvement.” — CJ Moses, CISO and VP of security engineering at AWS

The human element

Dozens of sessions focused on the human element of security, with topics ranging from the psychology of DevSecOps to the NIST Phish Scale. In “How to Create a Breach-Deterrent Culture of Cybersecurity, from Board Down,” Andrzej Cetnarski, founder, chairman, and CEO of Cyber Nation Central, and Marcus Sachs, deputy director for research at Auburn University, made a data-driven case for CEOs, boards, and business leaders to set a tone of security in their organizations, so they can address “cyber insecure behaviors that lead to social engineering” and keep up with the pace of cybercrime.

Lisa Plaggemier, executive director of the National Cybersecurity Alliance, and Jenny Brinkley, director of Amazon Security, stressed the importance of compelling security awareness training in “Engagement Through Entertainment: How To Make Security Behaviors Stick.” Education is critical to building a strong security posture, but as Plaggemier and Brinkley pointed out, we’re “living through an epidemic of boringness” in cybersecurity training.

According to a recent report, just 28% of employees say security awareness training is engaging, and only 36% say they pay full attention during such training.

Citing a United Airlines preflight safety video and Amazon’s Protect and Connect public service announcement (PSA) as examples, they emphasized the need to make emotional connections with users through humor and unexpected elements in order to create memorable training that drives behavioral change.

Plaggemier and Brinkley detailed five actionable steps for security teams to improve their awareness training:

  • Brainstorm with staff throughout the company (not just the security people)
  • Find ideas and inspiration from everywhere else (TV episodes, movies… anywhere but existing security training)
  • Be relatable, and include insights that are relevant to your company and teams
  • Start small; you don’t need a large budget to add interest to your training
  • Don’t let naysayers deter you — change often prompts resistance
“You’ve got to make people care. And so you’ve got to find out what their personal motivators are, and how to develop the type of content that can make them care to click through the training and…remember things as they’re walking through an office.” — Jenny Brinkley, director of Amazon Security

Cloud security

Cloud security was another popular topic. In “Architecting Security for Regulated Workloads in Hybrid Cloud,” Mark Buckwell, cloud security architect at IBM, discussed the architectural thinking practices—including zero trust—required to integrate security and compliance into regulated workloads in a hybrid cloud environment.

Mitiga co-founder and CTO Ofer Maor told real-world stories of SaaS attacks and incident response in “It’s Getting Real & Hitting the Fan 2023 Edition.”

Maor highlighted common tactics focused on identity theft, including MFA push fatigue, phishing, business email compromise, and adversary-in-the-middle attacks. After detailing techniques that are used to establish persistence in SaaS environments and deliver ransomware, Maor emphasized the importance of forensic investigation and threat hunting in gaining the knowledge needed to reduce the impact of SaaS security incidents.

Sarah Currey, security practice manager, and Anna McAbee, senior solutions architect at AWS, provided complementary guidance in “Top 10 Ways to Evolve Cloud Native Incident Response Maturity.” Currey and McAbee highlighted best practices for addressing incident response (IR) challenges in the cloud — no matter who your provider is:

  1. Define roles and responsibilities in your IR plan
  2. Train staff on AWS (or your provider)
  3. Develop cloud incident response playbooks
  4. Develop account structure and tagging strategy
  5. Run simulations (red team, purple team, tabletop)
  6. Prepare access
  7. Select and set up logs
  8. Enable managed detection services in all available AWS Regions
  9. Determine containment strategy for resource types
  10. Develop cloud forensics capabilities

Speaking to BizTech, Clarke Rodgers, director of enterprise strategy at AWS, noted that tools and services such as Amazon GuardDuty and AWS Key Management Service (AWS KMS) are available to help advance security in the cloud. When organizations take advantage of these services and use partners to augment security programs, they can gain the confidence they need to take more risks, and accelerate digital transformation and product development.

Security takes a village

There are more highlights than we can mention on a variety of other topics, including post-quantum cryptography, data privacy, and diversity, equity, and inclusion. We’ve barely scratched the surface of RSA Conference 2023. If there is one key takeaway, it is that no single organization or individual can address cybersecurity challenges alone. By working together and sharing best practices as an industry, we can develop more effective security solutions and stay ahead of emerging threats.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Danielle Ruderman

Danielle is a Senior Manager for the AWS Worldwide Security Specialist Organization, where she leads a team that enables global CISOs and security leaders to better secure their cloud environments. Danielle is passionate about improving security by building company security culture that starts with employee engagement.

Cloud Security Strategies for Manufacturing

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/05/03/cloud-security-strategies-for-manufacturing/

Protecting production while supporting growing cloud initiatives

The manufacturing industry is in limbo as organizations shift to cloud services. Many organizations are transitioning services to the cloud, but the vast majority maintain hybrid network environments that lean heavily on on-prem elements. During the pandemic, some companies were forced to expand their cloud services quickly to keep up with an influx of end users accessing network services remotely. However, few manufacturers are really pursuing a cloud-first approach.

This leaves most manufacturing organizations struggling to address issues of visibility in their hybrid cloud environments. There’s also a growing concern about compliance in the industry, with manufacturers setting internal standards to provide crucial oversight for themselves and their third-party partners. All of this is occurring during an industry-wide push to implement smart factory initiatives and a persistent IT/OT skills gap in manufacturing organizations.

An effective cloud security strategy is key for manufacturing companies. As they transition their services, implementing cloud security will ensure they’re able to monitor their growing attack surfaces, establish the necessary auditing processes and assessments for compliance, and support smart factory initiatives.

Major challenges of cloud security in manufacturing

Ensuring consistent production is paramount for manufacturing organizations. Cloud security strategy for this industry enables hybrid networks to function without disruption, while still supporting developing compliance regulations and smart factory initiatives. Without an effective cloud security strategy, manufacturers jeopardize their entire hybrid network as well as the operational elements and software integral to their manufacturing processes. Let’s look at a few of the obstacles keeping manufacturers from implementing an effective cloud security strategy.

Lack of visibility into the cloud

The manufacturing industry is unique in that organizations are not only monitoring an environment populated with their own cloud and on-prem elements, but they’re also tasked with tracking the elements of the third-party vendors that they partner with. These additional endpoints increase the overall attack surface and can be tricky to secure.

Lack of visibility into the cloud applications and elements in a manufacturing company’s network impacts root-cause analysis, anomaly detection, and the other processes that affect availability, performance, and security across the entire network.

Network disruptions often translate to supply chain issues that can affect production and availability. This ultimately translates to lost revenue and negatively impacts a manufacturer’s brand reputation. In fact, in a Supply Chain Resilience Report, 16.7% of business owners reported a “severe loss of income” due to a supply chain disruption. The report also revealed that the average cost of a disruption was around $610,000. Cloud security strategy, then, should include visibility across the entire infrastructure as well as third-party dependencies and the necessary context to bring clarity to third-party risk.

Failure to achieve and maintain cloud compliance

Unlike other highly regulated industries like healthcare and financial services, manufacturing organizations don’t have much external guidance when it comes to cloud compliance. In the absence of government regulation, manufacturing companies need a way to validate network configurations and changes in their cloud applications and infrastructure.

The lack of compliance standards for cloud applications prevents many manufacturers from properly deploying cloud-controlled elements, as well as detecting and remediating issues. This leads to system-wide vulnerabilities and greater exposure in the threat landscape. For example, without proper compliance standards in place, organizations may fail to update their service-level agreements (SLAs) or security patches in their cloud environments, which can be exploited by malicious threat actors.

Manufacturing organizations require a cloud security strategy that includes automated detection and remediation assistance, as well as support in adopting and implementing the few regulatory recommendations available, such as those set forth by the National Institute of Standards and Technology (NIST).

Inability to bridge the IT/OT knowledge gap

According to a Gartner survey, 64% of IT executives view talent shortages as the most significant barrier to adoption of emerging technologies. In the manufacturing industry, this translates specifically to a lack of IT/OT specialist knowledge on network teams.

IT/OT refers to the integration of information technology (IT) systems with operational technology (OT) systems. This particular combination of systems is used by manufacturing organizations to balance cloud network infrastructure that controls information and data with industrial equipment, assets, and processes.

Without specialist knowledge of these systems and how they interact, manufacturers struggle with IT and OT silos that lead to system disruption, downtime, and increased vulnerability. Manufacturers often misunderstand where the risk lies: OT systems are critical to the production process, but they are not necessarily the main source of risk in the infrastructure. IT systems, by contrast, may represent a smaller point of entry, but they pose a much larger risk because they connect to the larger OT systems. To combat this, manufacturers need a toolkit that will fill this skill gap on their teams, automate processes for increased efficiency, and consolidate data to break down silos between teams.

Where to start with a cloud security strategy in manufacturing

When looking to build a strong cloud security strategy, manufacturers should focus their efforts in the following areas:

  • Visibility
  • Compliance
  • Managed Services

Prioritize cloud visibility

Though the transition to cloud services is slower in the manufacturing industry, it is still an inevitability. Consequently, the best way for manufacturing organizations to adequately protect their cloud infrastructure, and by extension their overall environment, is to focus on visibility.

Visibility reduces risk and allows companies to effectively monitor their attack surfaces. This begins with manufacturers collecting monitoring data from across their cloud infrastructure. Drawing connections between the data, end-user experiences, and supply chain interaction can help manufacturers find weak or vulnerable points in their cloud infrastructure.

The right cloud security tools will help teams continuously monitor both public cloud and container environments. Manufacturers also need real-time visibility and context to find and fix issues quickly. InsightCloudSec offers all of these features and more to manufacturing companies—effectively eliminating network blind spots and giving teams the confidence they need to move forward with their cloud initiatives.

Consider cloud compliance solutions

Many manufacturers struggle with finding and adopting regulatory best practices in their cloud environments. While NIST offers guidance on network security, and the Center for Internet Security (CIS) offers frameworks and CIS Benchmarks, many manufacturers are unsure of which guidelines make the most sense for their organization’s needs. Moreover, manufacturers need guidance on how to implement compliance monitoring, which ensures that their cloud elements are operating securely.

Without compliance, manufacturers are essentially managing their cloud environments in the dark, with little governance on how to deploy applications, configure their cloud environments, and update their elements. This can lead to lapsed security updates and serious vulnerabilities that increase risk across the entire infrastructure.

Enter cloud compliance solutions. These tools can enable manufacturing organizations to automate compliance monitoring and management. For example, InsightCloudSec checks an organization’s multi-cloud environments against dozens of industry and regulatory best practices. Moreover, cloud compliance solutions enable manufacturers to customize external compliance checks to sync with internal compliance regulations. This eliminates frustration and false alarms.

Teams can also take advantage of InsightCloudSec’s embedded automation, which automatically detects compliance drift and returns cloud environments to a secure state within 60 seconds.

Outsource with managed services

Manufacturing teams struggling to hire and retain skilled IT workers often find themselves with a gap in IT/OT oversight. This gap can result in greater silos between IT and OT teams, which can disrupt smart factory initiatives and the adoption of cloud services, and lead to increased unchecked system vulnerabilities.

After all, it’s hard to contextualize risk without a complete understanding of IT/OT cloud elements and how risk in one arena affects the other. Instead of an organization redoubling their hiring efforts or overwhelming their existing team members, managed services allow manufacturers to effectively outsource this role and add a virtual IT/OT specialist to their team.

Rapid7’s managed services team offers regular assessments, handles the operational requirements of incident detection and response, and performs vulnerability scanning. This frees up crucial time for IT/OT teams and streamlines the scanning and reporting process, which encourages greater collaboration. Contextualization, or the process of analyzing threats and gathering relevant supplemental information, is simple with Rapid7’s InsightVM. InsightVM works in partnership with SCADAfence to assess vulnerabilities and leverage insight into OT networks to accurately prioritize risk.

The bottom line

Establishing cloud security strategies in manufacturing organizations often seems like an insurmountable task. Common struggles of visibility, compliance, and IT/OT knowledge gaps plague manufacturing companies who are transitioning to cloud services. This can lead to network blind spots, slowdowns, and increased risk.

Building a toolkit of cloud security solutions can help manufacturers reduce their overall risk in the cloud and optimize their performance by improving internal compliance. Making the most of this toolkit requires specialized knowledge, but leveraging managed services enables manufacturing organizations to streamline reporting and assessments without hiring additional in-house staff.

Manufacturing organizations are evolving to keep up with production demands, changing technology, and an ever-broadening threat landscape. By strengthening cloud security, manufacturing companies can focus on providing a superb product, assured that their cloud environment is secure. Get in touch with us to learn more about how Rapid7 is helping manufacturing companies navigate security during every phase of the cloud transition process.

4 Takeaways from the 2023 Gartner® Market Guide for CNAPP

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2023/04/25/4-takeaways-cnapp-2023-gartner-market-guide-report/

In an ongoing effort to help security organizations gain greater visibility into risk, we’re pleased to offer this complimentary Gartner research, and share our 4 Takeaways from the 2023 Gartner® Market Guide for CNAPP. This critical research can help security leaders take an in-depth look into cloud-native application protection platforms (CNAPPs), and evaluate potential solutions that best fit their specific environments.

Takeaway #1: Attack surfaces are increasing

There’s nothing minor about misconfigurations. If a cloud resource or service is misconfigured, attackers will target and exploit it. It may not even be a misconfiguration in your cloud network, but one found in a supply chain partner that puts everyone’s infrastructure at risk. Application programming interfaces (APIs) are at risk as well, and are being increasingly targeted by threat actors because they’re such a critical component of the build process. The report states:

“CNAPP offerings bring together multiple disparate security and protection capabilities into a single platform that most importantly is able to identify, prioritize, enable collaboration and help remediate excessive risk across the extremely complex logical boundary of a modern cloud-native application.”

Takeaway #2: Developer scope is expanding

As organizations increasingly look to shift left, developers are being asked to take on a more active role in ensuring their applications and the supporting cloud infrastructure are secure and compliant. We feel the report reiterates this point, stating:

“Shifting risk visibility left requires a deep understanding of the development pipeline and artifacts and extending vulnerability scanning earlier into the development pipeline as these artifacts are being created.”

However, the report also states that developers are increasingly responsible for operational tasks, such as addressing vulnerabilities, deploying infrastructure as code, and deploying and tearing down implementations in production, thus requiring tools that address this expanded scope.

Extra tooling is needed to address these concerns, with the very real possibility that tooling will be fragmented if it’s coming from different vendors and addressing different parts of the application development process. As far as recommendations, the report states:

“Reduce complexity and improve the developer experience by choosing integrated CNAPP offerings that provide complete life cycle visibility and protection of cloud-native applications across development and staging and into runtime operation.”

Takeaway #3: Context around risk is needed

Developers simply do not want the process to be slowed. Security is important, but if developers are constantly tripped up in their workflows, it’s almost inevitable that adoption of security practices and tooling will become a struggle. Therefore, it’s critical to prioritize security tasks and provide the context needed to remediate the issue as quickly as possible.

That can, however, be easier said than done when collecting disparate information and trying to gain as much visibility as possible into an environment. Let’s look at a few ways to understand context in security data:

  • Set VM processes to detect more than just vulnerabilities in the cloud. It’s also key to be able to see misconfigurations and issues with IAM permissions, as well as understand resource/service configurations, permissions and privileges, which applications are running, and what data is stored inside. These processes help to contextualize and act on the highest-priority risks.
  • Identify if a vulnerable instance is publicly accessible and the nature of its business application — this will help you determine the scope of the vulnerability.
  • Simply saying developers need to find and fix vulnerabilities in production or pre-production by shifting security left is generally an oversimplification. It’s critical to communicate with developers about why a vulnerability is being prioritized and specific actions they can take to remediate.

Takeaway #4: Depth of functionality is critical

Gartner states that “multiple providers market CNAPP capabilities — some starting with runtime expertise and some starting with development expertise. Few offer the required breadth and depth of functionality with integration between all components across development and operations.” Each customer’s situation will be specific; therefore, there will be no one-size-fits-all solution. Ideally, though, a provider should be able to offer runtime risk visibility, cloud risk visibility, and development artifact risk visibility.

As customer feedback helps to refine the offerings of CNAPP providers, Gartner shares that one of the reasons for moving towards consolidation to a CNAPP offering is to eliminate redundant capabilities. Moving forward, there is a strong customer preference to consolidate vendors.

To secure and protect

That’s the name of the game: to secure and protect cloud-native applications across the development and production lifecycle. Unknown risks can appear anywhere in the process, but it’s possible to mitigate many of these vulnerabilities and blockers. Learn how CNAPP offerings deliver an integrated set of capabilities spanning runtime visibility and control, CSPM capabilities, software composition analysis (SCA) capabilities and container scanning. Download and read the full Market Guide now.

Gartner, “Market Guide for Cloud-Native Application Protection Platforms” Neil MacDonald, Charlie Winckless, Dale Koeppen. 14 March 2023.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Scale your authorization needs for Secrets Manager using ABAC with IAM Identity Center

Post Syndicated from Aravind Gopaluni original https://aws.amazon.com/blogs/security/scale-your-authorization-needs-for-secrets-manager-using-abac-with-iam-identity-center/

With AWS Secrets Manager, you can securely store, manage, retrieve, and rotate the secrets required for your applications and services running on AWS. A secret can be a password, API key, OAuth token, or other type of credential used for authentication purposes. You can control access to secrets in Secrets Manager by using AWS Identity and Access Management (IAM) permission policies. In this blog post, I will show you how to use principles of attribute-based access control (ABAC) to define dynamic IAM permission policies in AWS IAM Identity Center (successor to AWS Single Sign-On) by using user attributes from an external identity provider (IdP) and resource tags in Secrets Manager.

What is ABAC and why use it?

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes or characteristics of the user, the data, or the environment, such as the department, business unit, or other factors that could affect the authorization outcome. In the AWS Cloud, these attributes are called tags. By assigning user attributes as principal tags, you can simplify the process of creating fine-grained permissions on AWS.

With ABAC, you can use attributes to build more dynamic policies that provide access based on matching attribute conditions. ABAC rules are evaluated dynamically at runtime, which means that the users’ access to applications and data and the type of allowed operations automatically change based on the contextual factors in the policy. For example, if a user changes department, access is automatically adjusted without the need to update permissions or request new roles. You can use ABAC in conjunction with role-based access control (RBAC) to combine the ease of policy administration with flexible policy specification and dynamic decision-making capability to enforce least privilege.
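
To make this concrete, the following is a minimal sketch of an ABAC-style IAM policy statement; it is a simplified form of the permission set policy created later in this post, and allows a secret value to be read only when the secret’s department tag matches the calling principal’s department tag:

{
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "secretsmanager:ResourceTag/department": "${aws:PrincipalTag/department}"
        }
    }
}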

AWS IAM Identity Center (successor to AWS Single Sign-On) expands the capabilities of IAM to provide a central place that brings together the administration of users and their access to AWS accounts and cloud applications. With IAM Identity Center, you can define user permissions and manage access to accounts and applications in your AWS Organizations organization centrally. You can also create ABAC permission policies in a central place. ABAC will work with attributes from a supported identity source in IAM Identity Center. For a list of supported external IdPs for identity synchronization through the System for Cross-domain Identity Management (SCIM) and Security Assertion Markup Language (SAML) 2.0, see Supported identity providers.

The following are key benefits of using ABAC with IAM Identity Center and Secrets Manager:

  1. Fewer permission sets — With ABAC, multiple users who use the same IAM Identity Center permission set and the same IAM role can still get unique permissions, because permissions are now based on user attributes. Administrators can author IAM policies that grant users access only to secrets that have matching attributes. This helps reduce the number of distinct permissions that you need to create and manage in IAM Identity Center and, in turn, reduces your permission management complexity.
  2. Teams can change and grow quickly — When you create new secrets, you can apply the appropriate tags, which will automatically grant access without requiring you to update the permission policies.
  3. Use employee attributes from your corporate directory to define access — You can use existing employee attributes from a supported identity source configured in IAM Identity Center to make access control decisions on AWS.

Figure 1 shows a framework to control access to Secrets Manager secrets using IAM Identity Center and ABAC principles.

Figure 1: ABAC framework to control access to secrets using IAM Identity Center

The following is a brief introduction to the basic components of the framework:

  1. User attribute source or identity source — This is where your users and groups are administered. You can configure a supported identity source with IAM Identity Center. You can then define and manage supported user attributes in the identity source.
  2. Policy management — You can create and maintain policy definitions (permission sets) centrally in IAM Identity Center. You can assign access to a user or group to one or more accounts in IAM Identity Center with these permission sets. You can then use attributes defined in your identity source to build ABAC policies for managing access to secrets.
  3. Policy evaluation — When you assign a permission set, IAM Identity Center creates corresponding IAM Identity Center-controlled IAM roles in each account, and attaches the policies specified in the permission set to those roles. IAM Identity Center manages the role, and allows the authorized users that you’ve defined to assume the role. When users try to access a secret, IAM dynamically evaluates ABAC policies on the target account to determine access based on the attributes assigned to the user and resource tags assigned to that secret.

How to configure ABAC with IAM Identity Center

To configure ABAC with IAM Identity Center, you need to complete the following high-level steps. I will walk you through these steps in detail later in this post.

  1. Identify and set up identities that are created and managed in the identity source with user attributes, such as project, team, AppID or department.
  2. In IAM Identity Center, enable Attributes for access control and configure select attributes (such as department) to use for access control. For a list of supported attributes, see Supported external identity provider attributes.
  3. If you are using an external IdP and choose to use custom attributes from your IdP for access controls, configure your IdP to send the attributes through SAML assertions to IAM Identity Center.
  4. Assign appropriate tags to secrets in Secrets Manager.
  5. Create permission sets based on attributes added to identities and resource tags.
  6. Define guardrails to enforce access using ABAC.

ABAC enforcement and governance

Because an ABAC authorization model is based on tags, you must have a tagging strategy for your resources. To help prevent unintended access, you need to make sure that tagging is enforced and that a governance model is in place to protect the tags from unauthorized updates. By using service control policies (SCPs) and AWS Organizations tag policies, you can enforce tagging and tag governance on resources.
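
As a hedged illustration, an AWS Organizations tag policy along the following lines could standardize the department tag key and its allowed value for Secrets Manager resources. The structure shown (tag_key, tag_value, and enforced_for with @@assign) follows the tag policy syntax, but the enforced_for value and the list of services that support enforcement should be verified against the Organizations documentation before use:

{
  "tags": {
    "department": {
      "tag_key": {
        "@@assign": "department"
      },
      "tag_value": {
        "@@assign": ["Digital"]
      },
      "enforced_for": {
        "@@assign": ["secretsmanager:*"]
      }
    }
  }
}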

When you implement ABAC for your secrets, consider the following guidance for establishing a tagging strategy:

  • During secret creation, secrets must have an ABAC tag applied (tag-on-create).
  • During secret creation, the provided ABAC tag key must be the same case as the principal’s ABAC tag key.
  • After secret creation, the ABAC tag cannot be modified or deleted.
  • Only authorized principals can do tagging operations on secrets.
  • You enforce the permissions that give access to secrets through tags.

For more information on tag strategy, enforcement, and governance, see the following resources:

Solution overview

In this post, I will walk you through the steps to implement this solution with an external IdP that is supported by IAM Identity Center (Okta, in this example).

Figure 2: Sample solution implementation

In the sample architecture shown in Figure 2, Arnav and Ana are users who each have the attributes department and AppID. These attributes are created and updated in the external directory—Okta in this case. The attribute department is automatically synchronized between IAM Identity Center and Okta using SCIM. The attribute AppID is a custom attribute configured on Okta, and is passed to AWS as a SAML assertion. Both users are configured to use the same IAM Identity Center permission set that allows them to retrieve the value of secrets stored in Secrets Manager. However, access is granted based on the tags associated with the secret and the attributes assigned to the user. 

For example, user Arnav can only retrieve the value of the RDS_Master_Secret_AppAlpha secret. Although both users work in the same department, Arnav can’t retrieve the value of the RDS_Master_Secret_AppBeta secret in this sample architecture.

Prerequisites

Before you implement the solution in this blog post, make sure that you have the following prerequisites in place:

  1. You have IAM Identity Center enabled for your organization and connected to an external IdP using SAML 2.0 identity federation.
  2. You have IAM Identity Center configured for automatic provisioning with an external IdP using the SCIM v2.0 standard. SCIM keeps your IAM Identity Center identities in sync with identities from the external IdP.

Solution implementation

In this section, you will learn how to enable access to Secrets Manager using ABAC by completing the following steps:

  1. Configure ABAC in IAM Identity Center
  2. Define custom attributes in Okta
  3. Update configuration for the IAM Identity Center application on Okta
  4. Make sure that required tags are assigned to secrets in Secrets Manager
  5. Create and assign a permission set with an ABAC policy in IAM Identity Center
  6. Define guardrails to enforce access using ABAC

Step 1: Configure ABAC in IAM Identity Center

The first step is to set up attributes for your ABAC configuration in IAM Identity Center. This is where you will be mapping the attribute coming from your identity source to an attribute that IAM Identity Center passes as a session tag. The Key represents the name that you are giving to the attribute for use in the permission set policies. You need to specify the exact name in the policies that you author for access control. For the example in this post, you will create a new attribute with Key of department and Value of ${path:enterprise.department}. For supported external IdP attributes, see Attribute mappings.

To configure ABAC in IAM Identity Center (console)

  1. Open the IAM Identity Center console.
  2. In the Settings menu, enable Attributes for access control.
  3. Choose the Attributes for access control tab, select Add attribute, and then enter the Key and Value details as follows.
    • Key: department
    • Value: ${path:enterprise.department}

Note: For more information, see Attributes for access control.

Step 2: Define custom attributes in Okta

The sample architecture in this post uses a custom attribute (AppID) on an external IdP for access control. In this step, you will create a custom attribute in Okta.

To define custom attributes in Okta (console)

  1. Open the Okta console.
  2. Navigate to Directory and then select Profile Editor.
  3. On the Profile Editor page, choose Okta User (default).
  4. Select Add Attribute and create a new custom attribute with the following parameters.
    • For Data type, enter string
    • For Display name, enter AppID
    • For Variable name, enter user.AppID
    • For Attribute length, select Less Than from the dropdown and enter a value.
    • For User permission, enter Read Only
  5. Navigate to Directory, select People, choose in-scope users, and enter a value for Department and AppID attributes. The following shows these values for the users in our example.
    • First name (firstName): Arnav
    • Last name (lastName): Desai
    • Primary email (email): [email protected]
    • Department (department): Digital
    • AppID: Alpha
       
    • First name (firstName): Ana
    • Last name (lastName): Carolina
    • Primary email (email): [email protected]
    • Department (department): Digital
    • AppID: Beta

Step 3: Update SAML configuration for IAM Identity Center application on Okta

Automatic provisioning (through the SCIM v2.0 standard) of user and group information from Okta into IAM Identity Center supports a set of defined attributes. A custom attribute that you create on Okta won’t be automatically synchronized to IAM Identity Center through SCIM. You can, however, define the attribute in the SAML configuration so that it is inserted into the SAML assertions.

To update the SAML configuration in Okta (console)

  1. Open the Okta console and navigate to Applications.
  2. On the Applications page, select the app that you defined for IAM Identity Center.
  3. Under the Sign On tab, choose Edit.
  4. Under SAML 2.0, expand the Attributes (Optional) section, and add an attribute statement with the following values, as shown in Figure 3 (an illustrative example follows this list):
    Figure 3: Sample SAML configuration with custom attributes

  5. To check that the newly added attribute is reflected in the SAML assertion, choose Preview SAML, review the information, and then choose Save.
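
For reference, an attribute statement that passes a custom attribute to IAM Identity Center for access control typically uses the AccessControl attribute name format. The following values are illustrative for the AppID attribute in this example; confirm the exact attribute name and name format against the IAM Identity Center and Okta documentation:

  • Name: https://aws.amazon.com/SAML/Attributes/AccessControl:AppID
  • Name format: Unspecified
  • Value: user.AppID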

Step 4: Make sure that required tags are assigned to secrets in Secrets Manager

The next step is to make sure that the required tags are assigned to secrets in Secrets Manager. You will review the required tags from the Secrets Manager console.

To verify required tags on secrets (console)

  1. Open the Secrets Manager console in the target AWS account and then choose Secrets.
  2. Verify that the required tags are assigned to the secrets in scope for this solution, as shown in Figure 4. In our example, the tags are as follows:
    • Key: department
    • Value: Digital
    • Key: AppID
    • Value: Alpha or Beta
    Figure 4: Sample secret configuration with required tags

Step 5a: Create a permission set in IAM Identity Center using ABAC policy

In this step, you will create a new permission set that allows access to secrets based on the principal attributes and resource tags.

When you enable ABAC and specify attributes, IAM Identity Center passes the attribute value of the authenticated user to AWS Security Token Service (AWS STS) as session tags when an IAM role is assumed. You can use access control attributes in your permission sets by using the aws:PrincipalTag condition key to create access control rules.

To create a permission set (console)

  1. Open the IAM Identity Center console and navigate to Multi-account permissions.
  2. Choose Permission sets, and then select Create permission set.
  3. On the Specify policies and permissions boundary page, choose Inline policy.
  4. For Inline policy, paste the following sample policy document and then choose Next. This policy allows users to retrieve the value of only those secrets that have resource tags that match the required user attributes (department and AppID in our example).
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListAllSecrets",
                "Effect": "Allow",
                "Action": [
                    "secretsmanager:ListSecrets"
                ],
                "Resource": "*"
            },
            {
                "Sid": "AuthorizetoGetSecretValue",
                "Effect": "Allow",
                "Action": [
                    "secretsmanager:GetSecretValue",
                    "secretsmanager:DescribeSecret"
                ],
                "Resource": "*",
                "Condition": {
                    "StringEquals": {
                        "secretsmanager:ResourceTag/department": "${aws:PrincipalTag/department}",
                        "secretsmanager:ResourceTag/AppID": "${aws:PrincipalTag/AppID}"
                    }
                }
            }
        ]
    }

  5. Configure the session duration, and optionally provide a description and tags for the permission set.
  6. Review and create the permission set.

Step 5b: Assign permission set to users in IAM Identity Center

Now that you have created a permission set with ABAC policy, complete the configuration by assigning the permission set to users to grant them access to secrets in one or more accounts in your organization.

To assign a permission set (console)

  1. Open the IAM Identity Center console and navigate to Multi-account permissions.
  2. Choose AWS accounts and select one or more accounts to which you want to assign access.
  3. Choose Assign users or groups.
  4. On the Assign users and groups page, select the users, groups, or both to which you want to assign access. For this example, I select both Arnav and Ana.
  5. On the Assign permission sets page, select the permission set that you created in the previous section.
  6. Review your changes, as shown in Figure 5, and then select Submit.
Figure 5: Sample permission set assignment

Step 6: Define guardrails to enforce access using ABAC

To govern access to secrets to your workforce users only through ABAC and to help prevent unauthorized access, you can define guardrails. In this section, I will show you some sample service control policies (SCPs) that you can use in your organization.

Note: Before you use these sample SCPs, you should carefully review, customize, and test them for your unique requirements. For additional instructions on how to attach an SCP, see Attaching and detaching service control policies.

Guardrail 1 – Enforce ABAC to access secrets

The following sample SCP requires the use of ABAC to access secrets in Secrets Manager. In this example, users and secrets must have matching values for the attributes department and AppID. Access is denied if those attributes don’t exist or if they don’t have matching values. Also, this example SCP allows only the admin role to access secrets without matching tags. Replace <arn:aws:iam::*:role/secrets-manager-admin-role> with your own information.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccesstoSecretsWithoutABACTag",
            "Effect": "Deny",
            "Action": [
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEqualsIfExists": {
                    "secretsmanager:ResourceTag/department": "${aws:PrincipalTag/department}",
                    "secretsmanager:ResourceTag/AppID": "${aws:PrincipalTag/AppID}"
                },
                "ArnNotLike": {
                    "aws:PrincipalArn": "<arn:aws:iam::*:role/secrets-manager-admin-role>"
                }
            }
        }
    ]
}

Guardrail 2 – Enforce tagging on secret creation

The following sample SCP denies the creation of new secrets that don’t have the required tag key-value pairs. In this example, the SCP denies creation of a new secret if it doesn’t include the department and AppID tag keys. It also denies creation if the department tag doesn’t have the value Digital and the AppID tag isn’t set to either Alpha or Beta. Also, this example SCP allows only the admin role to create secrets without matching tags. Replace <arn:aws:iam::*:role/secrets-manager-admin-role> with your own information.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCreatingResourcesWithoutRequiredTag",
      "Effect": "Deny",
      "Action": [
        "secretsmanager:CreateSecret"
      ],
      "Resource": [
        "arn:aws:secretsmanager:*:*:secret:*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:RequestTag/department": ["Digital"],
          "aws:RequestTag/AppID": ["Alpha", "Beta"]
        },
        "ArnNotLike": {
          "aws:PrincipalArn": "<arn:aws:iam::*:role/secrets-manager-admin-role>"
        }
      }
    }
  ]
}

Guardrail 3 – Restrict deletion of ABAC tags

The following sample SCP denies the ability to delete the tags used for ABAC. In this example, only the admin role can delete the tags department and AppID after they are attached to a secret. Replace <arn:aws:iam::*:role/secrets-manager-admin-role> with your own information.
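A minimal sketch of such an SCP, following the same pattern and the same placeholder admin role ARN as the other guardrails, could look like the following. It denies the secretsmanager:UntagResource action whenever the request includes the department or AppID tag keys, unless the caller is the admin role. As with the other samples, review, customize, and test it for your own requirements.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeletingABACTags",
      "Effect": "Deny",
      "Action": [
        "secretsmanager:UntagResource"
      ],
      "Resource": [
        "arn:aws:secretsmanager:*:*:secret:*"
      ],
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:TagKeys": [
            "department",
            "AppID"
          ]
        },
        "ArnNotLike": {
          "aws:PrincipalArn": "<arn:aws:iam::*:role/secrets-manager-admin-role>"
        }
      }
    }
  ]
}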

Guardrail 4 – Restrict modification of ABAC tags

The following sample SCP denies the ability to modify required tags for ABAC after they are attached to a secret. In this example, only the admin role can modify the tags department and AppID after they are attached to a secret. Replace <arn:aws:iam::*:role/secrets-manager-admin-role> with your own information.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyModifyingABACTags",
      "Effect": "Deny",
      "Action": [
        "secretsmanager:TagResource"
      ],
      "Resource": [
        "arn:aws:secretsmanager:*:*:secret:*"
      ],
      "Condition": {
        "Null": {
          "aws:ResourceTag/department": "false",
          "aws:ResourceTag/AppID": "false"
        },
        "ArnNotLike": {
          "aws:PrincipalArn": "<arn:aws:iam::*:role/secrets-manager-admin-role>"
        }
      }
    }
  ]
}

Test the solution 

In this section, you will test the solution by retrieving a secret using the Secrets Manager console. Your attempt to retrieve the secret value will be successful only when the required resource and principal tags exist, and have matching values (AppID and department in our example).

Test scenario 1: Retrieve and view the value of an authorized secret

In this test, you will verify whether you can successfully retrieve the value of a secret that belongs to your application.

To test the scenario

  1. Sign in to IAM Identity Center with your external IdP user. For this example, I log in as Arnav.
  2. On the IAM Identity Center dashboard, select the target account.
  3. From the list of available roles that the user has access to, choose the role that you created in Step 5a and select Management console, as shown in Figure 6. For this example, I select the SecretsManagerABACTest permission set.
    Figure 6: Sample IAM Identity Center dashboard

  4. Open the Secrets Manager console and select a secret that belongs to your application. For this example, I select RDS_Master_Secret_AppAlpha.

    Because the AppID and department tags exist on both the secret and the user, the ABAC policy allowed the user to describe the secret, as shown in Figure 7.

    Figure 7: Sample secret that was described successfully

  5. In the Secret value section, select Retrieve secret value.

    Because the value of the resource tags, AppID and department, matches the value of the corresponding user attributes (in other words, the principal tags), the ABAC policy allows the user to retrieve the secret value, as shown in Figure 8.

    Figure 8: Sample secret value that was retrieved successfully

Test scenario 2: Retrieve and view the value of an unauthorized secret

In this test, you will verify whether you can retrieve the value of a secret that belongs to a different application.

To test the scenario

  1. Repeat steps 1-3 from test scenario 1.
  2. Open the Secrets Manager console and select a secret that belongs to a different application. For this example, I select RDS_Master_Secret_AppBeta.

    Because the value of the resource tag AppID doesn’t match the value of the corresponding user attribute (principal tag), the ABAC policy denies access to describe the secret, as shown in Figure 9.

    Figure 9: Sample error when describing an unauthorized secret

Conclusion

In this post, you learned how to implement an ABAC strategy using attributes and to build dynamic policies that can simplify access management to Secrets Manager using IAM Identity Center configured with an external IdP. You also learned how to govern resource tags used for ABAC and establish guardrails to enforce access to secrets using ABAC. To learn more about ABAC and Secrets Manager, see Attribute-Based Access Control (ABAC) for AWS and the Secrets Manager documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on AWS Secrets Manager re:Post.

 
Want more AWS Security news? Follow us on Twitter.

Aravind Gopaluni

Aravind is a Senior Security Solutions Architect at AWS, helping Financial Services customers meet their security and compliance objectives in the AWS cloud. Aravind has about 20 years of experience delivering identity and access management and data protection solutions to global enterprises. Outside of work, Aravind loves to have a ball with his family and explore new cuisines.

Logging strategies for security incident response

Post Syndicated from Anna McAbee original https://aws.amazon.com/blogs/security/logging-strategies-for-security-incident-response/

Effective security incident response depends on adequate logging, as described in the AWS Security Incident Response Guide. If you have the proper logs and the ability to query them, you can respond more rapidly and effectively to security events. If a security event occurs, you can use various log sources to validate what occurred and understand the scope. Then, you can use the results of your analysis to take remediation actions. To learn more about logging best practices, see Configure service and application logging and Analyze logs, findings, and metrics centrally.

In this blog post, we will show you how to achieve an effective strategy for logging for security incident response. We will share logging options across the typical cloud application stack, log analysis options, and sample queries. AWS offers managed services, such as Amazon GuardDuty for threat detection and Amazon Detective for incident analysis. If you want to collect additional logs or perform custom analysis, then you should consider the options described in this blog post.

Selection of logs

To select the appropriate logs for security incident response, you should start with the common cloud application stack, which consists of the components and layers of your application deployed on AWS. For each component, we will describe the logging sources that you have. For each log source, we will describe why you should log it for security incident response, how to enable the logs, and what your log storage options are.

To select the logs for security incident response, first consider the following questions:

  • What are your compliance and regulatory requirements for logging?

    Note: Make sure that you comply with the log retention requirements of compliance standards relevant to your organization, as well as your organization’s incident response strategy.

  • What AWS services do you commonly use?
  • What AWS services have access to or contain sensitive data?
  • What threats are most relevant to you?

    Note: Performing a threat model of your cloud architectures can help you answer this question. For more information, see How to approach threat modelling.

Considering these questions can help you develop requirements for logging that will guide your selection of the following log sources.

AWS account logs

An AWS account is the first, fundamental component of an application deployed on AWS. The account is a container for your AWS resources. You create and manage your AWS resources in this account, and the account provides administrative capabilities for access and billing.

AWS CloudTrail

Within an account, each action performed is an API call. From a console sign-in to the deployment of each resource in an AWS CloudFormation stack, events are generated to provide transparency on what has occurred in the account. With AWS CloudTrail, you can log, continuously monitor, and retain account activity related to actions across supported AWS services. CloudTrail provides the event history of your account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. CloudTrail logs API calls as three types of events:

  • Management events (also known as control plane operations) show management operations that are performed on resources in your account. This includes actions like creating an Amazon Simple Storage Service (Amazon S3) bucket and setting up logging.
  • Data events (also known as data plane operations) show the resource operations performed on or within resources in your account. These operations are often high-volume activities, such as Amazon S3 object-level API activity (for example, GetObject, DeleteObject, and PutObject API operations) and AWS Lambda function invocation activity.
  • Insights events capture unusual API call rate or error rate activity in your account. You must enable these events on a trail in order to capture them, and they are logged to a different folder prefix in the destination S3 bucket for your trail. Insights events provide you with information such as the type of event, the incident time period, the associated API, the error code, and statistics to help you understand and respond effectively to unusual activity.

For security investigations, CloudTrail provides context on the creation, modification, and deletion of AWS resources. Therefore, CloudTrail is one of your most important log sources for security incident response in an AWS environment. You have four primary ways to set up CloudTrail:

  • CloudTrail Event history — CloudTrail is enabled by default with 90-day retention of management events that you can retrieve through the CloudTrail Event history facility using the console, AWS Command Line Interface (AWS CLI), or AWS SDK. You don’t need to take any action to get started using the Event history feature.
  • CloudTrail trail — For longer retention and visibility of data events, you need to create a CloudTrail trail and associate it with an S3 bucket and optionally with an Amazon CloudWatch log group. If you use AWS Organizations, you can create an organization trail that will log events for each account in the organization. By default, trails are multi-Region, so you don’t need to enable CloudTrail logs in each AWS Region.
  • AWS CloudTrail Lake — You can create a CloudTrail lake, which retains CloudTrail logs for up to seven years and provides a SQL-based querying facility. You don’t need to have a trail configured in your account to use CloudTrail Lake.
  • Amazon Security Lake — You can use Security Lake to ingest CloudTrail events, which include management and data events. You can further analyze these events with Amazon QuickSight or a third-party security information and event management (SIEM) tool.

AWS Config

Creating and modifying resources is an integral part of your account use. Tracking resource configuration changes made by calling the AWS API helps you review changes throughout the resource lifecycle. AWS Config provides a detailed view of the configuration of AWS resources in your account, examines the resource configurations periodically, and tracks configuration changes that were not initiated by the API. This includes how the resources are related to one another and how they were configured in the past so that you can see how configurations and relationships change over time.

You should enable AWS Config in each Region where you have resources deployed, and you should configure an S3 bucket to receive configuration history and configuration snapshot files, which contain details on the resources that AWS Config records. You can then review configuration compliance and analyze activities performed before, during, and after an event using the configuration history in S3. You should centralize AWS Config resource tracking across multiple accounts in the same organization by setting up an aggregator. You can use AWS Control Tower to automate the setup.

During a security investigation, you might want to understand how a resource configuration has changed over time. For example, you might want to investigate the changes to an S3 bucket policy before and after a security event that involves an S3 bucket. AWS Config provides a configuration history for resources that can help you track activities performed during a security event.

Operating system and application logs

To record interactions with applications, you must capture operating system (OS) and application logs, especially custom logs generated by the application development framework. OS and local application logs are relevant for security events that involve an Amazon Elastic Compute Cloud (Amazon EC2) instance. These instances could be standalone, in an auto scaling group behind a load balancer, or compute workloads for Amazon Elastic Container Service (Amazon ECS) or an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. OS logs track privileged use, processes, login events, access to directory services, and file system activity on a server. To analyze a potential compromise to an EC2 instance, you will want to review the security event logs for Windows OS and the system logs for Linux-based OS.

With the unified CloudWatch agent, you can collect metrics and logs from EC2 instances and on-premises servers. The CloudWatch agent aggregates log data into CloudWatch logs, which can then be exported to Amazon S3 for long-term retention and analyzed with a SIEM tool of your choice or Amazon Athena, as shown in Figure 1.

Figure 1: Aggregate OS and application logs using CloudWatch Logs

Database logs

With SQL databases, you can log transactions to help track modifications to the databases, such as additions or deletions. After an engine or system failure, you will need transaction logs to restore a database to a consistent state. Transaction logs are designed to be secure, and they require additional processing to access valuable information. It’s important that you understand data interactions during a security investigation, especially if your databases hold personally identifiable information (PII), financial and payments information, or other information subject to regulatory controls.

When you use Amazon Relational Database Service (Amazon RDS), you can publish database logs to Amazon CloudWatch Logs. For NoSQL databases, tracking atomic interactions is useful. You can find logs for managed NoSQL databases like Amazon DynamoDB in CloudTrail. DynamoDB integrates with CloudTrail, providing a record of actions taken by a user, role, or service. These events are classified as data events in CloudTrail.

Network logs

The goal of logging network activity is to gain insight into the communications that traverse your network. You might need this data for a variety of reasons, such as network troubleshooting or for use in a forensic investigation of suspected malware activity within your network.

In the AWS Cloud, you can log network activity by creating a proxy that logs network traffic or by using Traffic Mirroring to send a copy of network traffic to a logging server. You can adopt cloud-native approaches to capture this type of data using Amazon Route 53 DNS query logs and Amazon VPC Flow Logs.

There are also a variety of third-party networking solutions available like Palo Alto Networks and Fortinet, so you can continue to use the network logging mechanisms that you might have used in an on-premises environment.

Route 53 DNS query logs

You can configure Amazon Route 53 to log Domain Name System (DNS) queries. These logs are categorized into two groups:

  • Public DNS query logging
  • Resolver query logging

Logging public DNS queries against domains that you have hosted in Route 53 provides query information, such as the domain or subdomain requested, date and time stamp of the request, DNS record type, Route 53 edge location that responded, and response code.

The Amazon Route 53 Resolver comes with Amazon Virtual Private Cloud (Amazon VPC) by default. Capturing Resolver query logs provides the same information as public queries, as well as additional information such as the Instance ID of the resource that the query originated from. You can also capture Resolver query logs against different types of queries.

VPC Flow Logs

You can configure VPC Flow Logs for a VPC in your account to capture traffic that enters and moves around your VPC network, without the addition of instances or products. From these logs, you can review information, such as source and destination IP, ports, timestamps, protocol, account ID, and whether the traffic was accepted or rejected. For a complete list of the fields available for flow log records, see Available fields. You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, IP traffic going to and from each network interface in that subnet or VPC will be logged. For more details on VPC Flow Logs, see Logging IP traffic using VPC Flow Logs.
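As an illustration, a single record in the default flow log format looks similar to the following line (the values shown are made up); the fields are, in order: version, account ID, network interface ID, source address, destination address, source port, destination port, protocol, packets, bytes, start time, end time, action, and log status. This example represents a rejected inbound connection attempt to port 22 (SSH).

2 123456789010 eni-0a1b2c3d4e5f67890 198.51.100.7 172.31.16.139 45678 22 6 12 720 1418530010 1418530070 REJECT OK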

You can forward flow logs to Amazon CloudWatch Logs to create CloudWatch alarms based on metric filters. You can also forward flow logs to an S3 bucket for long-term retention and further analysis. Figure 2 demonstrates these configurations.

Figure 2: Sending VPC Flow logs to CloudWatch Logs and S3

Access logs

To identify access patterns for accessible endpoints, especially public endpoints, you should use access logs. Access logs capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. With services built in layers behind a load balancer, unless you track the X-Forwarded-For request header, the requestor’s context is lost. Access logs help bridge that gap during investigations and analysis.

Amazon S3 server access logs

Server access logs are critical for tracking object-level access when you use S3 buckets to store confidential or sensitive data. You can also turn on CloudTrail to capture S3 data events. You can store access logs in S3 buckets for long-term storage for compliance purposes and to run analyses during and after an event.

Load balancing logs

Elastic Load Balancing provides access logs that capture detailed information about requests sent to load balancers. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use this log to analyze traffic patterns and to troubleshoot issues.

Access logging is an optional feature of Elastic Load Balancing that is turned off by default. To enable access logs for load balancers, see Access logs for your Application Load Balancer.

If you implement your own reverse proxy for load balancing needs, make sure that you capture the reverse proxy access logs. You can use the unified CloudWatch agent to forward the logs to CloudWatch. As with OS logs, you can export CloudWatch logs to an S3 bucket for long-term retention and analysis.

If you use an Amazon CloudFront distribution as the public endpoint for end users with load balancers as the custom origin, then load balancing access logs will represent the CloudFront distribution as the requestor, rather than the actual end user. If this information doesn’t add value to your incident handling process, then you can use CloudFront access logs as the log source that provides end user request details.

CloudFront access logs

You should enable standard logs, also known as access logs, when using CloudFront. Specify an S3 bucket where you want CloudFront to save the files.

CloudFront access logs are delivered on a best-effort basis. For information about requests made to a distribution in real time, use real-time logs that are delivered within seconds of receiving the requests. You should use real-time logs to monitor, analyze, and take action based on content delivery performance. For more details on the fields available from these logs, see the CloudFront standard log file format.

AWS WAF logs

When associated with a supported resource like a CloudFront distribution, Amazon API Gateway REST API, Application Load Balancer, AWS AppSync GraphQL API, Amazon Cognito user pool, or AWS App Runner, AWS WAF can help you monitor HTTP and HTTPS requests that are forwarded to the resource. You should configure web access control lists (ACLs) to gain fine-grained control over the requests, and enable logging for such ACLs to get detailed information about traffic that is analyzed by AWS WAF. Log information includes time of the request being received by AWS WAF from the AWS resource, details about the request, and the AWS WAF rules that the request matched. You can use this log information to monitor access patterns of public endpoints and configure rules to inspect requests in detail. For more information about AWS WAF logging, see Logging web ACL traffic.

Serverless logs

Serverless computing has become increasingly popular in the cloud-computing space. It provides on-demand compute power in a relatively short burst, meaning that cloud-based instances don’t need to be provisioned and kept around, idle, when there are no tasks to be completed. Although more and more compute tasks are being moved to serverless solutions, the need to log has not changed, but how the logs are generated has. In a serverless environment, security investigations not only benefit from logs that demonstrate the interactions and changes made by the code deployed, but that also document changes to the deployed code itself and access permissions of the Lambda execution role that is granting privileged access.

AWS Lambda

The logging of Lambda functions involves two components: how the function itself is operating, and what is happening inside the function (what your code is actually doing).

The logging of a Lambda function itself occurs through data events captured by CloudTrail. As noted earlier in this post, you must configure data events on a trail created in CloudTrail. During configuration, you will need to specify the function from which logs will be captured by your trail, and the destination S3 bucket where they will be stored. These logs contain details on the invocation of the function and help identify the IAM principals that called the Invoke API for Lambda.

AWS Lambda automatically monitors Lambda functions on your behalf and sends logs to CloudWatch. Your Lambda function comes with a CloudWatch Logs log group and a log stream for each instance of your function. The Lambda runtime environment sends details about each invocation to the log stream, and relays logs and other output from your function’s code. For more details on how to monitor Lambda functions, see Accessing Amazon CloudWatch logs for AWS Lambda.
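As a sketch of how this looks in practice, the following Athena query assumes a CloudTrail table like the one used in the Sample queries section later in this post, and a trail with Lambda data events enabled; it lists recent Lambda Invoke calls and the IAM principals that made them.

-- Illustrative only: assumes a CloudTrail table named "cloudtrail" with Lambda data events
SELECT eventtime, eventname, useridentity.arn
FROM cloudtrail
WHERE eventsource = 'lambda.amazonaws.com'
AND eventname = 'Invoke'
AND eventtime >= '2023-01-01T00:00:00Z'
ORDER BY eventtime DESC
LIMIT 100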

Log analysis

For incident response, you need to be able to analyze and query your logs to validate what occurred and to understand the scope.

To begin, you can aggregate logs from various sources in S3 buckets for long-term storage, and you can integrate that data with query tools for further investigation. Logs can be exported and either parsed directly or ingested by another tool to help with the analysis. The following are some options that you can use to query these logs:

  • Amazon Athena — You can directly query CloudTrail events stored in S3 with Athena using SQL commands, specifying the LOCATION of the log files. You would generally use this approach if you have advanced queries to run, and you don’t have a SIEM. To set up Athena to query logs, you can use this open-source solution from AWS.
  • Amazon OpenSearch Service — OpenSearch is a distributed search and log analytics suite. Because it’s open source, it can ingest logs from more than just AWS log sources. To set this up, you can use this open-source SIEM solution from AWS.
  • CloudTrail Event History — Either from the console, or programmatically, you can query CloudTrail management events from the last 90-day period. This is ideal for when you have simple queries to make within the last 90 days, and you don’t need stored logs or more complex queries.
  • AWS CloudTrail Lake — Either from the console, or programmatically, you can query stored events in your configured CloudTrail Lake from the time of its configuration, up until the maximum storage duration of 2,557 days (7 years) from the time that you make your query. This approach allows for SQL-based queries, and it is ideal for when you need to make more complex queries against events, but don’t require the additional features of a SIEM solution.
  • Parse through raw JSON using CLI — This is achieved programmatically and parsed through terminal commands. It’s more a legacy method of parsing through logs. You might choose to use this approach for analysis if another service or solution isn’t feasible (for example, if you can’t use the service due to your corporate security policy).
  • Third-party SIEM — A third-party SIEM might be ideal if you already have a SIEM solution on AWS or elsewhere, and you don’t need a duplicated solution elsewhere. Typically, SIEM solutions will import logs from an S3 bucket and process and index events for analysis. To learn more about SIEM options, see the SIEM solutions in the AWS Marketplace, or the AWS Security Competency Partners for a partner local to you with threat detection and incident response (TDIR) expertise.

Sample queries

In this section, we provide samples of SQL queries. Both Athena and CloudTrail Lake accept SQL queries, but the following samples have been tested for use in Athena only. This is because some samples are for VPC Flow Logs, which you can’t query from CloudTrail Lake. To query CloudTrail logs in Athena, you must first create a table definition that points to the location of your logs stored in S3. You can do this from the CloudTrail Events console by using a hyperlinked suggestion, or from the Athena console directly. Alternatively, for Athena, you can use the AWS Security Analytics Bootstrap.

For each of these queries, you might need to modify some of the fields, such as the time frame that you are investigating, the IAM entity involved, and the account and Region in scope. For example, you might want to modify the time frame based on the current time and when you believe the security event began. This often involves expanding the time frame after running additional queries and learning more about the scope and timeline.

By using partitions for tables, you can restrict the amount of data scanned by each Athena query, helping to improve performance and reduce cost. For example, you can partition your CloudTrail Athena table manually or by using partition projection. You can include the partition column (for example, the timestamp) in your queries to limit the amount of data scanned.
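For example, assuming a hypothetical CloudTrail table with partition columns named region and day (your table's partition columns may differ), a query similar to the following scans only the partitions that it names:

-- Illustrative only: region and day are assumed partition column names
SELECT eventtime, eventname, useridentity.arn
FROM cloudtrail
WHERE region = 'us-east-1'
AND day >= '2023/01/01' AND day <= '2023/01/31'
AND eventname = 'DeleteTrail'
ORDER BY eventtime DESC
LIMIT 100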

Unauthorized attempts

When a security event occurs, you might want to review API calls that were attempted but failed due to the IAM principal not having access to perform the action on that resource. To discover this activity, run the following query (be sure to modify the time window first):

SELECT *
FROM cloudtrail
WHERE errorcode IN ('Client.UnauthorizedOperation','Client.InvalidPermission.NotFound','Client.OperationNotPermitted','AccessDenied')
AND useridentity.arn LIKE '%iam%'
AND eventtime >= '2023-01-01T00:00:00Z'
AND eventtime < '2023-03-01T00:00:00Z'
ORDER BY eventtime desc

This sample query can help you identify whether certain IAM principals have a significant amount of unauthorized API calls, which can indicate that an IAM principal is compromised.

Rejected TCP connections

During a security event, the unauthorized user that is interacting with the resources in your account is probably trying to establish persistence through the network layer. To get a list of rejected TCP connections and extract from it the day that these events occurred, run the following query:

SELECT day_of_week(date) AS day, date, interface_id, srcaddr, action, protocol
FROM vpc_flow_logs
WHERE action = 'REJECT' AND protocol = 6
LIMIT 100;

Connections over older TLS versions

You might want to see how many calls to AWS APIs were made using older versions of the TLS protocol, as part of a forensic follow-up or a discovery job after a risk analysis. You can get this data by querying CloudTrail logs.

SELECT eventSource, COUNT(*) AS numOutdatedTlsCalls
FROM cloudtrail
WHERE tlsDetails.tlsVersion IN ('TLSv1', 'TLSv1.1')
AND eventTime > '2023-01-01 00:00:00'
GROUP BY eventSource
ORDER BY numOutdatedTlsCalls DESC

Filter connections from an IP

With an IP address that you’d like to investigate, as a part of your forensic analysis, you might want to see the connections made to resources in a VPC. You can obtain this information by querying VPC Flow Logs. As with the server access logs, if you’re using Athena, you will first need to create a new table.

SELECT day_of_week(date) AS day, date, srcaddr, dstaddr, action, protocol
FROM vpc_flow_logs
WHERE day >= '2023/01/01' AND day < '2023/03/01' AND srcaddr LIKE '172.50.%'
ORDER BY day DESC
LIMIT 100

Investigate user actions

If you have identified a user who has been compromised, or that you suspect has been compromised, you might want to know the API calls that they made over a specific time period. Understanding the activity of a user can help you understand the scope of impact during an incident, as well as the reach of user permissions when you design your access management strategy.

SELECT eventID, eventName, eventSource, eventTime, userIdentity.arn AS user
FROM cloudtrail
WHERE userIdentity.arn LIKE '%<username>%'
AND eventTime > '2022-12-05 00:00:00'
AND eventTime < '2022-12-08 00:00:00'

Conclusion

It is essential that you capture logs from various layers within your application architecture, so that you can effectively respond to a security event at various layers of the application stack. If a security event occurs, logs can help provide a clear picture of what happened and the scope of the affected resources. This post helps you build a logging strategy for security incident response by understanding what logs you want to analyze, where you want to store those logs, and how you will analyze them.

Further resources

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security, Identity, & Compliance re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anna McAbee

Anna is a Security Specialist Solutions Architect focused on threat detection and incident response at AWS. Before AWS, she worked as an AWS customer in financial services on both the offensive and defensive sides of security. Outside of work, Anna enjoys cheering on the Florida Gators football team, wine tasting, and traveling the world.

Pratima Singh

Pratima is a Security Specialist Solutions Architect with Amazon Web Services based out of Sydney, Australia. She is a security enthusiast who enjoys helping customers find innovative solutions to complex business challenges. Outside of work, Pratima enjoys going on long drives and spending time with her family at the beach.

Ciarán Carragher

Ciarán is a Security Specialist Solutions Architect, based out of Dublin, Ireland. Before becoming a Security SSA, Ciarán was an AWS Cloud Support Engineer for AWS security services. Outside of work, Ciarán is an avid computer gamer as well as a serving army reservist in the Irish Defence Forces.

Center for Information Security (CIS) unveils Azure Foundations Benchmark v2.0.0

Post Syndicated from Marla Rosner original https://blog.rapid7.com/2023/03/23/center-for-information-security-cis-unveils-azure-foundations-benchmark-v2-0-0/

The Center for Information Security (CIS) recently unveiled the latest version of their Azure Foundations Benchmark—Version 2.0.0. This is the first major release since the benchmark was originally released more than 4 years ago, which could lead you to believe that this update would come with a bunch of significant changes. However, this release actually brings fewer impactful changes than the minor releases that preceded it.

Instead of sweeping changes, the update includes a number of reconciled and renumbered sections along with some refreshed language and branding.

Rapid7 is actively reviewing the new recommendations and evaluating whether and how to turn them into insights within InsightCloudSec.

So the changes were minor, but what were they?

Of the 10 sections that make up the benchmark, four sections were expanded with new recommendations:

  • Section 1 (Identity and Access Management), which was also the only section that had a recommendation removed
  • Section 4 (Database Services)
  • Section 5 (Logging and Monitoring)
  • Section 7 (Virtual Machines)

Five sections had no changes:

  • Section 3 (Storage Accounts)
  • Section 6 (Networking)
  • Section 8 (Key Vault)
  • Section 9 (AppService)
  • Section 10 (Miscellaneous)

Section 2 (Microsoft Defender) did not have any additions or subtractions, but did have some alterations related to numbering and categorization.

Section 1 (Identity and Access Management)

This section covers a wide range of recommendations centered around identity and access policies. For 2.0.0, there was one addition:

Recommendation: 1.3 – Ensure that ‘Users can create Azure AD Tenants’ is set to ‘No’

Why it Matters: It is best practice to only allow an administrator to create new tenants. This prevents users from creating new Azure AD or Azure AD B2C tenants and ensures that only authorized users are able to do so.

As noted above, this was also the only section from which a recommendation was removed entirely:

Removed Recommendation: 1.5 – Ensure that ‘Restore multi-factor authentication on all remembered devices’ is enabled (this recommendation has been replaced in v2.0.0)

Why it Was Removed: This recommendation was likely removed, as it is somewhat redundant to what is now recommendation 1.1.4 (Ensure that ‘Allow users to remember multi-factor authentication on devices they trust’ is enabled). Essentially, the updated recommendation asserts you should not allow users to bypass MFA for any device.

Section 4 (Database Services)

This section focuses on securing database services within Azure environments—such as Azure SQL or Cosmos DB. CIS added a recommendation to this section, specifically for Cosmos DB, that guides users to leverage Active Directory and Azure RBAC whenever possible.

Recommendation: 4.5.3 – Use Azure Active Directory (AAD) Client Authentication and Azure RBAC where possible.

Why it Matters: Cosmos DB, Azure’s native NoSQL database service, can use tokens or AAD for client authentication which in turn will use Azure RBAC for authorization. Using AAD is significantly more secure because AAD handles the credentials and allows for MFA and centralized management, and Azure RBAC is better integrated with the rest of Azure.

Section 5 (Logging and Monitoring)

The two new recommendations within this section are targeted toward ensuring you’ve properly configured your environment for log management, including collecting the necessary logs (flow logs, audit logs, activity logs, etc.) and also ensuring that the storage location for those logs is secure.

Recommendation: 5.3.1 – Ensure Application Insights are Configured

Why it Matters: Application Insights within Azure acts as an application performance monitoring solution, providing valuable data into how well an application performs and additional information when performing incident response. The types of log data collected include application metrics, telemetry data, and application trace logging data, which provide organizations with detailed information about application activity and application transactions. Both data sets help organizations adopt a proactive and retroactive means to handle security and performance-related metrics within their modern applications.

Recommendation: 5.5 – Ensure that SKU Basic/Consumption is not used on artifacts that need to be monitored

Why it Matters: Basic or Free SKUs in Azure, while cost effective, have significant limitations in terms of what can be monitored and what support can be realized from the team at Microsoft. Typically, these SKUs do not have service SLAs, and Microsoft generally refuses to provide support for them. Because of this, Basic/Free SKUs are not recommended for production workloads. While upgrading to the Standard tier may be a bit more expensive, you’ll receive more support from Microsoft, as well as the ability to generate and consume more detailed information via native monitoring services such as Azure Monitor.

Section 7 (Virtual Machines)

Section 7 is focused on securing virtual machines within your Azure environment. Recommendations in this section include ensuring that your VMs are utilizing managed disks and that OS and Data disks are encrypted with Customer Managed Keys (CMK), just to name a few. There was one new recommendation in this section.

Recommendation: 7.1 – Ensure an Azure Bastion Host Exists

Why it Matters: The Azure Bastion service allows organizations a more secure means of accessing Azure Virtual Machines over the Internet without assigning public IP addresses to them. The Azure Bastion service provides Remote Desktop Protocol (RDP) and Secure Shell (SSH) access to Virtual Machines using TLS within a web browser. This is aimed at preventing organizations from opening up 3389/TCP and 22/TCP to the Internet on Azure Virtual Machines.

Using InsightCloudSec to Implement and Enforce CIS Azure Foundations Benchmark 2.0.0

InsightCloudSec continuously assesses your entire cloud environment—whether single cloud or hosted across multiple platforms—for compliance with organizational standards. It detects noncompliant resources and unapproved changes within minutes. The platform continuously monitors your environment to ensure you’ve properly implemented the necessary controls as recommended by CIS for securing your cloud workloads running in Azure environments.

InsightCloudSec can instantly detect whenever an account or resource drifts from compliance. The platform comes out of the box with 30+ compliance packs, including a dedicated pack for the CIS Azure Foundations Benchmark 2.0.0. A compliance pack within InsightCloudSec is a set of checks that can be used to continuously assess your cloud environments for compliance with a given regulatory framework, or industry or provider best practices.

As you can see in the screenshot below, InsightCloudSec has a host of checks and insights that directly align to recommendations within the CIS Azure Foundations Benchmark 2.0.0.

When you dive deeper into a given insight, you’re provided with detail on how many resources are in violation of a given check and a relative risk score that outlines just how risky a given violation is.

Below is an example from CIS Azure Foundations Benchmark 2.0.0, specifically a set of resources that are in violation of the check ‘Storage Account Older than 90 Days Without Access Keys Rotated’. You’re provided with an overview of the insight, including why it’s important to implement and the risks associated with not doing so.

The platform doesn’t stop there, however. You’re also provided with the necessary remediation steps as advised by CIS themselves and, if you so choose, recommended automations that can be created using native bots within InsightCloudSec, for folks who would prefer to completely remove the human effort involved in enforcing compliance with this policy.

If you’re interested in learning more about how InsightCloudSec helps continuously and automatically enforce cloud security standards, be sure to check out the demo!

Reduce Risk and Regain Control with Cloud Risk Complete

Post Syndicated from Marla Rosner original https://blog.rapid7.com/2023/03/23/reduce-risk-and-regain-control-with-cloud-risk-complete/

Over the last 10 to 15 years, organizations have been migrating to the cloud to take advantage of the speed and scale it enables. During that time, we’ve all had to learn that new cloud infrastructure means new security challenges, and that many legacy tools and processes are unable to keep up with the new pace of innovation.

The greater scale, complexity, and rate of change associated with modern cloud environments means security teams need more control to effectively manage organizational risk. Traditional vulnerability management (VM) tools are not designed to keep pace with highly dynamic cloud environments, creating coverage gaps that increase risk and erode confidence.

In the report “Forecast Analysis: Cloud Infrastructure and Platform Services, Worldwide” Gartner® estimates that “By 2025, more than 90% of enterprise cloud infrastructure and platform environments will be based on a CIPS [cloud infrastructure and platform services] offering from one of the top four public cloud hyperscale providers, up from 75% to 80% in 2021.”

In the face of all this rapid change, how do you keep up?

Rapid7’s Cloud Risk Complete Is Here

The future of risk management is seamless coverage across your entire environment. That’s why our new offer, Cloud Risk Complete, is the most comprehensive solution to detect and manage risk across cloud environments, endpoints, on-premises infrastructure, and web applications.

With Cloud Risk Complete, you can:

  • Gain unlimited risk coverage with a unified solution purpose-built for hybrid environments, providing continuous visibility into your on-prem infrastructure, cloud, and apps, all in a single subscription.
  • Make context-driven decisions by intelligently prioritizing risk based on context from every layer of your attack surface, driven by a real risk score that ties risk to business impact.
  • Enable practitioner-first collaboration with native, no-code automation to help teams work more efficiently and executive-level dashboards that provide visibility into your risk posture.

Cloud Risk Complete

Analyze, respond to, and remediate risks without a patchwork of solutions or additional costs.

What makes this solution different is that we started with the outcome and worked backwards to bring to life a solution that meets the needs of your security program.

  • While most solutions offer daily scans of your cloud environment, we deliver real-time visibility into everything running across your environment. So, you’re never working with stale data or running blind.
  • While most solutions only provide insight into a small portion of your environment, we provide a unified view of risk across your entire estate, including your apps, both in the cloud and on-prem.
  • While most solutions show you a risk signal and leave the analysis and remediation process up to you, we provide step-by-step guidance on how to remediate the issue, and can even take immediate action with automated workflows that remove manual effort and accelerate response times.

Risk Is Pervasive. Your Cloud Security Should Be Too

Cloud Risk Complete stands apart from the pack with best-in-class cloud vulnerability assessment and management, cloud security posture management, cloud detection and response, and automation—in a single subscription.

Unlimited ecosystem automation enables your team to collaborate more effectively, improve the efficiency of your risk management program, and save time. With all of this, you can eliminate multiple contracts and vendors that are stretching budgets and enjoy a higher return on investment.

Get comprehensive cloud risk coverage across your business—without compromise. Discover Cloud Risk Complete today.

MITRE ATT&CK® Mitigations: Thwarting Cloud Threats With Preventative Policies and Controls

Post Syndicated from James Alaniz original https://blog.rapid7.com/2023/03/16/mitre-attack-mitigations-thwarting-cloud-threats-with-preventative-policies-and-controls/

As IT infrastructure has become more and more sophisticated, so too have the techniques and tactics used by bad actors to gain access to your environment and sensitive information. That’s why it’s essential to implement robust security measures to protect your organization. One way to do this is to utilize the MITRE ATT&CK framework, which provides a comprehensive guide to understanding and defending against cyber threats.

Who is MITRE and what is the MITRE ATT&CK Framework?

MITRE is a non-profit organization supporting various U.S. government agencies across a variety of fields, but primarily focusing on defense and cybersecurity. The MITRE ATT&CK® Framework is a free knowledge base of adversarial tactics and techniques based on real-world observations.

It is a tremendous resource for any security practitioner, and can be used as a foundational resource for developing specific threat models and methodologies in both the public and private sectors. The framework is curated by the folks at MITRE, but anyone is able to contribute information or findings for review, as they look to crowdsource as much intelligence as humanly possible to better serve the broader community.

The ATT&CK Framework is intended to provide insights into the goals of hackers as well as the techniques and tactics they are likely to use. These insights provide organizations and the security teams that protect them with a detailed roadmap to plan for, detect, and mitigate threats and risk. Once an organization has identified potential attack vectors, it can implement the appropriate mitigations.

Wait, but what are Mitigations?

Under each technique outlined within the ATT&CK Framework is a section on relevant mitigations. Mitigations represent security concepts and classes of technologies that can be used to prevent a technique or sub-technique from being successfully executed. It is a large and comprehensive list, so MITRE has broken these mitigations into two primary groups: “Enterprise,” focusing on mitigations that prevent hackers from breaching a corporate network, and “Mobile,” which—as you might have guessed—is dedicated to protecting against attacks targeting mobile devices.

While these mitigations cannot guarantee that you won’t be breached, they serve as a great baseline for teams looking to do whatever they can to avoid an attacker gaining access to their sensitive data.

Example mitigations and what they entail

As noted above, MITRE provides a wide range of mitigations. For the purpose of this post, let’s look at a few example mitigations to give a sense of what they entail.

Before we dive in, a quick note: It’s very important to select and implement mitigations based on your organization’s specific threat landscape and unique aspects of your environment. You’ll want to prioritize the mitigations that address the most significant risks to business operations and data first to effectively mitigate risk and the likelihood of a breach.

Mitigation: Data Backup (ID: M1053)

Backing up data from end-user systems and servers is critical to ensure you’re not at risk of attack types that center around deletion or defacement of sensitive organizational and customer data, such as Data Destruction (T1485) and Disk Wipe (T1561). The only recourse to these types of attacks is to have a solid disaster recovery plan. Security teams should regularly back up data and store backups in a secure location that is separate from the rest of the corporate network, so that the backups aren’t also compromised. This way, you’ll have the ability to quickly recover lost data and restore your systems to a steady state should a bad actor delete your data or hold it for ransom.

Mitigation: Account Use Policies (ID: M1036)

This mitigation is geared toward preventing unwanted or malicious access to your network via attack types such as brute force (T1110) and multi-factor authentication request generation (T1621). By establishing policies such as limiting the number of attempts a user has to properly enter their credentials and passwords before being locked out of their account, you can thwart bad actors that are simply repeatedly guessing your passwords until they gain access. This control needs to be configured in such a way that effectively prevents these types of attacks, but without being so strict that legitimate users within your organization are denied access to systems or data they need to perform their jobs.

Mitigation: Encrypt Sensitive Information (ID: M1041)

As you can probably guess from the name, this mitigation focuses on implementing strong data encryption hygiene. Given that the end goal of many breaches is to gain access to sensitive information, it will come as no surprise that this mitigation plays a critical role in protecting against a wide range of attack techniques, including adversary-in-the-middle (T1557), improper access of data within misconfigured cloud storage buckets (T1530), and network sniffing (T1040), just to name a few. Properly encrypting data—both at rest and in transit—is a critical step in fortifying against these types of attacks.

There are several MITRE tactics and techniques, such as those highlighted above, where the only mitigation for an attack is to ensure your organization’s security policies and controls are configured properly. While it can be a daunting task to ensure you maintain compliance with all policies and controls across your entire environment, InsightCloudSec offers out-of-the-box insights that are mapped directly to each mitigation.

Leveraging InsightCloudSec to implement and track performance against MITRE ATT&CK Mitigations

With this new pack, InsightCloudSec lets you easily audit and assess your entire environment against the recommended mitigations provided by MITRE and ensure that you are taking every step possible to stop bad actors from gaining unauthorized access to your network and sensitive information.

InsightCloudSec continuously assesses your entire cloud environment — whether single cloud or hosted across multiple platforms — for compliance with organizational standards. It detects noncompliant resources and unapproved changes within minutes. The platform continuously monitors your environment to ensure you’ve properly implemented the necessary controls as recommended by MITRE for thwarting attackers, regardless of which technique or sub-technique they utilize.

InsightCloudSec can instantly detect whenever an account or resource drifts from compliance. The platform comes out of the box with 30+ compliance packs, including a dedicated pack for MITRE Mitigations. A compliance pack within InsightCloudSec is a set of checks that can be used to continuously assess your cloud environments for compliance with a given regulatory framework, or industry or provider best practices.

If you’re interested in learning more about how InsightCloudSec helps continuously and automatically enforce cloud security standards, be sure to check out the demo!

On-demand InsightCloudSec Demo

Protect your cloud and container environments from misconfigurations, policy violations, threats, and IAM challenges.

Three Steps for Ramping Up to Fully Automated Remediation

Post Syndicated from Marla Rosner original https://blog.rapid7.com/2023/03/15/three-steps-for-ramping-up-to-fully-automated-remediation/

The number one threat to cloud security is misconfiguration of resources, and frankly, it’s not hard to understand why. The cloud is getting bigger, more tangled, and flat-out more unmanageable by the day.

In modern Amazon Web Services (AWS) environments, there are typically millions of resources being added and spread across various environments on the regular, and each resource has its own set of configurations, roles, and permissions. The result of this tangled web is that for one in four organizations, resolving misconfigurations manually takes at least a week—and for one in ten, it takes over a month. What’s a security team to do?

The answer: don’t try to resolve misconfigurations manually. At least, not entirely manually. Why do it all yourself when automation can help?

Benefits of automation include:

  • Time saved: Common issues are handled automatically, dramatically decreasing the hours teams spend addressing them.
  • Increased security and reduced risk: You can set up remediation automation to take immediate action before a security event occurs.
  • Improved compliance: Proof of automated remediation results helps keep cloud environments compliant.
  • Consistency: Repeatable workflow actions ensure consistent results across your environment.

Of course, as great as all that sounds, implementing automation can’t be done overnight. We’re talking about major, pervasive change to your processes and workflows; setting that up within your organization takes time, and a good roadmap. We’re here to help you get started with an incremental crawl, walk, run approach.

1. Crawl: Use automatic notifications to find misconfigurations

Using automated notifications is the first step to implementing an automated remediation strategy. Automated notifications can alert resource owners of misconfigurations through whatever channel they prefer, and even offer recommended steps for remediation. This eliminates the need for security teams to work to identify the owner of a resource, and significantly speeds the remediation process—even when the actual fix is done manually.

Automated notifications are a great way to dip your toes in the water and start getting used to working with an automatic process, without having to make any huge changes just yet.

2. Walk: Meet security policies and standards automatically

Once you’ve gotten comfortable with automated notifications, a great next step is to implement automation for security policies and standards associated with compliance. By automating compliance in this way, you’ll still have a lot of control over the whole process, but your automation can now help resolve a much wider range of issues.

For this middle phase, you can establish the standards and policies your organization wants to follow—whether those are standard frameworks or custom policies—and use automation to enforce them. This means using specific actions like identifying when an account has a required service turned off and automatically turning it back on. This will also be a huge help in maintaining good security hygiene for your organization.

3. Run: Embrace automation to address risk signals and control costs

After you’ve spent some time working with automated notifications and policy enforcement—and verified that automation isn’t going to break anything in your cloud environment—you’ll be ready to make the full plunge. That means using automation for a full range of tasks, including:

  • Identifying misconfigurations or noncompliant actions
  • Taking remedial action
  • Updating resource configurations, roles, and permissions
  • Cleaning up or removing unused or over-provisioned resources

Implementing a full process like this for automated remediation drastically saves time and creates efficiencies, and ensures a consistent approach to fixing issues across your cloud.

Adding new technologies and workflows to your organization can feel like a daunting task, but it doesn’t have to be. All you need is a proper plan to put it into action.

Ready to learn more about how to automate remediation for your organization? Rapid7 and AWS have teamed up for a full ebook on the subject.

Download it now!

Microsoft Defender for Cloud Management Port Exposure Confusion

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2023/03/14/microsoft-defender-for-cloud-management-port-exposure-confusion/

Prior to March 9, 2023, Microsoft Defender for Cloud incorrectly marked some Azure virtual machines as having secured management ports, including SSH (port 22/TCP), RDP (port 3389/TCP), and WinRM (port 5985/TCP), when in fact one or more of these ports were exposed to the internet. This occurred when the Network Security Group (NSG) associated with the virtual machine contained a rule that allowed access to one of these ports from the IPv4 range “0.0.0.0/0”. Defender for Cloud would only detect an open management port if the source in the port rule was set to the literal alias of “Any”. Although the CIDR-notated network of "/0" is often treated as synonymous with "Any," the two are not equivalent in Defender for Cloud’s logic.

Note that as of this writing, the same issue appears when using the IPv6 range “::/0” as a synonym for "any" and Microsoft has not yet fixed this version of the vulnerability.

Product Description

Microsoft Defender for Cloud is a cloud security posture management (CSPM) solution that provides several security capabilities, including the ability to detect misconfigurations in Azure and multi-cloud environments. Defender for Cloud is described in detail at the vendor’s website.

Security groups are a concept that exists in both Azure and Amazon Web Services (AWS) cloud environments. Similar to a firewall, a security group allows you to create rules that limit what IP addresses/ranges can access which ports on one or more virtual machines in the cloud environment.

Credit

This issue was discovered by Aaron Sawitsky, Senior Manager for Cloud Product Integrations at Rapid7. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

If an Azure Virtual Machine is associated with a Network Security Group that exposes “management ports” such as RDP (Remote Desktop Protocol on port 3389/TCP) or SSH (Secure Shell protocol on port 22/TCP) to the "Any" pseudo-class as the "Source," Microsoft Defender for Cloud will create a security recommendation to highlight that the management port is open to the internet. This allows an administrator to easily recognize that a virtual machine in their environment has one or more over-exposed server management ports.

However, prior to March 9, if the Network Security Group was instead configured such that a “management port” like RDP or SSH was exposed to “0.0.0.0/0” as a source (the entire range of possible IPv4 addresses), no security recommendation was created and the configuration was incorrectly marked as “Healthy.”

The effect is demonstrated in the screenshots below:

[Screenshots: Microsoft Defender for Cloud management port recommendations]

Because of this network scope confusion, Azure users can easily and accidentally expose management ports to the entire internet and evade detection by Defender for Cloud.

We suspect that other Defender for Cloud features that check for the "any-ness" of ingress tests are similarly affected, but we have not comprehensively tested for other manifestations of this issue.

Impact

We can imagine two cases where this unexpected behavior in Defender for Cloud could be useful for attackers. First, it’s likely that administrators are unaware of any practical semantic difference between "Any" and "0.0.0.0/0" or “::/0”, since these terms are often used interchangeably in other networking products, most notably when configuring AWS Security Groups. As a result, this misconfiguration could be applied accidentally by a legitimate administrator, but remain undetected by the person or process responsible for monitoring Defender for Cloud security recommendations. This is the scenario most administrators are likely to face.

More maliciously, an attacker who has already compromised an Azure-hosted virtual machine could leverage this confusion to avoid post-exploit detection by Defender for Cloud. This makes repeated, post-exploit access from several different sources much easier for more sophisticated attackers. In this case, the "attacker" will often be an insider who is merely subverting their own IT security organization for ostensibly virtuous, just-get-it-done reasons, such as testing a configuration in production but forgetting to re-limit the exposure.

Note that more exotic combinations of subnets could be used to achieve the same effect; for example, an administrator could define "0.0.0.0/1" and "128.0.0.0/1" source rules, which together would have exactly the same effect as a single "0.0.0.0/0" source rule. Or, even more cleverly, define a set of subnets that adds up to "almost any," which would be good enough for a thoughtful attacker to ensure continued, un-alerted exposure. However, this kind of configuration is extremely unlikely to be implemented by accident (as described in the first case), and thus is almost certainly beyond the reasonable scope of the Defender for Cloud use case. After all, Defender for Cloud is designed to catch common misconfigurations, not necessarily an intentionally confusing configuration.

Remediation

Since Defender for Cloud is a cloud-based solution, users should not have to do anything special to enjoy the benefits of Microsoft’s update. With that said, customers should remember that the update has not resolved the issue when using the IPv6 range ::/0 as a synonym for “any.” As a result, customers should search their Azure environments for any Security Groups configured to allow ingress from a source of “::/0” and seriously consider reconfiguring these rules to be more restrictive. In addition, customers should regularly subject their cloud infrastructure to auditing and penetration tests to verify that their CSPM is actually catching common misconfigurations. We have already validated that this issue does not impact Rapid7’s InsightCloudSec CSPM solution. Finally, Defender for Cloud customers who have previously used the "/0" CIDR notation in their security group rules should review access logs to ensure that malicious actors were not evading the presumed detection capabilities provided by Defender for Cloud.
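
As one possible way to run that search, the following sketch uses the Azure SDK for Python (azure-identity and azure-mgmt-network) to list custom NSG rules that allow inbound traffic from overly broad sources such as “0.0.0.0/0” or “::/0”. The subscription ID is a placeholder, and the set of “open” source values is an assumption you may want to adjust.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
OPEN_SOURCES = {"0.0.0.0/0", "::/0", "*", "Any", "Internet"}  # assumed "open" source values

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for nsg in client.network_security_groups.list_all():
    for rule in (nsg.security_rules or []):
        if (
            rule.direction == "Inbound"
            and rule.access == "Allow"
            and rule.source_address_prefix in OPEN_SOURCES
        ):
            # Flag the rule for review; tightening it remains a manual decision here.
            print(f"{nsg.name}: rule '{rule.name}' allows inbound from "
                  f"{rule.source_address_prefix} to port(s) {rule.destination_port_range}")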

Disclosure Timeline

January 2023: Issue discovered by Rapid7 cloud security researcher Aaron Sawitsky
Wed, Jan 11, 2023: Initial disclosure to Microsoft
Thu, Jan 12, 2023: Details explained further and validated by the vendor
Mon, Feb 6, 2023: Fix planned by the vendor
Thu, Mar 9, 2023: Fix for "0.0.0.0/0" confirmed by Rapid7
Tue, Mar 14, 2023: This disclosure

Cloud Security Strategies for Healthcare

Post Syndicated from Marla Rosner original https://blog.rapid7.com/2023/03/14/cloud-security-strategies-for-healthcare/

How to Stay Secure in the Cloud While Driving Innovation and Discovery

The healthcare industry is undergoing a transformational shift. Health organizations are traditionally entrenched in an on-prem way of life, but the past three years have plunged them into a digital revolution. A heightened demand for improved healthcare services—like distributed care and telehealth—ignited a major push for health orgs to move to the cloud, and as a result, implement new cloud security strategies.

But the processes and tools that worked well to secure healthcare organizations’ traditional data centers do not directly translate to the public cloud. Resource and budget strain, priority negotiation with leadership, and challenges with regulatory compliance only exacerbate a daunting digital maturity gap. These challenges are why many healthcare organizations have approached public cloud adoption tentatively.

The healthcare industry must innovate in the cloud to meet patient and business needs, but they need to do so without creating unnecessary or unmanaged risk. Most importantly, they must move to and adopt cloud solutions securely to protect patients in a new world of digital threats.

Major Challenges

Modern technologies bring modern challenges. Here are the main obstacles healthcare organizations face when it comes to securing the cloud.

Resource Strain

Like organizations in most other industries, healthcare organizations face major obstacles in finding qualified security talent. That means hospitals, clinics, and other healthcare businesses must compete with tech giants, startups, and other more traditionally cybersecurity-savvy companies for the best and brightest minds on the market.

What’s more challenging is that the typical day of a security professional in healthcare tends to be disproportionately focused on time-consuming and often monotonous tasks. These duties are often related to maintaining and reporting on compliance with a sea of regulatory standards and requirements. Carrying out these repetitive but necessary tasks can quickly lead to burnout—and, as a result, turnover.

Moreover, those security professionals who do end up working within healthcare organizations can quickly find themselves inundated with more work than any one person is capable of handling. Small teams are tasked with securing massive amounts of sensitive data—both on-prem and as it moves into the cloud. And sometimes, cybersecurity departments at healthcare orgs can be as small as a CISO and a few analysts.

Those challenges with resource strain can lead to worse problems for security teams, including:

  • Burnout and rapid turnover, as discussed above
  • Slow mean time to remediate (MTTR), exacerbating the impact of breaches
  • Shadow IT, letting vulnerable assets fall through the cracks

Balancing Priorities With Leadership

It’s up to cybersecurity professionals to connect the dots for leadership on how investing in cloud security leads to greater ROI and less risk. Decision-makers in the healthcare industry are already juggling a great deal—and those concerns can be, quite literally, a matter of life or death.

In the modern threat landscape, poor cybersecurity health also has the potential to mean life or death. As medical science tools become more sophisticated, they’re also becoming more digitally connected. That means malicious actors who manage to infiltrate and shut down servers could also possibly shut down life-saving technology.

Tech professionals must illustrate to stakeholders how cybersecurity risk is interconnected with business risk and—perhaps most importantly—patient risk. To do that, they must regularly engage with and educate leadership to effectively balance priorities.

Achieving that perfect balance includes meeting leadership where they’re at. In healthcare, what is typically the biggest security concern for leaders? The answer: Meeting the necessary compliance standards with every new technology investment.

HIPAA Compliance and Protected Health Information

For stakeholders, achieving, maintaining, and substantiating legal and regulatory compliance is of critical importance. When it comes to the healthcare industry, one compliance standard often reigns supreme over all business decisions: HIPAA.

HIPAA provides data privacy and security provisions for safeguarding Protected Health Information (PHI). It addresses the use and disclosure of individuals’ health information and requires that sensitive information be governed by strict data security and confidentiality. It also obligates organizations to provide PHI to patients upon request.

When migrating to the cloud, healthcare organizations need a centralized approach to protecting sensitive data. InsightCloudSec allows you to automate compliance with HIPAA. Through our HIPAA Compliance Pack, InsightCloudSec provides dozens of out-of-the-box checks that map back to specific HIPAA requirements. For example, InsightCloudSec’s “Snapshot With PHI Unencrypted” policy supports compliance with HIPAA §164.312(a)(2)(iv), Encryption Controls.
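
The InsightCloudSec check itself is built into the product, but as a rough illustration of the underlying idea, the following sketch (Python and boto3) pages through EBS snapshots owned by the account and flags any that are unencrypted, which is the kind of condition such a policy looks for when snapshots may contain PHI.

import boto3

ec2 = boto3.client("ec2")

# Page through snapshots owned by this account and flag any that are unencrypted.
paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):
    for snapshot in page["Snapshots"]:
        if not snapshot["Encrypted"]:
            print(f"Unencrypted snapshot: {snapshot['SnapshotId']} "
                  f"(volume {snapshot.get('VolumeId', 'unknown')})")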

Experience Gap

An evolving threat landscape and growing attack surface are challenging enough to deal with for even the most experienced security professionals. Add the health industry’s talent gap into the mix, and those challenges are multiplied.

Cloud security in the healthcare space is still relatively new. That means internal cybersecurity teams are not only playing a relentless game of catch-up—they also might consist of more traditional network engineers and IT pros who have historically been tasked with securing on-premises environments.

This makes it critical that the cloud security solutions healthcare organizations implement be user-friendly, low-maintenance, and ultra-reliable.

Cloud Security Solutions and Services

As health organizations dive into work in the cloud, their digital footprints will likely grow far faster than their teams can keep up with. Visibility into these cloud environments is essential to an organization’s ability to identify, assess, prioritize, and remediate risk. Without a clear picture of what they have and where they have it, companies can be vulnerable to malicious attacks.

To avoid biting off more than they can chew, security professionals in healthcare must leverage cloud security strategies and solutions that grant them complete real-time visibility in the cloud over all their most sensitive assets. Enterprise cloud security tools like InsightCloudSec can enable automated discovery and inventory assessment. That unlocks visibility across all their CSPs and containers.

InsightCloudSec also makes it easier for teams, regardless of their cloud security expertise, to effectively define, implement, and enforce security guardrails. With resource normalization, InsightCloudSec removes the need for security teams to learn and keep track of an ever-expanding list of cloud resources and services. Security teams can use InsightCloudSec’s native, no-code automation to enable hands-off enforcement of their organization’s security practices and policies when a noncompliant resource is created or a risky configuration change is made.

The fact of the matter is that many healthcare security teams will need to build their cybersecurity programs from the ground up. With limited resources, strained budgets, and patients’ lives on the line, they can’t afford to make big mistakes. That’s why, for many organizations, partnering with a managed service provider is the right approach.

Rapid7’s managed services relieve security teams from the strain of running and building cloud security frameworks. They can also help healthcare security pros better connect lack of investment with risks to stakeholders—acting as an external set of experts.

The Bottom Line

Staying continuously secure in the cloud can be daunting, particularly for those responsible for not only sensitive medical, patient, and research data, but also the digitally connected machines and tools that ensure top-of-the-line patient care. Protecting the health of patients is paramount in the healthcare industry.

With the right tools (and teams) to support continuous security and compliance, this responsibility becomes manageable—and even, dare we say, easy.

InsightCloudSec

A complete cloud security toolbox in a single solution.

LEARN MORE

What Tech Companies Should Look For in Cloud Security

Post Syndicated from Marla Rosner original https://blog.rapid7.com/2023/03/08/what-tech-companies-should-look-for-in-cloud-security/

The cloud’s computing power and flexibility unlock unprecedented speed and efficiency—a tech company’s two best friends. But with that speed and efficiency come new environments and touchpoints in an organization’s footprint. That expanding attack surface brings with it an expanding range of security concerns.

Rapid7’s Peter Scott joined Temporal Technologies’ Brandon Sherman and Ancestry’s Tony Black for a fireside chat to address today’s growing CloudSec challenges.

The key? Building technologies and security policies alongside one another—from the start. That applies to both companies that are moving to the cloud and those that are cloud-first.

Making Security an Enabler, Not a Blocker

When companies start to move and function in the cloud, SecOps must adapt their thinking and processes to ephemeral environments. That entails getting down in the trenches early on with tech teams as they innovate and create while spinning up new instances.

“We started working with the idea that everything should be ephemeral and short-lived… That really started getting us into that mindset of a true cloud infrastructure and architecture,” says Black. “It allowed us to start doing some things like tearing down our dead environments on the weekend if no one is using it. It reduces our attack surface and reduces our cost.”

Collaboration between tech and security teams drives secure cloud innovation. However, that level of collaboration requires consistent communication and most importantly, a willingness to build trust.

“If you don’t exercise that muscle of being engaged with those teams, then it’s going to atrophy,” says Sherman. “But if you keep working on it, then you get in earlier and earlier. Even with simple Slack conversations—if I can get involved at that part of it and help shape this whole process as it’s built out to help experiment securely… That’s awesome.”

Consistently collaborating with tech teams not only helps keep security top of mind and integrate cloudsec with DevOps; it also transforms SecOps from a dreaded blocker into a reliable enabler. This level of collaboration requires not only trust but also a mutual understanding that implementing security is a problem-solver, not a problem to be solved.

That gives SecOps the power to help dev teams accelerate into production with fewer bumps in the road—because each new feature is built securely from the start.

Cloud Risk Complete

Analyze, respond to, and remediate risks without a patchwork of solutions or additional costs.

LEARN MORE

Operationalization and Resiliency as Cloud Maturity Increases

Reframing security from a blocker to an enabler is indicative of a larger shift in security’s role across the entire enterprise. As companies’ cloud footprints grow and their maturity increases, CloudSec teams must ensure that their practices are not only scalable but also sustainable.

That means the security mindset needs to shift towards operationalization and resiliency.

“We want to be as reliable as running water,” says Sherman. “That is a really high bar to obtain. So you need really good operational metrics, rigor, and processes. But the nice thing about the cloud is that you can practice all those things.”

With the power of the cloud, tech companies have the freedom to rehearse their responses to even the most large-scale, potentially devastating attacks. By taking the time to practice building and breaking down high-risk environments, security teams can operationalize and, most importantly, plan for when—not if—new threats emerge.

To Black, it also comes down to establishing a consistent set of security practices that tech organizations hold to.

“A bunch of practices have to be in place. For example, no one should be able to touch a certain server, because if someone touches a certain server, then all of a sudden I can’t rebuild that the way it was before,” says Black. “Then, everything has to go through a pipeline. And that pipeline has to have controls … and checks in place to make sure that what we deploy is consistent and repeatable.”

Keeping track of that decision-making ripple effect is a major factor in how well security teams can operationalize their best practices for securing cloud-native environments. That dedication to operationalization and resiliency ensures a better experience for devs, sec, and—most importantly—clients.

Clients, the Cloud, and Securing Sensitive Data

Tech companies are (for better or for worse) held to a higher standard by users in terms of reliability, ease of use, and security. Meeting those standards is even more critical when users are trusting tech companies with personal information that might be as sensitive as DNA, as is the case with some of Ancestry’s clients.

What helps ensure and improve data protection in the cloud? A company-wide emphasis on customer trust.

“Fundamentally, our management team agrees that customer trust is part of the value proposition,” says Black. “We have to earn that trust every day… It really helps when the management team says that customer trust is part of the value proposition, so we have to spend money and take an effort to maintain that customer trust.”

Establishing a strong cloudsec foundation helps enable experiences that build customer trust. How can security teams create that base of security that keeps both dev teams and consumers happy? By approaching features as if they were to personally use them.

“Part of that comes down to being your own customer. If you trust your own company with your own data… You feel a lot better about it,” says Sherman.

Aligning the Evolution of Security and Technology

Security teams are no longer the department of “no.” With Gartner having predicted a 22% growth in worldwide cloud use by the end of 2022, tech companies should look out for this constant in cloud security: Change.

“The cloud changes beneath our feet. Things constantly evolve. If you’re stuck in a mentality of, ‘We do this thing, and this thing is security’,” says Sherman, “even if that’s best practice today, it might not be best practice tomorrow.”

Because of that constant change, success in the cloud is rooted in incremental progress. Taking on cloudsec challenges one bit at a time will ensure a smoother cloud journey for organizations looking to unlock the power of working in ephemeral environments.

Want deeper insights on how tech companies can tackle today’s cloud security challenges? Watch the full webinar.

New InsightCloudSec Compliance Pack: Key Takeaways From the Azure Security Benchmark V3

Post Syndicated from James Alaniz original https://blog.rapid7.com/2023/03/01/new-insightcloudsec-compliance-pack-key-takeaways-from-the-azure-security-benchmark-v3/

Implementing the proper security policies and controls to keep cloud environments, and the applications and sensitive data they host, secure is a daunting task for anyone. It’s even more of a challenge for folks who are just getting started on their journey to the cloud, and for teams that lack hands-on experience securing dynamic, highly ephemeral cloud environments.

To reduce the learning curve for teams ramping up their cloud security programs, cloud providers have curated sets of security controls, including recommended resource configurations and access policies to provide some clarity. While these frameworks may not be the be-all-end-all—because let’s face it, there is no silver bullet when it comes to securing these environments—they are a really great place to start as you define and implement the standards that are right for your business. In a recent post, we covered some highlights within the AWS Foundational Security Best Practices, so be sure to check that out in case you missed it.

Today, we’re going to dive into the new Azure Security Benchmark V3, and identify some of the controls that we view as particularly impactful. Let’s dig in.

How does Azure Security Benchmark V3 differ from AWS Foundational Security Best Practices?

Before we get started with some specifics from the Azure Security Benchmark, it’s probably worthwhile to highlight some key similarities and differences between the Microsoft and AWS benchmarks.

The AWS Foundational Security Best Practices are, as one might intuitively expect, focused solely on AWS environments. These best practices provide prescriptive guidance around how a given resource or service should be configured to mitigate the risks of security incidents. Because the recommendations are so prescriptive and targeted, users are able to leverage AWS Config—a native service provided by AWS to assess resource configurations—to ensure the recommended configuration is utilized.

Much like the AWS Foundational Security Best Practices, the Azure Security Benchmark is a set of guidelines and recommendations provided by Microsoft to help organizations establish a baseline for what “good” looks like in terms of effective cloud security controls and configurations. However, where AWS’s guidelines are laser-focused on AWS environments, Microsoft has taken a cloud-agnostic approach, with higher-level security principles that can be applied regardless of which platform you select to run your mission-critical workloads. This approach makes quite a bit of sense given AWS and Microsoft’s respective go-to-market strategies and target customer bases. It also means implementation of these recommendations requires a slightly different approach.

As noted above, the guidance in the Azure Security Benchmark isn’t tied to Azure specifically; it’s broader in nature and speaks to general approaches and themes. For example, it recommends that you use encryption and proper key management hygiene, as opposed to specifying a granular resource or service configuration. That’s not to say that Microsoft hasn’t provided any Azure-specific guidance, as many of the guidelines are accompanied by step-by-step instructions for implementing them in your Azure environment. Just as AWS has provided checks within AWS Config, Azure has provided checks within Defender for Cloud that help ensure your environment is configured in accordance with the benchmark recommendations.

Five recommendations from the Azure Security Benchmark V3 we find particularly impactful

Now that we’ve compared the benchmarks, let’s take a look at some of the recommendations provided within the Azure Security Benchmark V3 that we find particularly impactful for hardening your cloud security posture.

NS-2: Secure cloud services with network controls

This recommendation focuses on securing cloud services by establishing a private access point for the resources. Additionally, you should be sure to disable or restrict access from public networks (when possible) to avoid unwanted access from folks outside of your organization.

DP-3, 4 & 5: Data Protection and Encryption At Rest and In Transit

These recommendations are focused on ensuring proper implementation of data security controls, most notably via encryption for all sensitive data, whether in transit or at rest. Data should be encrypted at rest by default, and teams should use the option for customer-managed keys whenever required.

DP-8: Ensure Security of Key and Certificate Repository

Another Data Protection control, this recommendation is centered on proper hardening of the key vault service used for cryptographic key and certificate lifecycle management. Key vault hardening can be accomplished through a variety of controls, including identity and access, network security, logging and monitoring, and backup.

PA-1: Separate and Limit Highly Privileged/Administrative Users

Teams should ensure all business-critical accounts are identified and should apply limits to the number of privileged or administrative accounts in your cloud’s control plane, management plane, and data/workload plane. Additionally, you should restrict privileged accounts in other management, identity, and security systems that have administrative access to your assets, such as tools with agents installed on business-critical systems that could be weaponized.

LT-1: Enable Threat Detection Capabilities for Azure Resources

This one is fairly self-explanatory, but focuses on ensuring you are monitoring your cloud environment for potential threats. Whether or not you’re using native services provided by your cloud provider of choice—such as Azure Defender for Cloud or Azure Sentinel—you should leverage a cloud detection and response tool that can monitor resource inventory, configurations, and user activity in real time to identify anomalous activity across your environment.

Implement and enforce Azure Security Benchmark V3 with InsightCloudSec

InsightCloudSec allows security teams to establish and continuously measure compliance against organizational policies, whether they’re based on service provider best practices like those provided by Microsoft, a common industry framework, or a custom pack tailored to specific business needs.

A compliance pack within InsightCloudSec is a set of checks that can be used to continuously assess your cloud environments for compliance with a given regulatory framework, or industry or provider best practices. The platform comes out of the box with 30+ compliance packs, including a dedicated pack for the Azure Security Benchmark V3.

InsightCloudSec continuously assesses your entire cloud environment—whether that’s a single Azure environment or across multiple platforms—for compliance with best practice recommendations, and detects noncompliant resources within minutes after they are created or an unapproved change is made. If you so choose, you can make use of the platform’s native, no-code automation to remediate the issue—either via deletion or by adjusting the configuration or permissions—without any human intervention.

If you’re interested in learning more about how InsightCloudSec helps continuously and automatically enforce cloud security standards, be sure to check out the demo!

AWS Melbourne Region has achieved HCF Strategic Certification

Post Syndicated from Lori Klaassen original https://aws.amazon.com/blogs/security/aws-melbourne-region-has-achieved-hcf-strategic-certification/

Amazon Web Services (AWS) is delighted to confirm that our new AWS Melbourne Region has achieved Strategic Certification for the Australian Government’s Hosting Certification Framework (HCF).

We know that maintaining security and resiliency to keep critical data and infrastructure safe is a top priority for the Australian Government and all our customers in Australia. The Strategic Certification of both the existing Sydney and the new Melbourne Regions reinforces our ongoing commitment to meet security expectations for cloud service providers and means Australian citizens can now have even greater confidence that the Government is securing their data.

The HCF provides guidance to government customers to identify cloud providers that meet enhanced privacy, sovereignty, and security requirements. The expanded scope of the AWS HCF Strategic Certification gives Australian Government customers additional architectural options, including the ability to store backup data in geographically separated locations within Australia.

Our AWS infrastructure is custom-built for the cloud and designed to meet the most stringent security requirements in the world, and is monitored 24/7 to help support the confidentiality, integrity, and availability of customers’ data. All data flowing across the AWS global network that interconnects our data centers and Regions is automatically encrypted at the physical layer before it leaves our secured facilities. We will continue to expand the scope of our security assurance programs at AWS and are pleased that Australian Government customers can continue to innovate at a rapid pace and be confident AWS meets the Government’s requirements to support the secure management of government systems and data.

The Melbourne Region was officially added to the AWS HCF Strategic Certification on December 21, 2022, and the Sydney Region was certified in October 2021. AWS compliance status is available on the HCF Certified Service Providers website, and the Certificate of Compliance is available through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact. AWS has also achieved many international certifications and accreditations, demonstrating compliance with third-party assurance frameworks such as ISO 27017 for cloud security, ISO 27018 for cloud privacy, and SOC 1, SOC 2, and SOC 3.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

Please reach out to your AWS account team if you have questions or feedback about HCF compliance.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Lori Klaassen

Lori is a Senior Regulatory Specialist on the AWS Security Assurance team. She supports the operationalisation and ongoing assurance of direct regulatory oversight programs in ANZ.

CIEM is Required for Cloud Security and IAM Providers to Compete: Gartner® Report

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2023/02/15/ciem-is-required-for-cloud-security-and-iam-providers-to-compete-gartner-r-report/

In an ongoing effort to help security organizations stay competitive, we’re pleased to offer this complimentary Gartner® report, Emerging Tech: CIEM Is Required for Cloud Security and IAM Providers to Compete. The research in the report demonstrates the need for Cloud Infrastructure Entitlement Management (CIEM) product leaders to adopt trends that can help deliver value across Cloud Security and Identity and Access Management (IAM) enterprises.

CIEM product leaders looking to remain competitive in Cloud Security and IAM practices should consider prioritizing specific capabilities in their planning in order to address new and emerging threats and, as Gartner says:                            

  • Gain a further competitive edge in the CIEM market by investing in more-advanced guided remediation capabilities, such as automated downsizing of over-privileged accounts.
  • Appeal to a larger audience beyond cloud security teams by positioning CIEM as part of broader enterprise security controls.

Businesses not currently prioritizing CIEM capabilities, however, can’t simply “do a 180” and expect to be successful. Managing entitlements in the current sophisticated age of attacks and digital espionage can feel impossible. It is imperative, though, for security organizations to adopt updated access practices, not only to remain competitive but also to remain secure.

Where Least Privileged Access (LPA) approaches fall short, CIEM tools can help by providing advanced enforcement and remediation of ineffective LPA methods. Gartner says:

“The anomaly-detection capabilities leveraged by CIEM tools can be extended to analyze the misconfigurations and vulnerabilities in the IAM stack. With overprivileged account discovery, and some guided remediation, CIEM tools can help organizations move toward a security posture where identities have at least privileges.”

Broadening the portfolio

Within cloud security, identity-verification practices are more critical than ever. Companies developing and leveraging SaaS applications must constantly grapple with varying business priorities, thus identity permissions across these applications can become inconsistent. This can leave applications — and the business — open to vulnerabilities and other challenges.

When it comes to dynamic multi- and hybrid-cloud environments, it can become prohibitively difficult to monitor identity administration and governance. Challenges can include:

  • Prevention of misuse from privileged accounts
  • Poor visibility for performing compliance and oversight
  • Added complexity from short-term cloud entitlements
  • Inconsistency across multiple cloud infrastructures
  • Accounts with excessive access permissions

Multi-cloud IAM requires a more refined approach, and CIEM tools can capably address the challenges above, which is why they must be adopted as part of a suite of broader enterprise security controls.

Accelerating cloud adoption

Technology and service providers fulfilling IAM services are in critical need of capabilities that can address specific cloud use cases. Gartner says:

“It is a natural extension to assist existing customers in their digital transformation and cloud adoption journey. These solutions are able to bridge both on-premises identity implementations and cloud to support hybrid use cases. This will also translate existing IAM policies and apply relevant elements for the cloud while adding additional use cases unique to the cloud environment.”

In fact, a key finding from the report is that “visibility of entitlements and rightsizing of permissions are quickly becoming ‘table stakes’ features in the CIEM market.”

Mature CIEM vendors can typically be expected to also offer additional capabilities like cloud security posture management (CSPM). InsightCloudSec from Rapid7 is a CIEM solution that also offers CSPM capabilities to effectively manage the perpetual shift, adoption, and innovation of cloud infrastructure. Businesses and security organizations can more effectively compete when they offer strong solutions that support and aid existing CIEM capabilities.

Download the report

Rapid7 is pleased to continually offer leading research to help you gain clarity into ways you can stand out in this ultra-competitive landscape. Read the entire complimentary Gartner report now to better understand just how in-demand CIEM capabilities are becoming and how product leaders can tailor strategies to Cloud Security and IAM enterprises.

Gartner, “Emerging Tech: CIEM Is Required for Cloud Security and IAM Providers to Compete”

Swati Rakheja, Mark Wah. 13 July 2022.

Gartner is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

The anatomy of a ransomware event targeting data residing in Amazon S3

Post Syndicated from Megan O'Neil original https://aws.amazon.com/blogs/security/anatomy-of-a-ransomware-event-targeting-data-in-amazon-s3/

Ransomware events have significantly increased over the past several years and captured worldwide attention. Traditional ransomware events affect mostly infrastructure resources like servers, databases, and connected file systems. However, there are also non-traditional events that you may not be as familiar with, such as ransomware events that target data stored in Amazon Simple Storage Service (Amazon S3). There are important steps you can take to help prevent these events, and to identify possible ransomware events early so that you can take action to recover. The goal of this post is to help you learn about the AWS services and features that you can use to protect against ransomware events in your environment, and to investigate possible ransomware events if they occur.

Ransomware is a type of malware that bad actors can use to extort money from entities. The actors can use a range of tactics to gain unauthorized access to their target’s data and systems, including but not limited to taking advantage of unpatched software flaws, misuse of weak credentials or previous unintended disclosure of credentials, and using social engineering. In a ransomware event, a legitimate entity’s access to their data and systems is restricted by the bad actors, and a ransom demand is made for the safe return of these digital assets. There are several methods actors use to restrict or disable authorized access to resources including a) encryption or deletion, b) modified access controls, and c) network-based Denial of Service (DoS) attacks. In some cases, after the target’s data access is restored by providing the encryption key or transferring the data back, bad actors who have a copy of the data demand a second ransom—promising not to retain the data in order to sell or publicly release it.

In the next sections, we’ll describe several important stages of your response to a ransomware event in Amazon S3, including detection, response, recovery, and protection.

Observable activity

The most common event that leads to a ransomware event targeting data in Amazon S3, as observed by the AWS Customer Incident Response Team (CIRT), is unintended disclosure of Identity and Access Management (IAM) access keys. Another likely cause is an application with a software flaw that is hosted on an Amazon Elastic Compute Cloud (Amazon EC2) instance with an attached IAM instance profile and associated permissions, where the instance is using Instance Metadata Service Version 1 (IMDSv1). In this case, an unauthorized user might be able to use AWS Security Token Service (AWS STS) session keys from the IAM instance profile for your EC2 instance to ransom objects in S3 buckets. In this post, we will focus on the most common scenario, which is unintended disclosure of static IAM access keys.

Detection

After a bad actor has obtained credentials, they use AWS API actions that they iterate through to discover the type of access that the exposed IAM principal has been granted. Bad actors can do this in multiple ways, which can generate different levels of activity. This activity might alert your security teams because of an increase in API calls that result in errors. Other times, if a bad actor’s goal is to ransom S3 objects, then the API calls will be specific to Amazon S3. If access to Amazon S3 is permitted through the exposed IAM principal, then you might see an increase in API actions such as s3:ListBuckets, s3:GetBucketLocation, s3:GetBucketPolicy, and s3:GetBucketAcl.
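
One lightweight, hypothetical way to surface this kind of discovery activity is to query recent CloudTrail management events and count calls per access key. The sketch below (Python and boto3) looks for ListBuckets calls in the last hour; the event name and time window are assumptions you can adjust, and a production detection pipeline would typically query CloudTrail logs in Athena or a SIEM instead.

import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")
since = datetime.now(timezone.utc) - timedelta(hours=1)

# Count recent ListBuckets calls per access key to surface unusual discovery activity.
counts = {}
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ListBuckets"}],
    StartTime=since,
):
    for event in page["Events"]:
        access_key = event.get("AccessKeyId", "unknown")
        counts[access_key] = counts.get(access_key, 0) + 1

for access_key, count in sorted(counts.items(), key=lambda item: -item[1]):
    print(f"{access_key}: {count} ListBuckets calls in the last hour")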

Analysis

In this section, we’ll describe where to find the log and metric data to help you analyze this type of ransomware event in more detail.

When a ransomware event targets data stored in Amazon S3, often the objects stored in S3 buckets are deleted, without the bad actor making copies. This is more like a data destruction event than a ransomware event where objects are encrypted.

There are several logs that will capture this activity. You can enable AWS CloudTrail event logging for Amazon S3 data, which allows you to review the activity logs to understand read and delete actions that were taken on specific objects.

In addition, if you have enabled Amazon CloudWatch metrics for Amazon S3 prior to the ransomware event, you can use the sum of the BytesDownloaded metric to gain insight into abnormal transfer spikes.
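
Assuming S3 request metrics are enabled for the bucket (with a metrics filter such as one covering the entire bucket), a sketch like the following (Python and boto3) pulls the hourly sum of BytesDownloaded so you can look for transfer spikes. The bucket name and filter ID are placeholders.

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="BytesDownloaded",
    Dimensions=[
        {"Name": "BucketName", "Value": "example-sensitive-bucket"},  # placeholder bucket
        {"Name": "FilterId", "Value": "EntireBucket"},                # your request-metrics filter
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
    Period=3600,          # one-hour buckets
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])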

Another way to gain information is to use the region-DataTransfer-Out-Bytes metric, which shows the amount of data transferred from Amazon S3 to the internet. This metric is enabled by default and is associated with your AWS billing and usage reports for Amazon S3.

For more information, see the AWS CIRT team’s Incident Response Playbook: Ransom Response for S3, as well as the other publicly available response frameworks available at the AWS customer playbooks GitHub repository.

Response

Next, we’ll walk through how to respond to the unintended disclosure of IAM access keys. Based on the business impact, you may decide to create a second set of access keys to replace all legitimate use of those credentials so that legitimate systems are not interrupted when you deactivate the compromised access keys. You can deactivate the access keys by using the IAM console or through automation, as defined in your incident response plan. However, you also need to document specific details for the event within your secure and private incident response documentation so that you can reference them in the future. If the activity was related to the use of an IAM role or temporary credentials, you need to take an additional step and revoke any active sessions. To do this, in the IAM console, you choose the Revoke active session button, which will attach a policy that denies access to users who assumed the role before that moment. Then you can delete the exposed access keys.
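
For the deactivation step, a minimal automation sketch (Python and boto3, with placeholder values) might look like the following. It sets the exposed key to Inactive rather than deleting it, so the key remains available for the incident record, and prints when the key was last used.

import boto3

iam = boto3.client("iam")

def deactivate_access_key(user_name: str, access_key_id: str) -> None:
    """Deactivate (do not delete) a compromised key so it can no longer be used,
    while preserving it for the incident record until the investigation is complete."""
    iam.update_access_key(UserName=user_name, AccessKeyId=access_key_id, Status="Inactive")
    last_used = iam.get_access_key_last_used(AccessKeyId=access_key_id)
    print(f"Deactivated {access_key_id}; last used:",
          last_used["AccessKeyLastUsed"].get("LastUsedDate", "never"))

deactivate_access_key("exposed-user", "AKIAEXAMPLEKEYID123")  # placeholder values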

In addition, you can use the AWS CloudTrail dashboard and event history (which includes 90 days of logs) to review the IAM related activities by that compromised IAM user or role. Your analysis can show potential persistent access that might have been created by the bad actor. In addition, you can use the IAM console to look at the IAM credential report (this report is updated every 4 hours) to review activity such as access key last used, user creation time, and password last used. Alternatively, you can use Amazon Athena to query the CloudTrail logs for the same information. See the following example of an Athena query that will take an IAM user Amazon Resource Number (ARN) to show activity for a particular time frame.

-- Assumes a CloudTrail table in Athena named "cloudtrail" that is partitioned by event_date
SELECT eventtime, eventname, awsregion, sourceipaddress, useragent
FROM cloudtrail
WHERE useridentity.arn = 'arn:aws:iam::111122223333:user/Name' AND
-- Enter timeframe
(event_date >= '2022/08/04' AND event_date <= '2022/11/04')
ORDER BY eventtime ASC

Recovery

After you’ve removed access from the bad actor, you have multiple options to recover data, which we discuss in the following sections. Keep in mind that there is currently no undelete capability for Amazon S3, and AWS does not have the ability to recover data after a delete operation. In addition, many of the recovery options require configuration upon bucket creation.

S3 Versioning

Using versioning in S3 buckets is a way to keep multiple versions of an object in the same bucket, which gives you the ability to restore a particular version during the recovery process. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning, you can recover more easily from both unintended user actions and application failures. Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, Amazon S3 inserts a delete marker instead of removing the object permanently. The previous version remains in the bucket and becomes a noncurrent version. You can restore the previous version. Versioning is not enabled by default and incurs additional costs, because you are maintaining multiple copies of the same object. For more information about cost, see the Amazon S3 pricing page.
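
As a small sketch of how this looks in practice (Python and boto3, with placeholder bucket and key names), the following enables versioning on a bucket and shows how a deleted object can later be restored by removing its delete marker. This assumes the deletes were simple deletes that left delete markers, rather than permanent, versioned deletes.

import boto3

s3 = boto3.client("s3")
bucket = "example-critical-data"  # placeholder bucket name

# Turn on versioning so deletes create delete markers instead of destroying objects.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# To "undelete" an object later, remove its delete marker (the newest version).
versions = s3.list_object_versions(Bucket=bucket, Prefix="reports/q3.csv")  # placeholder key
for marker in versions.get("DeleteMarkers", []):
    if marker["IsLatest"]:
        s3.delete_object(Bucket=bucket, Key=marker["Key"], VersionId=marker["VersionId"])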

AWS Backup

Using AWS Backup gives you the ability to create and maintain separate copies of your S3 data under separate access credentials that can be used to restore data during a recovery process. AWS Backup provides centralized backup for several AWS services, so you can manage your backups in one location. AWS Backup for Amazon S3 provides you with two options: continuous backups, which allow you to restore to any point in time within the last 35 days; and periodic backups, which allow you to retain data for a specified duration, including indefinitely. For more information, see Using AWS Backup for Amazon S3.

Protection

In this section, we’ll describe some of the preventative security controls available in AWS.

S3 Object Lock

You can add another layer of protection against object changes and deletion by enabling S3 Object Lock for your S3 buckets. With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model and can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
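
A minimal sketch of applying a default retention rule is shown below (Python and boto3, placeholder bucket name). Note that Object Lock generally has to be enabled when the bucket is created, so this assumes an Object Lock-enabled bucket, and the mode and duration are examples only.

import boto3

s3 = boto3.client("s3")

# Assumes the bucket was created with Object Lock enabled.
s3.put_object_lock_configuration(
    Bucket="example-locked-bucket",  # placeholder
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            # GOVERNANCE allows privileged overrides; COMPLIANCE does not.
            "DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}
        },
    },
)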

AWS Backup Vault Lock

Similar to S3 Object Lock, which adds additional protection to S3 objects, if you use AWS Backup you can consider enabling AWS Backup Vault Lock, which enforces the same WORM setting for all the backups you store and create in a backup vault. AWS Backup Vault Lock helps you prevent inadvertent or malicious delete operations by the AWS account root user.

Amazon S3 Inventory

To make sure that your organization understands the sensitivity of the objects you store in Amazon S3, you should inventory your most critical and sensitive data across Amazon S3 and make sure that the appropriate bucket configuration is in place to protect and enable recovery of your data. You can use Amazon S3 Inventory to understand what objects are in your S3 buckets, and the existing configurations, including encryption status, replication status, and object lock information. You can use resource tags to label the classification and owner of the objects in Amazon S3, and take automated action and apply controls that match the sensitivity of the objects stored in a particular S3 bucket.

MFA delete

Another preventative control you can use is to enforce multi-factor authentication (MFA) delete in S3 Versioning. MFA delete provides added security and can help prevent accidental bucket deletions, by requiring the user who initiates the delete action to prove physical or virtual possession of an MFA device with an MFA code. This adds an extra layer of friction and security to the delete action.

Use IAM roles for short-term credentials

Because many ransomware events arise from unintended disclosure of static IAM access keys, AWS recommends that you use IAM roles that provide short-term credentials, rather than using long-term IAM access keys. This includes using identity federation for your developers who are accessing AWS, using IAM roles for system-to-system access, and using IAM Roles Anywhere for hybrid access. For most use cases, you shouldn’t need to use static keys or long-term access keys. Now is a good time to audit and work toward eliminating the use of these types of keys in your environment. Consider taking the following steps:

  1. Create an inventory across all of your AWS accounts and identify each IAM user, when their credentials were last rotated and last used, and the attached policies (a minimal sketch of this step follows the list).
  2. Disable and delete all AWS account root access keys.
  3. Rotate the credentials and apply MFA to the user.
  4. Re-architect to take advantage of temporary role-based access, such as IAM roles or IAM Roles Anywhere.
  5. Review attached policies to make sure that you’re enforcing least privilege access, including removing wild cards from the policy.
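
As a starting point for step 1 above, the following hedged sketch (Python and boto3) generates the IAM credential report and prints each user with an active access key, along with when that key was last rotated and last used.

import boto3
import csv
import io
import time

iam = boto3.client("iam")

# Generate and fetch the account's credential report (step 1 in the list above).
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)
report_csv = iam.get_credential_report()["Content"].decode("utf-8")

for row in csv.DictReader(io.StringIO(report_csv)):
    if row["access_key_1_active"] == "true":
        print(row["user"],
              "| key last rotated:", row["access_key_1_last_rotated"],
              "| key last used:", row["access_key_1_last_used_date"])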

Server-side encryption with customer managed KMS keys

Another protection you can use is to implement server-side encryption with AWS Key Management Service (SSE-KMS) and use customer managed keys to encrypt your S3 objects. Using a customer managed key requires you to apply a specific key policy around who can encrypt and decrypt the data within your bucket, which provides an additional access control mechanism to protect your data. You can also centrally manage AWS KMS keys and audit their usage with an audit trail of when the key was used and by whom.
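
A minimal sketch of setting this up as the bucket default is shown below (Python and boto3); the bucket name and KMS key ARN are placeholders, and the key policy that governs who can use the key still needs to be scoped separately.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-sensitive-bucket",  # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/your-key-id",  # placeholder
            },
            "BucketKeyEnabled": True,  # reduces the volume of KMS requests
        }]
    },
)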

GuardDuty protections for Amazon S3

You can enable Amazon S3 protection in Amazon GuardDuty. With S3 protection, GuardDuty monitors object-level API operations to identify potential security risks for data in your S3 buckets. This includes findings related to anomalous API activity and unusual behavior related to your data in Amazon S3, and can help you identify a security event early on.
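
As an illustrative sketch (Python and boto3), the following enables S3 protection on an existing GuardDuty detector using the DataSources parameter; newer accounts may manage this through the detector Features API instead, and this assumes GuardDuty is already enabled in the Region.

import boto3

guardduty = boto3.client("guardduty")

# Assumes GuardDuty is already enabled in this Region.
detector_id = guardduty.list_detectors()["DetectorIds"][0]
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"S3Logs": {"Enable": True}},
)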

Conclusion

In this post, you learned about ransomware events that target data stored in Amazon S3. By taking proactive steps, you can identify potential ransomware events quickly, and you can put in place additional protections to help you reduce the risk of this type of security event in the future.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Security, Identity and Compliance re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Megan O’Neil

Megan is a Principal Specialist Solutions Architect focused on threat detection and incident response. Megan and her team enable AWS customers to implement sophisticated, scalable, and secure solutions that solve their business challenges.

Karthik Ram

Karthik is a Senior Solutions Architect with Amazon Web Services based in Columbus, Ohio. He has a background in IT networking, infrastructure architecture, and security. At AWS, Karthik helps customers build secure and innovative cloud solutions, solving their business problems using data-driven approaches. Karthik’s area of depth is cloud security, with a focus on threat detection and incident response (TDIR).

Kyle Dickinson

Kyle is a Sr. Security Solution Architect specializing in threat detection and incident response. He focuses on working with customers to respond to security events with confidence. He also hosts AWS on Air: Lockdown, a livestream security show. When he’s not working, he enjoys hockey, BBQ, and trying to convince his Shih Tzu that he is, in fact, not a large dog.