Tag Archives: cloud security

Security Is Shifting in a Cloud-Native World: Insights From RSAC 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/16/security-is-shifting-in-a-cloud-native-world-insights-from-rsac-2022/

The cloud has become the default for IT infrastructure and resource delivery, allowing an unprecedented level of speed and flexibility for development and production pipelines. This helps organizations compete and innovate in a fast-paced business environment. But as the cloud becomes more ingrained, the ephemeral nature of cloud infrastructure is presenting new challenges for security teams.

Several talks by our Rapid7 presenters at this year’s RSA Conference touched on this theme. Here’s a closer look at what our RSAC 2022 presenters had to say about adapting security processes to a cloud-native world.

A complex picture

As Lee Weiner, SVP Cloud Security and Chief Innovation Officer, pointed out in his RSA briefing, “Context Is King: The Future of Cloud Security,” cloud adoption is not only increasing — it’s growing more complex. Many organizations are bringing on multiple cloud vendors to meet a variety of different needs. One report estimates that a whopping 89% of companies that have adopted the cloud have chosen a multicloud approach.

This model is so popular because of the flexibility it offers organizations to utilize the right technology, in the right cloud environment, at the right cost — a key advantage in today’s marketplace.

“Over the last decade or so, many organizations have been going through a transformation to put themselves in a position to use the scale and speed of the cloud as a strategic business advantage,” Jane Man, Director of Product Management for VRM, said in her RSA Lounge presentation, “Adapting Your Vulnerability Management Program for Cloud-Native Environments.”

While DevOps teams can move more quickly than ever before with this model, security pros face a more complex set of questions than with traditional infrastructure, Lee noted. How many of our instances are exposed to known vulnerabilities? Do they have proper identity and access management (IAM) controls established? What levels of access do those permissions actually grant users in our key applications?

New infrastructure, new demands

The core components of vulnerability management remain the same in cloud environments, Jane said in her talk. Security teams must:

  • Get visibility into all assets, resources, and services
  • Assess, prioritize, and remediate risks
  • Communicate the organization’s security and compliance posture to management

But because of the ephemeral nature of the cloud, the way teams go about completing these requirements is shifting.

“Running a scheduled scan, waiting for it to complete and then handing a report to IT doesn’t work when instances may be spinning up and down on a daily or hourly basis,” she said.

In his presentation, Lee expressed optimism that the cloud itself may help provide the new methods we need for cloud-native security.

“Because of the way cloud infrastructure is built and deployed, there’s a real opportunity to answer these questions far faster, far more efficiently, far more effectively than we could with traditional infrastructure,” he said.

Calling for context

For Lee, the goal is to enable secure adoption of cloud technologies so companies can accelerate and innovate at scale. But there’s a key element needed to achieve this vision: context.

What often prevents teams from fully understanding the context around their security data is the fact that it is siloed, and the lack of integration between disparate systems requires a high level of manual effort to put the pieces together. To really get a clear picture of risk, security teams need to be able to bring their data together with context from each layer of the environment.

But what does context actually look like in practice, and how do you achieve it? Jane laid out a few key strategies for understanding the context around security data in your cloud environment.

  • Broaden your scope: Set up your VM processes so that you can detect more than just vulnerabilities in the cloud — you want to be able to see misconfigurations and issues with IAM permissions, too.
  • Understand the environment: When you identify a vulnerable instance, identify if it is publicly accessible and what its business application is — this will help you determine the scope of the vulnerability.
  • Catch early: Aim to find and fix vulnerabilities before they reach production by shifting security left, earlier in the development cycle.

4 best practices for context-driven cloud security

Once you’re able to better understand the context around security data in your environment, how do you fit those insights into a holistic cloud security strategy? For Lee, this comes down to four key components that make up the framework for cloud-native security.

1. Visibility and findings

You can’t secure what you can’t see — so the first step in this process is to take a full inventory of your attack surface. With different kinds of cloud resources in place and providers releasing new services frequently, understanding the security posture of these pieces of your infrastructure is critical. This includes understanding not just vulnerabilities and misconfigurations but also access, permissions, and identities.
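
To make this concrete, here is a minimal sketch of what a first-pass inventory might look like on AWS. It is an illustration only (not Rapid7’s tooling), written with boto3 and assuming credentials and a default region are already configured:

```python
import boto3

# Gather a quick cross-service inventory as a starting point for visibility.
ec2 = boto3.client("ec2")
s3 = boto3.client("s3")
iam = boto3.client("iam")

instances = [
    i["InstanceId"]
    for page in ec2.get_paginator("describe_instances").paginate()
    for r in page["Reservations"]
    for i in r["Instances"]
]
buckets = [b["Name"] for b in s3.list_buckets()["Buckets"]]
users = [u["UserName"] for u in iam.list_users()["Users"]]

print(f"{len(instances)} EC2 instances, {len(buckets)} S3 buckets, {len(users)} IAM users")
```

A production inventory runs continuously, spans every account and provider, and covers far more resource types; the point is that visibility starts with queryable data rather than spreadsheets.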

“Understanding the layer from the infrastructure to the workload to the identity can provide a lot of confidence,” Lee said.

2. Contextual prioritization

Not everything you discover in this inventory will be of equal importance, and treating it all the same way just isn’t practical or feasible. The vast amount of data that companies collect today can easily overwhelm security analysts — and this is where context really comes in.

With integrated visibility across your cloud infrastructure, you can make smarter decisions about what risks to prioritize. Then, you can assign ownership to resource owners and help them understand how those priorities were identified, improving transparency and promoting trust.

3. Prevent and automate

The cloud is built with automation in mind through Infrastructure as Code — and this plays a key role in security. Automation can help boost efficiency by minimizing the time it takes to detect, remediate, or contain threats. A shift-left strategy can also help with prevention by building security into deployment pipelines, so production teams can identify vulnerabilities earlier.

Jane echoed this sentiment in her talk, recommending that companies “automate to enable — but not force — remediation” and use tagging to drive remediation of vulnerabilities found running in production.
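
As a rough illustration of that tagging pattern, a remediation workflow might flag vulnerable instances for their owners rather than terminating them outright. The instance ID and tag name below are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Instance IDs flagged by a vulnerability scan (hypothetical values).
flagged = ["i-0123456789abcdef0"]

# Tag rather than terminate: downstream automation or the resource owner
# can act on the tag, which enables remediation without forcing it.
ec2.create_tags(
    Resources=flagged,
    Tags=[{"Key": "remediation-required", "Value": "true"}],
)
```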

4. Runtime monitoring

The next step is to continually monitor the environment for vulnerabilities and threat activity — and as you might have guessed, monitoring looks a little different in the cloud. For Lee, it’s about leveraging the increased number of signals to understand if there’s any drift away from the way the service was originally configured.

He also recommended using behavioral analysis to detect threat activity and setting up purpose-built detections that are specific to cloud infrastructure. This will help ensure the security operations center (SOC) has the most relevant information possible, so they can perform more effective investigations.

Lee stressed that in order to carry out the core components of cloud security and achieve the outcomes companies are looking for, having an integrated ecosystem is absolutely essential. This will help prevent data from becoming siloed, enable security pros to obtain that ever-important context around their data, and let teams collaborate with less friction.

Looking for more insights on how to adapt your security program to a cloud-native world? Check out Lee’s presentation on demand, or watch our replays of Rapid7 speakers’ sessions from RSAC 2022.

A sneak peek at the identity and access management sessions for AWS re:Inforce 2022

Post Syndicated from Ilya Epshteyn original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-identity-and-access-management-sessions-for-aws-reinforce-2022/

Register now with discount code SALFNj7FaRe to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

AWS re:Inforce 2022 will take place in-person in Boston, MA, on July 26 and 27 and will include some exciting identity and access management sessions. AWS re:Inforce 2022 features content in the following five areas:

  • Data protection and privacy
  • Governance, risk, and compliance
  • Identity and access management
  • Network and infrastructure security
  • Threat detection and incident response

The identity and access management track will showcase how quickly you can get started to securely manage access to your applications and resources as you scale on AWS. You will hear from customers about how they integrate their identity sources and establish a consistent identity and access strategy across their on-premises environments and AWS. Identity experts will discuss best practices for establishing an organization-wide data perimeter and simplifying access management with the right permissions, to the right resources, under the right conditions. You will also hear from AWS leaders about how we’re working to make identity, access control, and resource management simpler every day. This post highlights some of the identity and access management sessions that you can add to your agenda. To learn about sessions from across the content tracks, see the AWS re:Inforce catalog preview.

Breakout sessions

Lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically conclude with 10–15 minutes of Q&A.

IAM201: Security best practices with AWS IAM
AWS IAM is an essential service that helps you securely control access to your AWS resources. In this session, learn about IAM best practices like working with temporary credentials, applying least-privilege permissions, moving away from users, analyzing access to your resources, validating policies, and more. Leave this session with ideas for how to secure your AWS resources in line with AWS best practices.

IAM301: AWS Identity and Access Management (IAM) the practical way
Building secure applications and workloads on AWS means knowing your way around AWS Identity and Access Management (AWS IAM). This session is geared toward the curious builder who wants to learn practical IAM skills for defending workloads and data, with a technical, first-principles approach. Gain knowledge about what IAM is and a deeper understanding of how it works and why.

IAM302: Strategies for successful identity management at scale with AWS SSO
Enterprise organizations often come to AWS with existing identity foundations. Whether new to AWS or maturing, organizations want to better understand how to centrally manage access across AWS accounts. In this session, learn the patterns many customers use to succeed in deploying and operating AWS Single Sign-On at scale. Get an overview of different deployment strategies, features to integrate with identity providers, application system tags, how permissions are deployed within AWS SSO, and how to scale these functionalities using features like attribute-based access control.

IAM304: Establishing a data perimeter on AWS, featuring Vanguard
Organizations are storing an unprecedented and increasing amount of data on AWS for a range of use cases including data lakes, analytics, machine learning, and enterprise applications. They want to make sure that sensitive non-public data is only accessible to authorized users from known locations. In this session, dive deep into the controls that you can use to create a data perimeter that allows access to your data only from expected networks and by trusted identities. Hear from Vanguard about how they use data perimeter controls in their AWS environment to meet their security control objectives.

IAM305: How Guardian Life validates IAM policies at scale with AWS
Attend this session to learn how Guardian Life shifts IAM security controls left to empower builders to experiment and innovate quickly, while minimizing the security risk exposed by granting over-permissive permissions. Explore how Guardian validates IAM policies in Terraform templates against AWS best practices and Guardian’s security policies using AWS IAM Access Analyzer and custom policy checks. Discover how Guardian integrates this control into CI/CD pipelines and codifies their exception approval process.

IAM306: Managing B2B identity at scale: Lessons from AWS and Trend Micro
Managing identity for B2B multi-tenant solutions requires tenant context to be clearly defined and propagated with each identity. It also requires proper onboarding and automation mechanisms to do this at scale. Join this session to learn about different approaches to managing identities for B2B solutions with Amazon Cognito and learn how Trend Micro is doing this effectively and at scale.

IAM307: Automating short-term credentials on AWS, with Discover Financial Services
As a financial services company, Discover Financial Services considers security paramount. In this session, learn how Discover uses AWS Identity and Access Management (IAM) to help achieve their security and regulatory obligations. Learn how Discover manages their identities and credentials within a multi-account environment and how Discover fully automates key rotation with zero human interaction using a solution built on AWS with IAM, AWS Lambda, Amazon DynamoDB, and Amazon S3.

Builders’ sessions

Small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

IAM351: Using AWS SSO and identity services to achieve strong identity management
Organizations often manage human access using IAM users or through federation with external identity providers. In this builders’ session, explore how AWS SSO centralizes identity federation across multiple AWS accounts, replaces IAM users and cross-account roles to improve identity security, and helps administrators more effectively scope least privilege. Additionally, learn how to use AWS SSO to activate time-based access and attribute-based access control.

IAM352: Anomaly detection and security insights with AWS Managed Microsoft AD
This builders’ session demonstrates how to integrate AWS Managed Microsoft AD with native AWS services like Amazon CloudWatch Logs and Amazon CloudWatch metrics and alarms, combined with anomaly detection, to identify potential security issues and provide actionable insights for operational security teams.

Chalk talks

Highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

IAM231: Prevent unintended access: AWS IAM Access Analyzer policy validation
In this chalk talk, walk through ways to use AWS IAM Access Analyzer policy validation to review IAM policies that do not follow AWS best practices. Learn about the Access Analyzer APIs that help validate IAM policies and how to use these APIs to prevent IAM policies from reaching your AWS environment through mechanisms like AWS CloudFormation hooks and CI/CD pipeline controls.
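
As a sketch of the kind of pipeline check the session describes, a CI step could call the Access Analyzer ValidatePolicy API on a policy document before it ships. This example uses boto3, and the deliberately over-broad policy is illustrative:

```python
import json

import boto3

analyzer = boto3.client("accessanalyzer")

policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}

# Findings include errors, security warnings, and suggestions; a pipeline
# can fail the build when any security findings come back.
findings = analyzer.validate_policy(
    policyDocument=json.dumps(policy),
    policyType="IDENTITY_POLICY",
)["findings"]

for finding in findings:
    print(finding["findingType"], finding["issueCode"])
```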

IAM232: Navigating the consumer identity first mile using Amazon Cognito
Amazon Cognito allows you to configure sign-in and sign-up experiences for consumers while extending user management capabilities to your customer-facing application. Join this chalk talk to learn about the first steps for integrating your application and getting started with Amazon Cognito. Learn best practices to manage users and how to configure a customized branding UI experience, while creating a fully managed OpenID Connect provider with Amazon Cognito.

IAM331: Best practices for delegating access on AWS
This chalk talk demonstrates how to use built-in capabilities of AWS Identity and Access Management (IAM) to safely allow developers to grant entitlements to their AWS workloads (PassRole/AssumeRole). Additionally, learn how developers can be granted the ability to take self-service IAM actions (CRUD IAM roles and policies) with permissions boundaries.

IAM332: Developing preventive controls with AWS identity services
Learn about how you can develop and apply preventive controls at scale across your organization using service control policies (SCPs). This chalk talk is an extension of the preventive controls within the AWS identity services guide, and it covers how you can meet the security guidelines of your organization by applying and developing SCPs. In addition, it presents strategies for how to effectively apply these controls in your organization, from day-to-day operations to incident response.

IAM333: IAM policy evaluation deep dive
In this chalk talk, learn how policy evaluation works in detail and walk through some advanced IAM policy evaluation scenarios. Learn how a request context is evaluated, the pros and cons of different strategies for cross-account access, how to use condition keys for actions that touch multiple resources, when to use principal and aws:PrincipalArn, when it does and doesn’t make sense to use a wildcard principal, and more.

Workshops

Interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

IAM271: Applying attribute-based access control using AWS IAM
This workshop provides hands-on experience applying attribute-based access control (ABAC) to achieve a secure and scalable authorization model on AWS. Learn how and when to apply ABAC, which is native to AWS Identity and Access Management (IAM). Also learn how to find resources that could be impacted by different ABAC policies and session tagging techniques to scale your authorization model across Regions and accounts within AWS.

IAM371: Building a data perimeter to allow access to authorized users
In this workshop, learn how to create a data perimeter by building controls that allow access to data only from expected network locations and by trusted identities. The workshop consists of five modules, each designed to illustrate a different AWS Identity and Access Management (IAM) and network control. Learn where and how to implement the appropriate controls based on different risk scenarios. Discover how to implement these controls as service control policies, identity- and resource-based policies, and virtual private cloud endpoint policies.

IAM372: How and when to use different IAM policy types
In this workshop, learn how to identify when to use various policy types for your applications. Work through hands-on labs that take you through a typical customer journey to configure permissions for a sample application. Configure policies for your identities, resources, and CI/CD pipelines using permission delegation to balance security and agility. Also learn how to configure enterprise guardrails using service control policies.

If these sessions look interesting to you, join us in Boston by registering for re:Inforce 2022. We look forward to seeing you there!

Author

Ilya Epshteyn

Ilya is a Senior Manager of Identity Solutions in AWS Identity. He helps customers to innovate on AWS by building highly secure, available, and scalable architectures. He enjoys spending time outdoors and building Lego creations with his kids.

Marc von Mandel

Marc leads the product marketing strategy and execution for AWS Identity Services. Prior to AWS, Marc led product marketing at IBM Security Services across several categories, including Identity and Access Management Services (IAM), Network and Infrastructure Security Services, and Cloud Security Services. Marc currently lives in Atlanta, Georgia, and has worked in cybersecurity and the public cloud for more than twelve years.

Identifying Cloud Waste to Contain Unnecessary Costs

Post Syndicated from Ryan Blanchard original https://blog.rapid7.com/2022/06/07/identifying-cloud-waste-to-contain-unnecessary-costs/

Cloud adoption has exploded over the past decade or so, and for good reason. Many digital transformation advancements – and even the complete reimagination of entire industries – can be directly mapped and attributed to cloud innovation. While this rapid pace of innovation has had a profound impact on businesses and how we connect with our customers and end users, the journey to the cloud isn’t all sunshine and rainbows.

Along with increased efficiency, accelerated innovation, and added flexibility comes an exponential increase in complexity, which can make managing and securing cloud-based applications and workloads a daunting challenge. This added complexity can make it difficult to maintain visibility into what’s running across your cloud(s).

Beyond management challenges, organizations often run into massive increases in IT costs as they scale. Whether from neglecting to shut down old resources when they are no longer needed or over-provisioning them from the beginning to avoid auto-scaling issues, cloud waste and overspend are among the most prevalent challenges that organizations face when adopting and accelerating cloud consumption.

Just how prevalent is this issue? Well, according to Flexera’s 2022 State of Cloud Report, nearly 60% of cloud decision-makers say optimizing their cloud usage to cut costs is a top priority for this year.

The cost benefits of reducing waste can be massive, but knowing where to look and what the most common culprits of waste are can be a challenge, particularly if your organization is a relative novice when it comes to the cloud.

Common cases of cloud waste and how to avoid them

Now that we’ve covered the factors that drive exploding cloud budgets, let’s take a look at some of the most common cases of cloud waste we see, and the types of checks you and your teams should make to avoid unnecessary spending. I’ve categorized these issues as major, moderate, and minor, based on the relative cost savings possible when customers we’ve worked with eliminate them.

Important to note: This is what we’ve seen in our experience, but the actual real-world impact will vary based on each organization’s specific situation.

Unattached volumes (major)

Repeatedly creating and terminating instances often leaves behind volumes that belonged to instances that have since been terminated. These unused and overlooked volumes contribute directly to increased costs, while delivering little or no value.

Cloud teams should identify volumes that are not shown as attached to any instances. Once detected, schedule unattached storage volumes for deletion if they are no longer in use. Alternatively, you could minimize overhead by transitioning these volumes to serve as offline backups.
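
On AWS, for example, a volume in the “available” state is attached to nothing. A minimal boto3 sketch of the check (credentials and region assumed to be configured):

```python
import boto3

ec2 = boto3.client("ec2")

# Volumes with status "available" are not attached to any instance.
unattached = [
    v["VolumeId"]
    for page in ec2.get_paginator("describe_volumes").paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )
    for v in page["Volumes"]
]
print("Unattached volumes:", unattached)
```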

Load balancer with no instances (major)

Load balancers distribute traffic across instances to handle the load of your application. If a load balancer is not attached to any instances, it will consume costs without providing any functionality. An orphaned load balancer could also be an indication that an instance was deleted or otherwise impaired.

You should identify unattached load balancers, and double-check to make sure there isn’t a larger problem related to an improperly deleted instance that was once associated with those load balancers. After you’ve determined there isn’t a bigger issue to dig into, notify the necessary resource owners that they should delete them.
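
Here is a minimal sketch of that check for AWS Classic Load Balancers; for ALBs and NLBs you would instead inspect target group health through the elbv2 API:

```python
import boto3

elb = boto3.client("elb")

# Classic Load Balancers with no registered instances consume cost
# without serving any traffic.
orphaned = [
    lb["LoadBalancerName"]
    for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]
    if not lb["Instances"]
]
print("Load balancers with no instances:", orphaned)
```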

Database instance with zero connections (moderate)

Databases that have not been connected to within a given time frame are likely to be billable for all classes of service, except for free tiers.

After some agreed-upon time frame (we typically see teams use about 14 days), you should consider these databases stale and remove them. It’s important here to be sure there isn’t a good reason for the perceived inactivity before you go ahead and hit that delete button.
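
One way to approximate this check on AWS is to read the CloudWatch DatabaseConnections metric for each RDS instance over the agreed-upon window. A hedged sketch:

```python
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)  # the staleness window discussed above

for db in rds.describe_db_instances()["DBInstances"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="DatabaseConnections",
        Dimensions=[{"Name": "DBInstanceIdentifier",
                     "Value": db["DBInstanceIdentifier"]}],
        StartTime=start,
        EndTime=end,
        Period=86400,  # one data point per day
        Statistics=["Maximum"],
    )
    # Zero connections (or no data at all) across the window marks a
    # candidate; confirm with the owner before deleting anything.
    if all(point["Maximum"] == 0 for point in stats["Datapoints"]):
        print("Candidate for removal:", db["DBInstanceIdentifier"])
```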

Snapshot older than 60 days (moderate)

Snapshots represent a complete backup of your computing instances at a specific point in time. Maintaining snapshot backups incurs cost and provides diminishing returns over time, as snapshots become old and diverge more and more from the instances they originally represented.  

Unless regulatory compliance or aggressive backup schedules mandate otherwise, old snapshots should be purged. Before scheduling a deletion or taking any other actions, create a ServiceNow Incident for reporting purposes and to ensure snapshot policy socialization.
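
Finding the candidates is the easy part; the reporting step is organization-specific and omitted in this sketch:

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=60)

# Only examine snapshots owned by this account.
old_snapshots = [
    s["SnapshotId"]
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"])
    for s in page["Snapshots"]
    if s["StartTime"] < cutoff
]
print("Snapshots older than 60 days:", old_snapshots)
```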

Instance with high core counts (minor)

Instances that have more cores will tend to perform tasks more quickly and be able to handle larger loads. However, with greater power comes greater cost. For many workloads, eight cores should be more than sufficient.

Users should identify these instances, mark them non-compliant, and notify the resource owner or operations team about potentially downsizing, stopping, or deleting instances with more than eight cores.
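
A quick way to surface these instances on AWS, again assuming boto3 with configured credentials:

```python
import boto3

ec2 = boto3.client("ec2")

# Flag instances with more than eight physical cores.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            cores = instance.get("CpuOptions", {}).get("CoreCount", 0)
            if cores > 8:
                print(instance["InstanceId"], "has", cores, "cores")
```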

How InsightCloudSec can help contain cloud costs

By this point, you might be wondering why we here at Rapid7 would be writing about cloud cost management. I mean, we’re a security company, right? While that’s true, and our dedication to powering protectors hasn’t waned one bit, the benefits of InsightCloudSec (ICS) don’t stop there.

ICS provides real-time visibility into your entire cloud asset inventory across all of your cloud platforms, which gives us the ability to provide relevant insights and automation that help improve cost effectiveness. In fact, we’ve got built-in checks for each of the issues outlined above (and many more) available right out of the box, as well as recommended remediation steps and tips for automating the entire process with native bots. So while you might initially look into our platform for the ability to simplify cloud security and compliance, you can also use it to get a handle on that runaway cloud spend.

Our customers have realized massive savings on their cloud bills over the years, covering portions – or in some cases, the entirety – of the cost of their InsightCloudSec licenses. (Gotta love a security platform that can pay for itself!) If you’re interested in learning more about how you can accelerate in the cloud without sacrificing security and save some money at the same time, don’t hesitate to request a free demo!

A sneak peek at the data protection and privacy sessions for AWS re:Inforce 2022

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-data-protection-and-privacy-sessions-for-reinforce-2022/

Register now with discount code SALUZwmdkJJ to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

Today we want to tell you about some of the engaging data protection and privacy sessions planned for AWS re:Inforce. AWS re:Inforce is a learning conference where you can learn more about security, compliance, identity, and privacy. When you attend the event, you have access to hundreds of technical and business sessions, an AWS Partner expo hall, a keynote speech from AWS Security leaders, and more. AWS re:Inforce 2022 will take place in-person in Boston, MA on July 26 and 27. re:Inforce 2022 features content in the following five areas:

  • Data protection and privacy
  • Governance, risk, and compliance
  • Identity and access management
  • Network and infrastructure security
  • Threat detection and incident response

This post will highlight some of the data protection and privacy offerings that you can sign up for, including breakout sessions, chalk talks, builders’ sessions, and workshops. For the full catalog of all tracks, see the AWS re:Inforce session preview.

Breakout sessions

Lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

DPP101: Building privacy compliance on AWS
In this session, learn where technology meets governance with an emphasis on building. With the privacy regulation landscape continuously changing, organizations need innovative technical solutions to help solve privacy compliance challenges. This session covers three unique customer use cases and explores privacy management, technology maturity, and how AWS services can address specific concerns. The studies presented help identify where you are in the privacy journey, provide actions you can take, and illustrate ways you can work towards privacy compliance optimization on AWS.

DPP201: Meta’s secure-by-design approach to supporting AWS applications
Meta manages a globally distributed data center infrastructure with a growing number of AWS Cloud applications. With all applications, Meta starts by understanding data security and privacy requirements alongside application use cases. This session covers the secure-by-design approach for AWS applications that helps Meta put automated safeguards before deploying applications. Learn how Meta handles account lifecycle management through provisioning, maintaining, and closing accounts. The session also details Meta’s global monitoring and alerting systems that use AWS technologies such as Amazon GuardDuty, AWS Config, and Amazon Macie to provide monitoring, access-anomaly detection, and vulnerable-configuration detection.

DPP202: Uplifting AWS service API data protection to TLS 1.2+
AWS is constantly raising the bar to ensure customers use the most modern Transport Layer Security (TLS) encryption protocols, which meet regulatory and security standards. In this session, learn how AWS can help you easily identify if you have any applications using older TLS versions. Hear tips and best practices for using AWS CloudTrail Lake to detect the use of outdated TLS protocols, and learn how to update your applications to use only modern versions. Get guidance, including a demo, on building metrics and alarms to help monitor TLS use.

DPP203: Secure code and data in use with AWS confidential compute capabilities
At AWS, confidential computing is defined as the use of specialized hardware and associated firmware to protect in-use customer code and data from unauthorized access. In this session, dive into the hardware- and software-based solutions AWS delivers to provide a secure environment for customer organizations. With confidential compute capabilities such as the AWS Nitro System, AWS Nitro Enclaves, and NitroTPM, AWS offers protection for customer code and sensitive data such as personally identifiable information, intellectual property, and financial and healthcare data. Securing data allows for use cases such as multi-party computation, blockchain, machine learning, cryptocurrency, secure wallet applications, and banking transactions.

Builders’ sessions

Small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

DPP251: Disaster recovery and resiliency for AWS data protection services
Mitigating unknown risks means planning for any situation. To help achieve this, you must architect for resiliency. Disaster recovery (DR) is an important part of your resiliency strategy and concerns how your workload responds when a disaster strikes. To this end, many organizations are adopting architectures that function across multiple AWS Regions as a DR strategy. In this builders’ session, learn how to implement resiliency with AWS data protection services. Attend this session to gain hands-on experience with the implementation of multi-Region architectures for critical AWS security services.

DPP351: Implement advanced access control mechanisms using AWS KMS
Join this builders’ session to learn how to implement access control mechanisms in AWS Key Management Service (AWS KMS) and enforce fine-grained permissions on sensitive data and resources at scale. Define AWS KMS key policies, use attribute-based access control (ABAC), and discover advanced techniques such as grants and encryption context to solve challenges in real-world use cases. This builders’ session is aimed at security engineers, security architects, and anyone responsible for implementing security controls such as segregating duties between encryption key owners, users, and AWS services or delegating access to different principals using different policies.

DPP352: TLS offload and containerized applications with AWS CloudHSM
With AWS CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. This builders’ session covers two common scenarios for CloudHSM: TLS offload using NGINX and OpenSSL Dynamic agent and a containerized application that uses PKCS#11 to perform crypto operations. Learn about scaling containerized applications, discover how metrics and logging can help you improve the observability of your CloudHSM-based applications, and review audit records that you can use to assess compliance requirements.

DPP353: How to implement hybrid public key infrastructure (PKI) on AWS
As organizations migrate workloads to AWS, they may be running a combination of on-premises and cloud infrastructure. When certificates are issued to this infrastructure, having a common root of trust to the certificate hierarchy allows for consistency and interoperability of the public key infrastructure (PKI) solution. In this builders’ session, learn how to deploy a PKI that allows such capabilities in a hybrid environment. This solution uses Windows Certificate Authority (CA) and ACM Private CA to distribute and manage x.509 certificates for Active Directory users, domain controllers, network components, mobile, and AWS services, including Amazon API Gateway, Amazon CloudFront, and Elastic Load Balancing.

Chalk talks

Highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

DPP231: Protecting healthcare data on AWS
Achieving strong privacy protection through technology is key to protecting patient privacy. Privacy protection is fundamental for healthcare compliance and is an ongoing process that demands that legal, regulatory, and professional standards be continually met. In this chalk talk, learn about data protection, privacy, and how AWS maintains a standards-based risk management program so that the HIPAA-eligible services can specifically support HIPAA administrative, technical, and physical safeguards. Also consider how organizations can use these services to protect healthcare data on AWS in accordance with the shared responsibility model.

DPP232: Protecting business-critical data with AWS migration and storage services
Business-critical applications that were once considered too sensitive to move off premises are now moving to the cloud with an extension of the security perimeter. Join this chalk talk to learn about securely shifting these mature applications to cloud services with the AWS Transfer Family and helping to secure data in Amazon Elastic File System (Amazon EFS), Amazon FSx, and Amazon Elastic Block Storage (Amazon EBS). Also learn about tools for ongoing protection as part of the shared responsibility model.

DPP331: Best practices for cutting AWS KMS costs using Amazon S3 bucket keys
Learn how AWS customers are using Amazon S3 bucket keys to cut their AWS Key Management Service (AWS KMS) request costs by up to 99 percent. In this chalk talk, hear about the best practices for exploring your AWS KMS costs, identifying suitable buckets to enable bucket keys, and providing mechanisms to apply bucket key benefits to existing objects.
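
For context, enabling a bucket key is a single change to a bucket’s default encryption configuration. A minimal sketch (the bucket name and key alias are hypothetical, and the setting applies to newly written objects):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-key",  # hypothetical alias
            },
            # The bucket key lets S3 reuse a data key across objects,
            # sharply reducing the number of requests made to AWS KMS.
            "BucketKeyEnabled": True,
        }]
    },
)
```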

DPP332: How to securely enable third-party access
In this chalk talk, learn about ways you can securely enable third-party access to your AWS account. Learn why you should consider using services such as Amazon GuardDuty, AWS Security Hub, AWS Config, and others to improve auditing, alerting, and access control mechanisms. Hardening an account before permitting external access can help reduce security risk and improve the governance of your resources.

Workshops

Interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

DPP271: Isolating and processing sensitive data with AWS Nitro Enclaves
Join this hands-on workshop to learn how to isolate highly sensitive data from your own users, applications, and third-party libraries on your Amazon EC2 instances using AWS Nitro Enclaves. Explore Nitro Enclaves, discuss common use cases, and build and run an enclave. This workshop covers enclave isolation, cryptographic attestation, enclave image files, building a local vsock communication channel, debugging common scenarios, and the enclave lifecycle.

DPP272: Data discovery and classification with Amazon Macie
This workshop familiarizes you with Amazon Macie and how to scan and classify data in your Amazon S3 buckets. Work with Macie (data classification) and AWS Security Hub (centralized security view) to view and understand how data in your environment is stored and to understand any changes in Amazon S3 bucket policies that may negatively affect your security posture. Learn how to create a custom data identifier, plus how to create and scope data discovery and classification jobs in Macie.

DPP273: Architecting for privacy on AWS
In this workshop, follow a regulatory-agnostic approach to build and configure privacy-preserving architectural patterns on AWS including user consent management, data minimization, and cross-border data flows. Explore various services and tools for preserving privacy and protecting data.

DPP371: Building and operating a certificate authority on AWS
In this workshop, learn how to securely set up a complete CA hierarchy using AWS Certificate Manager Private Certificate Authority and create certificates for various use cases. These use cases include internal applications that terminate TLS, code signing, document signing, IoT device authentication, and email authenticity verification. The workshop covers job functions such as CA administrators, application developers, and security administrators and shows you how these personas can follow the principle of least privilege to perform various functions associated with certificate management. Also learn how to monitor your public key infrastructure using AWS Security Hub.

If any of these sessions look interesting to you, consider joining us in Boston by registering for re:Inforce 2022. We look forward to seeing you there!

Author

Marta Taggart

Marta is a Seattle-native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

Katie Collins

Katie is a Product Marketing Manager in AWS Security, where she brings her enthusiastic curiosity to deliver products that drive value for customers. Her experience also includes product management at both startups and large companies. With a love for travel, Katie is always eager to visit new places while enjoying a great cup of coffee.

Cybersecurity Is More Than a Checklist: Joel Yonts on Tech’s Unfair Disadvantage

Post Syndicated from Peter Scott original https://blog.rapid7.com/2022/06/03/cybersecurity-is-more-than-a-checklist-joel-yonts-on-techs-unfair-disadvantage/

Breaches caused by misconfigurations are alarmingly common. Over a third of all cyberattacks in 2020 were the result of firewall, cloud, and server misconfigurations. The tech industry is at the highest risk of bad actors taking advantage of these preventable vulnerabilities, with the information sector falling victim to a majority of 2021’s breaches caused by misconfigurations. One such instance is the Android app data leak, which compromised over 100 million users’ data.

As organizations mature and innovate, so do malicious actors – and many of them target tech companies’ public cloud infrastructure. In today’s world of rapid development, companies must prioritize the protection of their infrastructure with a mix of people, processes, and technology.

Achieving this means going beyond tactical security approaches and instilling organization-wide commitment. While checklists can be a helpful tool, security has to go beyond that; it should be a mission rather than a to-do list to set aside when complete. Neglecting to fulfill that mission can cause serious financial and reputational damage.

When it comes to strengthening their security postures, tech companies must address an industry-specific set of challenges. Recently, we were fortunate to sit down with Joel Yonts, Chief Research Officer and Strategist at Malicious Streams and CEO at Secure Robotics.ai. A seasoned security executive with over 25 years of experience, Yonts’s expertise in enterprise security, digital forensics, artificial intelligence, and robotic and IoT systems gives him a nuanced, data-driven perspective.

Read our Q&A below to get his insights on today’s best practices in security for tech companies.

Can you tell us what you do in your roles as Chief Research Officer and Strategist at Malicious Streams and CEO at Secure Robotics?

Malicious Streams delivers holistic cybersecurity strategies. The most complex things that I deal with are people – researching, listening, and observing how people work at all levels to determine how I can connect them.

I am a fractional CISO for several multinational organizations. I also build cybersecurity programs, do cyber maturity assessments, and build cybersecurity strategies in emerging areas.

In true Joel fashion, however, I needed more research time than I was getting in those roles. So, I spun up Secure Robotics.ai as a side company to Malicious Streams. Secure Robotics focuses on cyber protection for intelligent machines.

There’s a technology and capability gap right now: Technology adoption has outpaced cybersecurity. There are areas where we can’t protect these intelligent machines properly. I’m focused on developing practices and capabilities like digital forensics to better secure artificial intelligence systems.

How do you think about the tech adoption/security maturity gap? How do you address it?

This is one of the things that I talk about often: In cybersecurity, things move fast. The attacker-defender cycle is nonstop — as one side innovates, the other side does too. It’s a zero-sum game, a head-to-head competition. Every time a company innovates and creates some new technology or security tool, I always ask, “How are the attackers going to innovate to address that?”

For example, if you’ve got a new technology that improves network security, you can project that cybercriminals might subvert that by attacking the host. They might go upstream or downstream from there. Once you accept that, you can start thinking through those patterns and plan for early detection and prevention in those new areas.

In other words, if you don’t think a couple of moves ahead in cybersecurity, you will lose. The attackers can mobilize and deploy technology far faster than companies can. Anticipation is very important for long-term security.

When you’re working with companies on their cybersecurity strategies, how do you decide what to prioritize?

One of the biggest problems that I deal with in cybersecurity is priorities. Everybody’s got them. But many organizations have far too many, and this creates a problem when it comes to focusing resources. We need to wrangle them down to just a few of the highest-risk and greatest-value items for us to be successful.

We’ve seen the collapse of the castle walls around corporate networks. I’ve heard it said that the internet has become the new corporate network, and it’s such a true statement.

A corporate solution isn’t one monolithic server in a data center anymore. It is a cloud solution connected to seven SaaS solutions and three other cloud environments. This network of technologies may communicate across a private network or through the open internet. This requires different security approaches than in the past.

Another big priority that I see is identity and access management (IAM). Regretfully, I still see a lot of companies struggling with multi-factor authentication. There are still people working to catch up because there are some complexities associated with IAM, but it’s a high-risk area and should be a priority. IAM is a foundation in security that is expected to be in place. Incidents here are likely to create additional brand damage and draw regulators’ attention.

Another big challenge is cyber insurance. Rates are skyrocketing — double or triple what they used to be. Often, your rate will depend on how well you can prove the maturity of your cyber program. One of the questions they’re likely to ask is, “Do you use multi-factor authentication?” Every cyber insurance company will ask that question because it’s such an effective control. If you answer their questions wrong, it could cost you millions of dollars in higher premiums.

What have been the most valuable developments in cybersecurity in recent years?

The first thing that jumps to mind is EDR technology. We obviously want to solve cybersecurity challenges much earlier in the process. But EDR is instrumental in the detection and response phases.

EDR products can catch an attacker in an environment very quickly, even if they’re using stealthy technologies. These days, attackers don’t deploy a bunch of new tools when they break into an environment; they just use the tools you already have against you. But even in those scenarios, EDR has been a massive game-changer.

The most significant advancement in cybersecurity hasn’t been improving vulnerability detection but improving vulnerability management. If you find one security flaw in a company, that’s manageable. But what if you find 700,000 flaws in a company? How do you sort through that in a meaningful way so you can prioritize, communicate, take action, and maintain an audit trail? That’s where I’ve seen a lot of innovation recently.

We’ve been in the cloud now for a while. Many companies are still doing cloud security wrong in many ways, but I think that the move to cloud security posture management and the adoption of multi-cloud tools for visibility and control have been significant steps in the right direction.

What challenges do tech companies face that other companies don’t necessarily wrestle with?

One of the big ones is the level of forgiveness from users. Whether you’re selling a piece of technology or a security service, if you have a breach or a major vulnerability, there’s a lot less forgiveness than if you’re a retailer with the same issue. The high expectations mean you have to be more diligent to keep your customers happy.

Fair or not, people think, “I need technology companies to help solve my problems, not add to my problems.” When a retailer or manufacturer has a public cybersecurity issue, you don’t hear people saying they won’t do business with that company. But with tech companies that have security issues — sometimes with a very skilled attacker that any company would struggle to defend against — I hear a lot of people saying they will take their business elsewhere. So that’s a definite challenge for them.  

Are there any specific threats that are a bigger risk for tech companies?

Supply chain risk is a significant threat for tech companies. Making decisions about where you’re going to source parts of your products and services, what you’re going to source, and how you’re going to source them — all of that is incredibly difficult from a security perspective, even down to the hardware level. Detecting threats on that level is just monumentally difficult.

For technology companies, embracing new technologies is part of their DNA, but it also brings more security challenges. For example, what does it mean to secure serverless environments? Containers? Those are technologies that have been adopted fast because of their value propositions, but not every company has figured out how to handle asset management, detection, and response for them.

The other challenge is rapid adoption of intelligent machines. For example, I just released a paper on chatbots. They’re valuable, offering efficiency and improving some aspects of customer experience. But a lot of the time they sit outside the cybersecurity program.

In this recent paper, I released a couple of proof-of-concept attacks where I trojanized a chatbot and skimmed credit cards with it. I duplicated a standard interface where someone checking on a product order could enter the information and see the real-time status of their purchase. I was able to compromise the chatbot interface and inject a couple of new fields so that, instead of just asking for the name and order, it asks for the credit card and zip code you used to place the order. In my trojanized version, it then stores that in a slot in the chatbot memory and never goes inside the company. It sits on the outside. Then when I, as an “attacker,” come and give it a specific order number, it dumps all the cards I skimmed out through the chat interface in an encoded format.

That’s a white-hat example, obviously, but attackers are out there figuring this out. Tech companies need to be seriously considering how they will secure chatbots and other intelligent machines they leverage.

What advice would you give tech companies looking for security solutions?

Don’t start with the technology – that is the very first thing. Technology companies may be the most guilty of doing this because technology is their business. But it’s not the way to go.

Instead, start by trying to understand and define the problem. You need to understand what it is you’re trying to protect, how you’re going to protect it, and what it looks like if it is not protected. Otherwise, you might decide to adopt a huge range of technologies — fantastic, big, monolithic solutions — and then find that there are massive gaps between them. They’re not connected. You need to make sure that you have an interlocking set of strategies to protect the entire attack surface area. Because guess what? If you have chinks in your armor, the attacker will probe and exploit those weaknesses.

For more insights on how to navigate the future of cloud security as a technology company, visit our hub page.

Join me in Boston this July for AWS re:Inforce 2022

Post Syndicated from CJ Moses original https://aws.amazon.com/blogs/security/join-me-in-boston-this-july-for-aws-reinforce-2022/

I’d like to personally invite you to attend the Amazon Web Services (AWS) security conference, AWS re:Inforce 2022, in Boston, MA on July 26–27. This event offers interactive educational content to address your security, compliance, privacy, and identity management needs. Join security experts, customers, leaders, and partners from around the world who are committed to the highest security standards, and learn how to improve your security posture.

As the new Chief Information Security Officer of AWS, my primary job is to help our customers navigate their security journey while keeping the AWS environment safe. AWS re:Inforce offers an opportunity for you to understand how to keep pace with innovation in your business while you stay secure. With recent headlines around security and data privacy, this is your chance to learn the tactical and strategic lessons that will help keep your systems and tools secure, while you build a culture of security in your organization.

AWS re:Inforce 2022 will kick off with my keynote on Tuesday, July 26. I’ll be joined by Steve Schmidt, now the Chief Security Officer (CSO) of Amazon, and Kurt Kufeld, VP of AWS Platform. You’ll hear us talk about the latest innovations in cloud security from AWS and learn what you can do to foster a culture of security in your business. Take a look at the most recent re:Invent presentation, Continuous security improvement: Strategies and tactics, and the latest re:Inforce keynote for examples of the type of content to expect.

For those who are just getting started on AWS, as well as our more tenured customers, AWS re:Inforce offers an opportunity to learn how to prioritize your security investments. By using the Security pillar of the AWS Well-Architected Framework, sessions address how you can build practical and prescriptive measures to protect your data, systems, and assets.

Sessions are offered at all levels and for all backgrounds, from business to technical, and there are learning opportunities in over 300 sessions across five tracks: Data Protection & Privacy; Governance, Risk & Compliance; Identity & Access Management; Network & Infrastructure Security; and Threat Detection & Incident Response. In these sessions, connect with and learn from AWS experts, customers, and partners who will share actionable insights that you can apply in your everyday work. At AWS re:Inforce, the majority of our sessions are interactive, such as workshops, chalk talks, boot camps, and gamified learning, which provides opportunities to hear about and act upon best practices. Sessions will be available from the intermediate (200) through expert (400) levels, so you can grow your skills no matter where you are in your career. Finally, there will be a leadership session for each track, where AWS leaders will share best practices and trends in each of these areas.

At re:Inforce, hear directly from AWS developers and experts, who will cover the latest advancements in AWS security, compliance, privacy, and identity solutions—including actionable insights your business can use right now. Plus, you’ll learn from AWS customers and partners who are using AWS services in innovative ways to protect their data, achieve security at scale, and stay ahead of bad actors in this rapidly evolving security landscape.

A full conference pass is $1,099. However, if you register today with the code ALUMkpxagvkV you’ll receive a $300 discount (while supplies last).

We’re excited to get back to re:Inforce in person; it is emblematic of our commitment to giving customers direct access to the latest security research and trends. We’ll continue to release additional details about the event on our website, and you can get real-time updates by following @AWSSecurityInfo. I look forward to seeing you in Boston, sharing a bit more about my new role as CISO and providing insight into how we prioritize security at AWS.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

CJ Moses

CJ Moses is the Chief Information Security Officer (CISO) at AWS. In his role, CJ leads product design and security engineering for AWS. His mission is to deliver the economic and security benefits of cloud computing to business and government customers. Prior to joining Amazon in 2007, CJ led the technical analysis of computer and network intrusion efforts at the U.S. Federal Bureau of Investigation Cyber Division. CJ also served as a Special Agent with the U.S. Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the information security industry today.

What It Takes to Securely Scale Cloud Environments at Tech Companies Today

Post Syndicated from Ben Austin original https://blog.rapid7.com/2022/05/25/what-it-takes-to-securely-scale-cloud-environments-at-tech-companies-today/

In January 2021, foreign trade marketing platform SocialArks was the target of a massive cyberattack. Security Magazine reported that the rapidly growing startup experienced a breach of over 214 million social media profiles and 400GB of data, exposing users’ names, phone numbers, email addresses, subscription data, and other sensitive information across Facebook, Instagram, and LinkedIn. According to Safety Detectives, the breach affected more than 318 million records in total, including those of high-profile influencers in the United States, China, the Netherlands, South Korea, and more.

The cause? A misconfigured database.

SocialArks’s Elasticsearch database contained scraped data from hundreds of millions of social media users from all around the world. The database was publicly exposed without password protection or encryption, meaning that any bad actor who obtained the company’s server IP address could easily access the private data.
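Exposures like this are usually trivial to confirm from the outside. As a minimal sketch (the address below is a placeholder, and you should only probe systems you are authorized to test), a check for an Elasticsearch endpoint that answers without credentials might look like this:

```python
import requests

# Placeholder address -- substitute a host you are authorized to test.
HOST = "203.0.113.10"

# Elasticsearch listens on port 9200 by default. If the _cat/indices API
# answers without credentials, anyone with the IP can enumerate the data.
resp = requests.get(f"http://{HOST}:9200/_cat/indices?v", timeout=5)

if resp.status_code == 200:
    print("Endpoint is publicly readable -- exposed indices:")
    print(resp.text)
elif resp.status_code in (401, 403):
    print("Endpoint requires authentication.")
else:
    print(f"Unexpected response: {resp.status_code}")
```

A 200 response to that request is essentially the condition SocialArks was in: no credentials required, just an IP address.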

What can tech companies learn from what happened to SocialArks?

A single misconfiguration can lead to major consequences — from reputational damage to revenue loss. As the cloud becomes increasingly pervasive and complex, tech companies know they must take advantage of innovative services to scale up. At the same time, DevOps and security teams must work together to ensure that they are using the cloud securely, from development to production.

Here are three ways to help empower your teams to take advantage of the many benefits of public cloud infrastructure without sacrificing security.

1. Improve visibility

Tech companies – probably more than those in any other industry – are keen to take advantage of the endless stream of new and innovative services coming from public cloud providers like AWS, Azure, and GCP. From more traditional cloud offerings like containers and databases to advanced machine learning, data analytics, and remote application delivery, developers at tech companies love to explore new cloud services as a means to spur innovation.

The challenge for security, of course, is that the sheer complexity of the average enterprise tech company’s cloud footprint is dizzying, not to mention the rapid rate of change. For example, a cloud environment with 10,000 compute instances can expect a daily churn of 20%, including auto-scaling groups, new and re-deployments of infrastructure and workloads, ongoing changes, and more. That works out to roughly 2,000 changed instances per day, which means that over the course of a year, security teams must monitor and apply guardrails to over 700,000 individual instances.

It’s easy for security (and operations) teams to wind up without unified visibility into what cloud services their development teams are using at any given point in time. Without a purpose-built multicloud security solution in place, there’s just no way to continuously monitor cloud and container services and maintain insight into potential risks.

It is entirely possible, however, to gain visibility. More than that, it’s necessary if you want to continue to scale. In the cloud world, the old security adage applies: You can’t secure what you can’t see. Total visibility into all cloud resources can help security teams quickly detect changes that could open the organization up to risk. With visibility in place, you can more readily assess risks, identify and remediate issues, and ensure continuous compliance with relevant regulations.
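What does that visibility look like in practice? As a simple illustration (a sketch using AWS’s boto3 SDK, assuming credentials are already configured), the following enumerates every EC2 instance across all enabled regions: the kind of baseline inventory that scheduled, single-region scans tend to miss.

```python
import boto3

# Discover every region enabled for this account.
ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

# Walk each region and list every EC2 instance, paginating so large
# environments are covered completely.
for region in regions:
    client = boto3.client("ec2", region_name=region)
    for page in client.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                print(region, instance["InstanceId"], instance["State"]["Name"])
```

A real multicloud security solution goes far beyond a script like this, but even something this small often surfaces instances nobody remembers launching.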

2. Create a culture of security

No one wants their DevOps and security teams to be working in opposition, especially in a rapid growth period. When you uphold DevSecOps principles, you eliminate the friction between DevOps and security professionals. There’s no need to “circle back” after an initial release or “push pause” on a scheduled deployment when securing the cloud throughout the CI/CD pipeline is just part of how the business operates. A culture that values security is vital when it comes to rapid scaling. You can’t rely on each individual to “do the right thing,” so you’re much better off building security into your culture on a deep level.

When it comes to timing your culture shift, all signs point to now. Fortune notes that while the pandemic-era adoption of hybrid work provides unprecedented flexibility and accessibility, it also can create a “nightmare scenario” with “hundreds (or thousands) of new vectors through which malicious actors can gain a foothold in your network.” Gartner reports that cloud security saw the largest spending increase of all other information security and risk management segments in 2021, ticking up by 41%. Yet, a survey by Cloud Security Alliance revealed that 76% of professionals polled fear that the risk of cloud misconfigurations will stay the same or increase.

Given these numbers, encouraging a culture of security is a present necessity, not a future concern. But how do you know when you’ve successfully created one?

The answer: When all parts of your team see cybersecurity as just another part of their job.

Of course, that’s easier said than done. Creating a culture of security requires processes that provide context and early feedback to developers, meaning that command and control is no longer security’s fallback position. Instead, collaboration should be the name of the game. Making security easy is what bridges the historical cultural divide between security and DevOps.

The utopian version of DevSecOps promises seamless collaboration – but each team has plenty on its own plate to worry about. How can tech companies foster a culture of security while optimizing their existing resources and workflows?

3. Focus on security by design

Trend Micro reports that simple cloud infrastructure misconfigurations account for 65% to 70% of all cloud security challenges. The Ponemon Institute and IBM found that the average cost of a data breach in 2021 was $4.24 million – the highest average cost recorded in the report’s 17-year history. That same report found that organizations with more mature cloud security practices contained breaches 77 days faster, on average, than those with less mature strategies.

Security professionals are human, too. They can only be in so many places at once. With talent already scarce, you want your security team to focus on creating new strategies, without getting bogged down by simple fixes.

That’s why integrating security measures into the dev cycle framework can help you move towards achieving that balance between speed and security. Embedding checks within the development process is one way to empower early detection, saving your team’s time and resources.

This approach helps catch problems like policy violations or misconfigurations without sacrificing the speed that developers love or the safety that security professionals need. Plus, building security into your development processes will empower your dev teams to correct issues right away as they’re alerted, making that last deployment the breath of relief it should be.

When you integrate security and compliance checks early in the dev lifecycle, you can prevent the majority of vulnerabilities from cropping up in the first place — meaning your dev and sec teams can rest easy knowing that their infrastructure as code (IaC) templates are secure from the beginning.
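To make that concrete, here is a minimal sketch of the kind of check that can run in a CI pipeline. It inspects a Terraform plan (exported with `terraform show -json`) and fails the build if any S3 bucket is being created with a public canned ACL. The file name and the single rule are illustrative; a real pipeline would typically use a full policy engine rather than a hand-rolled script.

```python
import json
import sys

# Load a Terraform plan exported with:
#   terraform plan -out=tfplan && terraform show -json tfplan > plan.json
with open(sys.argv[1]) as f:
    plan = json.load(f)

violations = []
for rc in plan.get("resource_changes", []):
    after = (rc.get("change") or {}).get("after") or {}
    # Illustrative rule: flag S3 buckets declared with a public canned ACL.
    if rc.get("type") == "aws_s3_bucket" and after.get("acl") in (
        "public-read",
        "public-read-write",
    ):
        violations.append(rc["address"])

if violations:
    print("Public S3 bucket ACLs found:", ", ".join(violations))
    sys.exit(1)  # non-zero exit fails the pipeline before anything deploys
print("No public bucket ACLs detected.")
```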

How to get started: Empower secure development

Get your developers implementing security without having to onboard them to an entirely new role. By integrating and automating security checks into the workflows and tools your DevOps teams already know and love, you empower them to prioritize both speed and security.

Taking on even one of the three strategies described above can be intimidating. We suggest getting started by focusing on actionable steps, which we cover in depth in our eBook below.

Scaling securely is possible. Want to learn more? Read up on 6 Strategies to Empower Secure Innovation at Enterprise Tech Companies to tackle the unique cloud security challenges facing the tech industry.


Update for CIS Google Cloud Platform Foundation Benchmarks – Version 1.3.0

Post Syndicated from Ryan Blanchard original https://blog.rapid7.com/2022/05/13/update-for-cis-google-cloud-platform-foundation-benchmarks-version-1-3-0/

Update for CIS Google Cloud Platform Foundation Benchmarks - Version 1.3.0

The Center for Internet Security (CIS) recently released an updated version of their Google Cloud Platform Foundation Benchmarks – Version 1.3.0. Expanding on previous iterations, the update adds 21 new benchmarks covering best practices for securing Google Cloud environments.

The updates were broad in scope, with recommendations covering configurations and policies ranging from resource segregation to Compute and Storage. In this post, we’ll briefly cover what CIS Benchmarks are, dig into a few key highlights from the newly released version, and show how Rapid7 InsightCloudSec can help your teams implement and maintain compliance as new guidance becomes available.

What are CIS Benchmarks?

In the rare case that you’ve never come across them, the CIS Benchmarks are a set of recommendations and best practices determined by contributors across the cybersecurity community intended to provide organizations and security practitioners with a baseline of configurations and policies to better protect their applications, infrastructure, and data.

While not a regulatory requirement, the CIS Benchmarks provide a foundation for establishing a strong security posture, and as a result, many organizations use them to guide the creation of their own internal policies. As new benchmarks are created and updates are announced, many throughout the industry sift through the recommendations to determine whether or not they should be implementing the guidelines in their own environments.

CIS Benchmarks can be even more beneficial to practitioners taking on emerging technology areas where they may not have the background knowledge or experience to confidently implement security programs and policies. In the case of the GCP Foundation Benchmarks, they can prove to be a vital asset for folks looking to get started in cloud security or who are taking on added responsibility for their organizations’ cloud environments.

Key highlights from CIS GCP Foundational Benchmarks 1.3.0

Relative to benchmarks created for more traditional security fields, such as endpoint operating systems and Linux, those developed for cloud service providers (CSPs) are relatively new. As a result, when updates are released, they tend to include a fairly substantial volume of new recommendations. Let’s dig a bit further into some of the key highlights from version 1.3.0 and why they’re important to consider for your own environment.

2.13 – Ensure Cloud Asset Inventory is enabled

Enabling Cloud Asset Inventory is critical to maintaining visibility into your entire environment, providing a real-time and retroactive (5 weeks of history retained) view of all assets across your cloud estate. This is critical because in order to effectively secure your cloud assets and data, you first need to gain insight into everything that’s running within your environment. Beyond providing an inventory snapshot, Cloud Asset Inventory also surfaces metadata related to those assets, providing added context when assessing the sensitivity and/or integrity of your cloud resources.
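For teams that want to consume that inventory programmatically, the Cloud Asset API has client libraries. A minimal sketch with the Python client follows; the project ID is a placeholder, and the Cloud Asset API must be enabled on the project.

```python
from google.cloud import asset_v1

# Placeholder scope -- replace with your own project, folder, or org.
SCOPE = "projects/my-project-id"

client = asset_v1.AssetServiceClient()

# Search all resources in scope, narrowed here to Compute Engine
# instances as an example; drop asset_types to see everything.
results = client.search_all_resources(
    request={
        "scope": SCOPE,
        "asset_types": ["compute.googleapis.com/Instance"],
    }
)

for resource in results:
    print(resource.display_name, resource.asset_type, resource.location)
```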

4.11 – Ensure that compute instances have Confidential Computing enabled

This is a really powerful new configuration that enables organizations to secure their mission-critical data throughout its lifecycle, including while actively in use. Typically, encryption is only available while data is either at rest or in transit. Built on AMD’s Secure Encrypted Virtualization (SEV) technology, Google’s Confidential Computing allows customers to encrypt their data while it is being indexed or queried.
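For teams provisioning infrastructure programmatically, enabling Confidential Computing amounts to a small addition to the instance definition. Below is a hedged sketch with the google-cloud-compute Python client: the names are placeholders, the fragment omits disks and networking for brevity, and Confidential VMs require a supported AMD-based machine type such as N2D.

```python
from google.cloud import compute_v1

# Fragment of an instance definition; disks and network interfaces are
# omitted for brevity, and all names here are placeholders.
instance = compute_v1.Instance()
instance.name = "confidential-demo"
instance.machine_type = "zones/us-central1-a/machineTypes/n2d-standard-2"

# The key setting: request SEV-backed memory encryption for this VM.
instance.confidential_instance_config = compute_v1.ConfidentialInstanceConfig(
    enable_confidential_compute=True
)

# Confidential VMs cannot live-migrate, so host maintenance must terminate.
instance.scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")
```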

A dozen new recommendations for securing GCP databases

The new benchmarks added 12 new recommendations targeted at securing GCP databases, each of which is geared toward addressing issues related to data loss or leakage. This aligns with Verizon’s most recent Data Breach Investigations Report, which found that data stores remained the most commonly exploited cloud service, with more than 40% of breaches resulting from misconfiguration of cloud data stores such as AWS S3 buckets, Azure Blob Storage, and Google Cloud Storage buckets.

How InsightCloudSec can help your team align to new CIS Benchmarks

In response to the recent update, Rapid7 has released a new compliance pack – GCP 1.3.0 – for InsightCloudSec to ensure that security teams can easily check their environment for adherence to the new benchmarks. The new pack contains 57 Insights to help organizations reconcile their existing GCP configurations against the new recommendations and best practices. Should your team need to make any adjustments based on the benchmarks, InsightCloudSec users can leverage bots to notify the necessary team(s) or enact the changes automatically.

In subsequent releases, we will continue to update the pack as more filters and Insights become available. If you have specific questions on this capability or a supported GCP resource, reach out to us through the Customer Portal.


[Infographic] Cloud Misconfigurations: Don’t Become a Breach Statistic

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/05/09/infographic-cloud-misconfigurations-dont-become-a-breach-statistic/

[Infographic] Cloud Misconfigurations: Don't Become a Breach Statistic

No one wants their company to be named in the latest headline-grabbing data breach. Luckily, there are steps you can take to keep your organization from becoming another security incident statistic — chief among them, avoiding misconfigurations in the cloud.

Our 2022 Cloud Misconfigurations Report found some key commonalities across publicly reported data exposure incidents last year. Check out some of the highlights here, in our latest infographic.

[Infographic] Cloud Misconfigurations: Don't Become a Breach Statistic

Want to learn more about the cloud misconfigurations and breaches that happened last year? Check out the full 2022 Cloud Misconfigurations Report.


Is Your Kubernetes Cluster Ready for Version 1.24?

Post Syndicated from Alon Berger original https://blog.rapid7.com/2022/05/03/is-your-kubernetes-cluster-ready-for-version-1-24/

Is Your Kubernetes Cluster Ready for Version 1.24?

Kubernetes rolled out Version 1.24 on May 3, 2022, as its first release of 2022. This version is packed with some notable improvements, as well as new and deprecated features. In this post, we will cover some of the more significant items on the list.

The Dockershim removal

The new release has caught the attention of most users, mainly due to the official removal of Dockershim, the built-in Container Runtime Interface (CRI) shim for Docker in the kubelet codebase, which had been deprecated since v1.20.

Docker is essentially a user-friendly abstraction layer, created before Kubernetes was introduced. Docker isn’t CRI-compliant, which is why Dockershim was needed in the first place. However, upon discovering maintenance overhead and weak points involving Docker and containerd, the project decided to remove Dockershim completely, encouraging users to utilize other CRI-compliant runtimes.

Docker-produced images are still able to run with all other CRI-compliant runtimes, as long as worker nodes are configured to support those runtimes and any node customizations are properly updated based on the environment and runtime requirements. The release team also published an FAQ article dedicated entirely to the Dockershim removal.

Better security with short-lived tokens

A major update in this release is the move away from secret-based service account tokens. This is a big step toward improving the overall security of service account tokens, which until now remained valid for as long as their respective service accounts existed.

Now, with a much shorter lifespan, these tokens are significantly less susceptible to security risks, making it harder for attackers to gain access to the cluster and to leverage attack vectors such as privilege escalation and lateral movement.

Network Policy status

Network Policy resources are implemented differently by different Container Network Interface (CNI) providers and often apply certain features in a different order.

This can lead to a Network Policy not being honored by the current CNI provider — worst of all, without notifying the user about the situation.

In this version, a new status subresource is added that allows users to receive feedback on whether a NetworkPolicy and its features have been properly parsed, helping them understand why a particular feature is not working.

This is another great example of how development and operations teams can benefit from features like this one, alleviating the pain so often involved in troubleshooting a Kubernetes networking issue.

CSI volume health monitoring

Container Storage Interface (CSI) drivers can now load an external controller as a sidecar that will check for volume health, and they can also provide extra information in the NodeGetVolumeStats function that Kubelet already uses to gather information on the volumes.

In this version, the volume health information is exposed as kubelet VolumeStats metrics: the kubelet_volume_stats_health_status_abnormal metric carries a persistentvolumeclaim label and reports a value of 1 if the volume is unhealthy, or 0 otherwise.

Additional noteworthy changes in Kubernetes Version 1.24

A few other welcome changes center on the kubelet, Kubernetes’ primary agent that runs on each node. Dockershim-related CLI flags were removed in line with the deprecation. Furthermore, the Dynamic Kubelet Configuration feature, which allowed kubelet configurations to be changed dynamically, has been officially removed in this version after being deprecated in earlier releases; this removal simplifies the code and improves its reliability.

Furthermore, the newly added kubectl create token command allows easier creation and retrieval of short-lived service account tokens for Kubernetes API access, an improvement driven by the API access and control management group, SIG Auth.

This new command significantly improves automation throughout CI/CD pipelines, accelerates role-based access control (RBAC) policy changes, and hardens validation at the TokenRequest endpoint.
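The same TokenRequest flow is available programmatically, which is what makes it useful in CI/CD. Here is a minimal sketch with the official Kubernetes Python client; the service account name and namespace are hypothetical, and the request is roughly equivalent to `kubectl create token build-bot -n ci --duration=10m`.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config also works).
config.load_kube_config()
v1 = client.CoreV1Api()

# Request a short-lived, audience-bound token for a service account.
token_request = client.AuthenticationV1TokenRequest(
    spec=client.V1TokenRequestSpec(
        audiences=["https://kubernetes.default.svc"],
        expiration_seconds=600,  # ten minutes
    )
)
resp = v1.create_namespaced_service_account_token(
    name="build-bot", namespace="ci", body=token_request
)
print(resp.status.token)
```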

Lastly, a useful addition for cluster operators is the ability to authoritatively identify Windows pods at API admission level. This can be crucial for managing Windows containers by applying better security policies and constraints based on the operating system.

This first release of 2022 mainly introduces improvements aimed at providing helpful feedback for users, reducing the attack surface, and improving security posture all around. The official removal of Dockershim support will push organizations and users to adapt and align with these infrastructure changes, moving forward with new technology developments in Kubernetes and the cloud in general.


Cloud-Native Application Protection (CNAPP): What’s Behind the Hype?

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/05/02/cloud-native-application-protection-cnapp-whats-behind-the-hype/

Cloud-Native Application Protection (CNAPP): What's Behind the Hype?

There’s no shortage of acronyms when it comes to security product categories. DAST, EDR, CWPP — it sometimes feels like we’re awash in a sea of letters, and that can be a little dizzying. Every once in a while, though, a new term pops up that cuts through the noise, thanks to a combination of catchiness and excitement about that product category’s potential to solve the big problems security teams face. (Think of XDR, for a recent example.)

Cloud-native application protection platform, or CNAPP, is one of those standout terms that has the potential to solve significant problems in cloud security by consolidating a list of other “C” letter acronyms. Gartner introduced CNAPP as one of its cloud security categories in 2021, and the term quickly began to make headlines. But what’s the reality behind the hype? Is CNAPP an all-in-one answer to building secure apps in a cloud-first ecosystem, or is it part of a larger story? Let’s take a closer look.

New needs of cloud-native teams

CNAPP is a cloud security archetype that takes an integrated, lifecycle approach, protecting both hosts and workloads for truly cloud-native application development environments. These environments have their own unique demands and challenges, so it should come as little surprise that new product categories have arisen to address those concerns.

Cloud infrastructures are inherently complex — that makes it tougher to monitor these environments, potentially opening the door to security gaps. If you’re building applications within a cloud platform, the challenge multiplies: You need next-level visibility to ensure your environment and the applications you’re building in it are secure from the ground up.

A few trends have emerged within teams building cloud-native applications to address their unique needs.

DevSecOps: A natural extension of the DevOps model, DevSecOps brings security into the fold with development and operations as an integral part of the same shared lifecycle. It makes security everyone’s business, not just the siloed responsibility of a team of infosec specialists.

Shift left: Tied into the DevSecOps model is the imperative to shift security left — i.e. earlier in the development cycle — making it a fundamental aspect of building applications rather than an afterthought. The “bake it in, don’t bolt it on” adage has become almost cliché in security circles, but shifting left is in some ways a more mature — and arguably more radical — version of this concept. It changes security from something you do to an application to part of what the application is. Security becomes part of the fundamental conception and design of a web app.

All of that said, the real challenge here comes down to security teams trying to monitor and manage large-scale, complex cloud environments – not to mention trying to generate buy-in from other teams and get them to collaborate on security protocols that may occasionally slow them down.

How CNAPP hopes to help

To bring DevSecOps and shift-left practices to life, teams need tools that support the necessary levels of visibility and flexibility that underlie these goals. That brings us to where CNAPP fits into this picture.

“Optimal security of cloud-native applications requires an integrated approach that starts in development and extends to runtime protection,” Gartner writes in their report introducing CNAPP, according to Forbes. “The unique characteristics of cloud-native applications makes them impossible to secure without a complex set of overlapping tools spanning development and production.”

Forbes goes on to outline the 5 core components that Gartner uses in its definition of CNAPP:

Infrastructure as code (IaC) scanning: Because infrastructure is managed and provisioned as code in many cloud environments, this code must be continuously scanned for vulnerabilities.

Container scanning: The cloud has made containers an integral part of application development and deployment — these must also be scanned for security threats.

Cloud workload protection (CWPP): This type of security solution focuses on protecting workloads in cloud data center architectures.

Cloud infrastructure entitlement management (CIEM): This cloud security category streamlines identity and access management (IAM) by providing least-privileged access and governance controls for distributed cloud environments.

Cloud security posture management (CSPM): CSPM capabilities continuously manage cloud security risk, with automated detection, logging, and reporting to aid governance and compliance.

A holistic approach to cloud-native security

You might have noticed some of the components of CNAPP are themselves cloud security categories as defined by Gartner. How are they different from CNAPP? Do you need all of them individually, or are they available in a single package? What gives?

While CNAPP is meant to be a product category, right now the broad set of capabilities in Gartner’s definition describes an ideal future state that remains rare in the industry as a single solution. The fact remains there aren’t many vendors out there that have all these components, even across multiple product sets – let alone the ability to fit them into a single solution.

That said, vendors and practitioners can start working together now to bring that vision to life. While there are and will continue to be products that label or identify themselves as a CNAPP, what’s really needed is a comprehensive approach to cloud security – both from the technology provided by vendors and the strategy executed by practitioners – that simplifies the process of monitoring and remediating risks from end to end within vast, complex cloud environments.

The cloud is now dominant, and infrastructure is increasingly becoming code — that means vulnerability scanning for infrastructure and for applications has begun to look more alike than ever. Just like DevSecOps brings development, security, and operations together into (ideally) a harmonious unit, application security testing and cloud security monitoring are coequal, integral parts of a truly cloud-native security platform.

The real excitement around CNAPP is that by bringing once-disparate cloud security concepts together, it shines a light on what today’s organizations really need: a full-access path to a secure cloud ecosystem, with all the necessary speed of innovation and deployment and as little risk as possible.


Canadian Centre for Cyber Security Assessment Summary report now available in AWS Artifact

Post Syndicated from Rob Samuel original https://aws.amazon.com/blogs/security/canadian-centre-for-cyber-security-assessment-summary-report-now-available-in-aws-artifact/


At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of AWS services. We are pleased to announce the availability of the Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS, which you can view and download on demand through AWS Artifact.

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decision to use CSP services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines whether the Government of Canada (GC) ITS requirements for the CCCS Medium Cloud Security Profile (previously referred to as GC’s PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT Security Risk Management: A Lifecycle Approach, Annex 3 – Security Control Catalogue). As of September 2021, 120 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the Medium Cloud Security Profile. Meeting the Medium Cloud Security Profile is required to host workloads that are classified up to and including the medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and re-assesses previously assessed AWS services to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada and on customer demand. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope by Compliance Program page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. If you have questions about this blog post, please start a new thread on the AWS Artifact forum or contact AWS Support.


Rob Samuel

Rob Samuel is a Principal technical leader for AWS Security Assurance. He partners with teams across AWS to translate data protection principles into technical requirements, aligns technical direction and priorities, orchestrates new technical solutions, helps integrate security and privacy solutions into AWS services and features, and addresses cross-cutting security and privacy requirements and expectations. Rob has more than 20 years of experience in the technology industry, and has previously held leadership roles, including Head of Security Assurance for AWS Canada, Chief Information Security Officer (CISO) for the Province of Nova Scotia, various security leadership roles as a public servant, and served as a Communications and Electronics Engineering Officer in the Canadian Armed Forces.

Naranjan Goklani

Naranjan Goklani is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 12 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the retail, ecommerce, and utilities industries.

Brian Mycroft

Brian Mycroft is a Chief Technologist at AWS, based in Ottawa (Canada), specializing in national security, intelligence, and the Canadian federal government. Brian is the lead architect of the AWS Secure Environment Accelerator (ASEA) and focuses on removing public sector barriers to cloud adoption.


2022 Cloud Misconfigurations Report: A Quick Look at the Latest Cloud Security Breaches and Attack Trends

Post Syndicated from Jacob Roundy original https://blog.rapid7.com/2022/04/20/2022-cloud-misconfigurations-report-a-quick-look-at-the-latest-cloud-security-breaches-and-attack-trends/

2022 Cloud Misconfigurations Report: A Quick Look at the Latest Cloud Security Breaches and Attack Trends

Every year, Rapid7’s team of cloud security experts and researchers put together a report to review data from publicly disclosed breaches that occurred over the prior year. The goal of this report is to unearth patterns and trends in cloud-related breaches and persistent exposures, so organizations around the world can better protect against threats and address cloud misconfigurations in their own environments.

In the 2022 Cloud Misconfigurations Report, we reviewed 68 accounts of breaches from 2021. Let’s take a brief look at some of the findings from this report, including what industries are being targeted, what the bad guys are looking to gain, and what you can do to shore up your cloud security.

For more information, read Rapid7’s full 2022 Cloud Misconfigurations Report.

What industries are being targeted?

In the subset of breaches we studied, there was a broad distribution of affected industries. Our sample had the following industries represented:

  • Information
  • Healthcare
  • Public administration
  • Retail
  • Professional services
  • Arts and entertainment
  • Manufacturing
  • Finance
  • Educational services
  • Transportation
  • Real estate
  • Accommodation and food services
  • Utilities

This is a notable swath of industries, especially considering the sample size. Among the organizations affected by breaches, some were prominent brands and even staples of the Fortune 500, not just startups operating on shoestring budgets. These organizations have the resources and expertise to establish the gold standard of cloud security best practices, so it just goes to show that anyone is susceptible to breaches due to cloud misconfigurations.

While we found that breaches can hit any organization, no matter its size and prestige, organizations in high-risk industries — like information, healthcare, and public administration — should be especially cautious. The information industry, in particular, sat at the top of our list with a considerable lead, reporting nearly double the number of breaches of the healthcare industry (the second-most affected).

What are the bad guys looking for?

So we know that a variety of industries are being targeted, with a particular focus on organizations that store highly sensitive information. Next, let’s take a look at what exactly bad actors are trying to gain by exploiting cloud misconfigurations.

For starters, we found that details on physical location (such as addresses or latitude/longitude), names, and email addresses were the most commonly lost resources. Other highly sought-after data included:

  • Identifier information
  • Passwords
  • Health details
  • Social data
  • Financial information
  • Phone numbers

That’s not all: We also saw that personal, legal, and technical information was stolen, as well as authentication and even media data.

Depending on your industry, you may not store all these data types, but the overall set of details lost represents a gold mine for bad actors who want to carry out social engineering attacks. In the hands of a skilled social engineer, this data can be leveraged to craft incredibly convincing phishing attempts. Passwords, identifiers, and authentication data could also be used by a bad actor to infiltrate a network and extract even more valuable information.

All in all, the data compromised isn’t always the expected high-value nuggets, like credit card information or Social Security numbers. Simple data on names, locations, and email addresses can be powerful weapons, so it’s critical to keep these seemingly less important tidbits of information safe.

What can you do to stay secure?

Better cloud security doesn’t have to be hard. Many of the breaches we reviewed tended to be caused by avoidable circumstances, such as using unsecured resources or users relaxing security permissions. As a result, you can take a few easy steps to better defend your environment and even discover misconfigurations faster.

Rapid7 maintains a globally distributed honeypot network called Project Heisenberg. These honeypot instances are set up on various cloud vendors, waiting for inbound connections that help identify misconfigurations or malicious activity. Bad actors often scan the internet looking for exposed resources to exploit, so this gives us a view into what they’re trying to take advantage of.

Thanks to this data, we know that far too many breaches happen as a result of users manually relaxing security settings on cloud resources or making simple mistakes, like typing in the wrong IP address when connecting to a network resource. As such, keeping cloud resources safe can sometimes be as easy as leaving the default security settings intact. (Also, seriously, stop deploying unencrypted instances on the cloud.)
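A couple of those safer defaults can even be enforced with a few API calls. As a hedged sketch using boto3 (the bucket name is a placeholder), the following turns on EBS encryption by default for a region and blocks public access on a bucket:

```python
import boto3

# Turn on EBS encryption by default so newly created volumes in this
# region are encrypted even if a template forgets to ask for it.
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.enable_ebs_encryption_by_default()

# Block public access at the bucket level; the bucket name is a placeholder.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-data-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```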

Misconfigurations and lapses in security can also be addressed by:

  • Providing better user training
  • Implementing systems and controls to discourage the relaxing of security mechanisms
  • Conducting reviews of identified resources for appropriate configurations

Breaches are out there — and they’re pervasive — but that doesn’t mean you have to be a target. Keeping your organization safe may be simpler than you think, so long as you know how to keep an eye out for misconfigurations and follow industry-standard best practices for cloud security.

Curious to learn more about the cloud misconfigurations and breaches that happened last year? Check out the full 2022 Cloud Misconfigurations Report.


InsightCloudSec Supports the Recently Updated NSA/CISA Kubernetes Hardening Guide

Post Syndicated from Alon Berger original https://blog.rapid7.com/2022/04/14/insightcloudsec-supports-the-recently-updated-nsa-cisa-kubernetes-hardening-guide/

InsightCloudSec Supports the Recently Updated NSA/CISA Kubernetes Hardening Guide

The National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) recently updated their Kubernetes Hardening Guide, which was originally published in August 2021.

With the help and feedback of numerous partners in the cybersecurity community, this guide outlines concrete actions for minimizing the chances of potential threats and vulnerabilities within Kubernetes deployments, while adhering to strict compliance requirements and recommendations.

The purpose of the Kubernetes hardening guide

This newly updated guide comes to the aid of multiple teams — including security, DevOps, system administrators, and developers — by focusing on the security challenges associated with setting up, monitoring, and maintaining a Kubernetes cluster. It brings together strategies to help organizations avoid misconfigurations and implement recommended hardening measures by highlighting three main sources of compromise:

  • Supply chain risks: These often occur during the container build cycle or infrastructure acquisition and are more challenging to mitigate.
  • Malicious threat actors: Attackers can exploit vulnerabilities and misconfigurations in components of the Kubernetes architecture, such as the control plane, worker nodes, or containerized applications.
  • Insider threats: These can be administrators, users, or cloud service providers, any of whom may have special access to the organization’s Kubernetes infrastructure.

“This guide focuses on security challenges and suggests hardening strategies for administrators of National Security Systems and critical infrastructure. Although this guide is tailored to National Security Systems and critical infrastructure organizations, NSA and CISA also encourage administrators of federal and state, local, tribal, and territorial (SLTT) government networks to implement the recommendations in this guide,” the authors state.

CIS Benchmarks vs. the Kubernetes Hardening Guide

For many practitioners, the Center for Internet Security (CIS) is the gold standard for security benchmarks; however, their benchmarks are not the only guidance available.

While the CIS is compliance gold, the CIS Benchmarks are very prescriptive and usually offer minimal explanations. In creating their own Kubernetes hardening guidelines, it appears that the NSA and CISA felt there was a need for a higher-level security resource that explained more of the challenges and rationale behind Kubernetes security. In this respect, the two work as perfect complements — you get strategies and rationale with the Kubernetes Hardening Guide and the extremely detailed prescriptive checks and controls enumerated by CIS.

In other words, CIS Benchmarks offer the exact checks you should use, along with recommended settings. The NSA and CISA guide supplements these by explaining challenges and recommendations, why they matter, and detailing how potential attackers look at the attack. In version 1.1, the updates include the latest hardening recommendations necessary to protect and defend against today’s threat actors.

Breaking down the updated guidance

As mentioned, the guide breaks down the Kubernetes threat model into three main sources: supply chain, malicious threat actors, and insider threats. This model reviews threats within the Kubernetes cluster and beyond its boundaries by including underlying infrastructure and surrounding workloads that Kubernetes does not manage.

Via a new compliance pack, InsightCloudSec supports and covers the main sources of compromise for a Kubernetes cluster, as outlined in the guide. Below are the high-level points of concern, along with additional examples of checks and insights provided by the InsightCloudSec platform:

  • Supply chain: This is where attack vectors are more diverse and hard to tackle. An attacker might manipulate certain elements, services, and other product components. It is crucial to continuously monitor the entire container life cycle, from build to runtime. InsightCloudSec provides security checks to cover the supply chain level, including:

    • Checking that containers are retrieved from known and trusted registries/repositories
    • Checking for container runtime vulnerabilities
  • Kubernetes Pod security: Kubernetes Pods are often used as the attacker’s initial execution point, so it is essential to have a strict security policy in place in order to prevent or limit the impact of a successful compromise. Examples of relevant checks available in InsightCloudSec include the following (a simple scripted version of these checks appears after this list):

    • Non-root containers and “rootless” container engines
      • Reject containers that execute as the root user or allow elevation to root.
      • Check K8s container configuration to use SecurityContext: runAsUser specifying a non-zero user, or runAsNonRoot: true.
      • Deny container features frequently exploited to break out, such as hostPID, hostIPC, hostNetwork, allowedHostPath.
    • Immutable container file systems
      • Where possible, run containers with immutable file systems.
      • Kubernetes administrators can mount secondary read/write file systems for specific directories where applications require write access.
    • Pod security enforcement
      • Harden applications against exploitation using security services such as SELinux®, AppArmor®, and secure computing mode (seccomp).
    • Protecting Pod service account tokens
      • Disable the secret token from being mounted by using the automountServiceAccountToken: false directive in the Pod’s YAML specification.
  • Network separation and hardening: Monitoring the Kubernetes cluster’s networking is key. It holds the communication among containers, Pods, services, and other external components. These resources are not isolated by default and therefore could lead to lateral movement or privilege escalations if not separated and encrypted properly. InsightCloudSec provides checks to validate that the relevant security policies are in place:

    • Namespaces
      • Use namespaces to separate resources. Pods and services in different namespaces can still communicate with each other unless additional isolation, such as a network policy, is enforced.
    • Network policies
      • Set up network policies to isolate resources. Pods and services in different namespaces can still communicate with each other unless additional separation is enforced.
    • Resource policies
      • Use resource requirements and limits.
    • Control plane hardening
      • Set up TLS encryption.
      • Configure control plane components to use authenticated, encrypted communications using Transport Layer Security (TLS) certificates.
      • Encrypt etcd at rest, and use a separate TLS certificate for communication.
      • Secure the etcd datastore with authentication and role-based access control (RBAC) policies. Set up TLS certificates to enforce Hypertext Transfer Protocol Secure (HTTPS) communication between the etcd server and API servers. Using a separate certificate authority (CA) for etcd may also be beneficial, as it trusts all certificates issued by the root CA by default.
    • Kubernetes Secrets
      • Place all credentials and sensitive information encrypted in Kubernetes Secrets rather than in configuration files
  • Authentication and authorization: Authentication and authorization are probably the primary mechanisms to leverage in restricting access to cluster resources. Several supported configurations, such as RBAC controls, are not enabled by default. InsightCloudSec provides security checks that cover the activity of both users and service accounts, enabling faster detection of any unauthorized behavior:

    • Prohibit the automatic mounting of service account tokens by setting automountServiceAccountToken to false.
    • Anonymous requests should be disabled by passing the --anonymous-auth=false option to the API server.
    • Start the API server with the --authorization-mode=RBAC flag. Leaving authorization-mode flags such as AlwaysAllow in place allows all authorization requests, effectively disabling all authorization and limiting the ability to enforce least privilege for access.
  • Audit logging and threat detection: Kubernetes audit logs are a goldmine for security, capturing attributed activity in the cluster and making sure configurations are properly set. The security checks provided by InsightCloudSec ensure that security audit tools are enabled in order to keep track of any suspicious activity:

    • Check that the Kubernetes native audit logging configuration is enabled.
    • Check that seccomp: audit mode is enabled. The seccomp tool is disabled by default but can be used to limit a container’s system call abilities, thereby lowering the kernel’s attack surface. Seccomp can also log what calls are being made by using an audit profile.
  • Upgrading and application security practices: Security is an ongoing process, and it is vital to stay up to date with upgrades, updates, and patches not only in Kubernetes, but also in hypervisors, virtualization software, and other plugins. Furthermore, administrators need to make sure they uninstall old and unused components as well, in order to reduce the attack surface and risk of outdated tools. InsightCloudSec provides the checks required for such scenarios, including:

    • Promptly applying security patches and updates
    • Performing periodic vulnerability scans and penetration tests
    • Uninstalling and deleting unused components from the environment
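As referenced in the Pod security item above, many of these checks are straightforward to script yourself. Below is a minimal sketch with the official Kubernetes Python client; it flags a few of the misconfigurations the guide calls out, though it deliberately ignores pod-level securityContext defaults and is no substitute for a full policy engine.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config also works).
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    spec = pod.spec
    findings = []

    # Host namespace sharing is a common container-escape vector.
    if spec.host_pid or spec.host_ipc or spec.host_network:
        findings.append("shares a host namespace (hostPID/hostIPC/hostNetwork)")

    # Service account tokens should not be mounted unless needed.
    if spec.automount_service_account_token is not False:
        findings.append("may auto-mount its service account token")

    for c in spec.containers:
        sc = c.security_context
        # Simplified checks: pod-level securityContext is not consulted.
        if sc is None or not sc.run_as_non_root:
            findings.append(f"container {c.name} not forced to run as non-root")
        if sc is None or not sc.read_only_root_filesystem:
            findings.append(f"container {c.name} has a writable root filesystem")

    for finding in findings:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {finding}")
```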

Stay up to date with InsightCloudSec

Announcements like this catch the attention of the cybersecurity community, who want to take advantage of new functionality and requirements to make sure their business is moving forward safely. However, this can often come with a hint of hesitation, as organizations need to ensure their services and settings are configured properly and don’t introduce unintended consequences to their environment.

To help our customers continuously stay aligned with the new guidelines, InsightCloudSec already ships with a new compliance pack that provides additional coverage and support, based on insights introduced in the Kubernetes Hardening Guide.

Want to see InsightCloudSec in action? Check it out today.


Cloud Pentesting, Pt. 3: The Impact of Ecosystem Maturity

Post Syndicated from Eric Mortaro original https://blog.rapid7.com/2022/04/04/cloud-pentesting-pt-3-the-impact-of-ecosystem-maturity/

Cloud Pentesting, Pt. 3: The Impact of Ecosystem Maturity

Now that we’ve covered the basics of cloud pentesting and the style in which a cloud environment could be attacked, let’s turn our attention to the entirety of this ecosystem. This environment isn’t too different from the on-premise ecosystem that traditional penetration testing is performed on. Who doesn’t want to gain internal access to the client’s environment from an external perspective? Recently, one consultant obtained firewall access because default credentials had never been changed and the management interface was publicly exposed. Or how about gaining a shell on a web server because of misconfigurations?

Typically, a client who has a bit more maturity beyond just a vulnerability management program will shift gears to doing multiple pentests against their environments, which are external, internal, web app, mobile app, wireless, and potentially more. By doing these types of pentests, clients can better understand which aspects of their ecosystem are failing and which are doing fine. This is no different than when their infrastructure is deployed in the cloud.

Cloud implementation maturity

There’s an old saying that one must learn how to crawl before walking and how to walk before running. That same adage runs true for pentesting. Pentesting a network before ever having some sort of vulnerability management program can certainly show the weaknesses within the network, but it may not show the true depth of the issue.

The same holds true with Red Teams. You wouldn’t want to immediately jump on the Red Team pentesting bandwagon without having gone through multiple iterations of pentesting to true up gaps within the environment. Cloud pentesting should be treated in the same manner.  

The maturity of a company’s cloud implementation will help determine the depth in which a cloud pentest can occur, if it can occur at all! To peel this orange back a bit, let’s say a company wants a cloud ecosystem pentest. Discovery calls to scope the project will certainly help uncover how a customer implements their cloud environment, but what if it’s a basic approach? As in, there is no use of cloud-native deployments, all user accounts have root access, tagging of assets within the environment is not implemented, and so on. Where does this leave a pentest?  

In this particular case, an ecosystem pentest is not feasible at this juncture. The more basic approaches, such as vulnerability management or scanning with the cloud vendor’s built-in checks, would most certainly be ideal. Again, crawl before you walk, and walk before you run. This would look more like a traditional pentest, where an external and an internal test are performed.

What if the client is very mature in their implementation of cloud? Now we’re talking! User accounts are not root, IAM roles are leveraged instead of users, departments have separate permission profiles, the environment utilizes cloud native deployments as much as possible, and there’s separation of department environments by means of accounts, access control lists (ACLs), or virtual private clouds (VPCs). This now becomes the cloud ecosystem pentest that will show gaps within the environment — with the understanding that the customer has implemented, to the best of their abilities, controls that are baked into the cloud platform.

Maturity example

I’ve had the absolute pleasure of chatting with a ton of potential customers who are interested in performing a cloud ecosystem pentest. This not only helps me understand how each customer needs their pentest to be structured, but it also helps me understand how we can improve our offering at Rapid7. One particular case stood out to me, because it helped me understand that some customers are simply not ready to move into a cloud-native deployment.

In discussing Rapid7’s approach to cloud ecosystem pentesting, we typically ask what types of services the customer uses with their respective cloud vendor. In the discussion with this particular customer, we discovered they were using Kubernetes (K8s) quite extensively. After a few more questions, it turned out that the customer wasn’t using K8s from a cloud-native perspective; rather, they had installed Kubernetes on their own virtual machines and were administering it from there. The reason was that they just weren’t ready yet to fully transition other parts of their infrastructure to a cloud vendor.

Now, this is a bit of a head-scratcher, because in this type of scenario, they’re taking on more of the support burden than is necessary. Who am I to argue, though? The customer and I had a very fruitful conversation, which actually led us both to a deeper understanding of not only their business approach but also their IT infrastructure strategy.

So, in this particular instance, if we were to pentest the K8s deployment this customer had installed on their own virtual machines, how far could we go? Well, since they own the entire stack, from the operating system to the software to the actual containers, we can go as far as the rules of engagement allow. If, however, this had been deployed from a cloud-native perspective, we would have restrictions due to the cloud vendor’s terms of service.

One major restriction is container escapes, which are out of scope. This goes back to the shared environment that has made the cloud so successful. If Rapid7 were capable of performing a container escape, not only would it be severely out of scope, but Rapid7 would most certainly report the exploit to the cloud vendor itself. That’s the dream of a white-hat hacker in a bug bounty program, where such a finding could potentially pay out tens of thousands of dollars!

But while that isn’t exactly how all cloud pentests turn out, they can still be done just as effectively as traditional on-premise pentests. It just requires a clear understanding of how the customer has deployed their cloud ecosystem, how mature their implementation is, and what is in and out of scope for a pentest based on those factors.


Cloud Pentesting, Pt. 2: Testing Across Different Deployments

Post Syndicated from Eric Mortaro original https://blog.rapid7.com/2022/03/29/cloud-pentesting-pt-2-testing-across-different-deployments/

Cloud Pentesting, Pt. 2: Testing Across Different Deployments

In part one of this series, we broke down the various types of cloud deployments. So, pentesting in the cloud is just like on-prem, right? (Who asks these loaded questions!?)  

The answer is yes and no. It depends on how a customer has set up their cloud deployment. Let’s cover a few basics first, because this will really clear things up.

Each cloud vendor has their own unique restrictions on what can and cannot be attacked, due to the nature of how the cloud is architected. Most have very similar restrictions — however, these rules must be followed, or one could quickly find themselves outside of scope. The next sections will outline which parts of the “as-a-service” components are out of scope for testing, and which are in scope.

Infrastructure as a service

This, in my experience, is how most clients have come to set up their cloud deployment. The as-a-service model may simply have been the quickest way to appease a C-level person who asked their Directors and Managers to go all-in on cloud. It’s the direct lift from on-premise to the cloud that we discussed in the last post.

When it comes to testing this type of deployment, the scope is the largest it could be, with very few exceptions to what is out of scope. Getting dropped directly into a virtual private cloud (VPC) is the likely scenario and works as an “assumed breach” approach. The client deploys a virtual machine that allows inbound access from the tester’s specific IP address, with authentication handled via an SSH keypair.
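
As a rough sketch of that setup (boto3 again; the security group ID, tester IP, and key material below are placeholders, not values from a real engagement):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow SSH to the test VM only from the tester's source IP.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # hypothetical security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "pentester"}],
        }],
    )

    # Register the tester's SSH public key for use with the instance.
    ec2.import_key_pair(
        KeyName="pentest-key",
        PublicKeyMaterial=b"ssh-ed25519 AAAA... pentester@example",
    )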

Some exceptions to this testing that are OUT of scope include:

  • Attacking auto-scaling functions and network address translation (NAT) gateways
  • Exploiting the software used to deploy compute instances, or changing NAT configurations

Some items that are IN scope for this deployment model include:

  • Attacking the operating system and attempting exploitation against outdated versions
  • Exploiting the software or applications installed on the operating system

Platform as a service

You’ve heard of bring your own device (BYOD) — think of this as BYOS, or bring your own software. Platform as a service (PaaS) brings us up a level in terms of support requirements. With this approach, clients can utilize a cloud provider’s products that allow them to bring their own code for things like web applications. The client no longer has to work on keeping an operating system up to date. The code is typically deployed on something like a container, which could cost the client far less than deploying a virtual machine, licensing an operating system, managing that operating system’s vulnerabilities, and staffing accordingly. There are again exceptions, however, to what can and can’t be tested.

In this example, the following would be considered OUT of scope:

  • The host itself and/or containers hosting the application
  • Attempting to escape containers where the application is hosted

The items which are IN scope for this deployment model include:

  • Attempting to exploit the application that is installed on the operating system itself

Software as a service

At last, the greatest reduction in liability: software as a service (SaaS). Microsoft’s Office 365 is perhaps the most common example of a very widely used SaaS deployment. Click a few buttons in a cloud provider’s dashboard, input some user credentials, upload some data, and you’re done! Easy like Sunday morning!  

Now, the only things to worry about are the data within the application and the users themselves. Everything else — including virtual machine deployment, operating system installation and upkeep, patch management of the operating system and the software installed on it, and the code base, to name a few — is completely removed from worry. Imagine how much overhead you can now dedicate to other parts of the business. Windows admins, web app developers, infosec staff, and even IT staff now have less to worry about. However, if you’re looking to have a pentest in this kind of environment, just know that there is not a whole lot that can actually be done.

Application exploits, for example, are OUT of scope. The items that are IN scope for this deployment model are the following:

  • Leveraging privileges and attempting to acquire data
  • Adding user accounts or elevating privileges

That’s it! The only things that can be attacked are the users themselves, via password attacks, or the data held within the application — and that’s only if authentication is bypassed.

The examples above are not made up from Rapid7’s perspective, either. These are industry-wide standards that cloud providers have created. These types of deployments are specifically designed to reduce liability and to increase not only the capabilities of an organization but also its speed. This is what’s known as the “shared responsibility” model.

As-a-service example

Recently, we had a discussion with a client who needed a pentest performed on their web application. The web app was deployed through a third-party cloud provider, which used Google Cloud Platform on the back end. After a consultant discovered that this client had deployed their web application via the SaaS model, I explained that, due to the SaaS deployment, application exploitation was out of scope, and the only attempts that could be made would be password attacks and going after the data.

Now, obviously, education needs to happen all around, but again, the cloud isn’t new. After about an hour of discussing how their deployment looked, the client asked a very interesting question: “How can I get to the point where we make the application available to fully attempt exploitation against it?” I was befuddled, and quite simply, the answer was, “Why would you want to do that?” You see, by using SaaS, you remove the liability of worrying about this sort of issue, which the organization may not have the capacity or budget to address. SaaS is click-and-go, and the software provider takes on the responsibility of delivering a secure platform.

After I had explained this to the client, they quickly understood that SaaS was the way to go, and that transforming into a PaaS deployment model would actually have required them to hire additional headcount, including a web app developer.

It is this maturity that needs to happen throughout the industry to continue to maintain security within not just small companies, but large enterprises, too.

Digging deeper

Externally

There’ve been numerous breaches of customer data, and there’s typically a common culprit: a misconfigured S3 bucket, or discovered credentials to a cloud vendor’s platform dashboard. These all seem like very easy things to remedy, but performing an external pentest where the targets are the assets hosted by a cloud vendor will certainly show whether there are misconfigurations or access being accidentally exposed. This can be treated like any normal external pentest, but with the focus on knowing these assets live within a cloud environment.
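
A real external pentest goes far beyond this, but as a minimal sketch of the misconfigured-bucket check described above (boto3, run with credentials for the account under review), something like the following flags world-readable ACL grants:

    import boto3

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            uri = grant["Grantee"].get("URI", "")
            # A grant to the AllUsers group means anyone on the internet
            # holds this permission on the bucket.
            if uri.endswith("/groups/global/AllUsers"):
                print(f"{name}: public {grant['Permission']} grant")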

Internally

There are multiple considerations when discussing what is “internal” to a cloud environment. Here, we’ll dig into the differences between platform and infrastructure.

Platform vs. infrastructure

In order to move or create assets within a cloud environment, one must first set up an account with the cloud vendor of choice. A username and password are created, then a user logs into the web application dashboard of the cloud vendor, and finally, assets are created and deployed to provide the functionality that is needed. The platform that the user is logged into is one aspect of an “internal” pentest against a cloud environment.  

Platform pentest example

There I was, doing a thick client pentest against an executable. I installed the application on my Windows VM, started up a few more apps to hook into the running processes, and off to the races I went.

One of the more basic steps in the process is to check the installation files. Within the directory, I found a .INI file. I opened it with a text editor, and I was greeted with an Amazon AWS Access Key ID and SecretAccessKey! Wow, did I get lucky. I fired up the AWS CLI, punched in the Access Key ID and SecretAccessKey, along with the target IP address, and bam! I was in like Flynn.

Now, kudos go out to the client for not giving this user root access. However, I was still able to gain a ton of access and additional information. I stopped there, though, because this had quickly turned into a cloud-style pentest. I called the client right away and informed them of the finding, and they were happy (not happy) that it was discovered.
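
For the curious, here’s roughly what those first steps look like in boto3 rather than the AWS CLI. The key values below are AWS’s documented example credentials, and the username is hypothetical:

    import boto3

    session = boto3.Session(
        aws_access_key_id="AKIAIOSFODNN7EXAMPLE",  # recovered from the .INI file
        aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    )

    # First question on any platform-level test: who am I?
    print(session.client("sts").get_caller_identity()["Arn"])

    # Then: what is this principal allowed to do?
    iam = session.client("iam")
    policies = iam.list_attached_user_policies(UserName="svc-thickclient")
    for policy in policies["AttachedPolicies"]:
        print(policy["PolicyArn"])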

Internal platform pentest

A platform pentest is like being given a domain account in an assumed-breach scenario on an internal pentest. It’s a “hey, if you can’t get creds, here’s a low-priv account to see what you can do with it” approach.

On a cloud platform pentest, we’re being given this account to attempt additional attacks, such as privilege escalation, launching of additional virtual machines, or using a function to inject code into containers that auto-deploy and dial back to a listening server each time. A virtual machine, preferably Kali Linux, will need to be deployed within the VPC, so you can perform your internal pentest from it.

Internal infrastructure pentest

This pentest is much easier to construct. It looks very similar to an internal, on-premise pentest. The client sets up a virtual machine inside the VPC they want tested, the consultant creates a public/private SSH keypair and provides the public key to the client, the client allows specific source IPs to SSH into that VM, and the pentest begins.

In my experience, a lot of clients only have one VPC, which makes life a bit easier. However, as more and more people gain experience and knowledge in setting up their cloud environments, VPC separation is becoming more prevalent. As an example, perhaps a customer utilizes functions to auto-deploy a new “site” each time one of their customers signs on to use their services. This function automatically creates a brand-new VPC, with separation from other VPCs (their other clients), along with virtual machines, databases, network connectivity, access into the VPC for their clients’ administration, user accounts, authentication, SSO, and more. In this scenario, we’d ask the client to create multiple VPCs and drop us into at least one of them. This way, we can perform a tenant-to-tenant-style pentest, to see if it’s possible to break out of one segment and access another.
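
One starting point for that kind of tenant-separation testing is simply mapping which VPCs exist and whether any are peered. A sketch (boto3, with the region assumed):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vpcs = ec2.describe_vpcs()["Vpcs"]
    print(f"{len(vpcs)} VPCs in scope")

    # Any active peering connection between tenant VPCs is a path worth probing.
    for pcx in ec2.describe_vpc_peering_connections()["VpcPeeringConnections"]:
        print(
            pcx["VpcPeeringConnectionId"],
            pcx["RequesterVpcInfo"]["VpcId"],
            "->",
            pcx["AccepterVpcInfo"]["VpcId"],
            pcx["Status"]["Code"],
        )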

In part three, we’ll take a look at how the maturity of the client’s cloud implementation can impact the way pentests are carried out. Stay tuned!


Rapid7 Recognized as Top Ranked in Current Offering Category in Forrester Wave™ for Cloud Workload Security

Post Syndicated from Ben Austin original https://blog.rapid7.com/2022/03/23/rapid7-recognized-as-top-ranked-in-current-offering-category-in-forrester-wave-for-cloud-workload-security/

Rapid7 Recognized as Top Ranked in Current Offering Category in Forrester Wave™ for Cloud Workload Security

The widespread growth in cloud adoption in recent years has given businesses across all industries the ability to transform and scale in ways never before possible. But the speed of those changes, combined with the increased volume and complexity of resources in cloud environments, often forces organizations to choose between slowing the pace of innovation or taking on massive amounts of unmanaged risk.

Because of this, cloud misconfigurations have become a leading attack vector for malicious breaches. Organizations are now scrambling to evolve their cloud security programs to properly secure their most sensitive and valuable data — before it falls into the hands of an adversary.

This requires going beyond the siloed toolsets and manual efforts of increasingly hard-to-find cloud security professionals. Instead, businesses need to leverage comprehensive cloud workload security solutions that allow their teams to get a broad set of capabilities in a single platform in order to move more efficiently and effectively.

But in the still-emerging and rapidly evolving cloud security space, narrowing down a shortlist of vendors can be challenging and confusing.

To help buyers select the right vendor for their needs among the industry’s many players, Forrester Research evaluated the top 12 cloud security providers to equip evaluators with necessary context on each provider’s current offering and strategy. We’re excited to share that Rapid7 has been included among these top vendors and recognized as a Strong Performer in the Forrester Wave™: Cloud Workload Security, Q1 2022.

Most notably, Forrester’s report showed Rapid7 as the top-ranked solution in the Current Offering category.

This is the first time the Forrester Wave™ for Cloud Workload Security has been published since Rapid7’s acquisition of DivvyCloud and Alcide, which culminated in the launch of InsightCloudSec in July 2021. We believe the results of this Forrester Wave™ show that Rapid7’s strategy and execution following those acquisitions has already positioned us well against other vendors in the highly competitive cloud security market.


A closer look at the results

“In its current CWS offering, the vendor provides excellent setup, configuration, and data integration solution features; identity and access management and CSPM is strong. Runtime and container orchestration platform protection is also strong and easy to use.”

The Forrester report gave Rapid7 the highest possible scores in the criteria of cloud security posture management (CSPM), infrastructure as code (IaC), identity and access management (IAM), and container protection, which to us emphasizes the breadth and depth of our current solution in both posture management and workload protection.

Rapid7 also received the highest possible score in the Setup, Configuration, and Data Integration criterion. When combined with the highest possible score in the Integration category, we believe this leads to faster time to value and more efficient ongoing cloud security operations compared to other vendors.

Within the Strategy category, Forrester gave Rapid7 the highest possible scores in the Cloud Workload Protection (CWP) Plans and Container Protection Plans criteria. According to the report, “In its CWS strategy, [Rapid7] has differentiated CWP and container protection plans.”

“The vendor plans to implement runtime analysis using machine learning (ML) to establish a baseline profile of activity and identify anomalous behaviors and unknown threats, simplifying runtime rules and detections,“ the report explains.

From our standpoint, this combination of current market-leading CSPM and shift-left capabilities, alongside a highly differentiated CWP and Container Protection roadmap, is evidence of the significant value Rapid7 is bringing to our customers by enabling organizations to innovate and accelerate their business strategies with secure adoption of cloud technologies.

Learn more

You can download The Forrester Wave™: Cloud Workload Security, Q1 2022 from our website here.

Interested in hearing more about Rapid7’s cloud security solution, and how we’re providing value to our current customers? Join us on Tuesday, March 29 for our third-annual Cloud Security Summit to hear from our leadership, customers, and partners about how Rapid7 is driving cloud security forward.


Cloud Pentesting, Pt. 1: Breaking Down the Basics

Post Syndicated from Eric Mortaro original https://blog.rapid7.com/2022/03/21/cloud-pentesting-pt-1-breaking-down-the-basics/

Cloud Pentesting, Pt. 1: Breaking Down the Basics

The concept of cloud computing has been around for a while, but it seems like as of late — at least in the penetration testing field — more and more customers are looking to get a pentest done in their cloud deployment. What does that mean? How does that look? What can be tested, and what’s out of scope? Why would I want a pentest in the cloud? Let’s start with the basics here, to hopefully shed some light on what this is all about, and then we’ll get into the thick of it.

Cloud computing is the idea of using software and services that run on the internet as a way for an organization to deploy their once on-premise systems. This isn’t a new concept — in fact, the major vendors, such as Amazon’s AWS, Microsoft’s Azure, and Google’s Cloud Platform, have all been around for about 15 years. Still, the cloud sometimes gets talked about as if it were invented just yesterday, but we’ll get into that a bit more later.

So, cloud computing means using someone else’s computer, in a figurative or quite literal sense. Simple enough, right?  

Wrong! There are various ways that companies have started to utilize cloud providers, and these all impact how pentests are carried out in cloud environments. Let’s take a closer look at the three primary cloud configurations.

Traditional cloud usage

Some companies have simply lifted infrastructure and services straight from their own on-premise data centers and moved them into the cloud. This looks a whole lot like setting up one virtual private cloud (VPC), with numerous virtual machines, a flat network, and that’s it! While this might not seem like a company is using their cloud vendor to its fullest potential, they’re still reaping the benefits of never having to manage uptime of physical hardware, calling their ISP late at night because of an outage, or worrying about power outages or cooling.

But one inherent problem remains: The company still requires significant staff to maintain the virtual machines and perform operating system updates, software versioning, cipher suite usage, code base fixes, and more. This starts to look a lot like the typical vulnerability management (VM) program, where IT and security continue to own and maintain infrastructure. They work to patch and harden endpoints in the cloud and are still in line for changes to be committed to the cloud infrastructure.

Cloud-native usage

The other side of cloud adoption is a more mature approach, where a company has devoted time and effort toward transitioning their once on-premise infrastructure to a fully utilized cloud deployment. While this could very well include the use of the typical VPC, network stack, virtual machines, and more, the more mature organization will utilize cloud-native deployments. These could include storage services such as S3, function services, or even cloud-native Kubernetes.

Cloud-native users shift the priorities and responsibilities of IT and security teams so that they no longer act as gatekeepers to prevent the scaling up or out of infrastructure utilized by product teams. In most of these environments, the product teams own the ability to make commitments in the cloud without IT and security input. Meanwhile, IT and security focus on proper controls and configurations to prevent security incidents. Patching is exchanged for rebuilds, and network alerting and physical server isolation are handled through automated responses, such as an alert with AWS Config that automatically changes the security group for a resource in the cloud and isolates it for further investigation.
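
The isolation step of that automated response might look something like the sketch below: a Lambda handler that swaps a flagged instance into a quarantine security group. The event shape and group ID are assumptions, not the exact format AWS Config delivers:

    import boto3

    QUARANTINE_SG = "sg-0quarantineexample"  # hypothetical SG with no rules attached

    def handler(event, context):
        # Assumes the triggering rule has already extracted the instance ID.
        instance_id = event["instance_id"]
        ec2 = boto3.client("ec2")
        # Replacing all of the instance's security groups cuts it off from the
        # network so it can be investigated.
        ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
        return {"isolated": instance_id}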

These types of deployments start to more fully utilize the capabilities of the cloud, such as automated deployment through infrastructure-as-code solutions like AWS CloudFormation. Gone are the days when an organization would deploy Kubernetes on top of a virtual machine just to run containers. Now, cloud vendors provide this as a native service with AWS’s Elastic Kubernetes Service, Microsoft’s Azure Kubernetes Service, and for obvious reasons, Google’s Kubernetes Engine. These and other types of cloud-native deployments really help to ease the burden on the organization.
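
For a feel of what infrastructure as code looks like in practice, here’s a minimal, hypothetical CloudFormation deployment driven from Python; the stack defines nothing but a single S3 bucket:

    import json
    import boto3

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            # One S3 bucket; CloudFormation generates a unique bucket name.
            "ArtifactBucket": {"Type": "AWS::S3::Bucket"},
        },
    }

    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName="example-iac-stack", TemplateBody=json.dumps(template))

In a real environment, the template would live in version control and be deployed through a pipeline rather than an ad hoc script.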

Hybrid cloud

Then there’s hybrid cloud. This is where a customer sets up their on-premise environment to tie into their cloud environment, or vice versa. One common theme we see is with Microsoft Azure, where Azure AD Connect sync is used to synchronize on-premise Active Directory to Azure AD. This can be very beneficial when the company is using other software-as-a-service (SaaS) components, such as Microsoft Office 365.

There are various benefits to utilizing hybrid cloud deployments. Maybe there are specific components that a customer wants to keep in house and support on their own infrastructure. Or perhaps the customer doesn’t yet have experience with how to maintain Kubernetes but is utilizing Google Cloud Platform. The ability to deploy your own services is the key to flexibility, and the cloud helps provide that.

In part two, we’ll take a closer look at how these different cloud deployments impact pentesting in the cloud.


3 Reasons to Join Rapid7’s Cloud Security Summit

Post Syndicated from Ben Austin original https://blog.rapid7.com/2022/03/09/3-reasons-to-join-rapid7s-cloud-security-summit/

3 Reasons to Join Rapid7’s Cloud Security Summit

The world of the cloud never stops moving — so neither can cloud security. In the face of rapidly evolving technology and a constantly changing threat landscape, keeping up with all the latest developments, trends, and best practices in this emerging practice is more vital than ever.

Enter Rapid7’s third annual Cloud Security Summit, which we’ll be hosting this year on Tuesday, March 29. This one-day virtual event is dedicated to cloud security best practices and will feature industry experts from Rapid7, as well as Amazon Web Services (AWS), Snyk, and more.

While the event is fully virtual and free, we know that the time commitment can be the most challenging part of attending a multi-hour event during the workday. With that in mind, we’ve compiled a short list of the top reasons you’ll definitely want to register, clear your calendar, and attend this event.

Reason 1: Get a sneak peek at some original cloud security research

During the opening session of this year’s summit, two members of Rapid7’s award-winning security research team will be presenting some never-before-published research on the current state of cloud security operations, the most common misconfigurations in 2021, Log4j, and more.

Along with being genuinely interesting data, this research will also give you some insights and benchmarks that will help you evaluate your own cloud security program, and prioritize the most commonly exploited risks in your organization’s environment.

Reason 2: Learn from industry experts, and get CPE credits

Along with a handful of team members from Rapid7’s own cloud security practice, this year’s summit includes a host of subject matter experts from across the industry. You can look forward to hearing from Merritt Baer, Principal in the Office of the CISO at Amazon Web Services; Anthony Seto, Field Director for Cloud Native Application Security at Snyk; Keith Hoodlet, Code Security Architect at GitHub; and more. And that doesn’t even include the InsightCloudSec customers who will be joining to share their expert perspectives as well.

While learning and knowledge gain are clearly the most important aspects here, it’s always great to have something extra to show for the time you devoted to an event like this. To help make the case to your management that this event is more than worth the time you’ll put in, we’ve arranged for all attendees to earn 3.5 continuing professional education (CPE) credits to go toward maintaining or upgrading security certifications, such as CISSP, CISM, and more.

Reason 3: Be the first to hear exciting Rapid7 announcements

Last but not least, while the event is primarily focused on cloud security research, strategies, and thought leadership, we are also planning to pepper in some exciting news related to InsightCloudSec, Rapid7’s cloud-native security platform.

We’ll end the day with a demonstration of the product, so you can see some of our newest capabilities in action. Whether you’re already an InsightCloudSec customer, or considering a new solution for uncovering misconfigurations, automating cloud security workflows, shifting left, and more, this is the best way to get a live look at one of the top solutions available in the market today.  

So what are you waiting for? Come join us, and let’s dive into the latest and greatest in cloud security together.


Let’s Architect! Architecting for Security

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-security/

At AWS, security is “job zero” for every employee—it’s even more important than any number one priority. In this Let’s Architect! post, we’ve collected security content to help you protect data, manage access, protect networks and applications, detect and monitor threats, and ensure privacy and compliance.

Managing temporary elevated access to your AWS environment

One challenge many organizations face is maintaining solid security governance across AWS accounts.

This Security Blog post provides a practical approach to temporarily elevating access for specific users. For example, imagine a developer wants to access a resource in the production environment. With temporary elevated access, you won’t have to provide them with a standing account that has access to production; you would just elevate their access for a short period of time. The following diagram shows the few steps needed to temporarily elevate access for a user.

Diagram: the few steps needed to temporarily elevate access for a user
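
The post describes a full request-and-approve workflow; the underlying AWS primitive is a short-lived STS role assumption. A minimal sketch, with a placeholder role ARN:

    import boto3

    sts = boto3.client("sts")

    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ProdBreakGlass",  # hypothetical
        RoleSessionName="temporary-elevated-access",
        DurationSeconds=900,  # 15 minutes; the elevated access expires on its own
    )["Credentials"]

    # The short-lived keys are used in place of a standing production account.
    elevated = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )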

Security should start left: The problem with shift left

You already know security is job zero at AWS. But it’s not just a technology challenge. The gaps between security, operations, and development cycles are widening. To close these gaps, teams must have real-time visibility and control over their tools, processes, and practices to prevent security breaches.

This re:Invent session shows how establishing relationships, empathy, and understanding between development and operations teams early in the development process helps you maintain the visibility and control you need to keep your applications secure.

Screenshot from the re:Invent session: empowering developers means shifting security left and presenting security issues as early as possible in your process

AWS Security Reference Architecture: Visualize your security

Securing a workload in the cloud can be tough; almost every workload is unique and has different requirements. This re:Invent video shows you how AWS can simplify the security of your workloads, no matter their complexity.

You’ll learn how various services work together and how you can deploy them to meet your security needs. You’ll also see how the AWS Security Reference Architecture can automate common security tasks and expand your security practices for the future. The following diagram shows how AWS Security Reference Architecture provides guidelines for securing your workloads in multiple AWS Regions and accounts.

Diagram: the AWS Security Reference Architecture provides guidelines for securing workloads across multiple AWS Regions and accounts

Network security for serverless workloads

Serverless technologies can improve your security posture. You can build layers of control and security with AWS managed and abstracted services, meaning that you don’t have to do as much security work and can focus on building your system.

This video from re:Invent provides serverless strategies to consider for gaining greater control of networking security. You will learn patterns to implement security at the edge, as well as options for controlling an AWS Lambda function’s network traffic. These strategies are designed to securely access resources (for example, databases) placed in a virtual private cloud (VPC), as well as resources outside of a VPC. The following diagram shows how Lambda functions can run in a VPC and connect to services like Amazon DynamoDB using VPC gateway endpoints.

Diagram: Lambda functions running in a VPC and connecting to Amazon DynamoDB through a VPC gateway endpoint
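
Creating that gateway endpoint is a one-call operation; a sketch with placeholder VPC and route table IDs (region assumed):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Route DynamoDB traffic from the VPC through a gateway endpoint instead of
    # the public internet.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )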

See you next time!

Thanks for reading! If you’re looking for more ways to architect your workload for security, check out Best Practices for Security, Identity, & Compliance in the AWS Architecture Center.

See you in a couple of weeks when we discuss the best tools offered by AWS for software architects!

Other posts in this series