Tag Archives: Security, Identity & Compliance

A sneak peek at the data protection sessions for re:Inforce 2024

Post Syndicated from Katie Collins original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-data-protection-sessions-for-reinforce-2024/

Join us in Philadelphia, Pennsylvania, on June 10–12, 2024, for AWS re:Inforce, a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As an attendee, you have access to hundreds of technical and non-technical sessions, an Expo featuring Amazon Web Services (AWS) experts and AWS Security Competency Partners, and keynote and leadership sessions featuring AWS Security leaders.

AWS re:Inforce features content in the following six areas:

  • Data Protection
  • Governance, Risk, and Compliance
  • Identity and Access Management
  • Network and Infrastructure Security
  • Threat Detection and Incident Response
  • Application Security

This post will highlight some of the Data Protection sessions that you can add to your agenda. The data protection content showcases best practices for data in transit, at rest, and in use. Learn how AWS, customers, and AWS Partners work together to protect data across industries like financial services, healthcare, and the public sector. You will learn from AWS leaders about how customers innovate in the cloud, use the latest generative AI tools, and raise the bar on data security, resilience, and privacy.

Breakout sessions, chalk talks, and lightning talks

DAP221: Secure your healthcare generative AI workloads on Amazon EKS
Many healthcare organizations have been modernizing their applications using containers on Amazon EKS. Today, they are increasingly adopting generative AI models to innovate in areas like patient care, drug discovery, and medical imaging analysis. In addition, these organizations must comply with healthcare security and privacy regulations. In this lightning talk, learn how you can work backwards from expected healthcare data protection outcomes. This talk offers guidance on extending healthcare organizations’ standardization of containerized applications on Amazon EKS to build more secure and resilient generative AI workloads.

DAP232: Innovate responsibly: Deep dive into data protection for generative AI
AWS solutions such as Amazon Bedrock and Amazon Q are helping organizations across industries boost productivity and create new ways of operating. Despite all of the excitement, organizations often pause to ask, “How do these new services handle and manage our data?” AWS has designed these services with data privacy in mind and many security controls enabled by default, such as encryption of data at rest and in transit. In this chalk talk, dive into the data flows of these new generative AI services to learn how AWS prioritizes security and privacy for your sensitive data requirements.

DAP301: Building resilient event-driven architectures, feat. United Airlines
United Airlines plans to accept a delivery of 700 new planes by 2032. With this growing fleet comes more destinations, passengers, employees, and baggage—and a big increase in data, the lifeblood of airline operations. United Airlines is using event-driven architecture (EDA) to build a system that scales with their operations and evolves with their hybrid cloud throughout this journey. In this session, learn how United Airlines built a hybrid operations management system by modernizing from mainframes to AWS. Using Amazon MSK, Amazon DynamoDB, AWS KMS, and event mesh AWS ISV Partner Solace, they were able to design a well-crafted EDA to address their needs.

DAP302: Capital One’s approach for secure and resilient applications
Join this session to learn about Capital One’s strategic AWS Secrets Manager implementation that has helped ensure unified security across environments. Discover the key principles that can guide consistent use, with real-world examples to showcase the benefits and challenges faced. Gain insights into achieving reliability and resilience in financial services applications on AWS, including methods for maintaining system functionality amidst failures and scaling operations safely. Find out how you can implement chaos engineering and site reliability engineering using multi-Region services such as Amazon Route 53, AWS Auto Scaling, and Amazon DynamoDB.

DAP321: Securing workloads using data protection services, feat. Fannie Mae
Join this lightning talk to discover how Fannie Mae employs a comprehensive suite of AWS data protection services to securely manage their own keys, certificates, and application secrets. Fannie Mae demonstrates how they utilized services such as AWS Secrets Manager, AWS KMS, and AWS Private Certificate Authority to empower application teams to build securely and align with their organizational and compliance expectations.

DAP331: Encrypt everything: How different AWS services help you protect data
Encryption is supported by every AWS service that stores data. However, not every service implements encryption and key management identically. In this chalk talk, learn in detail how different AWS services such as Amazon S3 or Amazon Bedrock use encryption and manage keys. These insights can help you model threats to your applications and be better prepared to respond to questions about adherence to security standards and compliance requirements. Also, find out about some of the methodologies AWS uses when designing for encryption and key management at scale in a diverse set of services.

Hands-on sessions (builders’ sessions, code talks, and workshops)

DAP251: Build a privacy-enhancing healthcare data collaboration solution
In this builders’ session, learn how to build a privacy-enhanced environment to analyze datasets from multiple sources using AWS Clean Rooms. Build a solution for a fictional life sciences company that is researching a new drug and needs to perform analyses with a hospital system. Find out how you can help protect sensitive data using SQL query controls to limit how the data can be queried, Cryptographic Computing for Clean Rooms (C3R) to keep the data encrypted at all times, and differential privacy to quantifiably safeguard patients’ personal information in the datasets. You must bring your laptop to participate.

DAP341: Data protection controls for your generative AI applications on AWS
Generative AI is one of the most disruptive technologies of our generation and has the potential to revolutionize all industries. Cloud security data protection strategies need to evolve to meet the changing needs of businesses as they adopt generative AI. In this code talk, learn how you can implement various data protection security controls for your generative AI applications using Amazon Bedrock and AWS data protection services. Discover best practices and reference architectures that can help you enforce fine-grained data protection controls to scale your generative AI applications on AWS.

DAP342: Leveraging developer platforms to improve secrets management at scale
In this code talk, learn how you can leverage AWS Secrets Manager and Backstage.io to give developers the freedom to deploy secrets close to their applications while maintaining organizational standards. Explore how using a developer portal can remove the undifferentiated heavy lifting of creating secrets that have consistent naming, tagging, access controls, and encryption. This talk touches on cross-Region replication, cross-account IAM permissions and policies, and access controls and integration with AWS KMS. Also find out about secrets rotation as well as new AWS Secrets Manager features such as BatchGetSecretValue and managed rotation.

DAP371: Encryption in transit
Encryption in transit is a fundamental aspect of data protection. In this workshop, walk through multiple ways to accomplish encryption in transit on AWS. Find out how to enable HTTPS connections between microservices on Amazon ECS and AWS Lambda via Amazon VPC Lattice, enforce end-to-end encryption in Amazon EKS, and use AWS Private Certificate Authority to issue TLS certificates for private applications. You must bring your laptop to participate.

If these sessions look interesting to you, join us in Philadelphia by registering for re:Inforce 2024. We look forward to seeing you there!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.

Katie Collins

Katie is a Senior Product Marketing Manager in AWS Security, where she brings her enthusiastic curiosity to deliver products that drive value for customers. Her experience also includes product management at both startups and large companies. With a love for travel, Katie is always eager to visit new places while enjoying a great cup of coffee.

How to set up SAML federation in Amazon Cognito using IdP-initiated single sign-on, request signing, and encrypted assertions

Post Syndicated from Vishal Jakharia original https://aws.amazon.com/blogs/security/how-to-set-up-saml-federation-in-amazon-cognito-using-idp-initiated-single-sign-on-request-signing-and-encrypted-assertions/

When an identity provider (IdP) serves multiple service providers (SPs), IdP-initiated single sign-on provides a consistent sign-in experience that allows users to start the authentication process from one centralized portal or dashboard. It gives administrators more control over the authentication process and simplifies management.

However, when you support IdP-initiated authentication, the SP (Amazon Cognito in this case) can't verify that it solicited the SAML response that it receives from the IdP, because no SAML request is initiated from the SP. Before accepting unsolicited SAML assertions in your user pool, you must consider the effect on your app's security. Although your user pool can't verify an IdP-initiated sign-in session, Amazon Cognito validates your request parameters and SAML assertions.

Amazon Cognito has recently enhanced its support for the SAML 2.0 protocol by adding support for IdP-initiated single sign-on (SSO), SAML request signing, and encrypted SAML responses.

Amazon Cognito acts as the SP representing your application and generates a token after federation that can be used by the application to access protected backends. The SAML provider acts as an IdP, where the user identities and credentials are stored, and is responsible for authenticating the user.

This post describes the steps to integrate a SAML IdP, Microsoft Entra ID, with an Amazon Cognito user pool and use SAML IdP-initiated SSO flow. It also describes steps to enable signing authentication requests and accepting encrypted SAML responses.

IdP-initiated authentication flow using SAML federation

Figure 1: High-level diagram for SAML IdP-initiated authentication flow in a web or mobile app

As shown in Figure 1, the high-level flow of an application with federated authentication typically involves the following steps:

  1. An enterprise user opens their SSO portal and signs in. This usually opens a portal with several applications that the user has access to. When the user selects an Amazon Cognito protected application from their SSO portal, an IdP-initiated SSO flow is initiated.
  2. When the user launches an application from the SSO portal, Entra ID sends a SAML assertion to the Cognito endpoint to federate the user.
  3. Amazon Cognito validates the SAML assertion and creates the user in Cognito if this is the user's first federation, or updates the user's record if the user has signed in from this IdP before. Cognito then generates an authorization code and redirects the user to the application URL with this authorization code. The application exchanges the authorization code for tokens from the Cognito token endpoint (a sketch of this exchange follows the list).
  4. After the application has tokens, it uses them to authorize access within the application stack as needed.
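
To make step 3 concrete, the following is a minimal sketch of the authorization-code-for-tokens exchange against the Cognito token endpoint, using only the Python standard library. The domain, client ID, authorization code, and redirect URI are placeholder values for illustration, not values from a real environment.

    import json
    import urllib.parse
    import urllib.request

    # Placeholder values: your user pool domain, app client ID, the authorization
    # code delivered to your callback URL, and that same callback URL.
    TOKEN_ENDPOINT = "https://example-corp-prd.auth.us-east-1.amazoncognito.com/oauth2/token"
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "client_id": "abcd1234567",
        "code": "<authorization-code-from-callback>",
        "redirect_uri": "https://example.com",
    }).encode()

    request = urllib.request.Request(
        TOKEN_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(request) as response:
        tokens = json.load(response)

    # The response includes id_token, access_token, refresh_token, and expires_in.
    print(list(tokens.keys()))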

The SAML response contains claims or assertions that contain user-specific data. The SAML response is transferred over HTTPS to protect the confidentiality of the data in transit, but you can also enable encryption so that only trusted parties who hold the decryption key can read the transferred user information. This protects the confidentiality of the data after it's received by the SP.

Setting up SAML federation between Amazon Cognito and Entra ID

To set up SAML federation and use IdP-initiated SSO, you will complete the following steps:

  1. Create an Amazon Cognito user pool.
  2. Create an app client in the Cognito user pool.
  3. Add Cognito as an enterprise application in Entra ID.
  4. Add Entra ID as the SAML IdP and enable IdP-initiated SSO in Cognito.
  5. Add the newly created SAML IdP to your user pool app client.
  6. Enable encrypting the SAML response.
  7. Add RelayState in Entra ID SAML SSO.

Prerequisites

To implement the solution, you must have the necessary permissions to perform these tasks in the Azure portal and in your AWS account.

Step 1: Create an Amazon Cognito user pool

Create a new user pool in Amazon Cognito with the default settings. Make a note of the user pool ID, for example, us-east-1_abcd1234. You will need this value for the next steps.
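
If you prefer to script this step, the following is a minimal boto3 sketch that creates a user pool with default settings and prints its ID; the pool name is an arbitrary example.

    import boto3

    cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

    # Create a user pool with default settings; the name is an example.
    response = cognito_idp.create_user_pool(PoolName="saml-federation-demo")
    user_pool_id = response["UserPool"]["Id"]
    print("User pool ID:", user_pool_id)  # for example, us-east-1_abcd1234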

Add a domain name to the user pool

The Cognito user pool’s hosted UI can be used as the OAuth 2.0 authorization server with a customizable web interface for sign-up and sign-in. Cognito OAuth 2.0 endpoints are accessible from a domain name that must be added to the user pool. There are two options for adding a domain name to a user pool. You can either use a Cognito domain or a domain name that you own. This solution uses a Cognito domain, which will look like the following:

https://<yourDomainPrefix>.auth.<aws-region>.amazoncognito.com

To add a domain name to a user pool:

  1. In the AWS Management Console for Amazon Cognito, navigate to the App integration tab for your user pool.
  2. On the right side of the pane, choose Actions and select Create Cognito domain.

    Figure 2: Create a Cognito domain

  3. Enter an available domain prefix (for example, example-corp-prd) to use with the Cognito domain.

    Figure 3: Add a domain prefix

  4. Choose Create Cognito domain.
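
As an alternative to the console steps above, here is a short boto3 sketch that adds a Cognito domain prefix to the user pool; the prefix and user pool ID are the example values used in this post.

    import boto3

    cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

    # Add a Cognito-hosted domain prefix to the user pool (example values).
    # The prefix must be unique within the Region.
    cognito_idp.create_user_pool_domain(
        Domain="example-corp-prd",
        UserPoolId="us-east-1_abcd1234",
    )
    # The hosted UI and OAuth 2.0 endpoints then become available at:
    # https://example-corp-prd.auth.us-east-1.amazoncognito.com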

Step 2: Create an app client in the Cognito user pool

Before you can use Amazon Cognito in your web application, you must register your app with Amazon Cognito as an app client. You can't enable the IdP-initiated SAML flow on an app client together with SP-initiated SAML IdPs or social IdPs. IdP-initiated SAML introduces additional risks that other SSO providers aren't subject to. For example, it's not possible to add a state parameter, which is usually used for cross-site request forgery (CSRF) mitigation. Because of this, you can't add IdPs that aren't SAML, including the user pool itself, to an app client that uses a SAML provider with IdP-initiated SSO.

To create an app client:

  1. In the Amazon Cognito console, navigate to the App integration tab for the same user pool and locate App clients. Choose Create an app client.
  2. Select an Application type. For this example, create a public client.
  3. Enter an App client name.
  4. Choose Don’t generate client secret.
  5. Keep the rest of the settings as default.
  6. Under Hosted UI settings, add Allowed callback URLs for your app client. This is where you will be directed after authentication.
  7. Choose Authorization code grant for OAuth 2.0 grant types.
  8. You can keep the remaining configuration as default and choose Create app client.

After the app client is successfully created, capture the app client ID from the App integration tab of the user pool.
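
If you script your user pool setup, a rough boto3 equivalent of the console steps above is shown below. It creates a public app client that uses the authorization code grant and no client secret; the callback URL and scopes are example values that mirror the settings described above.

    import boto3

    cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

    response = cognito_idp.create_user_pool_client(
        UserPoolId="us-east-1_abcd1234",
        ClientName="saml-federation-app",
        GenerateSecret=False,                      # public client, no client secret
        AllowedOAuthFlows=["code"],                # authorization code grant
        AllowedOAuthScopes=["openid", "email", "phone"],
        AllowedOAuthFlowsUserPoolClient=True,
        CallbackURLs=["https://example.com"],      # where users land after sign-in
    )
    print("App client ID:", response["UserPoolClient"]["ClientId"])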

Prepare information for the Entra ID setup

Prepare the Identifier (Entity ID) and Reply URL, which are required to add Amazon Cognito as an enterprise application in Entra ID (Step 3).

Create values for Identifier (Entity ID) and Reply URL according to the following formats:

For Identifier (Entity ID), the format is:
urn:amazon:cognito:sp:<yourUserPoolID>

For example: urn:amazon:cognito:sp:us-east-1_abcd1234

For Reply URL, the format is:
https://<yourDomainPrefix>.auth.<aws-region>.amazoncognito.com/saml2/idpresponse

For example: https://example-corp-prd.auth.us-east-1.amazoncognito.com/saml2/idpresponse

The reply URL is the endpoint where Entra ID will send the SAML assertion to Amazon Cognito during user authentication.
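
As a quick illustration, both values can be derived from the user pool ID, the domain prefix, and the Region; the values below are the examples used in this post.

    # Example values; substitute your own user pool ID, domain prefix, and Region.
    user_pool_id = "us-east-1_abcd1234"
    domain_prefix = "example-corp-prd"
    region = "us-east-1"

    entity_id = f"urn:amazon:cognito:sp:{user_pool_id}"
    reply_url = f"https://{domain_prefix}.auth.{region}.amazoncognito.com/saml2/idpresponse"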

For more information, see Adding SAML identity providers to a user pool.

Step 3: Add Amazon Cognito as an enterprise application in Entra ID

With the user pool and app client created and the information for Entra ID prepared, you can add Amazon Cognito as an application in Entra ID. To complete this step, you will add Cognito as an enterprise application and set up SSO.

To add Cognito as an enterprise application:

  1. Sign in to the Azure portal.
  2. In the search box, search for the service Microsoft Entra ID.
  3. In the left sidebar, select Enterprise applications.
  4. Choose New application.
  5. On the Browse Microsoft Entra Gallery page, choose Create your own application.

    Figure 4: Create an application in Entra ID

  6. Under What’s the name of your app?, enter a name for your application and select Integrate any other application you don’t find in the gallery (Non-gallery), as shown in Figure 4. Choose Create.
  7. It will take a few seconds for the application to be created in Entra ID, and then you should be redirected to the Overview page for the newly added application.

To set up SSO using SAML:

  1. On the Getting started page, in the Set up single sign on tile, choose Get started, as shown in Figure 5.

    Figure 5: Choose Set up single sign-on in Getting Started

  2. On the next screen, select SAML.
  3. In the middle pane under Set up Single Sign-On with SAML, in the Basic SAML Configuration section, choose the edit icon.
  4. In the right pane under Basic SAML Configuration, replace the default Identifier ID (Entity ID) with the identifier (entity ID) you created in Step 2. Replace Reply URL (Assertion Consumer Service URL) with the reply URL you created in Step 2.

    Figure 6: Add the identifier (entity ID) and reply URL

  5. Now go to Attributes & Claims and note the claims, as shown in Figure 7. You’ll need these when creating attribute mapping in Amazon Cognito.

    Figure 7: Entra ID Attributes & Claims

  6. Scroll down to the SAML Certificates section and copy the App Federation Metadata Url by choosing the copy into clipboard icon. Make a note of this URL to use in the next step.

    Figure 8: Copy SAML metadata URL from Entra ID

Step 4: Add Entra ID as SAML IdP in Amazon Cognito

In this step, you’ll add Entra ID as a SAML IdP to your user pool and download the signing and encryption certificates.

To add the SAML IdP:

  1. In the Amazon Cognito console, navigate to the Sign-in experience tab of the same user pool. Locate Federated identity provider sign-in and choose Add an Identity provider.
  2. Choose a SAML IdP.
  3. Enter a Provider name, for example, EntraID.
  4. Under IdP-initiated SAML sign-in, choose Accept SP-initiated and IdP-initiated SAML assertions.
  5. Under Metadata document source, enter the metadata document endpoint URL you captured in Step 3.
  6. (Optional) Under SAML signing and encryption, select Require encrypted SAML assertion from this provider.

    Enable Require encrypted SAML assertion from this provider only if you can turn on token encryption in the Entra ID application (see Step 6).

  7. Under Map attributes between your SAML provider and your user pool, map the SAML provider attributes to the user profile in your user pool. Include your user pool's required attributes in your attribute map.

    For example, when you choose User pool attribute email, enter the SAML attribute name as it appears in the SAML assertion from your IdP. In our case it will be http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.

    Figure 9: Enter the SAML attribute name

  8. Choose Add identity provider.

After the IdP has been created, navigate to the newly added EntraID IdP in the user pool to download the SAML signing and encryption certificates. These certificates must be imported into the Entra ID enterprise application. A scripted equivalent of this step is sketched after the download steps that follow.

To download the certificates:

  1. To download the SAML signing certificate, choose View signing certificate and Download as .crt.
  2. To download the SAML encryption certificate, choose View encryption certificate and Download as .crt.
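
If you manage your user pool programmatically, the equivalent of this step looks roughly like the boto3 sketch below. The metadata URL placeholder and attribute mapping follow the settings described above, while the ProviderDetails keys used here to request IdP-initiated SSO and encrypted assertions (IDPInit and EncryptedResponses) are assumptions that you should verify against the current Amazon Cognito API reference.

    import boto3

    cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

    cognito_idp.create_identity_provider(
        UserPoolId="us-east-1_abcd1234",
        ProviderName="EntraID",
        ProviderType="SAML",
        ProviderDetails={
            # The App Federation Metadata Url captured in Step 3
            "MetadataURL": "<App Federation Metadata Url from Step 3>",
            # Assumed keys for IdP-initiated SSO and encrypted assertions;
            # confirm the exact names in the API reference.
            "IDPInit": "true",
            "EncryptedResponses": "true",
        },
        AttributeMapping={
            "email": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
        },
    )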

Step 5: Add the newly created SAML IdP to your user pool app client

Before you can use Amazon Cognito in your web application, you must add the SAML IdP created in Step 4 to your app client.

To add the SAML IdP:

  1. In the Amazon Cognito console, navigate to the App integration tab for the same user pool and locate App clients.
  2. Choose the app client you created in Step 2.
  3. Locate the Hosted UI section and choose Edit.
  4. Under Identity providers, select the identity provider you created in Step 4 and choose Save changes.

    Figure 10: Enabling the Entra ID SAML identity provider in the Cognito app client

At this stage, the Amazon Cognito OAuth 2.0 server is up and running and the web interface is accessible and ready to use. You can access the Cognito hosted UI from your app client using the Cognito console to test it further.
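
A scripted version of Step 5 might look like the following boto3 sketch, which enables only the EntraID provider on the app client. Because UpdateUserPoolClient replaces settings that you don't pass with their defaults, the OAuth settings from Step 2 are repeated here; all values are examples.

    import boto3

    cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

    cognito_idp.update_user_pool_client(
        UserPoolId="us-east-1_abcd1234",
        ClientId="abcd1234567",
        SupportedIdentityProviders=["EntraID"],    # the SAML IdP created in Step 4
        AllowedOAuthFlows=["code"],
        AllowedOAuthScopes=["openid", "email", "phone"],
        AllowedOAuthFlowsUserPoolClient=True,
        CallbackURLs=["https://example.com"],
    )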

Step 6: Enable encrypting the SAML response in Entra ID

For additional security and privacy of user data, enable encryption of the SAML response. Amazon Cognito and your IdP can establish confidentiality in SAML responses when users sign in and sign out. Cognito assigns a public-private RSA key pair and a certificate to each external SAML provider that you configure in your user pool. You will use the SAML encryption certificate downloaded in Step 4.

To enable encrypting the SAML response:

  1. Navigate to your Enterprise application in Entra ID and in the left menu, under Security, select Token encryption.
  2. Import the SAML encryption certificate that you downloaded in Step 4.

    Figure 11: Import the Cognito encryption certificate to Entra ID

  3. After the certificate is imported, it’s inactive by default. To activate it, right-click on the certificate and select Activate token encryption certificate. This enables the encrypted SAML response.

    Figure 12: Activate the token encryption certificate in Entra ID

Step 7: Add RelayState in Entra ID SAML SSO

A RelayState parameter is required when using the SAML IdP-initiated authentication flow. Set it up in Entra ID for the Amazon Cognito user pool and the app client you enabled.

To add RelayState in Entra ID SAML SSO:

  1. Sign in to the Azure portal and open the enterprise application created in Step 3.
  2. In the left sidebar, choose Single sign-on.
  3. In the middle pane under Set up Single Sign-On with SAML, in the Basic SAML Configuration section, choose the edit icon.
  4. In the right pane under Basic SAML Configuration, enter a value in the following format in the Relay State (Optional) field (a short sketch for assembling this value follows these steps).
    identity_provider=<IDProviderName>&client_id=<ClientId>&redirect_uri=<callbackURL>&response_type=code&scope=openid+email+phone

    1. Replace <IDProviderName> with the identity provider name you created in Step 4 (for example, EntraID).
    2. Replace <ClientId> with the app client’s ClientID created in Step 2.
    3. Replace <callbackURL> with the URL of your web application that will receive the authorization code. It must be an HTTPS endpoint, except in a local development environment where you can use http://localhost:PORT_NUMBER.

    For example:

    identity_provider=EntraID&client_id=abcd1234567&redirect_uri=https://example.com&response_type=code&scope=openid+email+phone

    Figure 13: Set RelayState in Entra ID single sign-on
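
As referenced above, here is a minimal Python sketch for assembling the RelayState value from the example provider name, client ID, and callback URL used earlier in this post.

    # Example values from the earlier steps; substitute your own.
    identity_provider = "EntraID"
    client_id = "abcd1234567"
    redirect_uri = "https://example.com"

    relay_state = (
        f"identity_provider={identity_provider}"
        f"&client_id={client_id}"
        f"&redirect_uri={redirect_uri}"
        "&response_type=code"
        "&scope=openid+email+phone"
    )
    print(relay_state)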

Test the IdP-initiated flow

Next, do a quick test to check if everything is configured properly.

  1. Sign in to the Azure portal and open the Enterprise application created in Step 3.
  2. In the left sidebar, choose Users and groups.
  3. On the right side, choose Add user/group. This will show the Add Assignment page.
  4. From the left side of the page, choose None Selected.
  5. Select a user from the right of the screen and follow the prompt to assign the user for this application.
  6. Once the user is assigned successfully, open https://www.microsoft365.com/apps and sign in as the assigned user.
  7. After you are signed in, choose the application icon registered as the IdP-initiated SSO.

    Figure 14: Testing IdP-initiated SSO from an Office 365 application

  8. The application will start the IdP-initiated authentication flow and the user will be redirected to the application as a signed-in user.

Signing authentication requests in the SP-initiated flow

The preceding authentication flow that you tested uses IdP-initiated SSO. If you’re using an SP-initiated flow, you can enable signing of the SAML request that is sent from the SP (Amazon Cognito) to the IdP (Entra ID) for additional security and integrity of communication between them.

You can enable authentication request signing in Cognito when you create the IdP or by updating your existing IdP.

To enable signing of the SAML request:

  1. In the Amazon Cognito console, when you create or edit your SAML identity provider, under SAML signing and encryption, select the box Sign SAML requests to this provider and choose Save changes.

    Figure 15: Enabling signing SAML request

  2. Sign in to the Azure portal and access your Entra ID enterprise application. Go to Set up single sign on and edit Verification certificates (optional).
  3. Select the Require verification certificates checkbox and upload the Cognito user pool SAML signing certificate that you downloaded in Step 4, converted to a .cer file. You must convert the .crt file to a .cer file because Entra ID requires verification certificates with a .cer extension.

To convert the .crt certificate extension to .cer:

  1. Right-click the .crt file and choose Open.
  2. Navigate to the Details tab.
  3. Select Copy to File… and choose Next.
  4. Select Base-64 encoded X.509 (.CER) and choose Next.
  5. Give your export file a name (for example, Entra ID.cer) and choose Save.
  6. Choose Next.
  7. Confirm the details and choose Finish.

Test the SP-initiated flow

Next, do a quick test to check if everything is configured properly.

  1. In the Amazon Cognito console, navigate to the App integration tab for the same user pool and locate App clients.
  2. Choose the app client you created in Step 2.
  3. Locate the Hosted UI section and choose View Hosted UI.
  4. From the hosted UI, authenticate yourself using Entra ID as the identity provider.
  5. After authentication is completed successfully, you will be redirected to the callback URL you configured in your app client with the authorization code.

If you capture the SAML request, you will see that Amazon Cognito is sending a cryptographic signature with the signing certificate in the SAML request to the IdP, and the IdP will match the cryptographic signature with the uploaded certificate to ensure the integrity of the request.

Conclusion

In this post, you learned about the benefits of using IdP-initiated single sign-on: it helps centralize administration and lowers dependency on service provider applications. You learned how to integrate an Amazon Cognito user pool with Microsoft Entra ID as an external SAML IdP using IdP-initiated SSO so that your users can use their corporate ID to sign in to web or mobile applications. You also learned how to enable signed authentication requests when using an SP-initiated flow and how to encrypt SAML responses for additional security between Cognito and the SAML IdP.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Vishal Jakharia

Vishal is a cloud support engineer based in New Jersey, USA. He is an Amazon Cognito subject matter expert who loves to work with customers and provide them solutions for implementing authentication and authorization. He helps customers migrate and build secure scalable architecture on the AWS Cloud.

Yungang Wu

Yungang is a senior cloud support engineer who specializes in the Amazon Cognito service. He helps AWS customers troubleshoot issues and suggests well-designed application authentication and authorization implementations.

AWS plans to invest €7.8B into the AWS European Sovereign Cloud, set to launch by the end of 2025

Post Syndicated from Max Peterson original https://aws.amazon.com/blogs/security/aws-plans-to-invest-e7-8b-into-the-aws-european-sovereign-cloud-set-to-launch-by-the-end-of-2025/

English | German

Amazon Web Services (AWS) continues to believe it’s essential that our customers have control over their data and choices for how they secure and manage that data in the cloud. AWS gives customers the flexibility to choose how and where they want to run their workloads, including a proven track record of innovation to support specialized workloads around the world. While many customers are able to meet their stringent security, sovereignty, and privacy requirements using our existing sovereign-by-design AWS Regions, we know there’s not a one-size-fits-all solution. AWS continues to innovate based on the criteria we know are most important to our customers to give them more choice and more control. Last year we announced the AWS European Sovereign Cloud, a new independent cloud for Europe, designed to give public sector organizations and customers in highly regulated industries further choice to meet their unique sovereignty needs. Today, we’re excited to share more details about the AWS European Sovereign Cloud roadmap so that customers and partners can start planning. The AWS European Sovereign Cloud is planned to launch its first AWS Region in the State of Brandenburg, Germany, by the end of 2025. Available to all AWS customers, this effort is backed by a €7.8B investment in infrastructure, job creation, and skills development.

The AWS European Sovereign Cloud will utilize the full power of AWS with the same familiar architecture, expansive service portfolio, and APIs that customers use today. This means that customers using the AWS European Sovereign Cloud will get the benefits of AWS infrastructure including industry-leading security, availability, performance, and resilience. We offer a broad set of services, including a full suite of databases, compute, storage, analytics, machine learning and AI, networking, mobile, developer tools, IoT, security, and enterprise applications. Today, customers can start building applications in any existing Region and simply move them to the AWS European Sovereign Cloud when the first Region launches in 2025. Partners in the AWS Partner Network, which features more than 130,000 partners, already provide a range of offerings in our existing AWS Regions to help customers meet requirements and will now be able to seamlessly deploy applications on the AWS European Sovereign Cloud.

More control, more choice

Like our existing Regions, the AWS European Sovereign Cloud will be powered by the AWS Nitro System. The Nitro System is an unparalleled computing backbone for AWS, with security and performance at its core. Its specialized hardware and associated firmware are designed to enforce restrictions so that nobody, including anyone in AWS, can access customer workloads or data running on Amazon Elastic Compute Cloud (Amazon EC2) Nitro based instances. The design of the Nitro System has been validated by the NCC Group, an independent cybersecurity firm. The controls that help prevent operator access are so fundamental to the Nitro System that we’ve added them in our AWS Service Terms to provide an additional contractual assurance to all of our customers.

To date, we have launched 33 Regions around the globe with our secure and sovereign-by-design approach. Customers come to AWS because they want to migrate to and build on a secure cloud foundation. Customers who need to comply with European data residency requirements have the choice to deploy their data to any of our eight existing Regions in Europe (Ireland, Frankfurt, London, Paris, Stockholm, Milan, Zurich, and Spain) to keep their data securely in Europe.

For customers who need to meet additional stringent operational autonomy and data residency requirements within the European Union (EU), the AWS European Sovereign Cloud will be available as another option, with infrastructure wholly located within the EU and operated independently from existing Regions. The AWS European Sovereign Cloud will allow customers to keep all customer data and the metadata they create (such as the roles, permissions, resource labels, and configurations they use to run AWS) in the EU. Customers who need options to address stringent isolation and in-country data residency needs will be able to use AWS Dedicated Local Zones or AWS Outposts to deploy AWS European Sovereign Cloud infrastructure in locations they select. We continue to work with our customers and partners to shape the AWS European Sovereign Cloud, applying learnings from our engagements with European regulators and national cybersecurity authorities.

Continued investment in Europe

Over the last 25 years, we’ve driven economic development through our investment in infrastructure, jobs, and skills in communities and countries across Europe. Since 2010, Amazon has invested more than €150 billion in the EU, and we’re proud to employ more than 150,000 people in permanent roles across the European Single Market.

AWS now plans to invest €7.8 billion in the AWS European Sovereign Cloud by 2040, building on our long-term commitment to Europe and ongoing support of the region’s sovereignty needs. This long-term investment is expected to lead to a ripple effect in the local cloud community through accelerating productivity gains, empowering the digital transformation of businesses, strengthening the AWS Partner Network (APN), upskilling the cloud and digital workforce, developing renewable energy projects, and creating a positive impact in the communities where AWS operates. In total, the AWS planned investment is estimated to contribute €17.2 billion to Germany’s total Gross Domestic Product (GDP) through 2040, and support an average of 2,800 full-time equivalent jobs in local German businesses each year. These positions, including construction, facility maintenance, engineering, telecommunications, and other jobs within the broader local economy, are part of the AWS data center supply chain.

In addition, AWS is also creating new highly skilled permanent roles to build and operate the AWS European Sovereign Cloud. These jobs will include software engineers, systems developers, and solutions architects. This is part of our commitment that all day-to-day operations of the AWS European Sovereign Cloud will be controlled exclusively by personnel located in the EU, including access to data centers, technical support, and customer service.

In Germany, we also collaborate with local communities on long-term, innovative programs that will have a lasting impact in the areas where our infrastructure is located. This includes developing cloud workforce and education initiatives for learners of all ages, helping to solve for the skills gap and prepare for the tech jobs of the future. For example, last year AWS partnered with Siemens AG to design the first apprenticeship program for AWS data centers in Germany, launched the first national cloud computing certification with the German Chamber of Commerce (DIHK), and established the AWS Skills to Jobs Tech Alliance in Germany. We will work closely with local partners to roll out these skills programs and make sure they are tailored to regional needs.

“High performing, reliable, and secure infrastructure is the most important prerequisite for an increasingly digitalized economy and society. Brandenburg is making progress here. In recent years, we have set on a course to invest in modern and sustainable data center infrastructure in our state, strengthening Brandenburg as a business location. State-of-the-art data centers for secure cloud computing are the basis for a strong digital economy. I am pleased Amazon Web Services (AWS) has chosen Brandenburg for a long-term investment in its cloud computing infrastructure for the AWS European Sovereign Cloud.”

Brandenburg’s Minister of Economic Affairs, Prof. Dr. Jörg Steinbach

Build confidently with AWS

For customers that are early in their cloud adoption journey and are considering the AWS European Sovereign Cloud, we provide a wide range of resources to help adopt the cloud effectively. From lifting and shifting workloads to migrating entire data centers, customers get the organizational, operational, and technical capabilities needed for a successful migration to AWS. For example, we offer the AWS Cloud Adoption Framework (AWS CAF) to provide best practices for organizations to develop an efficient and effective plan for cloud adoption, and AWS Migration Hub to help assess migration needs, define migration and modernization strategy, and leverage automation. We frequently host AWS events, webinars, and workshops focused on cloud adoption and migration strategies, where customers can learn from AWS experts and connect with other customers and partners.

We’re committed to giving customers more control and more choice to help meet their unique digital sovereignty needs, without compromising on the full power of AWS. The AWS European Sovereign Cloud is a testament to this. To help customers and partners continue to plan and build, we will share additional updates as we drive towards launch. You can discover more about the AWS European Sovereign Cloud on our European Digital Sovereignty website.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.
 


German version

AWS European Sovereign Cloud bis Ende 2025: AWS plant Investitionen in Höhe von 7,8 Milliarden Euro

Amazon Web Services (AWS) ist davon überzeugt, dass es für Kunden von essentieller Bedeutung ist, die Kontrolle über ihre Daten und Auswahlmöglichkeiten zu haben, wie sie diese Daten in der Cloud sichern und verwalten. Daher können Kunden flexibel wählen, wie und wo sie ihre Workloads ausführen. Dazu gehört auch eine langjährige Erfolgsbilanz von Innovationen zur Unterstützung spezialisierter Workloads auf der ganzen Welt. Viele Kunden können bereits ihre strengen Sicherheits-, Souveränitäts- und Datenschutzanforderungen mit unseren AWS-Regionen unter dem „sovereign-by-design“-Ansatz erfüllen. Aber wir wissen ebenso: Es gibt keine Einheitslösung für alle. Daher arbeitet AWS kontinuierlich an Innovationen, die auf jenen Kriterien basieren, die für unsere Kunden am wichtigsten sind und ihnen mehr Auswahl sowie Kontrolle bieten. Vor diesem Hintergrund haben wir letztes Jahr die AWS European Sovereign Cloud angekündigt. Mit ihr entsteht eine neue, unabhängige Cloud für Europa. Sie soll Organisationen des öffentlichen Sektors und Kunden in stark regulierten Branchen dabei helfen, die sich wandelnden Anforderungen an die digitale Souveränität zu erfüllen.

Heute freuen wir uns, dass wir weitere Details über die Roadmap der AWS European Sovereign Cloud bekanntgeben können. So können unsere Kunden und Partner mit ihren weiteren Planungen beginnen. Der Start der ersten Region der AWS European Sovereign Cloud ist in Brandenburg bis zum Jahresende 2025 geplant. Dieses Angebot steht allen AWS-Kunden zur Verfügung und wird von einer Investition in Höhe von 7,8 Milliarden Euro in die Infrastruktur, Arbeitsplatzschaffung und Kompetenzentwicklung unterstützt.

Die AWS European Cloud in Brandenburg bietet die volle Leistungsfähigkeit, mit der bekannten Architektur, dem umfangreichen Angebot an Services und denselben APIs, die Millionen von Kunden bereits kennen. Das bedeutet: Kunden der AWS European Sovereign Cloud profitieren somit bei voller Unabhängigkeit von den bekannten Vorteilen der AWS-Infrastruktur, einschließlich der branchenführenden Sicherheit, Verfügbarkeit, Leistung und Resilienz.

AWS-Kunden haben Zugriff auf ein breites Spektrum an Services – darunter ein umfangreiches Angebot bestehend aus Datenbanken, Datenverarbeitung, Datenspeicherung, Analytics, maschinellem Lernen (ML) und künstlicher Intelligenz (KI), Netzwerken, mobilen Applikationen, Entwickler-Tools, Internet of Things (IoT), Sicherheit und Unternehmensanwendungen. Bereits heute können Kunden Anwendungen in jeder bestehenden Region entwickeln und diese einfach in die AWS European Sovereign Cloud auslagern, sobald die erste AWS-Region 2025 startet. Die Partner im AWS-Partnernetzwerks (APN), das mehr als 130.000 Partner umfasst, bietet bereits eine Reihe von Angeboten in den bestehenden AWS-Regionen an. Dadurch unterstützen sie Kunden dabei, ihre Anforderungen zu erfüllen und Anwendungen einfach in der AWS European Sovereign Cloud bereitzustellen.

Mehr Kontrolle, größere Auswahl

Die AWS European Sovereign Cloud nutzt wie auch unsere bestehenden Regionen das AWS Nitro System. Dabei handelt es sich um einen Computing-Backbone für AWS, bei dem Sicherheit und Leistung im Mittelpunkt stehen. Die spezialisierte Hardware und zugehörige Firmware sind so konzipiert, dass strikte Beschränkungen gelten und niemand, auch nicht AWS selbst, auf die Workloads oder Daten von Kunden zugreifen kann, die auf Amazon Elastic Compute Cloud (Amazon EC2) Nitro-basierten Instanzen laufen. Dieses Design wurde von der NCC Group validiert, einem unabhängigen Unternehmen für Cybersicherheit. Die Kontrollen, die den Zugriff durch Betreiber verhindern, sind grundlegend für das Nitro System. Daher haben wir sie in unsere AWS Service Terms aufgenommen, um allen unseren Kunden diese zusätzliche vertragliche Zusicherung zu geben.

Bis heute haben wir 33 Regionen rund um den Globus mit unserem sicheren und „sovereign-by-design“-Ansatz gestartet. Unsere Kunden nutzen AWS, weil sie auf einer sicheren Cloud-Umgebung migrieren und aufbauen möchten. Für Kunden, die europäische Anforderungen an den Ort der Datenverarbeitung erfüllen müssen, bietet AWS die Möglichkeit, ihre Daten in einer unserer acht bestehenden Regionen in Europa zu verarbeiten: Irland, Frankfurt, London, Paris, Stockholm, Mailand, Zürich und Spanien. So können sie ihre Daten sicher innerhalb Europas halten.

Müssen Kunden zusätzliche Anforderungen an die betriebliche Autonomie und den Ort der Datenverarbeitung innerhalb der Europäischen Union erfüllen, steht die AWS European Sovereign Cloud als weitere Option zur Verfügung. Die Infrastruktur hierfür ist vollständig in der EU angesiedelt und wird unabhängig von den bestehenden Regionen betrieben. Sie ermöglicht es AWS-Kunden, ihre Kundeninhalte und von ihnen erstellten Metadaten in der EU zu behalten – etwa Rollen, Berechtigungen, Ressourcenbezeichnungen und Konfigurationen für den Betrieb von AWS.

Sollten Kunden weitere Optionen benötigen, um eine Isolierung zu ermöglichen und strenge Anforderungen an den Ort der Datenverarbeitung in einem bestimmten Land zu erfüllen, können sie auf AWS Dedicated Local Zones oder AWS Outposts zurückgreifen. Auf diese Weise können sie die Infrastruktur der AWS European Sovereign Cloud am Ort ihrer Wahl einsetzen. Wir arbeiten mit unseren Kunden und Partnern kontinuierlich daran, die AWS European Sovereign Cloud so zu gestalten, dass sie den benötigten Anforderungen entspricht. Dabei nutzen wir auch Feedback aus unseren Gesprächen mit europäischen Regulierungsbehörden und nationalen Cybersicherheitsbehörden.

„Eine funktionierende, verlässliche und sichere Infrastruktur ist die wichtigste Vorrausetzung für eine zunehmend digitalisierte Wirtschaft und Gesellschaft. Brandenburg schreitet hier voran. Wir haben in den vergangenen Jahren entscheidende Weichen gestellt, um Investitionen in eine moderne und nachhaltige Rechenzentruminfrastruktur in unserem Land auszubauen und so den Wirtschaftsstandort Brandenburg zu stärken. Hochmoderne Rechenzentren für sicheres Cloud-Computing sind die Basis für eine digitale Wirtschaft. Für unsere digitale Souveränität ist es wichtig, dass Rechenleistungen vor Ort in Deutschland erbracht werden. Ich freue mich, dass Amazon Web Services Brandenburg für ein langfristiges Investment in ihre Cloud-Computing-Infrastruktur für die AWS European Sovereign Cloud ausgewählt hat.“

sagt Brandenburgs Wirtschaftsminister Prof. Dr.-Ing. Jörg Steinbach

Kontinuierliche Investitionen in Europa

Im Laufe der vergangenen 25 Jahre haben wir die wirtschaftliche Entwicklung in europäischen Ländern und Gemeinden vorangetrieben und in Infrastruktur, Arbeitsplätze sowie den Ausbau von Kompetenzen investiert. Seit 2010 hat Amazon über 150 Milliarden Euro in der Europäischen Union investiert und wir sind stolz darauf, im gesamten europäischen Binnenmarkt mehr als 150.000 Menschen in Festanstellung zu beschäftigen.

AWS plant bis zum Jahr 2040 7,8 Milliarden Euro in die AWS European Sovereign Cloud zu investieren. Diese Investition ist Teil der langfristigen Bestrebungen von AWS, das europäische Bedürfnis nach digitaler Souveränität zu unterstützen. Mit dieser langfristigen Investition löst AWS einen Multiplikatoreffekt für Cloud-Computing in Europa aus. Sie wird die digitale Transformation der Verwaltung und von Unternehmen vorantreiben, das AWS Partner Network (APN) stärken, die Zahl der Cloud- und Digitalfachkräfte erhöhen, erneuerbare Energieprojekte vorantreiben und eine positive Wirkung in den Gemeinden erzielen, in denen AWS präsent ist. Insgesamt wird die geplante AWS-Investition bis 2040 voraussichtlich 17,2 Milliarden Euro zum deutschen Bruttoinlandsprodukt und zur Schaffung von 2.800 Vollzeitstellen bei regionalen Unternehmen beitragen. Diese Arbeitsplätze in den Bereichen Bau, Instandhaltung, Ingenieurwesen, Telekommunikation und der breiteren regionalen Wirtschaft sind Teil der Lieferkette für AWS-Rechenzentren.

Darüber hinaus wird AWS neue Stellen für hochqualifizierte festangestellte Fachkräfte wie Softwareentwickler, Systemingenieure und Lösungsarchitekten schaffen, um die AWS European Sovereign Cloud aufzubauen und zu betreiben. Die Investition in zusätzliches Personal unterstreicht unser Commitment, dass der gesamte Betrieb dieser souveränen Cloud-Umgebung – angefangen bei der Zugangskontrolle zu den Rechenzentren über den technischen Support bis hin zum Kundendienst – ausnahmslos durch Fachkräfte innerhalb der Europäischen Union kontrolliert und gesteuert wird.

In Deutschland arbeitet AWS mit den Beteiligten vor Ort auch an langfristigen und innovativen Programmen zusammen. Diese sollen einen nachhaltigen positiven Einfluss auf die Gemeinden haben, in denen sich die Infrastruktur des Unternehmens befindet. AWS konzentriert sich auf die Entwicklung von Cloud-Fachkräften und Schulungsinitiativen für Lernende aller Altersgruppen. Diese Maßnahmen tragen dazu bei, den Fachkräftemangel zu beheben und sich auf die technischen Berufe der Zukunft vorzubereiten. Im vergangenen Jahr hat AWS beispielsweise gemeinsam mit der Siemens AG das erste Ausbildungsprogramm für AWS-Rechenzentren in Deutschland entwickelt. Ebenso hat das Unternehmen in Kooperation mit dem Deutschen Industrie und Handelstag (DIHK) den bundeseinheitlichen Zertifikatslehrgang zum „Cloud Business Expert“ entwickelt sowie die AWS Skills to Jobs Tech Alliance in Deutschland ins Leben gerufen. AWS wird gemeinsam mit lokalen Partnern daran arbeiten, Ausbildungsprogramme und Fortbildungen anzubieten, die auf die Bedürfnisse vor Ort zugeschnitten sind.

Vertrauensvoll bauen mit AWS

Für Kunden, die sich noch am Anfang ihrer Cloud-Reise befinden und die AWS European Sovereign Cloud in Betracht ziehen, bieten wir eine Vielzahl von Ressourcen an, um den Wechsel in die Cloud effektiv zu gestalten. Egal ob einzelne Workloads verlagert oder ganze Rechenzentren migriert werden sollen – Kunden erhalten von uns die nötigen organisatorischen, operativen und technischen Fähigkeiten für eine erfolgreiche Migration zu AWS. Beispielsweise bieten wir das AWS Cloud Adoption Framework (AWS CAF) an, das Unternehmen bei der Entwicklung eines effizienten und effektiven Cloud-Adoptionsplans mit Best Practices unterstützt. Auch der AWS Migration Hub hilft bei der Bewertung des Migrationsbedarfs, der Definition der Migrations- und Modernisierungsstrategie und der Nutzung von Automatisierung. Darüber hinaus veranstalten wir regelmäßig AWS-Events, Webinare und Workshops rund um die Themen Cloud-Adoption und Migrationsstrategie. Dabei können Kunden von AWS-Experten lernen und sich mit anderen Kunden und Partnern vernetzen.

Wir sind bestrebt, unseren Kunden mehr Kontrolle und weitere Optionen anzubieten, damit diese ihre ganz individuellen Anforderungen an die digitale Souveränität erfüllen können, ohne dabei auf die volle Leistungsfähigkeit von AWS verzichten zu müssen.

Um Kunden und Partnern bei der weiteren Planung und Entwicklung zu unterstützen, werden wir laufend zusätzliche Updates bereitstellen, während wir auf den Start der AWS European Sovereign Cloud hinarbeiten. Mehr über die AWS European Sovereign Cloud erfahren Sie auf unserer Website zur European Digital Sovereignty.

 

Max Peterson

Max is the Vice President of AWS Sovereign Cloud. He leads efforts to ensure that all AWS customers around the world have the most advanced set of sovereignty controls, privacy safeguards, and security features available in the cloud. Before his current role, Max served as the VP of AWS Worldwide Public Sector (WWPS) and created and led the WWPS International Sales division, with a focus on empowering government, education, healthcare, aerospace and satellite, and nonprofit organizations to drive rapid innovation while meeting evolving compliance, security, and policy requirements. Max has over 30 years of public sector experience and served in other technology leadership roles before joining Amazon. Max has earned both a Bachelor of Arts in Finance and Master of Business Administration in Management Information Systems from the University of Maryland.

Investigating lateral movements with Amazon Detective investigation and Security Lake integration

Post Syndicated from Yue Zhu original https://aws.amazon.com/blogs/security/investigating-lateral-movements-with-amazon-detective-investigation-and-security-lake-integration/

According to the MITRE ATT&CK framework, lateral movement consists of techniques that threat actors use to enter and control remote systems on a network. In Amazon Web Services (AWS) environments, threat actors equipped with illegitimately obtained credentials could potentially use APIs to interact with infrastructures and services directly, and they might even be able to use APIs to evade defenses and gain direct access to Amazon Elastic Compute Cloud (Amazon EC2) instances. To help customers secure their AWS environments, AWS offers several security services, such as Amazon GuardDuty, a threat detection service that monitors for malicious activity and anomalous behavior, and Amazon Detective, an investigation service that helps you investigate, and respond to, security events in your AWS environment.

After the service is turned on, Amazon Detective automatically collects logs from your AWS environment to help you analyze and investigate security events in-depth. At re:Invent 2023, Detective released Detective Investigations, a one-click investigation feature that automatically investigates AWS Identity and Access Management (IAM) users and roles for indicators of compromise (IoC), and Security Lake integration, which enables customers to retrieve log data from Amazon Security Lake to use as original evidence for deeper analysis with access to more detailed parameters.

In this post, you will learn about the use cases behind these features, how to run an investigation using the Detective Investigation feature, and how to interpret the contents of investigation reports. In addition, you will also learn how to use the Security Lake integration to retrieve raw logs to get more details of the impacted resources.

Triage a suspicious activity

As a security analyst, one of the common workflows in your daily job is to respond to suspicious activities raised by security event detection systems. The process might start when you get a ticket about a GuardDuty finding in your daily operations queue, alerting you that suspicious or malicious activity has been detected in your environment. To view more details of the finding, one of the options is to use the GuardDuty console.

In the GuardDuty console, you will find more details about the finding, such as the account and AWS resources that are in scope, the activity that caused the finding, the IP address that caused the finding and information about its possible geographic location, and times of the first and last occurrences of the event. To triage the finding, you might need more information to help you determine if it is a false positive.
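
The same finding details are also available programmatically. As an illustration, the following boto3 sketch lists the most recent findings for the first GuardDuty detector in the Region and prints a few fields that are useful during triage; the Region and result count are arbitrary examples.

    import boto3

    guardduty = boto3.client("guardduty", region_name="us-east-1")

    # Use the first (typically only) detector in this Region.
    detector_id = guardduty.list_detectors()["DetectorIds"][0]

    # Fetch the most recently updated findings first.
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        SortCriteria={"AttributeName": "updatedAt", "OrderBy": "DESC"},
        MaxResults=5,
    )["FindingIds"]

    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for finding in findings:
        print(finding["Type"], finding["Severity"],
              finding["Resource"]["ResourceType"], finding["UpdatedAt"])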

Every GuardDuty finding has a link labeled Investigate with Detective in the details pane. This link lets you pivot from aspects of the finding you are investigating to their respective entity profiles in the Detective console. The finding Recon:IAMUser/MaliciousIPCaller.Custom shown in Figure 1 results from API calls made by an IP address on the custom threat list; GuardDuty observed calls that are commonly used in reconnaissance activity, which often precedes attempts at compromise. Because this finding involves an IAM role, you can select the Role session link to go to the role session’s profile in the Detective console.

Figure 1: Example finding in the GuardDuty console, with Investigate with Detective pop-up window

Within the AWS Role session profile page, you will find security findings from GuardDuty and AWS Security Hub that are associated with the AWS role session, API calls the AWS role session made, and most importantly, new behaviors. Behaviors that deviate from expectations can be used as indicators of compromise, giving you more information to help determine whether the AWS resource might be compromised. Detective highlights new behaviors first observed during the scope time of the events related to the finding that weren’t observed during the Detective baseline time window of 45 days.

If you switch to the New behavior tab within the AWS role session profile, you will find the Newly observed geolocations panel (Figure 2). This panel highlights geolocations from which API calls were made that weren’t observed in the baseline profile. Detective determines the location of requests using MaxMind GeoIP databases, based on the IP address that was used to issue the requests.

Figure 2: Detective’s Newly observed geolocations panel

If you choose Details on the right side of each row, the row will expand and provide details of the API calls made from the same locations from different AWS resources, and you can drill down and get to the API calls made by the AWS resource from a specific geolocation (Figure 3). When analyzing these newly observed geolocations, a question you might consider is why this specific AWS role session made API calls from Bellevue, US. You’re pretty sure that your company doesn’t have a satellite office there, nor do your coworkers who have access to this role work from there. You also reviewed the AWS CloudTrail management events of this AWS role session, and you found some unusual API calls for services such as IAM.

Figure 3: Detective’s Newly observed geolocations panel expanded on details

You decide that you need to investigate further, because this role session's behavior from a new geolocation is sufficiently unexpected, and it made unusual API calls whose purpose you would like to understand. You want to gather anomalous behaviors and high-risk API methods that threat actors can use to make an impact. Rather than limiting yourself to this single role session, you also want to know what happened in other role sessions associated with the same AWS role, in case threat actors spread their activities across multiple sessions. To help you examine multiple role sessions automatically with additional analytics and threat intelligence, Detective introduced the Detective Investigation feature at re:Invent 2023.

Run an IAM investigation

Amazon Detective Investigation uses machine learning (ML) models and AWS threat intelligence to automatically analyze resources in your AWS environment to identify potential security events. It identifies tactics, techniques, and procedures (TTPs) used in a potential security event. The MITRE ATT&CK framework is used to classify the TTPs. You can use this feature to help you speed up the investigation and identify indicators of compromise and other TTPs quickly.

To continue with your investigation, you should investigate the role and its usage history as a whole to cover all involved role sessions at once. This addresses the potential case where threat actors assumed the same role under different session names. In the AWS role session profile page that’s shown in Figure 4, you can quickly identify and pivot to the corresponding AWS role profile page under the Assumed role field.

Figure 4: Detective’s AWS role session profile page

After you pivot to the AWS role profile page (Figure 5), you can run the automated investigations by choosing Run investigation.

Figure 5: Role profile page, from which an investigation can be run

The first thing to do in a new investigation is to choose the time scope you want to run the investigation for. Then, choose Confirm (Figure 6).

Figure 6: Setting investigation scope time

Next, you will be directed to the Investigations page (Figure 7), where you will be able to see the status of your investigation. Once the investigation is done, you can choose the hyperlinked investigation ID to access the investigation report.

Figure 7: Investigations page, with new report

Another way to run an investigation is to choose Investigations in the left menu panel of the Detective console and then choose Run investigation. You will then be taken to a page where you specify the Amazon Resource Name (ARN) of the AWS role you're investigating and the scope time (Figure 8). Then choose Run investigation to start the investigation.

Figure 8: Configuring a new investigation from scratch rather than from an existing finding

Detective also offers StartInvestigation and GetInvestigation APIs for running Detective Investigations and retrieving investigation reports programmatically.
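The following is a minimal sketch of calling those APIs with boto3. The behavior graph ARN and role ARN are placeholders, the scope time here is an arbitrary seven days, and the request and response field names shown should be checked against the SDK reference for your version:

import boto3
from datetime import datetime, timedelta, timezone

detective = boto3.client("detective")

# Placeholders: your behavior graph ARN and the ARN of the IAM role to investigate
GRAPH_ARN = "<behavior-graph-arn>"
ROLE_ARN = "<iam-role-arn>"

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)  # scope time for the investigation

investigation = detective.start_investigation(
    GraphArn=GRAPH_ARN,
    EntityArn=ROLE_ARN,
    ScopeStartTime=start,
    ScopeEndTime=end,
)

# Investigations run asynchronously, so poll until the status shows completion,
# then read the report summary fields.
report = detective.get_investigation(
    GraphArn=GRAPH_ARN,
    InvestigationId=investigation["InvestigationId"],
)
print(report.get("Status"), report.get("Severity"))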

Interpret the investigation report

The investigation report (Figure 9) includes information on anomalous behaviors, potential TTP mappings of observed CloudTrail events, and indicators of compromises of the resource (in this example, an IAM principal) that was investigated.

At the top of the report, you will find a severity level computed based on the observed behaviors during the scope window, as well as a summary statement to give you a quick understanding of what was found. In Figure 9, the AWS role that was investigated engaged in the following unusual behaviors:

  • Seven tactics, meaning that the API calls made by this AWS role mapped to seven tactics of the MITRE ATT&CK framework.
  • Eleven cases of impossible travel, representing pairs of API calls made from two geolocations too far apart for the same user to have physically traveled between them in the time involved.
  • Zero flagged IP addresses. Detective flags IP addresses that are considered suspicious according to its threat intelligence sources.
  • Two new Autonomous System Organizations (ASOs), which are entities with assigned Autonomous System Numbers (ASNs) as used in Border Gateway Protocol (BGP) routing.
  • Nine new user agents, meaning user agents used to make API calls that weren't observed in the 45 days prior to the events being investigated.

These indicators of compromise represent unusual behaviors that either haven't been observed before in the AWS account involved or are intrinsically high risk. The summary panel is followed by the rest of the report, which shows a detailed breakdown of the investigation results.

Unusual activities are important factors to look for during investigations, and a sudden behavior change can be a sign of compromise. When you're investigating an AWS role that can be assumed by different users from different AWS Regions, you will likely need to examine activity at the granularity of the specific AWS role session that made the API calls. Within the report, you can do this by choosing the hyperlinked role name in the summary panel, which takes you to the AWS role profile page.

Figure 9: Investigation report summary page

Further down on the investigation report is the TTP Mapping from CloudTrail Management Events panel. Detective Investigations maps CloudTrail events to the MITRE ATT&CK framework to help you understand how an API can be used by threat actors. For each mapped API, you can see the tactics, techniques, and procedures it can be used for. In Figure 10, at the top there is a summary of TTPs with different severity levels. At the bottom is a breakdown of potential TTP mappings of observed CloudTrail management events during the investigation scope time.

When you select one of the cards, a side panel appears on the right to give you more details about the APIs. It includes information such as the IP address that made the API call, the details of the TTP the API call was mapped to, and if the API call succeeded or failed. This information can help you understand how these APIs can potentially be used by threat actors to modify your environment, and whether or not the API call succeeded tells you if it might have affected the security of your AWS resources. In the example that’s shown in Figure 10, the IAM role successfully made API calls that are mapped to Lateral Movement in the ATT&CK framework.

Figure 10: Investigation report page with event ATT&CK mapping

The report also includes additional indicators of compromise (Figure 11), which you can find by selecting the Indicators tab next to Overview. This tab lists the indicators identified during the scope time; if you select one, details for that indicator appear on the right. In the example in Figure 11, the IAM role made API calls with a user agent that wasn't used by this IAM role or other IAM principals in this account, and indicators like this one show a sudden behavior change in your IAM principal. You should review them and identify the ones that aren't expected. To learn more about indicators of compromise in Detective Investigation, see the Amazon Detective User Guide.

Figure 11: Indicators of compromise identified during scope time

At this point, you’ve analyzed the new and unusual behaviors the IAM role made and learned that the IAM role made API calls using new user agents and from new ASOs. In addition, you went through the API calls that were mapped to the MITRE ATT&CK framework. Among the TTPs, there were three API calls that are classified as lateral movements. These should attract attention for the following reasons: first, the purpose of these API calls is to gain access to the EC2 instance involved; and second, ec2-instance-connect:SendSSHPublicKey was run successfully.

Based on the procedure description in the report, this API would grant threat actors temporary SSH access to the target EC2 instance. To gather original evidence, examine the raw logs stored in Security Lake. Security Lake is a fully managed security data lake service that automatically centralizes security data from AWS environments, SaaS providers, on-premises sources, and other sources into a purpose-built data lake stored in your account.

Retrieve raw logs

You can use Security Lake integration to retrieve raw logs from your Security Lake tables within the Detective console as original evidence. If you haven’t enabled the integration yet, you can follow the Integration with Amazon Security Lake guide to enable it. In the context of the example investigation earlier, these logs include details of which EC2 instance was associated with the ec2-instance-connect:SendSSHPublicKey API call. Within the AWS role profile page investigated earlier, if you scroll down to the bottom of the page, you will find the Overall API call volume panel (Figure 12). You can search for the specific API call using the Service and API method filters. Next, choose the magnifier icon, which will initiate a Security Lake query to retrieve the raw logs of the specific CloudTrail event.

Figure 12: Finding the CloudTrail record for a specific API call held in Security Lake
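The console initiates that query for you, but if you prefer to work with the underlying data directly, you can run an equivalent query against the Security Lake tables with Amazon Athena. The following is a hedged sketch: the Glue database and table names, the query results location, and the OCSF column names are placeholders and assumptions that you should adjust to what Security Lake created in your Region and schema version:

import boto3

athena = boto3.client("athena")

# Placeholders: the Glue database/table that Security Lake created for CloudTrail
# management events in your Region, and an S3 location for Athena query results.
DATABASE = "<security_lake_glue_database>"
TABLE = "<security_lake_cloudtrail_mgmt_table>"
OUTPUT = "s3://<athena-query-results-bucket>/detective-investigation/"

# Column names follow the OCSF schema used by Security Lake; adjust to your version.
query = f"""
SELECT time, api.operation, api.service.name, actor.user.uid, src_endpoint.ip
FROM {TABLE}
WHERE api.operation = 'SendSSHPublicKey'
  AND api.service.name = 'ec2-instance-connect.amazonaws.com'
LIMIT 50
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT},
)
print("Athena query started:", execution["QueryExecutionId"])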

You can identify the target EC2 instance the API was issued against from the query results (Figure 13). To determine whether threat actors actually made an SSH connection to the target EC2 instance as a result of the API call, you should examine the EC2 instance’s profile page:

Figure 13: Reviewing a CloudTrail log record from Security Lake

From the profile page of the EC2 instance in the Detective console, you can go to the Overall VPC flow volume panel and filter the Amazon Virtual Private Cloud (Amazon VPC) flow logs using the attributes related to the threat actor identified as having made the SSH API call. In Figure 14, you can see the IP address that tried to connect to 22/tcp, which is the SSH port of the target instance. It’s common for threat actors to change their IP address in an attempt to evade detection, and you can remove the IP address filter to see inbound connections to port 22/tcp of your EC2 instance.

Figure 14: Examining SSH connections to the target instance in the Detective profile page

Iterate the investigation

At this point, you've made progress with the help of Detective Investigations and the Security Lake integration. You started with a GuardDuty finding, identified some of the threat actors' intent, and uncovered the specific EC2 instance they were targeting. Your investigation shouldn't stop here: the EC2 instance you identified is the next target to investigate.

You can reuse this whole workflow by starting with the EC2 instance's New behavior panel, running Detective Investigations on the IAM role attached to the EC2 instance and on any other IAM principals worth a closer look, and then using the Security Lake integration to gather the raw logs of the API calls the EC2 instance made to identify the specific actions taken and their potential consequences.

Conclusion

In this post, you’ve seen how you can use the Amazon Detective Investigation feature to investigate IAM user and role activity and use the Security Lake integration to determine the specific EC2 instances a threat actor appeared to be targeting.

The Detective Investigation feature is automatically enabled for both existing and new customers in AWS Regions that support Detective where Detective has been activated. The Security Lake integration feature can be enabled in your Detective console. If you don’t currently use Detective, you can start a free 30-day trial. For more information on Detective Investigation and Security Lake integration, see Investigating IAM resources using Detective investigations and Security Lake integration.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.

Yue Zhu

Yue is a security engineer at AWS. Before AWS, he worked as a security engineer focused on threat detection, incident response, vulnerability management, and security tooling development. Outside of work, Yue enjoys reading, cooking, and cycling.

Governing and securing AWS PrivateLink service access at scale in multi-account environments

Post Syndicated from Anandprasanna Gaitonde original https://aws.amazon.com/blogs/security/governing-and-securing-aws-privatelink-service-access-at-scale-in-multi-account-environments/

Amazon Web Services (AWS) customers have been adopting the approach of using AWS PrivateLink to have secure communication to AWS services, their own internal services, and third-party services in the AWS Cloud. As these environments scale, the number of PrivateLink connections outbound to external services and inbound to internal services increase and are spread out across multiple accounts in virtual private clouds (VPCs). While AWS Identity and Access Management (IAM) policies allow you to control access to individual PrivateLink services, customers want centralized governance for the use of PrivateLink in adherence with organizational standards and security needs.

This post provides an approach for centralized governance for PrivateLink based services across your multi-account environment. It provides a way to create preventative controls through the use of service control policies (SCPs) and detective controls through event-driven automation. This allows your application teams to consume internal and external services while adhering to organization policies and provides a mechanism for centralized control as your AWS environment grows.

Scenarios faced by customers

Figure 1 shows an example customer environment comprising a multi-account structure created through AWS Organizations or AWS Control Tower. There are separate organizational units (OUs) for different business units (BUs), each with its own accounts. The business services account hosts several backend services that consuming applications use for their functionality. Because these services provide functionality to more than one internal application and require access across VPC and account boundaries, they are exposed through AWS PrivateLink. One such service is shown in the business services account.

The customer has partners that provide services for integration with the customer’s application stack. The approved partner account provides a service that is approved for use by the cloud administration team. The NotApproved partner account provides services that are not approved within the customer’s organization. The customer has another OU dedicated to application teams. The application 1 account has an application that consumes the business service of the approved partner account. It is also planning to use the service from the NotApproved partner, which should be blocked. The application in the application 2 account is planning on using AWS services through interface endpoints as well as the approved partner account through PrivateLink integration.

Note: Throughout this post, “organization” is used to refer to an organization that you create and manage through AWS Organizations.

Figure 1: A multi-account customer environment

Current challenges

Access to individual PrivateLink connections can be controlled through IAM policies. At scale, however, as different teams adopt PrivateLink for incoming and outgoing connections, the number of VPC endpoint policies to create and manage increases, and customers want centralized guardrails to manage PrivateLink resources. For our example, the customer would like to put the following controls in place:

Preventative controls:

Use case 1:

  • Allow creation of VPC endpoints and allow access only to PrivateLink enabled AWS services.
  • Allow creation of VPC endpoints and initiating connection only to approved PrivateLink enabled third-party services.
  • Allow creation of VPC endpoints and initiating connection only to internal business services owned by accounts in the same organization.

Use case 2:

  • Allow only a cloud admin role to add permissions to connect to an endpoint service to prevent connections from external clients to internal VPC endpoint services.

Detective controls:

Use case 3:

  • Detect if connections are made by external AWS accounts (not belonging to the customer’s organization) to PrivateLink services exposed for internal use by the customer’s AWS accounts.

Use case 4:

  • Detect if connections are made to PrivateLink services exposed by AWS accounts not belonging to the customer’s organization.

This post presents a solution that uses SCPs, AWS CloudTrail, and AWS Config to achieve governance. When the solution is deployed in your account, the following components are created as part of the architecture, as shown in Figure 2.

Figure 2: Resources deployed in the customer environment by the solution

The following architecture is now in place:

  • SCPs to provide preventative controls for the PrivateLink connections.
  • Amazon EventBridge rules that are configured to trigger based on events from API calls captured by CloudTrail in specified accounts within specified OUs.
  • EventBridge rules in member accounts to send events to the event bus in the Audit account, and a central EventBridge rule in that account to trigger an AWS Lambda function based on PrivateLink related API calls.
  • A Lambda function that receives the events and validates if the VPC endpoint API call is allowed for the PrivateLink service and notifies a cloud administrator if a policy is violated.
  • An AWS Config rule that checks if PrivateLink enabled VPC endpoint services created within your AWS accounts have enabled auto accept of client connections and disabled notifications.

Use cases and solution approach

This section walks through each use case and how the solution components are used to address each use case.

Preventative control

Use case 1: Allowing the creation of a VPC endpoint connection to only AWS services and approved internal and third-party PrivateLink services

This solution allows creating a VPC endpoint for only approved partner PrivateLink services, PrivateLink services internal to the organization, and AWS services. This is implemented using an SCP and can be enforced at the individual account or OU. The approved partner services as well as the internal accounts that can host allowed PrivateLink services can be specified during the solution deployment. Application teams operating in AWS accounts within the customer environment can then create VPC endpoints to PrivateLink services of approved partners or AWS services. However, they will not be able to create a VPC endpoint to an unapproved PrivateLink service, for example. This is shown in Figure 3.

Figure 3: Allowed and disallowed paths in PrivateLink connections by SCP

The SCP that allows you to do this preventative control is shown in the following code snippet. In this example SCP policy, AllowedPrivateLinkPartnerService-ServiceName refers to the service name of the allowed partner PrivateLink. Also, the SCP allows the creation of VPC endpoints to internal PrivateLink services that are hosted in AllowedPrivateLinkAccount. Make sure that this SCP does not interfere with the other policies you created within your organization. The solution currently uses ec2:VpceServiceName and ec2:VpceServiceOwner conditions to identify the PrivateLink service of AWS services or a third-party partner. These conditions can be used in an SCP to control the creation of VPC endpoints:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Condition": {
        "StringNotEquals": {
          "ec2:VpceServiceName": [
            "AllowedPrivateLinkPartnerService-ServiceName",
          ],
          "ec2:VpceServiceOwner": [
            "AllowedPrivateLinkAccount",
            "amazon"
          ]
        }
      },
      "Action": [
        "ec2:CreateVpcEndpoint"
      ],
      "Resource": "arn:aws:ec2:*:*:vpc-endpoint/*",
      "Effect": "Deny",
      "Sid": "SCPDenyPrivateLink"
    }
  ]
}
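If you want to try this guardrail by itself before deploying the full solution, you could create and attach the SCP with the AWS Organizations API. The following is a minimal sketch; the policy document mirrors the SCP above, and the partner service name, account ID, and OU ID are placeholders:

import json
import boto3

organizations = boto3.client("organizations")

# The SCP shown above, with your allowed partner service name and account ID filled in
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SCPDenyPrivateLink",
            "Effect": "Deny",
            "Action": ["ec2:CreateVpcEndpoint"],
            "Resource": "arn:aws:ec2:*:*:vpc-endpoint/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:VpceServiceName": ["<allowed-partner-service-name>"],
                    "ec2:VpceServiceOwner": ["<allowed-privatelink-account-id>", "amazon"],
                }
            },
        }
    ],
}

policy = organizations.create_policy(
    Name="restrict-vpc-endpoint-creation",
    Description="Allow VPC endpoints only to AWS and approved PrivateLink services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to the OU that contains your application accounts (placeholder ID)
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="<ou-id>",
)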

Use case 2: Allow only a cloud admin role to add permissions to connect to an endpoint service

This solution makes sure that PrivateLink services owned and created in the customer's AWS accounts cannot accept consumer connections unless the cloud administrator role allows it. The cloud administrator can then make sure that only legitimate internal AWS accounts are allowed access to a service and restrict access from accounts outside of the customer's organization. This is achieved through an SCP that restricts modifications to the permissions of the PrivateLink endpoint service. It makes sure that individual teams can't use the Allow principals configuration to open access to other entities directly, and that only a cloud administrator role with the right permissions can make that change.

{
  "Version": "2012-10-17",
  "Statement": [
  
      "Sid": "Statement1",
      "Effect": "Deny",
      "Action": [
        "ec2:ModifyVpcEndpointServicePermissions"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalArn": [
            "arn:aws:iam::*:role/CloudNetworkAdmin"
          ]
        }
      }
    }
  ]
}

This policy can help in achieving the access control, as shown in Figure 4. The cloud administrator uses the Allow principals configuration of the business services PrivateLink service to provide access only to the application 1 account. The SCP allows only the cloud administrator to make the modification and does not allow another member of the team from bypassing that process and adding a nonapproved client application account to access the internal PrivateLink service.

Figure 4: Centralized control on access to the internal PrivateLink service to the customer’s own accounts

Detective controls

For detective controls, we discuss two use cases that are deployed as part of the solution and can be enabled and disabled based on the test that you want to perform.

Use case 3: Detecting if connections are made by external AWS accounts (not belonging to the customer’s organization) to PrivateLink services exposed by the customer’s AWS accounts

In this use case, the customer would like to detect connections made to their business services from accounts outside of their organization. The solution uses individual member account trails to capture API calls across the multi-account structure, together with cross-account EventBridge integration. When a PrivateLink service connection is accepted, CloudTrail records the AcceptVpcEndpointConnections API call in the member account, and the event is sent to the event bus in the audit account. This triggers a Lambda function that captures the information about the entity requesting the connection and the details of the PrivateLink service, and sends a notification to the cloud administrator. This is shown in Figure 5.

Figure 5: Detecting the creation of a VPC endpoint or accepting a PrivateLink service connection using CloudTrail events in EventBridge
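A member-account forwarding rule of this kind can be expressed in a few API calls. The following is a hedged sketch, not the solution's actual resources: the rule name, event bus ARN, cross-account delivery role, and exact event pattern are assumptions that approximate what the CloudFormation template deploys:

import json
import boto3

events = boto3.client("events")

# Match the CloudTrail management event recorded when a PrivateLink service
# connection request is accepted.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["AcceptVpcEndpointConnections"],
    },
}

events.put_rule(
    Name="forward-privatelink-accept-events",
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

# Forward matching events to the central event bus in the audit account (placeholder ARN).
# The member account also needs an IAM role that EventBridge can assume for
# cross-account event delivery.
events.put_targets(
    Rule="forward-privatelink-accept-events",
    Targets=[
        {
            "Id": "audit-account-event-bus",
            "Arn": "arn:aws:events:<region>:<audit-account-id>:event-bus/default",
            "RoleArn": "arn:aws:iam::<member-account-id>:role/<eventbridge-cross-account-role>",
        }
    ],
)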

Custom AWS Config rule for detective control

This detective control mechanism works in cases where PrivateLink services are configured to manually accept client connections. If the endpoint is configured to automatically accept connections, CloudTrail will not generate an event when a connection is accepted. AWS PrivateLink allows customers to configure connection notifications to send connection notification events to an Amazon Simple Notification Service (Amazon SNS) topic. Cloud administrators can get the notifications if they are subscribed to the SNS topic. However, if the notification configuration is removed by the member account, there is no way for the cloud administrator to have visibility for new connections and effectively apply governance requirements.

This solution employs an AWS Config rule to detect PrivateLink services that are created with the Auto Accept Connections setting enabled or without a connection notification configuration, and to flag them as noncompliant.

This is depicted in Figure 6.

Figure 6: Custom AWS Config rule and SNS notification deployed as part of the solution

When a PrivateLink service is created by one of the business services teams, an AWS Config organization rule in the audit account will detect the event, and the custom Lambda function will check if the connection notification configuration is present. If not, then the AWS Config rule will flag the resource as noncompliant. Cloud administrators can view these in the AWS Config dashboard or receive notifications configured through AWS Config.
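The check itself is conceptually simple. The following is a hedged sketch of the evaluation logic only; the deployed solution wraps logic like this in an AWS Config custom rule handler and reports evaluations back to AWS Config:

import boto3

ec2 = boto3.client("ec2")

def endpoint_service_is_compliant(service_id: str) -> bool:
    """Return True if the VPC endpoint service requires manual acceptance
    and has at least one connection notification configured."""
    config = ec2.describe_vpc_endpoint_service_configurations(
        ServiceIds=[service_id]
    )["ServiceConfigurations"][0]

    if not config.get("AcceptanceRequired", False):
        # Auto-accept is enabled, so no AcceptVpcEndpointConnections event is
        # generated when clients connect, and the detective control is blind.
        return False

    notifications = ec2.describe_vpc_endpoint_connection_notifications()
    has_notification = any(
        n.get("ServiceId") == service_id
        for n in notifications.get("ConnectionNotificationSet", [])
    )
    return has_notification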

Use case 4: Detecting if connections are made to PrivateLink services exposed by AWS accounts not belonging to the customer’s organization.

Using the same approach as use case 3, connections made to PrivateLink services exposed by AWS accounts outside of the customer's organization can be detected through the CreateVpcEndpoint API call event from CloudTrail. This event is sent to the centralized event bus, and the Lambda function checks it against the criteria and notifies the cloud administrator.

Deploy and test the solution

This section walks through how to deploy and test our recommended solution.

Prerequisites

To deploy the solution, first follow these steps.

  1. In your AWS Organizations multi-account environment, go to the management account and enable trusted access for AWS CloudFormation, enable trusted access for AWS Config, and enable trusted access for CloudTrail.
  2. Identify an account in your organization to serve as the audit account and set it up as a delegated administrator for CloudFormation, AWS Config, and CloudTrail. Follow these steps to perform this step:
    1. Register a delegated administrator for CloudFormation.
    2. Perform the steps mentioned in step 1 of this post to register a delegated administrator for AWS Config.
    3. Register a delegated admin for CloudTrail.
  3. The solution uses the deployment of CloudFormation StackSets with self-managed permissions to set up the resources in the audit account. In order to enable this, create AWSCloudFormationStackSetAdministrationRole in the management account and AWSCloudFormationStackSetExecutionRole in the audit account by using the steps in the topic Grant self-managed permissions.
  4. In a separate AWS account that is different than your multi-account environment, create two PrivateLink VPC endpoint services as explained in the documentation. You can use this template to create a test PrivateLink VPC endpoint service. These will serve as two partner services, one of which is allowed, and another is untrusted and not allowed. Make note of their service names.

Figure 7: Simulated partner services (approved and not approved) in a separate test account

Deploying the solution

  1. Go to the management account of your AWS Organizations multi-account environment and use this CloudFormation template to deploy the solution, or choose the following Launch Stack button:

    Launch stack

    CloudFormation stacks can be deployed using the AWS CloudFormation console or using the AWS CLI.

  2. This initially displays the Create stack page. Leave the details entered by default, and then choose Next.
  3. On the Specify stack details page, enter the details for the input parameters for this solution. The following table shows the details that you will provide when setting up the CloudFormation template on the Specify stack details page on the CloudFormation console.

    AWSOrganizationsId: Identifier for your organization. This can be obtained from your management account as described in the AWS Organizations User Guide.
    AdminRoleArn: Role of the persona who is allowed to modify PrivateLink endpoint permissions.
    AllowedPrivateLinkAccounts: AWS account IDs of accounts in your OU that host PrivateLink services.
    AllowedPrivateLinkPartnerServices: Specify the service name of the approved PrivateLink services from partners. If you want to test with a simulated partner PrivateLink, take the service name of the PrivateLink services created in Step 4 of the prerequisites as the partner services to which connections should be allowed. The unique service name of the partner’s PrivateLink service is provided by the partner to the customer so that they can connect to it.
    AuditAccountId: AWS account ID of the audit account in your multi-account environment.
    PLOrganizationUnit: OU identifier for the organizational unit where the solution will perform preventative and detective control.
    Figure 8: CloudFormation template input parameters for the solution as it appears on the console

  4. Choose Next and keep the defaults for the rest of the fields. Then, on the Review and create page, choose Submit to finish deploying the solution.

Testing the solution

Once the solution is deployed successfully, follow these steps to test the solution:

  1. For an account specified in the AllowedPrivateLinkAccounts parameter, create a VPC endpoint service as explained in the topic Create a service powered by AWS PrivateLink. Instead of creating this manually, use this CloudFormation template to create a test VPC endpoint service.
  2. Sign in to a member account within the OU that you specified in the CloudFormation template.
  3. From the member account, create a VPC endpoint connection to the internal PrivateLink service created in the account from Step 1 (see the sketch after this list for a scripted way to create these test endpoints). This connection will set up successfully because the service is internal to the organization and therefore allowed by the SCP, and it is not flagged to the cloud administrator as violating organization policy.
  4. From the member account, create a VPC endpoint connection to an AWS service that supports PrivateLink, such as AWS Key Management Service (AWS KMS). This connection will set up successfully because PrivateLink-enabled AWS services are allowed by the SCP, and it is not flagged to the cloud administrator as violating organization policy.
  5. From the member account, create a VPC endpoint connection to the approved partner PrivateLink service created in Step 4 of the prerequisites. This connection will set up successfully because it is an approved partner service and therefore allowed by the SCP, and it is not flagged to the cloud administrator as violating organization policy.
  6. From the member account, create a VPC endpoint connection to the other PrivateLink service created in Step 4 of the prerequisites, which is not an allowed partner service. This connection will fail because it is not allowed by the SCP.
  7. From an account outside of your organization, create a VPC endpoint connection to the internal PrivateLink service created in Step 1. The connection setup is successful, but the cloud administrator will see the internal PrivateLink service as NOT COMPLIANT because the connection from external clients is considered to be not compliant with organization requirements in this solution. This information allows the cloud admin to quickly find the noncompliant resource and work with the PrivateLink service owner team to remediate the issue.
  8. From the member account, create another VPC endpoint service without configuring the notification configuration, and leave the Acceptance required field unchecked. Navigate to the AWS Config console in the audit account and go to Aggregator->Rules. Check the evaluation of the rule starting with “OrgConfigRule-pl-governance-rule….” Once the evaluation is complete, it will indicate that this VPC endpoint service is NOT COMPLIANT, whereas the service created in Step 1 will show as COMPLIANT.
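The following sketch shows one way to script the endpoint-creation steps (3 through 6) from the member account. The VPC, subnet, security group, and service name values are placeholders; for the not-approved partner service, the call fails with an authorization error caused by the SCP's explicit deny:

import boto3

# Run this with credentials for a member account inside the governed OU.
ec2 = boto3.client("ec2")

# Placeholders: a test VPC/subnet/security group in the member account and the
# service name of the PrivateLink service you are connecting to (internal service,
# AWS service such as AWS KMS, approved partner, or the not-approved partner).
response = ec2.create_vpc_endpoint(
    VpcId="<vpc-id>",
    VpcEndpointType="Interface",
    ServiceName="<privatelink-service-name>",
    SubnetIds=["<subnet-id>"],
    SecurityGroupIds=["<security-group-id>"],
)
print(response["VpcEndpoint"]["State"])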

Considerations

  • The solution described here takes the approach of allowing all VPC endpoint connections from within a customer’s organization to the PrivateLink services in specified accounts and detecting and notifying all external ones. This can be modified based on your specific use cases and requirements.
  • The solution uses AWS Config rules that are applied to specific accounts of your organization, even though the solution is applied at an OU level. The AWS Config rules created in this solution are scoped to evaluate VPC endpoint services and incur charges accordingly. Refer to the AWS Config pricing page to understand usage-based pricing for the service.
  • Other services, such as AWS Lambda and Amazon EventBridge, also incur usage-based charges. Verify that the solution's resources are deleted when you finish testing to prevent unnecessary charges.
  • SCPs only affect member accounts. They do not apply to the management account, so actions denied through an SCP in member accounts will still be allowed in the management account.

Cleanup

You can delete the solution by following these steps to avoid unnecessary charges:

  • Delete the CloudFormation stack created as part of Step 4 of the prerequisites.
  • Delete the CloudFormation stack of the main solution deployed in the management account as part of the Deploying the solution section.
  • Delete the CloudFormation stack created as part of Step 1 of Testing the solution.

Summary

As customers adopt AWS PrivateLink throughout their environment, the mechanisms discussed in this post provide a way for administrators to govern and secure their PrivateLink services at scale. This approach can help you create a scalable solution where interconnections are aligned to the organization’s guidelines and security requirements. While this solution presents an approach to governance, customers can tailor this solution to their unique organizational requirements.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Anandprasanna Gaitonde

Anand is a Principal Solutions Architect at AWS, responsible for helping customers design and operate Well-Architected solutions to help them adopt the AWS Cloud successfully. He focuses on AWS networking and serverless technologies to design and develop solutions in the cloud across industry verticals. He holds a master of engineering in computer science and a postgraduate degree in software enterprise management.

Siva Devabakthini

Siva is a Senior Solutions Architect at AWS who covers hyperscale customers in the AWS Digital Native Business segment. He focuses on AWS security, data analytics, and artificial intelligence and machine learning (AI/ML) technologies to design and develop solutions in the cloud. Outside of work, Siva loves traveling, trying different cuisines, and being outdoors with his family.

Emmanuel Isimah

Emmanuel is a Senior Solutions Architect at AWS who covers hyperscale customers in the enterprise retail space. He has a background in networking, security, and containers. Emmanuel helps customers build and secure innovative cloud solutions, solving their business problems by using data-driven approaches. Emmanuel’s areas of depth include security and compliance, containers, and networking.

How to use AWS managed applications with IAM Identity Center

Post Syndicated from Liam Wadman original https://aws.amazon.com/blogs/security/how-to-use-aws-managed-applications-with-iam-identity-center/

AWS IAM Identity Center is the preferred way to provide workforce access to Amazon Web Services (AWS) accounts, and it also lets you provide workforce access to many AWS managed applications, such as Amazon Q Developer (formerly known as Amazon CodeWhisperer).

As we continue to release more AWS managed applications, customers have told us they want to onboard to IAM Identity Center to use AWS managed applications, but some aren’t ready to migrate their existing IAM federation for AWS account management to Identity Center.

In this blog post, I’ll show you how you can enable Identity Center and use AWS managed applications—such as Amazon Q Developer—without migrating existing IAM federation flows to Identity Center.

A recap on AWS managed applications and trusted identity propagation

Just before re:Invent 2023, AWS launched trusted identity propagation, a technology that allows you to use a user’s identity and groups when accessing AWS services. This allows you to assign permissions directly to users or groups, rather than model entitlements in AWS Identity and Access Management (IAM). This makes permissions management simpler for users. For example, with trusted identity propagation, you can grant users and groups access to specific Amazon Redshift clusters without modeling all possible unique combinations of permissions in IAM. Trusted identity propagation is available today for Redshift and Amazon Simple Storage Service (Amazon S3), with more services and features coming over time.

In 2023, we released Amazon Q Developer, which is integrated with IAM Identity Center and generally available as an AWS managed application. When you're using Amazon Q Developer outside of AWS in integrated development environments (IDEs) such as Microsoft Visual Studio Code, Identity Center is used to sign in to Amazon Q Developer.

Amazon Q Developer is one of many AWS managed applications that are integrated with the OAuth 2.0 functionality of IAM Identity Center, and it doesn’t use IAM credentials to access the Q Developer service from within your IDEs. AWS managed applications and trusted identity propagation don’t require you to use the permission sets feature of Identity Center and instead use OpenID Connect to grant your workforce access to AWS applications and features.

IAM Identity Center for AWS application access only

In the following section, we use IAM Identity Center to sign in to Amazon Q Developer as an example of an AWS managed application.

Prerequisites

Step 1: Enable an organization instance of IAM Identity Center

To begin, you must enable an organization instance of IAM Identity Center. While it’s possible to use IAM Identity Center without an AWS Organizations organization, we generally recommend that customers operate with such an organization.

The IAM Identity Center documentation provides the steps to enable an organizational instance of IAM Identity Center, as well as prerequisites and considerations. One consideration I would emphasize here is the identity source. We recommend, wherever possible, that you integrate with an external identity provider (IdP), because this provides the most flexibility and allows you to take advantage of the advanced security features of modern identity platforms.

IAM Identity Center is available at no additional cost.

Note: In late 2023, AWS launched account instances for IAM Identity Center. Account instances allow you to create additional Identity Center instances within member accounts of your organization. Wherever possible, we recommend that customers use an organization instance of IAM Identity Center to give them a centralized place to manage their identities and permissions. AWS recommends account instances when you want to perform a proof of concept using Identity Center, when there isn’t a central IdP or directory that contains all the identities you want to use on AWS and you want to use AWS managed applications with distinct directories, or when your AWS account is a member of an organization in AWS Organizations that is managed by another party and you don’t have access to set up an organization instance.

Step 2: Set up your IdP and synchronize identities and groups

After you’ve enabled your IAM Identity Center instance, you need to set up your instance to work with your chosen IdP and synchronize your identities and groups. The IAM Identity Center documentation includes examples of how to do this with many popular IdPs.

After your identity source is connected, IAM Identity Center can act as the single source of identity and authentication for AWS managed applications, bridging your external identity source and AWS managed applications. You don’t have to create a bespoke relationship between each AWS application and your IdP, and you have a single place to manage user permissions.

Step 3: Set up delegated administration for IAM Identity Center

As a best practice, we recommend that you only access the management account of your AWS Organizations organization when absolutely necessary. IAM Identity Center supports delegated administration, which allows you to manage Identity Center from a member account of your organization.

To set up delegated administration

  1. Go to the AWS Management Console and navigate to IAM Identity Center.
  2. In the left navigation pane, select Settings. Then select the Management tab and choose Register account.
  3. From the menu that follows, select the AWS account that will be used for delegated administration for IAM Identity Center. Ideally, this member account is dedicated solely to the purpose of administrating IAM Identity Center and is only accessible to users who are responsible for maintaining IAM Identity Center.

Figure 1: Set up delegated administration
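If you manage your organization configuration as code, the same registration can be performed with the AWS Organizations API. The following is a minimal sketch; it assumes sso.amazonaws.com as the service principal for IAM Identity Center and uses a placeholder account ID:

import boto3

# Run from the management account. The account ID is a placeholder for the member
# account you want to use as the Identity Center delegated administrator; the
# sso.amazonaws.com service principal corresponds to IAM Identity Center.
organizations = boto3.client("organizations")

organizations.register_delegated_administrator(
    AccountId="<delegated-admin-account-id>",
    ServicePrincipal="sso.amazonaws.com",
)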

Step 4: Configure Amazon Q Developer

You now have IAM Identity Center set up with the users and groups from your directory, and you’re ready to configure AWS managed applications with IAM Identity Center. From a member account within your organization, you can now enable Amazon Q Developer. This can be any member account in your organization and should not be the one where you set up delegated administration of IAM Identity Center, or the management account.

Note: If you’re doing this step immediately after configuring IAM Identity Center with an external IdP with SCIM synchronization, be aware that the users and groups from your external IdP might not yet be synchronized to Identity Center by your external IdP. Identity Center updates user information and group membership as soon as the data is received from your external IdP. How long it takes to finish synchronizing after the data is received depends on the number of users and groups being synchronized to Identity Center.

To enable Amazon Q Developer

  1. Open the Amazon Q Developer console. This will take you to the setup for Amazon Q Developer.

    Figure 2: Open the Amazon Q Developer console

  2. Choose Subscribe to Amazon Q.

    Figure 3: The Amazon Q developer console

  3. You’ll be taken to the Amazon Q console. Choose Subscribe to subscribe to Amazon Q Developer Pro.

    Figure 4: Subscribe to Amazon Q Developer Pro

  4. After choosing Subscribe, you will be prompted to select users and groups you want to enroll for Amazon Q Developer. Select the users and groups you want and then choose Assign.

    Figure 5: Assign user and group access to Amazon Q Developer

After you perform these steps, the setup of Amazon Q Developer as an AWS managed application is complete, and you can now use Amazon Q Developer. No additional configuration is required within your external IdP or on-premises Microsoft Active Directory, and no additional user profiles have to be created or synchronized to Amazon Q Developer.

Note: There are charges associated with using the Amazon Q Developer service.

Step 5: Set up Amazon Q Developer in the IDE

Now that Amazon Q Developer is configured, users and groups that you have granted access to can use Amazon Q Developer from their supported IDE.

In their IDE, a user can sign in to Amazon Q Developer by entering the start URL and AWS Region and choosing Sign in. Figure 6 shows what this looks like in Visual Studio Code. The Amazon Q extension for Visual Studio Code is available to download within Visual Studio Code.

Figure 6: Signing in to the Amazon Q Developer extension in Visual Studio Code

After choosing Use with Pro license, and entering their Identity Center’s start URL and Region, the user will be directed to authenticate with IAM Identity Center and grant the Amazon Q Developer application access to use the Amazon Q Developer service.

When this is successful, the user will have the Amazon Q Developer functionality available in their IDE. This was achieved without migrating existing federation or AWS account access patterns to IAM Identity Center.

Clean up

If you don’t wish to continue using IAM Identity Center or Amazon Q Developer, you can delete the Amazon Q Developer profile and the Identity Center instance from their respective consoles in the AWS accounts where they are deployed. Deleting your Identity Center instance won’t change existing federation or AWS account access that is not done through IAM Identity Center.

Conclusion

In this post, we talked about some recent significant launches of AWS managed applications and features that integrate with IAM Identity Center and discussed how you can use these features without migrating your AWS account management to permission sets. We also showed how you can set up Amazon Q Developer with IAM Identity Center. While the example in this post uses Amazon Q Developer, the same approach and guidance applies to Amazon Q Business and other AWS managed applications integrated with Identity Center.

To learn more about the benefits and use cases of IAM Identity Center, visit the product page, and to learn more about Amazon Q Developer, visit the Amazon Q Developer product page.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.

Liam Wadman

Liam is a Senior Solutions Architect with the Identity Solutions team. When he’s not building exciting solutions on AWS or helping customers, he’s often found in the mountains of British Columbia on his mountain bike. Liam points out that you cannot spell LIAM without IAM.

How to use WhatsApp to send Amazon Cognito notification messages

Post Syndicated from Nideesh K T original https://aws.amazon.com/blogs/security/how-to-use-whatsapp-to-send-amazon-cognito-notification-messages/

While traditional channels like email and SMS remain important, businesses are increasingly exploring alternative messaging services to reach their customers more effectively. In recent years, WhatsApp has emerged as a simple and effective way to engage with users. According to Statista, WhatsApp is the most popular mobile messenger app worldwide and reached over two billion monthly active users as of January 2024.

Amazon Cognito lets you add user sign-up and authentication to your mobile and web applications. Among many other features, Cognito provides a custom SMS sender AWS Lambda trigger for using third-party providers to send notifications. In this post, we’ll be using WhatsApp as the third-party provider to send verification codes or multi-factor authentication (MFA) codes instead of SMS during Cognito user pool sign up.

Note: WhatsApp is a third-party service subject to additional terms and charges. Amazon Web Services (AWS) isn’t responsible for third-party services that you use to send messages with a custom SMS sender in Amazon Cognito.

Overview

By default, Amazon Cognito uses Amazon Simple Notification Service (Amazon SNS) for delivery of SMS text messages. Cognito also supports custom triggers that will allow you to invoke an AWS Lambda function to support additional providers such as WhatsApp.

The architecture shown in Figure 1 depicts how to use a custom SMS sender trigger and WhatsApp to send notifications. The steps are as follows:

  1. A user signs up to an Amazon Cognito user pool.
  2. Cognito invokes the custom SMS sender Lambda function and sends the user’s attributes, including the phone number and a one-time code to the Lambda function. This one-time code is encrypted using a custom symmetric encryption AWS Key Management Service (AWS KMS) key that you create.
  3. The Lambda function decrypts the one-time code using a Decrypt API call to your AWS KMS key.
  4. The Lambda function then obtains the WhatsApp access token from AWS Secrets Manager. The WhatsApp access token needs to be generated through Meta Business Settings (which are covered in the next section) and added to Secrets Manager. Lambda also parses the phone number, user attributes, and encrypted secrets.
  5. Lambda sends a POST API call to the WhatsApp API and WhatsApp delivers the verification code to the user as a message. The user can then use the verification code to verify their contact information and confirm the sign-up.

Figure 1: Custom SMS sender trigger flow
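To make the flow above concrete, here is a hedged sketch of what a custom SMS sender Lambda function for this pattern might look like; the function deployed by this solution's repository may differ. It assumes the Cognito custom SMS sender event shape, decrypts the code with the AWS Encryption SDK for Python (one reason the deployment builds dependencies in a Lambda-compatible container), and the environment variable names, Graph API version, and template component layout are placeholders that you should match to your own setup and approved template:

import base64
import json
import os
import urllib.request

import aws_encryption_sdk
import boto3
from aws_encryption_sdk import CommitmentPolicy

secrets = boto3.client("secretsmanager")

# Placeholders supplied through environment variables in this sketch
KEY_ARN = os.environ["KMS_KEY_ARN"]              # custom symmetric KMS key ARN
SECRET_NAME = os.environ["SECRET_NAME"]          # WhatsAppAccessToken secret name
PHONE_NUMBER_ID = os.environ["PHONE_NUMBER_ID"]  # WhatsApp phone number ID

# Cognito encrypts the one-time code with the AWS Encryption SDK and your KMS key
crypto_client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_ALLOW_DECRYPT
)
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[KEY_ARN])


def handler(event, context):
    # Decrypt the one-time code sent by Cognito (base64-encoded ciphertext)
    plaintext, _header = crypto_client.decrypt(
        source=base64.b64decode(event["request"]["code"]),
        key_provider=key_provider,
    )
    code = plaintext.decode("utf-8")

    # The secret value was stored as "Bearer <WhatsApp access token>"
    auth_header = secrets.get_secret_value(SecretId=SECRET_NAME)["SecretString"]
    to_number = event["request"]["userAttributes"]["phone_number"]

    # Send the code with the otp_message authentication template created earlier.
    # The components below must match the structure of your approved template.
    payload = {
        "messaging_product": "whatsapp",
        "to": to_number,
        "type": "template",
        "template": {
            "name": "otp_message",
            "language": {"code": "en_US"},
            "components": [
                {"type": "body", "parameters": [{"type": "text", "text": code}]},
                {
                    "type": "button",
                    "sub_type": "url",
                    "index": "0",
                    "parameters": [{"type": "text", "text": code}],
                },
            ],
        },
    }
    request = urllib.request.Request(
        f"https://graph.facebook.com/v18.0/{PHONE_NUMBER_ID}/messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": auth_header, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())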

Prerequisites

Implementation

In the next steps, we look at how to create a Meta app, create a new system user, get the WhatsApp access token and create the template to send the WhatsApp token.

Create and configure an app for WhatsApp communication

To get started, create a Meta app with WhatsApp added to it, along with the customer phone number that will be used to test.

To create and configure an app

  1. Open the Meta for Developers console, choose My Apps and then choose Create App (or choose an existing Business type app and skip to step 4).
  2. Select Other, choose Next, select Business as the app type, and then choose Next.
  3. Enter an App name, App contact email, choose whether or not to attach a Business portfolio and choose Create app.
  4. Open the app Dashboard and in the Add product to your app section, under WhatsApp, choose Set up.
  5. Create or select an existing Meta business portfolio and choose Continue.
  6. In the left navigation pane, under WhatsApp, choose API Setup.
  7. Under Send and receive messages, take a note of the Phone number ID, which will be needed in the AWS CDK template later.
  8. Under To, add the customer phone number you want to use for testing. Follow the instructions to add and verify the phone number.

Note: You must have WhatsApp registered with the number and the WhatsApp client installed on your mobile device.

Create a user for accessing WhatsApp

Create a system user in Meta’s Business Manager and assign it to the app created in the previous step. The access tokens generated for this user will be used to make the WhatsApp API calls.

To create a user

  1. Open Meta’s Business Manager and select the business you created or associated your application with earlier from the dropdown menu under Business settings.
  2. Under Users, select System users and then choose Add to create a new system user.
  3. Enter a name for the System Username and set their role as Admin and choose Create system user.
  4. Choose Assign assets.
  5. From the Select asset type list, select Apps. Under Select assets, select your WhatsApp application’s name. Under Partial access, turn on the Test app option for the user. Choose Save Changes and then choose Done.
  6. Choose Generate New Token, select the WhatsApp application created earlier, and leave the default 60 days as the token expiration. Under Permissions, select whatsapp_business_messaging and whatsapp_business_management, and then choose Generate Token at the bottom.
  7. Copy and save your access token. You will need this for the AWS CDK template later. Choose OK. For more details on creating the access token, see WhatsApp’s Business Management API Get Started guide.

Create a template in WhatsApp

Create a template for the verification messages that will be sent by WhatsApp.

To create a template

  1. Open Meta’s WhatsApp Manager.
  2. On the left icon pane, under Account tools, choose Message template and then choose Create Template.
  3. Select Authentication as the category.
  4. For the Name, enter otp_message.
  5. For Languages, enter English.
  6. Choose Continue.
  7. In the next screen, select Copy code and choose Submit.

Note: It’s possible that Meta might change the process or the UI. See the Meta documentation for specific details.

For more information on WhatsApp templates, see Create and Manage Templates.

Create a Secrets Manager secret

Use the Secrets Manager console to create a Secrets Manager secret and set the secret to the WhatsApp access token.

To create a secret

  1. Open the AWS Management Console and go to Secrets Manager.

    Figure 2: Open the Secrets Manager console

  2. Choose Store a new secret.

    Figure 3: Store a new secret

  3. Under Choose a secret type, choose Other type of secret and under Key/value pairs, select the Plaintext tab and enter Bearer followed by the WhatsApp access token (Bearer <WhatsApp access token>).

    Figure 4: Add the secret

  4. For the encryption key, you can use either the AWS KMS key that Secrets Manager creates or a customer managed AWS KMS key that you create and then choose Next.
  5. Provide the secret name as the WhatsAppAccessToken, choose Next, and then choose Store to create the secret.
  6. Note the secret Amazon Resource Name (ARN) to use in later steps.

Deploy the solution

In this section, you clone the GitHub repository and deploy the stack to create the resources in your account.

To clone the repository

  1. Create a new directory, navigate to that directory in a terminal, and use the following command to clone the GitHub repository that has the Lambda and AWS CDK code (replace the placeholder with the repository's clone URL):
    git clone <repository URL>
  2. Change directory to the pattern directory:
    cd amazon-cognito-whatsapp-otp

To deploy the stack

  1. Configure the phone number ID obtained from WhatsApp, the secret name, secret ARN, and the Amazon Cognito user pool self-service sign-up option in the constants.ts file.

    Open the lib/constants.ts file and edit the fields. SELF_SIGNUP is the Boolean for the Amazon Cognito user pool self-service sign-up option; it must be set to true for this proof of concept, which allows public users to sign up.

    export const PHONE_NUMBER_ID = '<phone number ID>'; 
    export const SECRET_NAME = '<WhatsAppAccessToken>'; 
    export const SECRET_ARN = 'arn:aws:secretsmanager:<AWSRegion>:<AWS account ID>:secret:<WhatsAppAccessToken>';
    export const SELF_SIGNUP = true;

    Warning: If you activate user sign-up (enable self-registration) in your user pool, anyone on the internet can sign up for an account and sign in to your applications.

  2. Install the AWS CDK required dependencies by running the following command:
    npm install

  3. This project uses typescript as the client language for AWS CDK. Run the following command to compile typescript to JavaScript:
    npm run build

  4. From the command line, bootstrap the AWS CDK environment (if you have not already done so):
    cdk bootstrap <account number>/<AWS Region>

  5. Install and run Docker. We’re using the aws-lambda-python-alpha package in the AWS CDK code to build the Lambda deployment package. The deployment package installs the required modules in a Lambda compatible Docker container.
  6. Deploy the stack:
    cdk synth
    cdk deploy --all

Test the solution

Now that you’ve completed implementation, it’s time to test the solution by signing up a user on Amazon Cognito and confirming that the Lambda function is invoked and sends the verification code.

To test the solution

  1. Open the AWS CloudFormation console.
  2. Select the WhatsappOtpStack that was deployed through AWS CDK.
  3. On the Outputs tab, copy the value of cognitocustomotpsenderclientappid.
  4. Run the following AWS Command Line Interface (AWS CLI) command to sign up a new Amazon Cognito user, replacing the client ID with the cognitocustomotpsenderclientappid output value and providing your own username, password, email address, name, phone number, and AWS Region.
    aws cognito-idp sign-up --client-id <cognitocustomotpsenderclientappid> --username <TestUserPhoneNumber> --password <Password> --user-attributes Name="email",Value="<TestUserEmail>" Name="name",Value="<TestUserName>" Name="phone_number",Value="<TestPhoneNumber>" --region <AWS Region>

    Example:

    aws cognito-idp sign-up --client-id xxxxxxxxxxxxxx --username +12065550100 --password Test@654321 --user-attributes Name="email",Value="jane@example.com" Name="name",Value="Jane" Name="phone_number",Value="+12065550100" --region us-east-1

    Note: Password requirements are a minimum length of eight characters with at least one number, one lowercase letter, and one special character.

The new user should receive a message on WhatsApp with a verification code that they can use to complete their sign-up.
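
If you want to complete the sign-up from the command line as well, you can confirm the user with the code received on WhatsApp. This is a sketch; the client ID, username, and code are placeholders:

aws cognito-idp confirm-sign-up \
  --client-id <cognitocustomotpsenderclientappid> \
  --username <TestUserPhoneNumber> \
  --confirmation-code <verification code received on WhatsApp> \
  --region <AWS Region>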

Cleanup

  1. Run the following command to delete the resources that were created. It might take a few minutes for the CloudFormation stack to be deleted.
    cdk destroy --all

  2. Delete the secret WhatsAppAccessToken that was created from the Secrets Manager console.

Conclusion

In this post, we showed you how to use an alternative messaging platform such as WhatsApp to send notification messages from Amazon Cognito. This functionality is enabled through the Amazon Cognito custom SMS sender trigger, which invokes a Lambda function that has the custom code to send messages through the WhatsApp API. You can use the same method to use other third-party providers to send messages.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito re:Post or contact AWS Support.

Want more AWS Security news? Follow us on X.

Nideesh K T

Nideesh is an experienced IT professional with expertise in cloud computing and technical support. Nideesh has been working in the technology industry for 8 years. In his current role as a Sr. Cloud Support Engineer, Nideesh provides technical assistance and troubleshooting for cloud infrastructure issues. Outside of work, Nideesh enjoys staying active by going to the gym, playing sports, and spending time outdoors.

Reethi Joseph

Reethi is a Sr. Cloud Support Engineer at AWS with 7 years of experience specializing in serverless technologies. In her role, she helps customers architect and build solutions using AWS services. When not delving into the world of servers and generative AI, she spends her time trying to perfect her swimming strokes, traveling, trying new baking recipes, gardening, and watching movies.

How to enforce a security baseline for an AWS WAF ACL across your organization using AWS Firewall Manager

Post Syndicated from Omner Barajas original https://aws.amazon.com/blogs/security/how-to-enforce-a-security-baseline-for-an-aws-waf-acl-across-your-organization-using-aws-firewall-manager/

Most organizations prioritize protecting their web applications that are exposed to the internet. Using the AWS WAF service, you can create rules to control bot traffic, help prevent account takeover fraud, and block common threat patterns such as SQL injection or cross-site scripting (XSS). Further, for those customers managing multi-account environments, it is possible to enforce security baselines for AWS WAF access control lists (ACLs) across the whole organization by using AWS Firewall Manager.

A previous AWS Security Blog post explains how to create Firewall Manager policies to deploy AWS WAF ACLs across multiple accounts. In addition, this AWS Architecture Blog post goes deeper, describing operating models for web application security governance in Amazon Web Services (AWS). This post shows how, in a central or hybrid operating model, you can create a policy that enforces a security baseline in your AWS WAF ACLs while still allowing application administrators or developers to apply specific ACL rules for their particular use cases.

Centrally manage firewall policies

It’s a common scenario that a security team in an organization wants to implement a security baseline, consisting of a set of rules, across multiple applications that are distributed in multiple accounts. Those rules are not always applicable for all workloads because different applications might have different needs for protection or exposure to the public. Furthermore, sometimes local teams responsible for managing applications have permissions to create their own rules and decide not to follow policies mandated by the organization.

AWS Firewall Manager solves this problem by allowing you to centrally configure and manage firewall policies, deploy preconfigured AWS WAF rules across your organization, and automatically enforce them in existing and newly created resources.

The following architecture diagram describes how you can design a Firewall Manager policy from a central security account, establishing a security baseline that is enforced in the other member accounts in your organization. To do so, you create a managed AWS WAF ACL in which the first and last rule groups are not editable, while a custom rule group in the middle can be modified by administrators of member accounts.

Figure 1: AWS Firewall Manager enforcing security baseline for AWS WAF

Firewall Manager delegated administrators

At the time of writing this post, Firewall Manager supports up to 10 administrators, who can manage firewall resources in your organization according to scope conditions. For example, you can define an administrator for specific accounts or even a complete organizational unit (OU), AWS Region, or policy type. Using this feature, you can enforce the principle of least privilege and assign administrators to enforce security baselines for your AWS WAF ACL rules across your organization in a more granular way. This delegation needs to be completed from the AWS Organizations management account, as shown in Figure 2.

Figure 2: AWS Firewall Manager administrator account delegation

Firewall Manager policies

A Firewall Manager policy contains the rule groups that will be applied to your protected resources. The service creates a web ACL in each account where the policy is enforced. Account administrators can add rules or rule groups to the resulting web ACL in addition to the rule groups defined by the Firewall Manager policy.

Rule groups

AWS WAF ACLs that are managed by Firewall Manager policies contain three sets of rules, which are evaluated at different priority levels in the ACL. AWS WAF evaluates rule groups in the following order:

  1. Rule groups that are defined in the Firewall Manager policy with the highest priority
  2. Rules that are defined by the account administrator in the web ACL after the first rule group
  3. Rule groups that are defined in Firewall Manager to be evaluated at the end

Within each rule set, AWS WAF evaluates rules according to their priority settings, from the lowest number up, until it either finds a match that terminates the evaluation or exhausts all of the rules.

Security baseline policy

Figure 3 shows an example of a Firewall Manager policy that will serve as the security web ACL baseline across your organization. This policy should be created in a delegated administrator account and enforced across all or specific accounts in your organization where the administrator has permissions. Refer to the service documentation for additional guidance on setting up this type of policy.

Figure 3: AWS Firewall Manager policy rules acting as the security baseline

First rule group

The first rule group in the policy will contain the following:

  • Organization-level blocked list – Known bad IP addresses by organization.
  • AWS IP reputation list – Recommended AWS managed rules for IP addresses with a bad reputation.
  • AWS Anonymous IP list – Recommended AWS managed rules for anonymous IP addresses.
  • Organization-level rate limit – A high-level rate limit defined by the organization.

Last rule group

The last rule group in the policy will contain the following:

  • Organization-level allowed list – Even if these are well-known IP addresses, they still need to be evaluated against the set of rules enforced by the organization and the specific rules per application. If a trusted IP address is spoofed, it might hide the real source identity and bypass AWS WAF rules.
  • AWS bot control – Recommended if you want to enforce bot control across your organization or a set of accounts managed by an administrator.

This configuration will allow individual account administrators to define and include their own rules to protect applications based on specific use cases and the expected number of requests.

When designing your own security baselines, take into consideration that some managed rules, such as bot control, might have additional cost, and enforcing them across your organization would increase the overall cost of the service.

Policy scope

The policy scope for your security baseline defines where the policy applies. It can apply to all accounts and resources in your organization or just a subset of accounts and resources. Based on the settings selected, Firewall Manager applies the policy to accounts in scope by using the following options:

  1. All accounts in your organization
  2. Only a specific list of accounts and organization units
  3. All accounts and OUs except a specific list of those to exclude

On the other hand, when selecting the scope for resources, you can use the following options:

  1. All resources
  2. Resources that have all of the specified tags
  3. All resources except those that have all the specified tags

For delegated administrators, scope definition will apply only for accounts, Regions, or OUs defined during the delegation process. Figure 4 shows an example of the scope definition for a policy.

Figure 4: Firewall Manager scope definition
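
If you want to codify a baseline policy like this rather than use the console, the following AWS CLI sketch shows the general shape of the request, run from a Firewall Manager administrator account. The policy name, account IDs, and resource type are placeholders, and the ManagedServiceData string is a minimal example with a single first rule group; see the Firewall Manager documentation for the full WAFV2 schema, including how to reference your own first and last rule groups.

aws fms put-policy --policy '{
  "PolicyName": "org-waf-baseline",
  "SecurityServicePolicyData": {
    "Type": "WAFV2",
    "ManagedServiceData": "{\"type\":\"WAFV2\",\"preProcessRuleGroups\":[{\"ruleGroupType\":\"ManagedRuleGroup\",\"managedRuleGroupIdentifier\":{\"vendorName\":\"AWS\",\"managedRuleGroupName\":\"AWSManagedRulesAmazonIpReputationList\",\"version\":null},\"overrideAction\":{\"type\":\"NONE\"},\"ruleGroupArn\":null,\"excludeRules\":[]}],\"postProcessRuleGroups\":[],\"defaultAction\":{\"type\":\"ALLOW\"},\"overrideCustomerWebACLAssociation\":false}"
  },
  "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
  "ExcludeResourceTags": false,
  "RemediationEnabled": true,
  "IncludeMap": { "ACCOUNT": ["111122223333", "444455556666"] }
}'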

Use case–specific rule groups

Figure 5 is an example of a specific use case, where AWS WAF administrators in a member account within the Firewall Manager policy scope want to protect their web application by using the following rules.

Figure 5: Web ACL managed by Firewall Manager containing rules in a member account

Middle rule group

The middle rule group is configured in each account within the ACL deployed by Firewall Manager. The examples in Figure 5 are rules that apply protection specific to the application that the ACL is assigned to:

  • App-level blocked list – Known IP addresses blocked by the administrator.
  • App-level rate limit – The rate limit supported by the application.
  • Core rule set – The recommended rule set, focused on OWASP Top Ten vulnerabilities.
  • Technology-specific protection – An example for PHP applications.
  • App-level allowed list – Well-known IP addresses that still need to be evaluated against some rules but bypass others, such as fraud prevention.
  • Account takeover prevention – This managed rule needs specific configuration per application to work as expected. However, it is recommended that you use it after the bot control managed rule to optimize cost. Take that into consideration when building your own security baseline.

This rule group has second priority, between the first and the last rule groups that come from the Firewall Manager policy. This configuration gives account administrators the ability to design a set of rules that covers the specific use case for their application, and also the possibility to override rules that are evaluated at a lower priority (the last rule group). For example, having a higher rate limit in the app-level rule than in the org-level rule would have no impact on the traffic being filtered, because the org-level rule in the first group of the policy takes priority. However, more granular bot control rules at the app level will supersede the org-level rules contained in the last group of the policy. Take that logic into consideration when you decide which rules need to be in the first and last groups of your Firewall Manager policies.

Recommended approach for testing

Before you deploy your web ACL implementation for production, test and tune it in a staging or testing environment until you are comfortable with the potential impact on your traffic. Then, test and tune the rules in count mode with your production traffic before enabling them.

  1. Prepare the environment for testing:
    1. Enable logging and web request sampling for your ACL.
    2. Set the protection to count mode.
    3. Associate the ACL with a resource.
  2. Monitor and tune in the test environment:
    1. Monitor traffic and rule matches by using logs, metrics, the dashboard, or sampled requests (a CLI example for retrieving sampled requests follows this list).
    2. Configure mitigation rules such as false positive, matching, scope-down, and label match.
  3. Enable protection in production:
    1. Remove any additional rules that are no longer needed.
    2. Enable rules in production accounts.
    3. Closely monitor your application behavior to be sure requests are being handled as expected.
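
For step 2.1, one way to review rule matches from the command line is to pull a window of sampled requests for a rule's metric. The web ACL ARN, metric name, and time window below are placeholders; the scope is REGIONAL for resources such as Application Load Balancers and API Gateway stages:

aws wafv2 get-sampled-requests \
  --web-acl-arn <web ACL ARN> \
  --rule-metric-name <rule metric name> \
  --scope REGIONAL \
  --time-window StartTime=2024-06-10T00:00:00Z,EndTime=2024-06-10T03:00:00Z \
  --max-items 100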

Cleanup

To avoid unexpected charges in your accounts, delete any unnecessary policies and resources. You can do that from the console by following these steps.

  1. On the Firewall Manager policies page, choose the radio button next to the policy name, and then choose Delete.
  2. In the Delete confirmation box, select Delete all policy resources, and then choose Delete again.

Firewall Manager removes the policy and any associated resources, such as web ACLs, that it created in your accounts. The changes might take a few minutes to propagate to all accounts.
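
If you prefer the AWS CLI, the equivalent cleanup deletes the policy along with the resources it created; the policy ID is a placeholder that you can retrieve with list-policies:

aws fms list-policies
aws fms delete-policy --policy-id <policy ID> --delete-all-policy-resources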

Conclusion

By using Firewall Manager, you can take advantage of native cloud features to enforce security baseline configurations for your AWS WAF rules in a multi-account environment across your organization. You can centrally design policies with broad rule groups to protect workloads from a high-level perspective, while allowing application administrators to design custom rules that protect their web applications against specific threats, such as the OWASP Top Ten or technology-related vulnerabilities.

The examples provided in this post can be further customized and adapted to align with your organization’s needs. Design policies to comply with security requirements and specific use cases to protect your workloads.

If you want to learn more, visit the Automations for AWS Firewall Manager webpage, which provides a solution with preset rules to create a quick security baseline to protect against distributed denial of service (DDoS).

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.

Omner Barajas

Omner is a senior security specialist solutions architect based in Mexico, supporting customers in LATAM. He usually collaborates with account teams to help clients accelerate cloud adoption and improve security posture for their workloads, resolving complex technical challenges related to cybersecurity and compliance with international standards and regulations.

How Amazon Security Lake is helping customers simplify security data management for proactive threat analysis

Post Syndicated from Nisha Amthul original https://aws.amazon.com/blogs/security/how-amazon-security-lake-is-helping-customers-simplify-security-data-management-for-proactive-threat-analysis/

In this post, we explore how Amazon Web Services (AWS) customers can use Amazon Security Lake to efficiently collect, query, and centralize logs on AWS. We also discuss new use cases for Security Lake, such as applying generative AI to Security Lake data for threat hunting and incident response, and we share the latest service enhancements and developments from our growing landscape. Security Lake centralizes security data from AWS environments, software as a service (SaaS) providers, and on-premises and cloud sources into a purpose-built data lake that is stored in your AWS account. Using Open Cybersecurity Schema Framework (OCSF) support, Security Lake normalizes and combines security data from AWS and a broad range of third-party data sources. This helps provide your security team with the ability to investigate and respond to security events and analyze possible threats within your environment, which can facilitate timely responses and help to improve your security across multicloud and hybrid environments.

One year ago, AWS embarked on a mission driven by the growing customer need to revolutionize how security professionals centralize, optimize, normalize, and analyze their security data. As we celebrate the one-year general availability milestone of Amazon Security Lake, we’re excited to reflect on the journey and showcase how customers are using the service, yielding both productivity and cost benefits, while maintaining ownership of their data.

Figure 1 shows how Security Lake works, step by step. For more details, see What is Amazon Security Lake?

Figure 1: How Security Lake works

Customer use cases

In this section, we highlight how some of our customers have found the most value with Security Lake and how you can use Security Lake in your organization.

Simplify the centralization of security data management across hybrid environments to enhance security analytics

Many customers use Security Lake to gather and analyze security data from various sources, including AWS, multicloud, and on-premises systems. By centralizing this data in a single location, organizations can streamline data collection and analysis, help eliminate data silos, and improve cross-environment analysis. This enhanced visibility and efficiency allows security teams to respond more effectively to security events. With Security Lake, customers simplify data gathering and reduce the burden of data retention and extract, transform, and load (ETL) processes with key AWS data sources.

For example, Interpublic Group (IPG), an advertising company, uses Security Lake to gain a comprehensive, organization-wide grasp of their security posture across hybrid environments. Watch this video from re:Inforce 2023 to understand how IPG streamlined their security operations.

Before adopting Security Lake, the IPG team had to work through the challenge of managing diverse log data sources. This involved translating and transforming the data, as well as reconciling complex elements like IP addresses. However, with the implementation of Security Lake, IPG was able to access previously unavailable log sources. This enabled them to consolidate and effectively analyze security-related data. The use of Security Lake has empowered IPG to gain comprehensive insight into their security landscape, resulting in a significant improvement in their overall security posture.

“We can achieve a more complete, organization-wide understanding of our security posture across hybrid environments. We could quickly create a security data lake that centralized security-related data from AWS and third-party sources,” — Troy Wilkinson, Global CISO at Interpublic Group

Streamline incident investigation and reduce mean time to respond

As organizations expand their cloud presence, they need to gather information from diverse sources. Security Lake automates the centralization of security data from AWS environments and third-party logging sources, including firewall logs, storage security, and threat intelligence signals. This removes the need for custom, one-off security consolidation pipelines. The centralized data in Security Lake streamlines security investigations, making security management and investigations simpler. Whether you’re in operations or a security analyst, dealing with multiple applications and scattered data can be tough. Security Lake makes it simpler by centralizing the data, reducing the need for scattered systems. Plus, with data stored in your Amazon Simple Storage Service (Amazon S3) account, Security Lake lets you store a lot of historical data, which helps when looking back for patterns or anomalies indicative of security issues.

For example, SEEK, an Australian online employment marketplace, uses Security Lake to streamline incident investigations and reduce mean time to respond (MTTR). Watch this video from re:Invent 2023 to understand how SEEK improved its incident response. Prior to using Security Lake, SEEK had to rely on a legacy VPC Flow Logs system to identify possible indicators of compromise, which took a long time. With Security Lake, the company was able to more quickly identify a host in one of their accounts that was communicating with a malicious IP. Now, Security Lake has provided SEEK with swift access to comprehensive data across diverse environments, facilitating forensic investigations and enabling them to scale their security operations better.

Optimize your log retention strategy

Security Lake simplifies data management for customers who need to store large volumes of security logs to meet compliance requirements. It provides customizable retention settings and automated storage tiering to optimize storage costs and security analytics. It automatically partitions and converts incoming security data into a storage- and query-efficient Apache Parquet format. Security Lake uses the Apache Iceberg open table format to enhance query performance for your security analytics. Customers can now choose which logs to keep for compliance reasons, which logs to send to their analytics solutions for further analysis, and which logs to query in place for incident investigation use cases. Security Lake helps customers retain logs that were previously unfeasible to store and extend retention beyond what is typical within their security information and event management (SIEM) system.
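
As a rough illustration of querying logs in place, the following sketch uses the AWS CLI to run an Athena query against Security Lake's VPC flow table. The database and table names are assumptions based on Security Lake's default AWS Glue naming and will differ by Region and source version, and the Athena workgroup is assumed to have a query result location configured; adjust all of these to your environment.

# Hypothetical example: look for traffic to a known-bad IP in Security Lake's
# VPC flow data. Verify the database and table names in the Glue or Athena
# console before running this.
aws athena start-query-execution \
  --work-group primary \
  --query-execution-context Database=amazon_security_lake_glue_db_us_east_1 \
  --query-string "SELECT time, src_endpoint.ip AS source_ip, dst_endpoint.ip AS destination_ip FROM amazon_security_lake_table_us_east_1_vpc_flow_1_0 WHERE dst_endpoint.ip = '203.0.113.10' LIMIT 25"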

Figure 2 shows a section of the Security Lake activation page, which presents users with options to set roll-up AWS Regions and storage classes.

Figure 2: Security Lake activation page with options to select a roll-up Region and set storage classes

For example, Carrier, an intelligent climate and energy solutions company, relies on Security Lake to strengthen its security and governance practices. By using Security Lake, Carrier can adhere to compliance with industry standards and regulations, safeguarding its operations and optimizing its log retention.

“Amazon Security Lake simplifies the process of ingesting and analyzing all our security-related log and findings data into a single data lake, strengthening our enterprise-level security and governance practices. This streamlined approach has enhanced our ability to identify and address potential issues within our environment more effectively, enabling us to proactively implement mitigations based on insights derived from the service,” — Justin McDowell, Associate Director for Enterprise Cloud Services at Carrier

Proactive threat and vulnerability detection

Security Lake helps customers identify potential threats and vulnerabilities earlier in the development process by facilitating the correlation of security events across different data sources, allowing security analysts to identify complex attack patterns more effectively. This proactive approach shifts the responsibility for maintaining secure coding and infrastructure to earlier phases, enhancing overall security posture. Security Lake can also optimize security operations by automating safeguard protocols and involving engineering teams in decision-making, validation, and remediation processes. Also, by seamlessly integrating with security orchestration tools, Security Lake can help expedite responses to security incidents and preemptively help mitigate vulnerabilities before exploitation occurs.

For example, SPH Media, a Singapore media company, uses Security Lake to get better visibility into security activity across their organization, enabling proactive identification of potential threats and vulnerabilities.

Security Lake and generative AI for threat hunting and incident response practices

Centralizing security-related data allows organizations to efficiently analyze behaviors from systems and users across their environments, providing deeper insight into potential risks. With Security Lake, customers can centralize their security logs and have them stored in a standardized format using OCSF. OCSF implements a well-documented schema maintained at schema.ocsf.io, which generative AI can use to contextualize the structured data within Security Lake tables. Security personnel can then ask questions of the data using familiar security investigation terminology rather than needing data analytics skills to translate an otherwise simple question into complex SQL (Structured Query Language) queries. Building upon the previous processes, Amazon Bedrock then takes an organization’s natural language incident response playbooks and converts them into a sequence of queries capable of automating what would otherwise be manual investigation tasks.

“Organizations can benefit from using Amazon Security Lake to centralize and manage security data with the OCSF and Apache Parquet format. It simplifies the integration of disparate security log sources and findings, reducing complexity and cost. By applying generative AI to Security Lake data, SecOps can streamline security investigations, facilitate timely responses, and enhance their overall security,” — Phil Bues, Research Manager, Cloud Security at IDC

Enhance incident response workflow with contextual alerts

Incident responders commonly deal with a high volume of automatically generated alerts from their IT environment. Initially, the response team members sift through numerous log sources to uncover relevant details about affected resources and the causes of alerts, which adds time to the remediation process. This manual process also consumes valuable analyst time, particularly if the alert turns out to be a false positive.

To reduce this burden on incident responders, customers can use generative AI to perform initial investigations and generate human-readable context from alerts. Amazon QuickSight Q is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Incident responders can now combine these capabilities with data in Amazon Security Lake to begin their investigation and build detailed dashboards in minutes. This allows them to visually identify resources and activities that could potentially trigger alerts, giving incident responders a head start in their response efforts. The approach can also be used to validate alert information against alternative sources, enabling faster identification of false positives and allowing responders to focus on genuine security incidents.

As natural language processing (NLP) models evolve, they can also interpret incident response playbooks, which are often written in a human-readable, step-by-step format. Amazon Q Apps will enable incident responders to use natural language to quickly and securely build their own generative AI playbooks to automate their tasks. This automation can streamline incident response tasks such as creating tickets in the relevant ticketing system, logging bugs in the version control system that hosts application code, and tagging resources as non-compliant if they fail to adhere to the organization's IT security policy.

Incident investigation with automatic dashboard generation

By using Amazon QuickSight Q and data in Security Lake, incident responders can now generate customized visuals based on the specific characteristics and context of the incident they are investigating. The incident response team can start with a generic investigative question and automatically produce dashboards without writing SQL queries or learning the complexities of how to build a dashboard.

After building a topic in Amazon QuickSight Q, the investigator can either ask their own question or use the AI-generated questions that Q provides. In the example shown in Figure 3, the incident responder suspects a potential security incident and wants more information about create or update API calls in their environment, asking, "I'm looking for what accounts ran an update or create API call in the last week, can you display the API operation?"

Figure 3: AI Generated visual to begin the investigation

This automatically generated dashboard can be saved for later. From the dashboard, the investigator can focus on a single account with a right-click and get more information about the API calls performed by that account, as shown in Figure 4.

Figure 4: Investigating API calls for a single account

Using the centralized security data in Amazon Security Lake and Amazon QuickSight Q, incident responders can jump-start investigations and rapidly validate potential threats without having to write complex queries or manually construct dashboards.

Updates since GA launch

Security Lake makes it simpler to analyze security data, gain a more comprehensive understanding of security across your entire organization, and improve the protection of your workloads, applications, and data. Since the general availability release in 2023, we have made various updates to the service. It’s now available in 17 AWS Regions globally. To assist you in evaluating your current and future Security Lake usage and cost estimates, we’ve introduced a new usage page. If you’re currently in a 15-day free trial, your trial usage can serve as a reference for estimating post-trial costs. Accessing Security Lake usage and projected costs is as simple as logging into the Security Lake console.

We have released an integration with Amazon Detective that enables querying and retrieving logs stored in Security Lake. Detective begins pulling raw logs from Security Lake related to AWS CloudTrail management events and Amazon VPC Flow Logs. Security Lake enhances analytics performance with support for OCSF 1.1.0 and Apache Iceberg. Also, Security Lake has integrated several OCSF mapping enhancements, including OCSF Observables, and has adopted the latest version of the OCSF datetime profile for improved usability. Security Lake seamlessly centralizes and normalizes Amazon Elastic Kubernetes Service (Amazon EKS) audit logs, simplifying monitoring and investigation of potential suspicious activities in your Amazon EKS clusters, without requiring additional setup or affecting existing configurations.

Developments in Partner integrations

Over 100 sources of data are available in Security Lake from AWS and third-party sources. Customers can ingest logs into Security Lake directly from their sources. Security findings from over 50 partners are available through AWS Security Hub, and software as a service (SaaS) application logs from popular platforms such as Salesforce, Slack, and Smartsheet can be sent into Security Lake through AWS AppFabric. Since the GA launch, Security Lake has added 18 partner integrations, and we now offer over 70 direct partner integrations. New source integrations include AIShield by Bosch, Contrast Security, DataBahn, Monad, SailPoint, Sysdig, and Talon. New subscribers who have built integrations include Cyber Security Cloud, Devo, Elastic, Palo Alto Networks, Panther, Query.AI, Securonix, and Tego Cyber and new service partner offerings from Accenture, CMD Solutions, part of Mantel Group, Deloitte, Eviden, HOOP Cyber, Kyndryl, Kudelski Security, Infosys, Leidos, Megazone, PwC and Wipro.

Developments in the Open Cybersecurity Schema Framework

The OCSF community has grown over the last year to almost 200 participants across security-focused independent software vendors (ISVs), government agencies, educational institutions, and enterprises. Leading cybersecurity organizations have contributed to the OCSF schema, such as Tanium and Cisco Duo. Many existing Security Lake partners are adding support for OCSF version 1.1, including Confluent, Cribl, CyberArk, DataBahn, Datadog, Elastic, IBM, Netskope, Orca Security, Palo Alto Networks, Panther, Ripjar, Securonix, SentinelOne, SOC Prime, Sumo Logic, Splunk, Tanium, Tego Cyber, Torq, Trellix, Stellar Cyber, Query.AI, Swimlane, and more. Members of the OCSF community who want to validate their OCSF schema for Security Lake have access to a validation script on GitHub.

Get help from AWS Professional Services

The global team of experts in the AWS Professional Services organization can help customers realize their desired business outcomes with AWS. Our teams of data architects and security engineers collaborate with customer security, IT, and business leaders to develop enterprise solutions.

The AWS ProServe Amazon Security Lake Assessment offering is a complimentary workshop for customers. It entails a two-week interactive assessment, delving deep into customer use cases and creating a program roadmap to implement Security Lake alongside a suite of analytics solutions. Through a series of technical and strategic discussions, the AWS ProServe team analyzes use cases for data storage, security, search, visualization, analytics, AI/ML, and generative AI. The team then recommends a target future state architecture to achieve the customer’s security operations goals. At the end of the workshop, customers receive a draft architecture and implementation plan, along with infrastructure cost estimates, training, and other technical recommendations.

Summary

In this post, we showcased how customers use Security Lake to collect, query, and centralize logs on AWS. We discussed new applications for Security Lake and generative AI for threat hunting and incident response. We invite you to discover the benefits of using Security Lake through our 15-day free trial and share your feedback. To help you in getting started and building your first security data lake, we offer a range of resources including an infographic, eBook, demo videos, and webinars. There are many different use cases for Security Lake that can be tailored to suit your AWS environment. Join us at AWS re:Inforce 2024, our annual cloud security event, for more insights and firsthand experiences of how Security Lake can help you centralize, normalize, and optimize your security data.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.

Nisha Amthul

Nisha is a Senior Product Marketing Manager at AWS Security, specializing in detection and response solutions. She has a strong foundation in product management and product marketing within the domains of information security and data protection. When not at work, you’ll find her cake decorating, strength training, and chasing after her two energetic kiddos, embracing the joys of motherhood.

Ross Warren

Ross is a Senior Product SA at AWS for Amazon Security Lake based in Northern Virginia. Prior to his work at AWS, Ross’ areas of focus included cyber threat hunting and security operations. Outside of work, he likes to spend time with his family, bake bread, make sawdust and enjoy time outside.

Jonathan Garzon

Jonathan is a Principal Product Manager at AWS with a passion for building products with delightful customer experiences and solving complex problems. He has launched and managed products in various domains, including networking, cybersecurity, and data analytics. Outside of work, Jonathan enjoys spending time with friends and family, playing soccer, mountain biking, hiking, and playing the guitar.

AWS achieves Spain’s ENS High 311/2022 certification across 172 services

Post Syndicated from Daniel Fuertes original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-311-2022-certification-across-172-services/

Amazon Web Services (AWS) has recently renewed the Esquema Nacional de Seguridad (ENS) High certification, upgrading to the latest version regulated under Royal Decree 311/2022. The ENS establishes security standards that apply to government agencies and public organizations in Spain and service providers on which Spanish public services depend.

This security framework has gone through significant updates since the Royal Decree 3/2010 to the latest Royal Decree 311/2022 to adapt to evolving cybersecurity threats and technologies. The current scheme defines basic requirements and lists additional security reinforcements to meet the bar of the different security levels (Low, Medium, High).

Achieving the ENS High certification for the 311/2022 version underscores the AWS commitment to maintaining robust cybersecurity controls and highlights our proactive approach to cybersecurity.

We are happy to announce the addition of 14 services to the scope of our ENS certification, for a new total of 172 services in scope. The certification now covers 31 Regions. Some of the additional services in scope for ENS High include the following:

  • Amazon Bedrock – This fully managed service offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
  • Amazon EventBridge – Use this service to easily build loosely coupled, event-driven architectures. It creates point-to-point integrations between event producers and consumers without needing to write custom code or manage and provision servers.
  • AWS HealthOmics – This service helps healthcare and life science organizations and their software partners store, query, and analyze genomic, transcriptomic, and other omics data and then uses that data to generate insights to improve health.
  • AWS Signer – This is a fully managed code-signing service to ensure the trust and integrity of your code. AWS Signer manages the code-signing certificate’s public and private keys and enables central management of the code-signing lifecycle.
  • AWS Wickr – This service encrypts messages, calls, and files with a 256-bit end-to-end encryption protocol. Only the intended recipients and the customer organization can decrypt these communications, reducing the risk of adversary-in-the-middle attacks.

AWS achievement of the ENS High certification is verified by BDO Auditores S.L.P., which conducted an independent audit and confirmed that AWS continues to adhere to the confidentiality, integrity, and availability standards at its highest level as described in Royal Decree 311/2022.

AWS has also updated the existing eight Security configuration guidelines that map the ENS controls to the AWS Well-Architected Framework and provide guidance on the following topics: compliance profile, secure configuration, Prowler quick guide, hybrid connectivity, multi-account environments, Amazon WorkSpaces, incident response, and monitoring and governance. AWS has also supported Prowler in offering new functionality and including the latest ENS controls.

For more information about ENS High and the AWS Security configuration guidelines, see the AWS Compliance page Esquema Nacional de Seguridad High. To view the complete list of services included in the scope, see the AWS Services in Scope by Compliance Program – Esquema Nacional de Seguridad (ENS) page. You can download the ENS High Certificate from AWS Artifact in the AWS Management Console or from Esquema Nacional de Seguridad High.

As always, we are committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. If you have questions about the ENS program, reach out to your AWS account team or contact AWS Compliance.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Daniel Fuertes

Daniel is a security audit program manager at AWS based in Madrid, Spain. Daniel leads multiple security audits, attestations, and certification programs in Spain and other EMEA countries. Daniel has ten years of experience in security assurance and compliance, including previous experience as an auditor for the PCI DSS security framework. He also holds the CISSP, PCIP, and ISO 27001 Lead Auditor certifications.

Borja Larrumbide

Borja is a Security Assurance Manager for AWS in Spain and Portugal. He received a bachelor’s degree in Computer Science from Boston University (USA). Since then, he has worked at companies such as Microsoft and BBVA. Borja is a seasoned security assurance practitioner with many years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

AWS is issued a renewed certificate for the BIO Thema-uitwerking Clouddiensten with increased scope

Post Syndicated from Ka Yie Lee original https://aws.amazon.com/blogs/security/aws-is-issued-a-renewed-certificate-for-the-bio-thema-uitwerking-clouddiensten-with-increased-scope/

We’re pleased to announce that Amazon Web Services (AWS) demonstrated continuous compliance with the Baseline Informatiebeveiliging Overheid (BIO) Thema-uitwerking Clouddiensten while increasing the AWS services and AWS Regions in scope. This alignment with the BIO Thema-uitwerking Clouddiensten requirements demonstrates our commitment to adhere to the heightened expectations for cloud service providers.

AWS customers across the Dutch public sector can use AWS certified services with confidence, knowing that the AWS services listed in the certificate adhere to the strict requirements imposed on the consumption of cloud-based services.

Baseline Informatiebeveiliging Overheid (BIO)

The BIO framework is an information security framework that the four layers of the Dutch public sector are required to adhere to. This means that it’s mandatory for the Dutch central government, all provinces, municipalities, and regional water authorities to be compliant with the BIO framework.

To support AWS customers in demonstrating their compliance with the BIO framework, AWS developed a Landing Zone for the BIO framework. This Landing Zone for the BIO framework is a pre-configured AWS environment that includes a subset of the technical requirements of the BIO framework. It’s a helpful tool that provides a starting point from which customers can further build their own AWS environment.

For more information regarding the Landing Zone for the BIO framework, see the AWS Reference Guide for Dutch BIO Framework and BIO Theme-elaboration Cloud Services in AWS Artifact. You can also reach out to your AWS account team or contact AWS through the Contact Us page.

Baseline Informatiebeveiliging Overheid Thema-uitwerking Clouddiensten

In addition to the BIO framework, there’s another information security framework designed specifically for the use of cloud services, called BIO Thema-uitwerking Clouddiensten. The BIO Thema-uitwerking Clouddiensten is a guidance document for Dutch cloud service consumers to help them formulate controls and objectives when using cloud services. Consumers can consider it to be an additional control framework on top of the BIO framework.

AWS was evaluated by the monitoring body, EY CertifyPoint, in March 2024, and it was determined that AWS successfully demonstrated compliance for eight AWS services in total. In addition to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS), which were assessed as compliant in March 2023, Amazon Virtual Private Cloud (Amazon VPC), Amazon GuardDuty, AWS Lambda, AWS Config, and AWS CloudTrail have been added to the scope. The renewed Certificate of Compliance illustrating the compliance status of AWS and the assessment summary report from EY CertifyPoint are available on AWS Artifact. The certificate is available in Dutch and English.

For more information regarding the BIO Thema-uitwerking Clouddiensten, see the AWS Reference Guide for Dutch BIO Framework and BIO Theme-elaboration Cloud Services in AWS Artifact. You can also reach out to your AWS account team or contact AWS through the Contact Us page.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Ka Yie Lee

Ka Yie is a Security Assurance Specialist for the Benelux region at AWS, based in Amsterdam. She engages with regulators and industry groups in the Netherlands and Belgium. She also helps ensure that AWS addresses local information security frameworks. Ka Yie holds master’s degrees in Accounting, Auditing and Control, and Commercial and Company Law. She also holds professional certifications such as CISSP.

Gokhan Akyuz

Gokhan is a Security Audit Program Manager at AWS, based in Amsterdam. He leads security audits, attestations, and certification programs across Europe and the Middle East. He has 17 years of experience in IT and cybersecurity audits, IT risk management, and controls implementation in a wide range of industries.

Authorize API Gateway APIs using Amazon Verified Permissions and Amazon Cognito

Post Syndicated from Kevin Hakanson original https://aws.amazon.com/blogs/security/authorize-api-gateway-apis-using-amazon-verified-permissions-and-amazon-cognito/

Externalizing authorization logic for application APIs can yield multiple benefits for Amazon Web Services (AWS) customers. These benefits can include freeing up development teams to focus on application logic, simplifying application and resource access audits, and improving application security by using continual authorization. Amazon Verified Permissions is a scalable permissions management and fine-grained authorization service that you can use for externalizing application authorization. Along with controlling access to application resources, you can use Verified Permissions to restrict API access to authorized users by using Cedar policies. However, a key challenge in adopting an external authorization system like Verified Permissions is the effort involved in defining the policy logic and integrating with your API. This blog post shows how Verified Permissions accelerates the process of securing REST APIs that are hosted on Amazon API Gateway for Amazon Cognito customers.

Setting up API authorization using Amazon Verified Permissions

As a developer, there are several tasks you need to do in order to use Verified Permissions to store and evaluate policies that define which APIs a user is permitted to access. Although Verified Permissions enables you to decouple authorization logic from your application code, you may need to spend time up front integrating Verified Permissions with your applications. You may also need to spend time learning the Cedar policy language, defining a policy schema, and authoring policies that enforce access control on APIs. Lastly, you may need to spend additional time developing and testing the AWS Lambda authorizer function logic that builds the authorization request for Verified Permissions and enforces the authorization decision.

Getting started with the simplified wizard

Amazon Verified Permissions now includes a console-based wizard that you can use to quickly create building blocks to set up your application’s API Gateway to use Verified Permissions for authorization. Verified Permissions generates an authorization model based on your APIs and policies that allows only authorized Cognito groups access to your APIs. Additionally, it deploys a Lambda authorizer, which you attach to the APIs you want to secure. After the authorizer is attached, API requests are authorized by Verified Permissions. The generated Cedar policies and schema flatten the learning curve, yet allow you full control to modify and help you adhere to your security requirements.

Overview of sample application

In this blog post, we demonstrate how you can simplify the task of securing permissions to a sample application API by using the Verified Permissions console-based wizard. We use a sample pet store application which has two resources:

  1. PetStorePool – An Amazon Cognito user pool with users in one of three groups: customers, employees, and owners.
  2. PetStore – An Amazon API Gateway REST API derived from importing the PetStore example API and extended with a mock integration for administration. This mock integration returns a message with a URI path that uses {“statusCode”: 200} as the integration request and {“Message”: “User authorized for $context.path”} as the integration response.

The PetStore has the following four authorization requirements that allow access to the related resources. All other behaviors should be denied.

  1. Both authenticated and unauthenticated users are allowed to access the root URL.
    • GET /
  2. All authenticated users are allowed to get the list of pets, or get a pet by its identifier.
    • GET /pets
    • GET /pets/{petid}
  3. The employees and owners group are allowed to add new pets.
    • POST /pets
  4. Only the owners group is allowed to perform administration functions. These are defined using an API Gateway proxy resource that enables a single integration to implement a set of API resources.
    • ANY /admin/{proxy+}

Walkthrough

Verified Permissions includes a setup wizard that connects a Cognito user pool to an API Gateway REST API and secures resources based on Cognito group membership. In this section, we provide a walkthrough of the wizard that generates authorization building blocks for our sample application.

To set up API authorization based on Cognito groups

  1. On the Amazon Verified Permissions page in the AWS Management Console, choose Create a new policy store.
  2. On the Specify policy store details page under Starting options, select Set up with Cognito and API Gateway, and then choose Next.

    Figure 1: Starting options

  3. On the Import resources and actions page under API Gateway details, select the API and Deployment stage from the dropdown lists. (A REST API stage is a named reference to a deployment.) For this example, we selected the PetStore API and the demo stage.

    Figure 2: API Gateway and deployment stage

  4. Choose Import API to generate a Map of imported resources and actions. For our example, this list includes Action::”get /pets” for getting the list of pets, Action::”get /pets/{petId}” for getting a single pet, and Action::”post /pets” for adding a new pet. Choose Next.

    Figure 3: Map of imported resources and actions

  5. On the Choose identity source page, select an Amazon Cognito user pool (PetStorePool in our example). For Token type to pass to API, select a token type. For our example, we chose the default value, Access token, because Cognito recommends using the access token to authorize API operations. The additional claims available in an id token may support more fine-grained access control. For Client application validation, we also specified the default, to not validate that tokens match a configured app client ID. Consider validation when you have multiple user pool app clients configured with different permissions.

    Figure 4: Choose Cognito user pool as identity source

  6. Choose Next.
  7. On the Assign actions to groups page under Group selection, choose the Cognito user pool groups that can take actions in the application. This solution uses native Cognito group membership to control permissions. Because the customers group shown in Figure 5 is not used for access control, we deselected it, and it isn't included in the generated policies. Instead, access to get /pets and get /pets/{petId} is granted to all authenticated users by using a different authorizer that we define later in this post.

    Figure 5: Assign actions to groups

  8. For each of the groups, choose which actions are allowed. In our example, post /pets is the only action selected for the employees group. For the owners group, all of the /admin/{proxy+} actions are additionally selected. Choose Next.

    Figure 6: Groups employees and owners

  9. On the Deploy app integration page, review the API Gateway Integration details. Choose Create policy store.

    Figure 7: API Gateway integration

  10. On the Create policy store summary page, review the progress of the setup. Choose Check deployment to check the progress of the Lambda authorizer.

    Figure 8: Create policy store

The setup wizard deployed a CloudFormation stack with a Lambda authorizer. This authorizes access to the API Gateway resources for the employees and owners groups. For the resources that should be authorized for all authenticated users, a separate Cognito User Pool authorizer is required. You can use the following AWS CLI apigateway create-authorizer command to create the authorizer.

aws apigateway create-authorizer \
--rest-api-id wrma51eup0 \
--name "Cognito-PetStorePool" \
--type COGNITO_USER_POOLS \
--identity-source "method.request.header.Authorization" \
--provider-arns "arn:aws:cognito-idp:us-west-2:000000000000:userpool/us-west-2_iwWG5nyux"

After the CloudFormation stack deployment completes and the second Cognito authorizer is created, there are two authorizers that can be attached to PetStore API resources, as shown in Figure 9.

Figure 9: PetStore API Authorizers

In Figure 9, Cognito-PetStorePool is a Cognito user pool authorizer. Because this example uses an access token, an authorization scope (for example, a custom scope like petstore/api) is specified when attached to the GET /pets and GET /pets/{petId} resources.

AVPAuthorizer-XXX is a request parameter-based Lambda authorizer, which determines the caller’s identity from the configured identity sources. In Figure 9, these sources are Authorization (Header), httpMethod (Context), and path (Context). This authorizer is attached to the POST /pets and ANY /admin/{proxy+} resources. Authorization caching is initially set at 120 seconds and can be configured using the API Gateway console.
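
If you prefer to attach an authorizer from the command line instead of the console, the following sketch updates the method that handles POST /pets. The resource ID and authorizer ID are placeholders that you can look up first, and the stage must be redeployed for the change to take effect.

# Look up the resource ID for /pets and the AVPAuthorizer ID
aws apigateway get-resources --rest-api-id wrma51eup0
aws apigateway get-authorizers --rest-api-id wrma51eup0

# Attach the Lambda (request) authorizer to the POST /pets method
aws apigateway update-method \
  --rest-api-id wrma51eup0 \
  --resource-id <resource ID for /pets> \
  --http-method POST \
  --patch-operations op=replace,path=/authorizationType,value=CUSTOM op=replace,path=/authorizerId,value=<AVPAuthorizer ID>

# Redeploy the stage so the change takes effect
aws apigateway create-deployment --rest-api-id wrma51eup0 --stage-name demo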

This combination of multiple authorizers and caching reduces the number of authorization requests to Verified Permissions. For API calls that are available to all authenticated users, using the Cognito-PetStorePool authorizer instead of a policy permitting the customers group helps avoid chargeable authorization requests to Verified Permissions. Applications where users initiate the same action multiple times or have a predictable sequence of actions will experience high cache hit rates. For repeated API calls that use the same token, AVPAuthorizer-XXX caching results in lower latency, fewer requests per second, and reduced costs from chargeable requests. Caching can delay the time between a policy update and its enforcement: updates to Verified Permissions policies are not realized until the cache entry times out or the FlushStageAuthorizersCache API is called.
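
If you update policies and don't want to wait for the cache to expire, you can flush the stage's authorizer cache. For the example API and stage used in this post, that looks like the following:

aws apigateway flush-stage-authorizers-cache \
  --rest-api-id wrma51eup0 \
  --stage-name demo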

Deployment architecture

Figure 10 illustrates the runtime architecture after you have used the Verified Permissions setup wizard to perform the deployment and configuration steps. After the users are authenticated with the Cognito PetStorePool, API calls to the PetStore API are authorized with the Cognito access token. Fine-grained authorization is performed by Verified Permissions using a Lambda authorizer. The wizard automatically created the following four items for you, which are labeled in Figure 10:

  1. A Verified Permissions policy store that is connected to a Cognito identity source.
  2. A Cedar schema that defines the User and UserGroup entities, and an action for each API Gateway resource.
  3. Cedar policies that assign permissions for the employees and owners groups to related actions.
  4. A Lambda authorizer that is configured on the API Gateway.

Figure 10: Architecture diagram after deployment

Verified Permissions uses the Cedar policy language to define fine-grained permissions. The default decision for an authorization response is “deny.” The Cedar policies that are generated by the setup wizard can determine an “allow” decision. The principal for each policy is a UserGroup entity with an entity ID format of {user pool id}|{group name}. The action IDs for each policy represent the set of selected API Gateway HTTP methods and resource paths. Note that post /pets is permitted for both employees and owners. The resource in the policy scope is unspecified, because the resource is implicitly the application.

permit (
    principal in PetStore::UserGroup::"us-west-2_iwWG5nyux|employees",
    action in [PetStore::Action::"post /pets"],
    resource
);

permit (
    principal in PetStore::UserGroup::"us-west-2_iwWG5nyux|owners",
    action in
        [PetStore::Action::"delete /admin/{proxy+}",
         PetStore::Action::"post /admin/{proxy+}",
         PetStore::Action::"get /admin/{proxy+}",
         PetStore::Action::"patch /admin/{proxy+}",
         PetStore::Action::"put /admin/{proxy+}",
         PetStore::Action::"post /pets"],
    resource
);

Validating API security

A set of terminal-based curl commands validate API security for both authorized and unauthorized users, by using different access tokens. For readability, a set of environment variables is used to represent the actual values. TOKEN_C, TOKEN_E, and TOKEN_O contain valid access tokens for respective users in the customers, employees, and owners groups. API_STAGE is the base URL for the PetStore API and demo stage that we selected earlier.
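How you obtain these tokens depends on your application. One option, shown in the following Python (Boto3) sketch, is the Cognito InitiateAuth API; this assumes a user pool app client with the USER_PASSWORD_AUTH flow enabled and a test user in the customers group, so adjust it to match your own setup.

import boto3

cognito = boto3.client("cognito-idp")
# Placeholder client ID and credentials for a test user in the customers group.
response = cognito.initiate_auth(
    ClientId="1example23456789",
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "customer1", "PASSWORD": "example-password"},
)
print(response["AuthenticationResult"]["AccessToken"])  # export as TOKEN_C

Repeat the call with users from the employees and owners groups to populate TOKEN_E and TOKEN_O.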

To test that an unauthenticated user is allowed to call the GET / root path (Requirement 1, as described in the Overview section of this post) but is not allowed to call the GET /pets API (Requirement 2), run the following curl commands. The Cognito-PetStorePool authorizer should return {"message":"Unauthorized"}.

curl -X GET ${API_STAGE}/
<html>
...Welcome to your Pet Store API...
</html>

curl -X GET ${API_STAGE}/pets
{"message":"Unauthorized"}

To test that an authenticated user is allowed to call the GET /pets API (Requirement 2) by using an access token (due to the Cognito-PetStorePool authorizer), run the following curl commands. The user should receive an error message when they try to call the POST /pets API (Requirement 3), because of the AVPAuthorizer. There are no Cedar policies defined for the customers group with the action post /pets.

curl -H "Authorization: Bearer ${TOKEN_C}" -X GET ${API_STAGE}/pets
[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

curl -H "Authorization: Bearer ${TOKEN_C}" -X POST ${API_STAGE}/pets
{"Message":"User is not authorized to access this resource with an explicit deny"}

The following commands will verify that a user in the employees group is allowed the post /pets action (Requirement 3).

curl -H "Authorization: Bearer ${TOKEN_E}" \
     -H "Content-Type: application/json" \
     -d '{"type": "dog","price": 249.99}' \
     -X POST ${API_STAGE}/pets
{
  "pet": {
    "type": "dog",
    "price": 249.99
  },
  "message": "success"
}

The following commands will verify that a user in the employees group is not authorized for the admin APIs, but a user in the owners group is allowed (Requirement 4).

curl -H "Authorization: Bearer ${TOKEN_E}" -X GET ${API_STAGE}/admin/curltest1
{"Message":"User is not authorized to access this resource with an explicit deny"} 

curl -H "Authorization: Bearer ${TOKEN_O}" -X GET ${API_STAGE}/admin/curltest1
{"Message": "User authorized for /demo/admin/curltest1"}

Try it yourself

How could this work with your user pool and REST API? Before you try out the solution, make sure that you have the following prerequisites in place, which are required by the Verified Permissions setup wizard:

  1. A Cognito user pool, along with Cognito groups that control authorization to the API endpoints.
  2. An API Gateway REST API in the same Region as the Cognito user pool.

As you review the resources generated by the solution, consider these authorization modeling topics:

  • Are access tokens or ID tokens preferable for your API? Are there custom claims on your tokens that you would use in future Cedar policies for fine-grained authorization?
  • Do multiple authorizers fit your model, or do you have an “all users” group for use in Cedar policies?
  • How might you extend the Cedar schema, allowing for new Cedar policies that include URL path parameters, such as {petId} from the example?

Conclusion

This post demonstrated how the Amazon Verified Permissions setup wizard provides you with a step-by-step process to build authorization logic for API Gateway REST APIs using Cognito user groups. The wizard generates a policy store, schema, and Cedar policies to manage access to API endpoints based on the specification of the APIs deployed. In addition, the wizard creates a Lambda authorizer that authorizes access to the API Gateway resources based on the configured Cognito groups. This removes the modeling effort required for initial configuration of API authorization logic and setup of Verified Permissions to receive permission requests. You can use the wizard to set up and test access controls to your APIs based on Cognito groups in non-production accounts. You can further extend the policy schema and policies to accommodate fine-grained or attribute-based access controls, based on specific requirements of the application, without making code changes.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Verified Permissions re:Post or contact AWS Support.

Kevin Hakanson

Kevin is a Senior Solutions Architect for AWS World Wide Public Sector, based in Minnesota. He works with EdTech and GovTech customers to ideate, design, validate, and launch products using cloud-native technologies and modern development practices. When not staring at a computer screen, he is probably staring at another screen, either watching TV or playing video games with his family.

Sowjanya Rajavaram

Sowjanya is a Senior Solutions Architect who specializes in identity and security solutions at AWS. Her career has been focused on helping customers of all sizes solve their identity and access management problems. She enjoys traveling and experiencing new cultures and food.

Using Amazon Verified Permissions to manage authorization for AWS IoT smart home applications

Post Syndicated from Rajat Mathur original https://aws.amazon.com/blogs/security/using-amazon-verified-permissions-to-manage-authorization-for-aws-iot-smart-thermostat-applications/

This blog post introduces how manufacturers and smart appliance consumers can use Amazon Verified Permissions to centrally manage permissions and fine-grained authorizations. Developers can offer more intuitive, user-friendly experiences by designing interfaces that align with user personas and multi-tenancy authorization strategies, which can lead to higher user satisfaction and adoption. Traditionally, implementing authorization logic using role based access control (RBAC) or attribute based access control (ABAC) within IoT applications can become complex as the number of connected devices and associated user roles grows. This often leads to an unmanageable increase in access rules that must be hard-coded into each application, requiring excessive compute power for evaluation. By using Verified Permissions, you can externalize the authorization logic using Cedar policy language, enabling you to define fine-grained permissions that combine RBAC and ABAC models. This decouples permissions from your application’s business logic, providing a centralized and scalable way to manage authorization while reducing development effort.

In this post, we walk you through a reference architecture that outlines an end-to-end smart thermostat application solution using AWS IoT Core, Verified Permissions, and other AWS services. We show you how to use Verified Permissions to build an authorization solution using Cedar policy language to define dynamic policy-based access controls for different user personas. The post includes a link to a GitHub repository that houses the code for the web dashboard and the Verified Permissions logic to control access to the solution APIs.

Solution overview

This solution consists of a smart thermostat IoT device and an AWS hosted web application using Verified Permissions for fine-grained access to various application APIs. For this use case, the AWS IoT Core device is being simulated by an AWS Cloud9 environment and communicates with the IoT service using AWS IoT Device SDK for Python. After being configured, the device connects to AWS IoT Core to receive commands and send messages to various MQTT topics.

As a general practice, when a user-facing IoT solution is implemented, the manufacturer performs administrative tasks such as:

  1. Embedding AWS Private Certificate Authority certificates into each IoT device (in this case a smart thermostat). Usually this is done on the assembly line and the certificates used to verify the IoT endpoints are burned into device memory along with the firmware.
  2. Creating an Amazon Cognito user pool that provides sign-up and sign-in options for web and mobile application users and hosts the authentication process.
  3. Creating policy stores and policy templates in Verified Permissions. Based on who signs up, the manufacturer creates policies with Verified Permissions to link each signed-up user to certain allowed resources or IoT devices.
  4. The mapping of user to device is stored in a datastore. For this solution, you’ll use an Amazon DynamoDB table to record the relationship.
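As a rough illustration of step 4, the user-to-device mapping could be written to DynamoDB with a few lines of Python (Boto3). The table and attribute names here are assumptions made for the example; the solution’s actual schema is defined in the GitHub repository.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("UserDeviceMapping")  # hypothetical table name
# Record that john_doe is the primary owner of Thermostat1.
table.put_item(
    Item={"username": "john_doe", "deviceId": "Thermostat1", "role": "primaryOwner"}
)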

The user who purchases the device (the primary device owner) performs the following tasks:

  1. Signs up on the manufacturer’s web application or mobile app and registers the IoT device by entering a unique serial number. The mapping between user details and the device serial number is stored in the datastore through an automated process that is initiated after sign-up and device claim.
  2. Connects the new device to an existing wireless network, which initiates a registration process to securely connect to AWS IoT Core services within the manufacturer’s account.
  3. Invites other users (such as guests, family members, or the power company) through a referral, invitation link, or a designated OAuth process.
  4. Assigns roles, and therefore permissions, to the other users.
     
Figure 1: Sample smart home application architecture built using AWS services

Figure 1 depicts the solution as three logical components:

  1. The first component depicts device operations through AWS IoT Core. The smart thermostat is on site; it communicates with AWS IoT Core, and its state is managed through the AWS IoT Device Shadow service.
  2. The second component depicts the web application, which is the application interface that customers use. It’s a ReactJS-backed single page application deployed using AWS Amplify.
  3. The third component shows the backend application, which is built using Amazon API Gateway, AWS Lambda, and DynamoDB. A Cognito user pool is used to manage application users and their authentication. Authorization is handled by Verified Permissions, where you create and manage policies that are evaluated when the web application calls backend APIs. Each authorization request is evaluated against these policies to return an allow or deny decision.

The solution flow itself can be broken down into three steps after the device is onboarded and users have signed up:

  1. The smart thermostat device connects and communicates with AWS IoT Core using the MQTT protocol. A classic Device Shadow is created for the AWS IoT thing Thermostat1 the first time the UpdateThingShadow call is made through the AWS SDK for a new device. The AWS IoT Device Shadow service lets the web application query and update the device’s state in case of connectivity issues.
  2. Users sign up or sign in to the Amplify hosted smart home application and authenticate themselves against a Cognito user pool. They’re mapped to a device, which is stored in a DynamoDB table.
  3. After the users sign in, they’re allowed to perform certain tasks and view certain sections of the dashboard based on the different roles and policies managed by Verified Permissions. The underlying Lambda function that’s responsible for handling the API calls queries the DynamoDB table to provide user context to Verified Permissions.
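To make step 3 concrete, the authorization check that the Lambda function performs is an IsAuthorized call to Verified Permissions. The following Python (Boto3) sketch shows the general shape of that call; the policy store ID and attribute values are illustrative only, and the actual implementation lives in the solution’s GitHub repository.

import boto3

avp = boto3.client("verifiedpermissions")
decision = avp.is_authorized(
    policyStoreId="PSEXAMPLEabcdefg111111",  # placeholder policy store ID
    principal={"entityType": "AwsIotAvpWebApp::User", "entityId": "jane_doe"},
    action={"actionType": "AwsIotAvpWebApp::Action", "actionId": "SetTemperature"},
    resource={"entityType": "AwsIotAvpWebApp::Device", "entityId": "Thermostat1"},
    context={
        "contextMap": {
            "desiredTemperature": {"long": 75},
            "time": {"long": 930},  # 3:30 PM expressed as minutes from midnight
        }
    },
)
print(decision["decision"])  # ALLOW or DENY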

Prerequisites

  1. To deploy this solution, you need access to the AWS Management Console and AWS Command Line Interface (AWS CLI) on your local machine with sufficient permissions to access required services, including Amplify, Verified Permissions, and AWS IoT Core. For this solution, you’ll give the services full access to interact with different underlying services. But in production, we recommend following security best practices with AWS Identity and Access Management (IAM), which involves scoping down policies.
  2. Set up Amplify CLI by following these instructions. We recommend the latest NodeJS stable long-term support (LTS) version. At the time of publishing this post, the LTS version was v20.11.1. Users can manage multiple NodeJS versions on their machines by using a tool such as Node Version Manager (nvm).

Walkthrough

The following table describes the actions, resources, and authorization decisions that will be enforced through Verified Permissions policies to achieve fine-grained access control. In this example, John is the primary device owner and has purchased and provisioned a new smart thermostat device called Thermostat1. He has invited Jane to access his device and has given her restricted permissions. John has full control over the device whereas Jane is only allowed to read the temperature and set the temperature between 72°F and 78°F.

John has also decided to give his local energy provider (Power Company) access to the device so that they can set the optimum temperature during the day to manage grid load and offer him maximum savings on his energy bill. However, they can only do so between 2:00 PM and 5:00 PM.

For security purposes, the Verified Permissions default decision is DENY for unauthorized principals.

Name          | Principal    | Action         | Resource    | Authorization decision
Any           | Default      | Default        | Default     | Deny
John          | john_doe     | Any            | Thermostat1 | Allow
Jane          | jane_doe     | GetTemperature | Thermostat1 | Allow
Jane          | jane_doe     | SetTemperature | Thermostat1 | Allow only if the desired temperature is between 72°F and 78°F
Power Company | powercompany | GetTemperature | Thermostat1 | Allow only if accessed between the hours of 2:00 PM and 5:00 PM
Power Company | powercompany | SetTemperature | Thermostat1 | Allow only if the temperature is set between the hours of 2:00 PM and 5:00 PM

Create a Verified Permissions policy store

Verified Permissions is a scalable permissions management and fine-grained authorization service for the applications that you build. The policies are created using Cedar, a dedicated language for defining access permissions in applications. Cedar seamlessly integrates with popular authorization models such as RBAC and ABAC.

A policy is a statement that either permits or forbids a principal to take one or more actions on a resource. A policy store is a logical container that stores your Cedar policies, schema, and principal sources. A schema helps you to validate your policy and identify errors based on the definitions you specify. See Cedar schema to learn about the structure and formal grammar of a Cedar schema.

To create the policy store

  1. Sign in to the Amazon Verified Permissions console and choose Create policy store.
  2. In the Configuration Method section, select Empty Policy Store and choose Create policy store.
     
Figure 2: Create an empty policy store

Note: Make a note of the policy store ID to use when you deploy the solution.
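If you prefer to script this step instead of using the console, the policy store can also be created programmatically. The following is a minimal Python (Boto3) sketch, shown only as an alternative to the console steps:

import boto3

avp = boto3.client("verifiedpermissions")
# Create a policy store; use "OFF" instead of "STRICT" if you don't want
# the schema defined in the next section to be enforced during validation.
store = avp.create_policy_store(validationSettings={"mode": "STRICT"})
print(store["policyStoreId"])  # note this ID for the deployment step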

To create a schema for the application

  1. On the Verified Permissions page, select Schema.
  2. In the Schema section, choose Create schema.
     
    Figure 3: Create a schema

  3. In the Edit schema section, choose JSON mode, paste the following sample schema for your application, and choose Save changes.
    {
        "AwsIotAvpWebApp": {
            "entityTypes": {
                "Device": {
                    "shape": {
                        "attributes": {
                            "primaryOwner": {
                                "name": "User",
                                "required": true,
                                "type": "Entity"
                            }
                        },
                        "type": "Record"
                    },
                    "memberOfTypes": []
                },
                "User": {}
            },
            "actions": {
                "GetTemperature": {
                    "appliesTo": {
                        "context": {
                            "attributes": {
                                "desiredTemperature": {
                                    "type": "Long"
                                },
                                "time": {
                                    "type": "Long"
                                }
                            },
                            "type": "Record"
                        },
                        "resourceTypes": [
                            "Device"
                        ],
                        "principalTypes": [
                            "User"
                        ]
                    }
                },
                "SetTemperature": {
                    "appliesTo": {
                        "resourceTypes": [
                            "Device"
                        ],
                        "principalTypes": [
                            "User"
                        ],
                        "context": {
                            "attributes": {
                                "desiredTemperature": {
                                    "type": "Long"
                                },
                                "time": {
                                    "type": "Long"
                                }
                            },
                            "type": "Record"
                        }
                    }
                }
            }
        }
    }

When creating policies in Cedar, you can define authorization rules using a static policy or a template-linked policy.

Static policies

In scenarios where a policy explicitly defines both the principal and the resource, the policy is categorized as a static policy. These policies are immediately applicable for authorization decisions, as they are fully defined and ready for implementation.

Template-linked policies

On the other hand, there are situations where a single set of authorization rules needs to be applied across a variety of principals and resources. Consider an IoT application where actions such as SetTemperature and GetTemperature must be permitted for specific devices. Using static policies for each unique combination of principal and resource can lead to an excessive number of almost identical policies, differing only in their principal and resource components. This redundancy can be efficiently addressed with policy templates. Policy templates allow for the creation of policies using placeholders for the principal, the resource, or both. After a policy template is established, individual policies can be generated by referencing this template and specifying the desired principal and resource. These template-linked policies function the same as static policies, offering a streamlined and scalable solution for policy management.

To create a policy that allows access to the primary owner of the device using a static policy

  1. In the Verified Permissions console, on the left pane, select Policies, then choose Create policy and select Create static policy from the drop-down menu.
     
    Figure 4: Create static policy

  2. Define the policy scope:
    1. Select Permit for the Policy effect.
       
      Figure 5: Define policy effect

    2. Select All Principals for Principals scope.
    3. Select All Resources for Resource scope.
    4. Select All Actions for Actions scope and choose Next.
       
      Figure 6: Define policy scope

  3. On the Details page, under Policy, paste the following full-access policy, which grants the primary owner permission to perform both SetTemperature and GetTemperature actions on the smart thermostat unconditionally. Choose Create policy.
    	permit (principal, action, resource)
    	when { resource.primaryOwner == principal };
    Figure 7: Write and review policy statement

To create a static policy to allow a guest user to read the temperature

In this example, the guest user is Jane (username: jane_doe).

  1. Create another static policy and specify the policy scope.
    1. Select Permit for the Policy effect.
       
      Figure 8: Define the policy effect

    2. Select Specific principal for the Principals scope.
    3. Select AwsIotAvpWebApp::User and enter jane_doe.
       
      Figure 9: Define the policy scope

    4. Select Specific resource for the Resources scope.
    5. Select AwsIotAvpWebApp::Device and enter Thermostat1.
    6. Select Specific set of actions for the Actions scope.
    7. Select GetTemperature and choose Next.
       
      Figure 10: Define resource and action scopes

    8. Enter the Policy description: Allow jane_doe to read thermostat1.
    9. Choose Create policy.

Next, you will create reusable policy templates to manage policies efficiently. Start with a policy template for a guest user with restricted temperature settings, which limits the range they can set to between 72°F and 78°F. In this case, the guest user is Jane (username: jane_doe).

To create a reusable policy template

  1. Select Policy template and enter Guest user template as the description.
  2. Paste the following sample policy in the Policy body and choose Create policy template.
    permit (
        principal == ?principal,
        action in [AwsIotAvpWebApp::Action::"SetTemperature"],
        resource == ?resource
    )
    when { context.desiredTemperature >= 72 && context.desiredTemperature <= 78 };
Figure 11: Create guest user policy template

As you can see, you don’t specify the principal and resource yet. You enter those when you create an actual policy from the policy template. The context object will be populated with the desiredTemperature property in the application and used to evaluate the decision.

You also need to create a policy template for the Power Company user with restricted time settings. Cedar policies don’t support date/time format, so you must represent 2:00 PM and 5:00 PM as elapsed minutes from midnight.
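A quick way to compute these values: multiply the hour (in 24-hour time) by 60 and add the minutes, so 2:00 PM becomes 840 and 5:00 PM becomes 1020. The application can derive the context value the same way, for example with a small Python helper:

from datetime import datetime

def minutes_from_midnight(dt: datetime) -> int:
    # 2:00 PM -> 840, 5:00 PM -> 1020
    return dt.hour * 60 + dt.minute

print(minutes_from_midnight(datetime.now()))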

To create a policy template for the power company

  1. Select Policy template and enter Power company user template as the description.
  2. Paste the following sample policy in the Policy body and choose Create policy template.
    permit (
        principal == ?principal,
        action in [AwsIotAvpWebApp::Action::"SetTemperature", AwsIotAvpWebApp::Action::"GetTemperature"],
        resource == ?resource
    )
    when { context.time >= 840 && context.time < 1020 };

The policy templates accept the user and resource. The next step is to create a template-linked policy for Jane to set and get thermostat readings based on the Guest user template that you created earlier. For simplicity, you will manually create this policy using the Verified Permissions console. In production, application policies can be dynamically created using the Verified Permissions API.
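For reference, dynamically creating such a template-linked policy with the AWS SDK for Python (Boto3) might look like the following sketch. The policy store and policy template IDs are placeholders; this illustrates the API shape rather than the solution’s exact code.

import boto3

avp = boto3.client("verifiedpermissions")
policy = avp.create_policy(
    policyStoreId="PSEXAMPLEabcdefg111111",  # placeholder
    definition={
        "templateLinked": {
            "policyTemplateId": "PTEXAMPLEabcdefg111111",  # Guest user template ID
            "principal": {"entityType": "AwsIotAvpWebApp::User", "entityId": "jane_doe"},
            "resource": {"entityType": "AwsIotAvpWebApp::Device", "entityId": "Thermostat1"},
        }
    },
)
print(policy["policyId"])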

To create a template-linked policy for a guest user

  1. In the Verified Permissions console, on the left pane, select Policies, then choose Create policy and select Create template-linked policy from the drop-down menu.
     
    Figure 12: Create new template-linked policy

  2. Select the Guest user template and choose Next.
     
    Figure 13: Select Guest user template

  3. Under parameter selection:
    1. For Principal enter AwsIotAvpWebApp::User::"jane_doe".
    2. For Resource enter AwsIotAvpWebApp::Device::"Thermostat1".
    3. Choose Create template-linked policy.
       
      Figure 14: Create guest user template-linked policy

Note that with this policy in place, jane_doe can only set the temperature of the device Thermostat1 to between 72°F and 78°F.

To create a template-linked policy for the power company user

Based on the template that was set up for the power company, you now need an actual policy for it.

  1. In the Verified Permissions console, go to the left pane and select Policies, then choose Create policy and select Create template-linked policy from the drop-down menu.
  2. Select the Power company user template and choose Next.
  3. Under Parameter selection, for Principal enter AwsIotAvpWebApp::User::"powercompany", and for Resource enter AwsIotAvpWebApp::Device::"Thermostat1", and choose Create template-linked policy.

Now that you have a set of policies in a policy store, you need to update the backend codebase to include this information and then deploy the web application using Amplify.

The policy statements in this post intentionally use human-readable values such as jane_doe and powercompany for the principal entity. This is useful when discussing general concepts but in production systems, customers should use unique and immutable values for entities. See Get the best out of Amazon Verified Permissions by using fine-grained authorization methods for more information.

Deploy the solution code from GitHub

Go to the GitHub repository to set up the Amplify web application. The repository Readme file provides detailed instructions on how to set up the web application. You will need your Verified Permissions policy store ID to deploy the application. For convenience, we’ve provided an onboarding script—deploy.sh—which you can use to deploy the application.

To deploy the application

  1. Clone the repository.
    git clone https://github.com/aws-samples/amazon-verified-permissions-iot-amplify-smart-home-application.git

  2. Deploy the application.
    ./deploy.sh <region> <Verified Permissions Policy Store ID>

After the web dashboard has been deployed, you’ll create an IoT device using AWS IoT Core.

Create an IoT device and connect it to AWS IoT Core

With the users, policies, and templates, and the Amplify smart home application in place, you can now create a device and connect it to AWS IoT Core to complete the solution.

To create the Thermostat1 device and connect it to AWS IoT Core

  1. From the left pane in the AWS IoT console, select Connect one device.
     
    Figure 15: Connect device using AWS IoT console

  2. Review how an AWS IoT thing works and then choose Next.
     
    Figure 16: Review how IoT Thing works before proceeding

  3. Choose Create a new thing, enter Thermostat1 as the Thing name, and choose Next.
    Figure 17: Create the new IoT thing

  4. Select Linux/macOS as the Device platform operating system, select Python as the AWS IoT Core Device SDK, and choose Next.
     
    Figure 18: Choose the platform and SDK for the device

  5. Choose Download connection kit and choose Next.
     
    Figure 19: Download the connection kit to use for creating the Thermostat1 device

  6. Review the three steps to display messages from your IoT device. You will use them to verify the Thermostat1 IoT device’s connectivity to the AWS IoT Core platform. They are:
    1. Step 1: Add execution permissions
    2. Step 2: Run the start script
    3. Step 3: Return to the AWS IoT Console to view the device’s message
       
      Figure 20: How to display messages from an IoT device

Solution validation

With all of the pieces in place, you can now test the solution.

Primary owner signs in to the web application to set Thermostat1 temperature to 82°F

Figure 21: Thermostat1 temperature update by John

  1. Sign in to the Amplify web application as John. You should be able to view the Thermostat1 controller on the dashboard.
  2. Set the temperature to 82°F.
  3. The Lambda function processes the request and performs an API call to Verified Permissions to determine whether to ALLOW or DENY the action based on the policies. Verified Permissions sends back an ALLOW, as the policy that was previously set up allows unrestricted access for primary owners.
  4. Upon receiving the response from Verified Permissions, the Lambda function sends ALLOW permission back to the web application and an API call to the AWS IoT Device Shadow service to update the device (Thermostat1) temperature to 82°F.
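As a rough sketch of the shadow update in step 4, the Lambda function could call the AWS IoT Device Shadow data plane as follows. This is an assumption about how such a call looks with Boto3, not a copy of the repository’s code.

import json
import boto3

iot_data = boto3.client("iot-data")
# Set the desired temperature on Thermostat1 after an ALLOW decision.
iot_data.update_thing_shadow(
    thingName="Thermostat1",
    payload=json.dumps({"state": {"desired": {"temperature": 82}}}),
)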
     
Figure 22: Policy evaluation decision is ALLOW when a primary owner calls SetTemperature

Guest user signs in to the web application to set Thermostat1 temperature to 80°F

Figure 23: Thermostat1 temperature update by Jane

  1. If you sign in as Jane to the Amplify web application, you can view the Thermostat1 controller on the dashboard.
  2. Set the temperature to 80°F.
  3. The Lambda function validates the actions by sending an API call to Verified Permissions to determine whether to ALLOW or DENY the action based on the established policies. Verified Permissions sends back a DENY, as the policy only permits temperature adjustments between 72°F and 78°F.
  4. Upon receiving the response from Verified Permissions, the Lambda function sends DENY permissions back to the web application and an unauthorized response is returned.
     
    Figure 24: Guest user jane_doe receives a DENY when calling SetTemperature for a desired temperature of 80°F

  5. If you repeat the process (still as Jane) but set Thermostat1 to 75°F, the policy will cause the request to be allowed.
     
    Figure 25: Guest user jane_doe receives an ALLOW when calling SetTemperature for a desired temperature of 75°F

  6. Similarly, jane_doe is allowed to run GetTemperature on the device Thermostat1. When the temperature is set to 74°F, the device shadow is updated. The IoT device being simulated by your AWS Cloud9 instance reads the desired temperature field and sets the reported value to 74.
  7. Now, when jane_doe runs GetTemperature, the value of the device is reported as 74, as shown in Figure 26. We encourage you to try different restrictions in the World Settings (outside temperature and time) by adding restrictions to the static policy that allows GetTemperature for the guest user.
     
    Figure 26: Guest user jane_doe receives an ALLOW when calling GetTemperature for the reported temperature

Power company signs in to the web application to set Thermostat1 to 78°F at 3:30 PM

Figure 27: Thermostat1 temperature set to 78°F by powercompany user at a specified time

  1. Sign in as the powercompany user to the Amplify web application using an API. You can view the Thermostat1 controller on the dashboard.
  2. To test this scenario, set the current time to 3:30 PM, and try to set the temperature to 78°F.
  3. The Lambda function validates the actions by sending an API call to Verified Permissions to determine whether to ALLOW or DENY the action based on pre-established policies. Verified Permissions returns ALLOW permission, because the policy for powercompany permits device temperature changes between 2:00 PM and 5:00 PM.
  4. Upon receiving the response from Verified Permissions, the Lambda function sends ALLOW permission back to the web application and an API call to the AWS IoT Device Shadow service to update the Thermostat1 temperature to 78°F.
     
    Figure 28: powercompany receives an ALLOW when SetTemperature is called with the desired temperature of 78°F

Note: As an optional exercise, we also made jane_doe a device owner for device Thermostat2. This can be observed in the users.json file in the GitHub repository. We encourage you to create your own policies and restrict functions for Thermostat2 after going through this post. You will need to create separate Verified Permissions policies and update the Lambda functions to interact with these policies.

We encourage you to create policies for guests and the power company and restrict permissions based on the following criteria:

  1. Verify Jane Doe can perform GetTemperature and SetTemperature actions on Thermostat2.
  2. John Doe should not be able to set the temperature on device Thermostat2 outside of the time range of 4:00 PM and 6:00 PM and outside of the temperature range of 68°F and 72°F.
  3. Power Company can only perform the GetTemperature operation, but there are no restrictions on time and outside temperature.

To help you verify the solution, we’ve provided the correct policies under the challenge directory in the GitHub repository.

Clean up

Deploying the Thermostat application in your AWS account will incur costs. To avoid ongoing charges, when you’re done examining the solution, delete the resources that were created. This includes the Amplify hosted web application, API Gateway resource, AWS Cloud9 environment, the Lambda function, DynamoDB table, Cognito user pool, AWS IoT Core resources, and Verified Permissions policy store.

Amplify resources can be deleted by going to the AWS CloudFormation console and deleting the stacks that were used to provision various services.

Conclusion

In this post, you learned about creating and managing fine-grained permissions using Verified Permissions for different user personas for your smart thermostat IoT device. With Verified Permissions, you can strengthen your security posture and build smart applications aligned with Zero Trust principles for real-time authorization decisions. To learn more, we recommend:

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Author

Rajat Mathur

Rajat is a Principal Solutions Architect at Amazon Web Services. Rajat is a passionate technologist who enjoys building innovative solutions for AWS customers. His core areas of focus are IoT, Networking, and Serverless computing. In his spare time, Rajat enjoys long drives, traveling, and spending time with family.

Pronoy Chopra

Pronoy is a Senior Solutions Architect with the Startups Generative AI team at AWS. He specializes in architecting and developing IoT and Machine Learning solutions. He has co-founded two startups and enjoys being hands-on with projects in the IoT, AI/ML and Serverless domain. His work in Magnetoencephalography has been cited many times in the effort to build better brain-compute interfaces.

Syed Sanoor

Syed serves as a Solutions Architect, assisting customers in the enterprise sector. With a foundation in software engineering, he takes pleasure in crafting solutions tailored to client needs. His expertise predominantly lies in C# and IoT. During his leisure time, Syed enjoys piloting drones and playing cricket.

2023 ISO 27001 certificate available in Spanish and French, and 2023 ISO 22301 certificate available in Spanish

Post Syndicated from Atulsing Patil original https://aws.amazon.com/blogs/security/2023-iso-27001-certificate-available-in-spanish-and-french-and-2023-iso-22301-certificate-available-in-spanish/

French »
Spanish »

Amazon Web Services (AWS) is pleased to announce that a translated version of our 2023 ISO 27001 and 2023 ISO 22301 certifications are now available:

  • The 2023 ISO 27001 certificate is available in Spanish and French.
  • The 2023 ISO 22301 certificate is available in Spanish.

Translated certificates are available to customers through AWS Artifact.

These translated certificates will help drive greater engagement and alignment with customer and regulatory requirements across France, Latin America, and Spain.

We continue to listen to our customers, regulators, and stakeholders to understand their needs regarding audit, assurance, certification, and attestation programs at AWS. If you have questions or feedback about ISO compliance, reach out to your AWS account team.
 


French version

La certification ISO 27001 2023 est désormais disponible en espagnol et en français et la certification ISO 22301 est désormais disponible en espagnol

Nous restons à l’écoute de nos clients, des autorités de régulation et des parties prenantes pour mieux comprendre leurs besoins en matière de programmes d’audit, d’assurance, de certification et d’attestation au sein d’Amazon Web Services (AWS). La certification ISO 27001 2023 est désormais disponible en espagnol et en français. La certification ISO 22301 2023 est également désormais disponible en espagnol. Ces certifications traduites contribueront à renforcer notre engagement et notre conformité aux exigences des clients et de la réglementation en France, en Amérique latine et en Espagne.

Les certifications traduites sont mises à la disposition des clients via AWS Artifact.

Si vous avez des commentaires sur cet article, soumettez-les dans la section Commentaires ci-dessous.

Vous souhaitez davantage de contenu, d’actualités et d’annonces sur les fonctionnalités AWS Security ? Suivez-nous sur Twitter.
 


Spanish version

El certificado ISO 27001 2023 ahora está disponible en Español y Francés y el certificado ISO 22301 ahora está disponible en Español

Seguimos escuchando a nuestros clientes, reguladores y partes interesadas para comprender sus necesidades en relación con los programas de auditoría, garantía, certificación y atestación en Amazon Web Services (AWS). El certificado ISO 27001 2023 ya está disponible en español y francés. Además, el certificado ISO 22301 de 2023 ahora está disponible en español. Estos certificados traducidos ayudarán a impulsar un mayor compromiso y alineación con los requisitos normativos y de los clientes en Francia, América Latina y España.

Los certificados traducidos están disponibles para los clientes en AWS Artifact.

Si tienes comentarios sobre esta publicación, envíalos en la sección Comentarios a continuación.

¿Desea obtener más noticias sobre seguridad de AWS? Síguenos en Twitter.

Atulsing Patil

Atulsing is a Compliance Program Manager at AWS. He has 27 years of consulting experience in information technology and information security management. Atulsing holds a master of science in electronics degree and professional certifications such as CCSP, CISSP, CISM, CDPSE, ISO 27001 Lead Auditor, HITRUST CSF, Archer Certified Consultant, and AWS CCP.

Nimesh Ravasa

Nimesh is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS. Nimesh has 15 years of experience in information security and holds CISSP, CDPSE, CISA, PMP, CSX, AWS Solutions Architect – Associate, and AWS Security Specialty certifications.

Chinmaee Parulekar

Chinmaee is a Compliance Program Manager at AWS. She has 5 years of experience in information security. Chinmaee holds a master of science degree in management information systems and professional certifications such as CISA.

Integrate Kubernetes policy-as-code solutions into Security Hub

Post Syndicated from Joaquin Manuel Rinaudo original https://aws.amazon.com/blogs/security/integrate-kubernetes-policy-as-code-solutions-into-security-hub/

Using Kubernetes policy-as-code (PaC) solutions, administrators and security professionals can enforce organization policies on Kubernetes resources. Several PaC solutions are publicly available for Kubernetes, such as Gatekeeper, Polaris, and Kyverno.

PaC solutions usually implement two features:

  • Use Kubernetes admission controllers to validate or modify objects before they’re created to help enforce configuration best practices for your clusters.
  • Provide a way for you to scan your resources created before policies were deployed or against new policies being evaluated.

This post presents a solution to send policy violations from PaC solutions using Kubernetes policy report format (for example, using Kyverno) or from Gatekeeper’s constraints status directly to AWS Security Hub. With this solution, you can visualize Kubernetes security misconfigurations across your Amazon Elastic Kubernetes Service (Amazon EKS) clusters and your organizations in AWS Organizations. This can also help you implement standard security use cases—such as unified security reporting, escalation through a ticketing system, or automated remediation—on top of Security Hub to help improve your overall Kubernetes security posture and reduce manual efforts.

Solution overview

The solution uses the approach described in A Container-Free Way to Configure Kubernetes Using AWS Lambda to deploy an AWS Lambda function that periodically synchronizes the security status of a Kubernetes cluster from a Kubernetes or Gatekeeper policy report with Security Hub. Figure 1 shows the architecture diagram for the solution.

Figure 1: Diagram of solution

This solution works using the following resources and configurations:

  1. A scheduled event which invokes a Lambda function on a 10-minute interval.
  2. The Lambda function iterates through each running EKS cluster that you want to integrate and authenticates by using a Kubernetes Python client and the AWS Identity and Access Management (IAM) role of the Lambda function.
  3. For each running cluster, the Lambda function retrieves the selected Kubernetes policy reports (or the Gatekeeper constraint status, depending on the policy selected) and sends active violations, if present, to Security Hub. With Gatekeeper, if more violations exist than those reported in the constraint, an additional INFORMATIONAL finding is generated in Security Hub to let security teams know of the missing findings.

    Optional: EKS cluster administrators can raise the limit of reported policy violations by using the --constraint-violations-limit flag in their Gatekeeper audit operation.

  4. For each running cluster, the Lambda function archives previously raised and resolved findings in Security Hub.

You can download the solution from this GitHub repository.
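To give a sense of the mapping the Lambda function performs, the following Python sketch reads Kubernetes policy reports and imports failed results into Security Hub as AWS Security Finding Format (ASFF) findings. The severity, finding ID format, and resource types shown here are simplifying assumptions; the repository contains the full implementation, including Gatekeeper support and finding archival.

import datetime
import boto3
from kubernetes import client, config

config.load_kube_config()  # the Lambda function instead authenticates with its IAM role
reports = client.CustomObjectsApi().list_cluster_custom_object(
    group="wgpolicyk8s.io", version="v1alpha2", plural="clusterpolicyreports"
)
account_id = boto3.client("sts").get_caller_identity()["Account"]
region = boto3.session.Session().region_name
now = datetime.datetime.now(datetime.timezone.utc).isoformat()
findings = []
for report in reports.get("items", []):
    for result in report.get("results", []):
        if result.get("result") != "fail":
            continue
        resource = result["resources"][0]
        findings.append({
            "SchemaVersion": "2018-10-08",
            "Id": f"{report['metadata']['name']}/{result['policy']}/{resource['name']}",
            "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
            "GeneratorId": result["policy"],
            "AwsAccountId": account_id,
            "Types": ["Software and Configuration Checks/Kubernetes Policies"],
            "CreatedAt": now,
            "UpdatedAt": now,
            "Severity": {"Label": "MEDIUM"},  # simplification; the solution maps severities differently
            "Title": f"Kubernetes policy violation: {result['policy']}",
            "Description": result.get("message", "Policy violation"),
            "Resources": [{"Type": "Other", "Id": f"{resource['kind']}/{resource['name']}"}],
        })
if findings:
    boto3.client("securityhub").batch_import_findings(Findings=findings)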

Walkthrough

In the walkthrough, I show you how to deploy a Kubernetes policy-as-code solution and forward the findings to Security Hub. We’ll configure Kyverno in an existing EKS cluster, create a demo environment that produces findings, and send those findings to Security Hub.

The code provided includes an example constraint and noncompliant resource to test against.

Prerequisites

An EKS cluster is required to set up this solution within your AWS environments. The cluster should be configured with either aws-auth ConfigMap or access entries. Optional: You can use eksctl to create a cluster.

The following resources need to be installed on your computer:

Step 1: Set up the environment

The first step is to install Kyverno on an existing Kubernetes cluster. Then deploy examples of a Kyverno policy and noncompliant resources.

Deploy Kyverno example and policy

  1. Deploy Kyverno in your Kubernetes cluster according to its installation manual using the Kubernetes CLI.
    kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.10.0/install.yaml

  2. Set up a policy that requires namespaces to use the label thisshouldntexist.
    kubectl create -f - << EOF
    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: require-ns-labels
    spec:
      validationFailureAction: Audit
      background: true
      rules:
      - name: check-for-labels-on-namespace
        match:
          any:
          - resources:
              kinds:
              - Namespace
        validate:
          message: "The label thisshouldntexist is required."
          pattern:
            metadata:
              labels:
                thisshouldntexist: "?*"
    EOF

Deploy a noncompliant resource to test this solution

  1. Create a noncompliant namespace.
    kubectl create namespace non-compliant

  2. Check the Kubernetes policy report status using the following command:
    kubectl get clusterpolicyreport -o yaml

You should see output similar to the following:

apiVersion: v1
items:
- apiVersion: wgpolicyk8s.io/v1alpha2
  kind: ClusterPolicyReport
  metadata:
    creationTimestamp: "2024-02-20T14:00:37Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: kyverno
      cpol.kyverno.io/require-ns-labels: "3734083"
    name: cpol-require-ns-labels
    resourceVersion: "3734261"
    uid: 3cfcf1da-bd28-453f-b2f5-512c26065986
  results:
   ...
  - message: 'validation error: The label thisshouldntexist is required. rule check-for-labels-on-namespace
      failed at path /metadata/labels/thisshouldntexist/'
    policy: require-ns-labels
    resources:
    - apiVersion: v1
      kind: Namespace
      name: non-compliant
      uid: d62eb1ad-8a0b-476b-848d-ff6542c57840
    result: fail
    rule: check-for-labels-on-namespace
    scored: true
    source: kyverno
    timestamp:
      nanos: 0
      seconds: 1708437615

Step 2: Solution code deployment and configuration

The next step is to clone and deploy the solution that integrates with Security Hub.

To deploy the solution

  1. Clone the GitHub repository by using your preferred command line terminal:
    git clone https://github.com/aws-samples/securityhub-k8s-policy-integration.git

  2. Open the parameters.json file and configure the following values:
    1. Policy – Name of the product that you want to enable, in this case policyreport, which is supported by tools such as Kyverno.
    2. ClusterNames – List of EKS clusters. When AccessEntryEnabled is enabled, this solution deploys an access entry for the integration to access your EKS clusters.
    3. SubnetIds – (Optional) A comma-separated list of your subnets. If you’ve configured the API endpoints of your EKS clusters as private only, then you need to configure this parameter. If your EKS clusters have public endpoints enabled, you can remove this parameter.
    4. SecurityGroupId – (Optional) A security group ID that allows connectivity to the EKS clusters. This parameter is only required if you’re running private API endpoints; otherwise, you can remove it. This security group should be allowed ingress from the security group of the EKS control plane.
    5. AccessEntryEnabled – (Optional) If you’re using EKS access entries, the solution automatically deploys the access entries with read-only-group permissions deployed in the next step. This parameter is True by default.
  3. Save the changes and close the parameters file.
  4. Set up your AWS_REGION (for example, export AWS_REGION=eu-west-1) and make sure that your credentials are configured for the delegated administrator account.
  5. Enter the following command to deploy:
    ./deploy.sh

You should see the following output:

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - aws-securityhub-k8s-policy-integration

Step 3: Set up EKS cluster access

You need to create the Kubernetes Group read-only-group to allow read-only permissions to the IAM role of the Lambda function. If you aren’t using access entries, you will also need to modify the aws-auth ConfigMap of the Kubernetes clusters.

To configure access to EKS clusters

  1. For each cluster that’s running in your account, run the kube-setup.sh script to create the Kubernetes read-only cluster role and cluster role binding.
  2. (Optional) Configure aws-auth ConfigMap using eksctl if you aren’t using access entries.

Step 4: Verify AWS service integration

The next step is to verify that the Lambda integration to Security Hub is running.

To verify the integration is running

  1. Open the Lambda console, and navigate to the aws-securityhub-k8s-policy-integration-<region> function.
  2. Start a test to import your cluster’s noncompliant findings to Security Hub.
  3. In the Security Hub console, review the recently created findings from Kyverno.
     
    Figure 2: Sample Kyverno findings in Security Hub

Step 5: Clean up

The final step is to clean up the resources that you created for this walkthrough.

To destroy the stack

  • Use the command line terminal on your laptop to run the following command:
    ./cleanup.sh

Conclusion

In this post, you learned how to integrate Kubernetes policy report findings with Security Hub and tested this setup by using the Kyverno policy engine. If you want to test the integration of this solution with Gatekeeper, you can find alternative commands for step 1 of this post in the GitHub repository’s README file.

Using this integration, you can gain visibility into your Kubernetes security posture across EKS clusters and join it with a centralized view, together with other security findings such as those from AWS Config, Amazon Inspector, and more across your organization. You can also try this solution with other tools, such as kube-bench or Gatekeeper. You can extend this setup to notify security teams of critical misconfigurations or implement automated remediation actions by using AWS Security Hub.

For more information on how to use PaC solutions to secure Kubernetes workloads in the AWS cloud, see Amazon Elastic Kubernetes Service (Amazon EKS) workshop, Amazon EKS best practices, Using Gatekeeper as a drop-in Pod Security Policy replacement in Amazon EKS and Policy-based countermeasures for Kubernetes.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Author

Joaquin Manuel Rinaudo

Joaquin is a Principal Security Architect with AWS Professional Services. He is passionate about building solutions that help developers improve their software quality. Prior to AWS, he worked across multiple domains in the security industry, from mobile security to cloud and compliance related topics. In his free time, Joaquin enjoys spending time with family and reading science fiction novels.

How the unique culture of security at AWS makes a difference

Post Syndicated from Chris Betz original https://aws.amazon.com/blogs/security/how-the-unique-culture-of-security-at-aws-makes-a-difference/

Our customers depend on Amazon Web Services (AWS) for their mission-critical applications and most sensitive data. Every day, the world’s fastest-growing startups, largest enterprises, and most trusted governmental organizations are choosing AWS as the place to run their technology infrastructure. They choose us because security has been our top priority from day one. We designed AWS from its foundation to be the most secure way for our customers to run their workloads, and we’ve built our internal culture around security as a business imperative.

While technical security measures are important, organizations are made up of people. A recent report from the Cyber Safety Review Board (CSRB) makes it clear that a deficient security culture can be a root cause for avoidable errors that allow intrusions to succeed and remain undetected.

Security is our top priority

Our security culture starts at the top, and it extends through every part of our organization. Over eight years ago, we made the decision for our security team to report directly to our CEO. This structural design redefined how we build security into the culture of AWS and informs everyone at the company that security is our top priority by providing direct visibility to senior leadership. We empower our service teams to fully own the security of their services and scale security best practices and programs so our customers have the confidence to innovate on AWS.

We believe that there are four key principles to building a strong culture of security:

  1. Security is built into our organizational structure

    At AWS, we view security as a core function of our business, deeply connected to our mission objectives. This goes beyond good intentions—it’s embedded directly into our organizational structure. At Amazon, we make an intentional choice for all our security teams to report directly to the CEO while also being deeply embedded in our respective business units. The goal is to build security into the structural fabric of how we make decisions. Every week, the AWS leadership team, led by our CEO, meets with my team to discuss security and ensure we’re making the right choices on tactical and strategic security issues and course-correcting when needed. We report internally on operational metrics that tie our security culture to the impact that it has on our customers, connecting data to business outcomes and providing an opportunity for leadership to engage and ask questions. This support for security from the top levels of executive leadership helps us reinforce the idea that security is accelerating our business outcomes and improving our customers’ experiences rather than acting as a roadblock.

  2. Security is everyone’s job

    AWS operates with a strong ownership model built around our culture of security. Ownership is one of our key Leadership Principles at Amazon. Employees in every role receive regular training and reinforcement of the message that security is everyone’s job. Every service and product team is fully responsible for the security of the service or capability that they deliver. Security is built into every product roadmap, engineering plan, and weekly stand-up meeting, just as much as capabilities, performance, cost, and other core responsibilities of the builder team. The best security is not something that can be “bolted on” at the end of a process or on the outside of a system; rather, security is integral and foundational.

    AWS business leaders prioritize building products and services that are designed to be secure. At the same time, they strive to create an environment that encourages employees to identify and escalate potential security concerns even when uncertain about whether there is an actual issue. Escalation is a normal part of how we work in AWS, and our practice of escalation provides a “security reporting safe space” to everyone. Our teams and individuals are encouraged to report and escalate any possible security issues or concerns with a high-priority ticket to the security team. We would much rather hear about and investigate a possible security concern, however unlikely it might turn out to be. Our employees know that we welcome reports even for things that turn out to be nonissues.

  3. Distributing security expertise and ownership across AWS

    Our central AWS Security team provides a number of critical capabilities and services that support and enable our engineering and service teams to fulfill their security responsibilities effectively. Our central team provides training, consultation, threat-modeling tools, automated code-scanning frameworks and tools, design reviews, penetration testing, automated API test frameworks, and—in the end—a final security review of each new service or new feature. The security reviewer is empowered to make a go or no-go decision with respect to each and every release. If a service or feature does not pass the security review process in the first review, we dive deep to understand why so we can improve processes and catch issues earlier in development. But, releasing something that’s not ready would be an even bigger failure, so we err on the side of maintaining our high security bar and always trying to deliver to the high standards that our customers expect and rely on.

    One important mechanism to distribute security ownership that we’ve developed over the years is the Security Guardians program. The Security Guardians program trains, develops, and empowers service team developers in each two-pizza team to be security ambassadors, or Guardians, within the product teams. At a high level, Guardians are the “security conscience” of each team. They make sure that security considerations for a product are made earlier and more often, helping their peers build and ship their product faster, while working closely with the central security team to help ensure the security bar remains high at AWS. Security Guardians feel empowered by being part of a cross-organizational community while also playing a critical role for the team and for AWS as a whole.

  4. Scaling security through innovation

    Another way we scale security across our culture at AWS is through innovation. We innovate to build tools and processes that help all of our people be as effective as possible and maintain focus. We use artificial intelligence (AI) to accelerate our secure software development process, and we use new generative AI–powered features in Amazon Inspector, Amazon Detective, AWS Config, and Amazon CodeWhisperer that complement the human skillset by helping people make better security decisions using a broader collection of knowledge. This pattern of combining sophisticated tooling with skilled engineers is highly effective because it positions people to make the nuanced decisions required for effective security.

    For large organizations, it can take years to assess every scenario and prove systems are secure. Even then, their systems are constantly changing. Our automated reasoning tools use mathematical logic to answer critical questions about infrastructure to detect misconfigurations that could potentially expose data. This provable security provides higher assurance in the security of the cloud and in the cloud. We apply automated reasoning in key service areas such as storage, networking, virtualization, identity, and cryptography. Amazon scientists and engineers also use automated reasoning to prove the correctness of critical internal systems. We process over a billion mathematical queries per day that power AWS Identity and Access Management Access Analyzer, Amazon Simple Storage Service (Amazon S3) Block Public Access, and other security offerings. AWS is the first and only cloud provider to use automated reasoning at this scale.

Advancing the future of cloud security

At AWS, we care deeply about our culture of security. We’re consistently working backwards from our customers and investing in raising the bar on our security tools and capabilities. For example, AWS enables encryption of everything. AWS Key Management Service (AWS KMS) is the first and only highly scalable, cloud-native key management system that is also FIPS 140-2 Level 3 certified. No one can retrieve customer plaintext keys, not even the most privileged admins within AWS. With the AWS Nitro System, which is the foundation of the AWS compute service Amazon Elastic Compute Cloud (Amazon EC2), we designed and delivered first-of-its-kind innovation, still unique in the industry, to maximize the security of customers’ workloads. The Nitro System provides industry-leading privacy and isolation for all of their compute needs, including GPU-based computing for the latest generative AI systems. No one, not even the most privileged admins within AWS, can access a customer’s workloads or data in Nitro-based EC2 instances.

We continue to innovate on behalf of our customers so they can move quickly, securely, and with confidence to enable their businesses, and our track record in the area of cloud security is second to none. That said, cybersecurity challenges continue to evolve, and while we’re proud of our achievements to date, we’re committed to constant improvement as we innovate and advance our technologies and our culture of security.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Chris Betz

Chris is CISO at AWS. He oversees security teams and leads the development and implementation of security policies with the aim of managing risk and aligning the company’s security posture with business objectives. Chris joined Amazon in August 2023 after holding CISO and security leadership roles at leading companies. He lives in Northern Virginia with his family.

Winter 2023 SOC 1 report now available in Japanese, Korean, and Spanish

Post Syndicated from Brownell Combs original https://aws.amazon.com/blogs/security/winter-2023-soc-1-report-now-available-in-japanese-korean-and-spanish/

Japanese | Korean | Spanish

We continue to listen to our customers, regulators, and stakeholders to understand their needs regarding audit, assurance, certification, and attestation programs at Amazon Web Services (AWS). We are pleased to announce that for the first time an AWS System and Organization Controls (SOC) 1 report is now available in Japanese and Korean, along with Spanish. This translated report will help drive greater engagement and alignment with customer and regulatory requirements across Japan, Korea, Latin America, and Spain.

The Japanese, Korean, and Spanish language versions of the report do not contain the independent opinion issued by the auditors, but you can find this information in the English language version. Stakeholders should use the English version as a complement to the Japanese, Korean, or Spanish versions.

Going forward, the following reports will be translated each quarter. Because the SOC 1 controls are included in the Spring and Fall SOC 2 reports, this schedule provides year-round coverage in all translated languages when paired with the English language versions.

  • Winter SOC 1 (January 1 – December 31)
  • Spring SOC 2 (April 1 – March 31)
  • Summer SOC 1 (July 1 – June 30)
  • Fall SOC 2 (October 1 – September 30)

Customers can download the translated Winter 2023 SOC 1 report in Japanese, Korean, and Spanish through AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

The Winter 2023 SOC 1 report includes a total of 171 services in scope. For up-to-date information, including when additional services are added, visit the AWS Services in Scope by Compliance Program webpage and choose SOC.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about SOC compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on X.
 


Japanese version

冬季 2023 SOC 1 レポートの日本語、韓国語、スペイン語版の提供を開始

当社はお客様、規制当局、利害関係者の声に継続的に耳を傾け、Amazon Web Services (AWS) における監査、保証、認定、認証プログラムに関するそれぞれのニーズを理解するよう努めています。この度、AWS System and Organization Controls (SOC) 1 レポートが今回初めて、日本語、韓国語、スペイン語で利用可能になりました。この翻訳版のレポートは、日本、韓国、ラテンアメリカ、スペインのお客様および規制要件との連携と協力体制を強化するためのものです。

本レポートの日本語版、韓国語版、スペイン語版には監査人による独立した第三者の意見は含まれていませんが、英語版には含まれています。利害関係者は、日本語版、韓国語版、スペイン語版の補足として英語版を参照する必要があります。

今後、四半期ごとの以下のレポートで翻訳版が提供されます。SOC 1 統制は、春季 および 秋季 SOC 2 レポートに含まれるため、英語版と合わせ、1 年間のレポートの翻訳版すべてがこのスケジュールで網羅されることになります。

  • 冬季 SOC 1 (1 月 1 日〜12 月 31 日)
  • 春季 SOC 2 (4 月 1 日〜3 月 31 日)
  • 夏季 SOC 1 (7 月 1 日〜6 月 30 日)
  • 秋季 SOC 2 (10 月 1 日〜9 月 30 日)

冬季2023 SOC 1 レポートの日本語、韓国語、スペイン語版は AWS Artifact (AWS のコンプライアンスレポートをオンデマンドで入手するためのセルフサービスポータル) を使用してダウンロードできます。AWS マネジメントコンソール内の AWS Artifact にサインインするか、AWS Artifact の開始方法ページで詳細をご覧ください。

冬季2023 SOC 1 レポートの対象範囲には合計 171 のサービスが含まれます。その他のサービスが追加される時期など、最新の情報については、コンプライアンスプログラムによる対象範囲内の AWS のサービスで [SOC] を選択してご覧いただけます。

AWS では、アーキテクチャおよび規制に関するお客様のニーズを支援するため、コンプライアンスプログラムの対象範囲に継続的にサービスを追加するよう努めています。SOC コンプライアンスに関するご質問やご意見については、担当の AWS アカウントチームまでお問い合わせください。

コンプライアンスおよびセキュリティプログラムに関する詳細については、AWS コンプライアンスプログラムをご覧ください。当社ではお客様のご意見・ご質問を重視しています。お問い合わせページより AWS コンプライアンスチームにお問い合わせください。

本記事に関するフィードバックは、以下のコメントセクションからご提出ください。

AWS セキュリティによるハウツー記事、ニュース、機能の紹介については、X でフォローしてください。
 


Korean version

2023년 동계 SOC 1 보고서가 한국어, 일본어, 스페인어로 제공됩니다.

Amazon은 고객, 규제 기관 및 이해 관계자의 의견을 지속적으로 경청하여 Amazon Web Services (AWS)의 감사, 보증, 인증 및 증명 프로그램과 관련된 요구 사항을 파악하고 있습니다. 이제 처음으로 스페인어와 함께 한국어와 일본어로도 AWS System and Organization Controls(SOC) 1 보고서를 이용할 수 있게 되었음을 알려드립니다. 이 번역된 보고서는 일본, 한국, 중남미, 스페인의 고객 및 규제 요건을 준수하고 참여도를 높이는 데 도움이 될 것입니다.

보고서의 일본어, 한국어, 스페인어 버전에는 감사인의 독립적인 의견이 포함되어 있지 않지만, 영어 버전에서는 해당 정보를 확인할 수 있습니다. 이해관계자는 일본어, 한국어 또는 스페인어 버전을 보완하기 위해 영어 버전을 사용해야 합니다.

앞으로 매 분기마다 다음 보고서가 번역본으로 제공됩니다. SOC 1 통제 조치는 춘계 및 추계 SOC 2 보고서에 포함되어 있으므로, 이 일정은 영어 버전과 함께 모든 번역된 언어로 연중 내내 제공됩니다.

  • 동계 SOC 1(1/1~12/31)
  • 춘계 SOC 2(4/1~3/31)
  • 하계 SOC 1(7/1~6/30)
  • 추계 SOC 2(10/1~9/30)

고객은 AWS 규정 준수 보고서를 필요할 때 이용할 수 있는 셀프 서비스 포털인 AWS Artifact를 통해 일본어, 한국어, 스페인어로 번역된 2023년 동계 SOC 1 보고서를 다운로드할 수 있습니다. AWS Management Console의 AWS Artifact에 로그인하거나 Getting Started with AWS Artifact(AWS Artifact 시작하기)에서 자세한 내용을 알아보세요.

2023년 동계 SOC 1 보고서에는 총 171개의 서비스가 포함됩니다. 추가 서비스가 추가되는 시기 등의 최신 정보는 AWS Services in Scope by Compliance Program(규정 준수 프로그램별 범위 내 AWS 서비스)에서 SOC를 선택하세요.

AWS는 고객이 아키텍처 및 규제 요구 사항을 충족할 수 있도록 지속적으로 서비스를 규정 준수 프로그램의 범위에 포함시키기 위해 노력하고 있습니다. SOC 규정 준수에 대한 질문이나 피드백이 있는 경우 AWS 계정 팀에 문의하시기 바랍니다.

규정 준수 및 보안 프로그램에 대한 자세한 내용은 AWS 규정 준수 프로그램을 참조하세요. 언제나 그렇듯이 AWS는 여러분의 피드백과 질문을 소중히 여깁니다. 문의하기 페이지를 통해 AWS 규정 준수 팀에 문의하시기 바랍니다.

이 게시물에 대한 피드백이 있는 경우, 아래의 의견 섹션에 의견을 제출해 주세요.

더 많은 AWS 보안 방법 콘텐츠, 뉴스 및 기능 발표를 원하시나요? X 에서 팔로우하세요.
 


Spanish version

El informe SOC 1 invierno 2023 se encuentra disponible actualmente en japonés, coreano y español

Seguimos escuchando a nuestros clientes, reguladores y partes interesadas para comprender sus necesidades en relación con los programas de auditoría, garantía, certificación y acreditación en Amazon Web Services (AWS). Nos enorgullece anunciar que, por primera vez, un informe de controles de sistema y organización (SOC) 1 de AWS se encuentra disponible en japonés y coreano, junto con la versión en español. Estos informes traducidos ayudarán a impulsar un mayor compromiso y alineación con los requisitos normativos y de los clientes en Japón, Corea, Latinoamérica y España.

Estas versiones del informe en japonés, coreano y español no contienen la opinión independiente emitida por los auditores, pero se puede acceder a esta información en la versión en inglés del documento. Las partes interesadas deben usar la versión en inglés como complemento de las versiones en japonés, coreano y español.

De aquí en adelante, los siguientes informes trimestrales estarán traducidos. Dado que los controles SOC 1 se incluyen en los informes de primavera y otoño de SOC 2, esta programación brinda una cobertura anual para todos los idiomas traducidos cuando se la combina con las versiones en inglés.

  • SOC 1 invierno (del 1/1 al 31/12)
  • SOC 2 primavera (del 1/4 al 31/3)
  • SOC 1 verano (del 1/7 al 30/6)
  • SOC 2 otoño (del 1/10 al 30/9)

Los clientes pueden descargar los informes SOC 1 invierno 2023 traducidos al japonés, coreano y español a través de AWS Artifact, un portal de autoservicio para el acceso bajo demanda a los informes de conformidad de AWS. Inicie sesión en AWS Artifact mediante la Consola de administración de AWS u obtenga más información en Introducción a AWS Artifact.

El informe SOC 1 invierno 2023 incluye un total de 171 servicios que se encuentran dentro del alcance. Para acceder a información actualizada, que incluye novedades sobre cuándo se agregan nuevos servicios, consulte los Servicios de AWS en el ámbito del programa de conformidad y seleccione SOC.

AWS se esfuerza de manera continua por añadir servicios dentro del alcance de sus programas de conformidad para ayudarlo a cumplir con sus necesidades de arquitectura y regulación. Si tiene alguna pregunta o sugerencia sobre la conformidad de los SOC, no dude en comunicarse con su equipo de cuenta de AWS.

Para obtener más información sobre los programas de conformidad y seguridad, consulte los Programas de conformidad de AWS. Como siempre, valoramos sus comentarios y preguntas; de modo que no dude en comunicarse con el equipo de conformidad de AWS a través de la página Contacte con nosotros.

Si tiene comentarios sobre esta publicación, envíelos a través de la sección de comentarios que se encuentra a continuación.

¿Quiere acceder a más contenido instructivo, novedades y anuncios de características de seguridad de AWS? Síganos en X.

Brownell Combs

Brownell is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS. Brownell holds a master’s degree in computer science from the University of Virginia and a bachelor’s degree in computer science from Centre College. He has over 20 years of experience in information technology risk management and holds CISSP, CISA, CRISC, and GIAC GCLD certifications.

Rodrigo Fiuza

Rodrigo is a Security Audit Manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo has worked in risk management, security assurance, and technology audits for the past 12 years.

Paul Hong

Paul is a Compliance Program Manager at AWS. He leads multiple security, compliance, and training initiatives within AWS and has 9 years of experience in security assurance. Paul is a CISSP, CEH, and CPA, and holds a master’s degree in Accounting Information Systems and a bachelor’s degree in Business Administration from James Madison University, Virginia.

Hwee Hwang

Hwee is an Audit Specialist at AWS based in Seoul, South Korea. Hwee is responsible for third-party and customer audits, certifications, and assessments in Korea. Hwee previously worked in security governance, risk, and compliance. Hwee is laser focused on building customers’ trust and providing them assurance in the cloud.

Tushar Jain

Tushar is a Compliance Program Manager at AWS. He leads multiple security, compliance, and training initiatives within AWS and holds a master’s degree in Business Administration from the Indian Institute of Management, Shillong, India and a bachelor’s degree in Technology in Electronics and Telecommunication Engineering from Marathwada University, India. He also holds CCSK and CSXF certifications.

Eun Jin Kim

Eun Jin is a security assurance professional working as the Audit Program Manager at AWS. She mainly leads compliance programs in South Korea, in particular for the financial sector. She has more than 25 years of experience and holds a master’s degree in Management Information Systems from Carnegie Mellon University in Pittsburgh, Pennsylvania and a master’s degree in law from George Mason University in Arlington, Virginia.

Michael Murphy

Michael is a Compliance Program Manager at AWS, where he leads multiple security and privacy initiatives. Michael has 12 years of experience in information security. He holds a master’s degree and bachelor’s degree in computer engineering from Stevens Institute of Technology. He also holds CISSP, CRISC, CISA, and CISM certifications.

Nathan Samuel

Nathan is a Compliance Program Manager at AWS, where he leads multiple security and privacy initiatives. Nathan has a bachelor’s degree in Commerce from the University of the Witwatersrand, South Africa and has 21 years’ experience in security assurance. He holds the CISA, CRISC, CGEIT, CISM, CDPSE, and Certified Internal Auditor certifications.

Seulun Sung

Seul Un is a Security Assurance Audit Program Manager at AWS. Seul Un has been leading South Korea audit programs, including K-ISMS and RSEFT, for the past 4 years in AWS. She has a bachelor’s degree in Information Communication and Electronic Engineering degree from Ewha Womans University. She has 14 years of experience in IT risk, compliance, governance, and audit and holds the CISA certification.

Hidetoshi Takeuchi

Hidetoshi is a Senior Audit Program Manager based in Japan, leading the Japan and India security certification and authorization programs. Hidetoshi has been leading information technology, cybersecurity, risk management, compliance, security assurance, and technology audits for the past 28 years and holds the CISSP certification.

Ryan Wilks

Ryan is a Compliance Program Manager at AWS. He leads multiple security and privacy initiatives within AWS. Ryan has 13 years of experience in information security. He has a bachelor of arts degree from Rutgers University and holds ITIL, CISM, and CISA certifications.

Accelerate security automation using Amazon CodeWhisperer

Post Syndicated from Brendan Jenkins original https://aws.amazon.com/blogs/security/accelerate-security-automation-using-amazon-codewhisperer/

In an ever-changing security landscape, teams must be able to quickly remediate security risks. Many organizations look for ways to automate the remediation of security findings that are currently handled manually. Amazon CodeWhisperer is an artificial intelligence (AI) coding companion that generates real-time, single-line or full-function code suggestions in your integrated development environment (IDE) to help you quickly build software. By using CodeWhisperer, security teams can expedite the process of writing security automation scripts for various types of findings that are aggregated in AWS Security Hub, a cloud security posture management (CSPM) service.

In this post, we present some of the current challenges with security automation and walk you through how to use CodeWhisperer, together with Amazon EventBridge and AWS Lambda, to automate the remediation of Security Hub findings. Before reading further, please read the AWS Responsible AI Policy.

Current challenges with security automation

Many approaches to security automation, including Lambda and AWS Systems Manager Automation, require software development skills. Furthermore, manually writing remediation code can be time-consuming for security professionals. To help overcome these challenges, CodeWhisperer serves as a force multiplier for qualified security professionals with development experience, helping them quickly and effectively generate code to remediate security findings.

Security professionals should still cultivate software development skills to implement robust solutions. Engineers should thoroughly review and validate any generated code, as manual oversight remains critical for security.

Solution overview

Figure 1 shows how the findings that Security Hub produces are ingested by EventBridge, which then invokes Lambda functions for processing. The Lambda code is generated with the help of CodeWhisperer.

Figure 1: Diagram of the solution

Security Hub integrates with EventBridge so you can automatically process findings with other services such as Lambda. To begin remediating the findings automatically, you can configure rules to determine where to send findings. This solution will do the following:

  1. Ingest an AWS Security Hub finding into EventBridge.
  2. Use an EventBridge rule to invoke a Lambda function for processing.
  3. Use CodeWhisperer to generate the Lambda function code.

It is important to note that there are two types of automation for Security Hub finding remediation:

  • Partial automation, which is initiated when a human worker selects the Security Hub findings manually and applies the automated remediation workflow to the selected findings.
  • End-to-end automation, which means that when a finding is generated within Security Hub, this initiates an automated workflow to immediately remediate without human intervention.

Important: When you use end-to-end automation, we highly recommend that you thoroughly test the efficiency and impact of the workflow in a non-production environment first before moving forward with implementation in a production environment.

Prerequisites

To follow along with this walkthrough, make sure that you have the following prerequisites in place:

Implement security automation

In this scenario, you have been tasked with making sure that versioning is enabled across all Amazon Simple Storage Service (Amazon S3) buckets in your AWS account. Additionally, you want to do this in a way that is programmatic and automated so that it can be reused in different AWS accounts in the future.

To do this, you will perform the following steps:

  1. Generate the remediation script with CodeWhisperer
  2. Create the Lambda function
  3. Integrate the Lambda function with Security Hub by using EventBridge
  4. Create a custom action in Security Hub
  5. Create an EventBridge rule to target the Lambda function
  6. Run the remediation

Generate a remediation script with CodeWhisperer

The first step is to use VS Code to create a script so that CodeWhisperer generates the code for your Lambda function in Python. You will use this Lambda function to remediate the Security Hub findings generated by the control [S3.14] S3 general purpose buckets should have versioning enabled.

Note: The underlying model of CodeWhisperer is powered by generative AI, and the output of CodeWhisperer is nondeterministic. As such, the code recommended by the service can vary by user. By modifying the initial code comment to prompt CodeWhisperer for a response, customers can change the corresponding output to help meet their needs. Customers should subject all code generated by CodeWhisperer to typical testing and review protocols to verify that it is free of errors and is in line with applicable organizational security policies. To learn about best practices on prompt engineering with CodeWhisperer, see this AWS blog post.

To generate the remediation script

  1. Open a new VS Code window, and then open or create a new folder for your file to reside in.
  2. Create a Python file called cw-blog-remediation.py as shown in Figure 2.
     
    Figure 2: New VS Code file created called cw-blog-remediation.py

  3. Add the following imports to the Python file.
    import json
    import boto3

  4. Because you have the context added to your file, you can now prompt CodeWhisperer by using a natural language comment. In your file, below the import statements, enter the following comment and then press Enter.
    # Create lambda function that turns on versioning for an S3 bucket after the function is triggered from Amazon EventBridge

  5. Accept the first recommendation that CodeWhisperer provides by pressing Tab to use the Lambda function handler, as shown in Figure 3.
    Figure 3: Generation of Lambda handler

  6. To get the recommendation for the function from CodeWhisperer, press Enter. Make sure that the recommendation you receive looks similar to the following. CodeWhisperer is nondeterministic, so its recommendations can vary.
    import json
    import boto3
    
    # Create lambda function that turns on versioning for an S3 bucket after function is triggered from Amazon EventBridge
    def lambda_handler(event, context):
        s3 = boto3.client('s3')
        bucket = event['detail']['requestParameters']['bucketName']
        response = s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={
                'Status': 'Enabled'
            }
        )
        print(response)
        return {
            'statusCode': 200,
            'body': json.dumps('Versioning enabled for bucket ' + bucket)
        }
    

  7. Take a moment to review the user actions and keyboard shortcut keys. Press Tab to accept the recommendation.
  8. You can change the function body to fit your use case. To get the Amazon Resource Name (ARN) of the S3 bucket from the EventBridge event, replace the bucket variable with the following line:
    bucket = event['detail']['findings'][0]['Resources'][0]['Id']

  9. To prompt CodeWhisperer to extract the bucket name from the bucket ARN, use the following comment:
    # Take the S3 bucket name from the ARN of the S3 bucket

    Your function code should look similar to the following:

    import json
    import boto3
    
    # Create lambda function that turns on versioning for an S3 bucket after function is triggered from Amazon EventBridge
    def lambda_handler(event, context):
        s3 = boto3.client('s3')
        bucket = event['detail']['findings'][0]['Resources'][0]['Id']
        # Take the S3 bucket name from the ARN of the S3 bucket
        bucket = bucket.split(':')[5]
    
        response = s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={
                'Status': 'Enabled'
            }
        )
        print(response)
        return {
            'statusCode': 200,
            'body': json.dumps('Versioning enabled for bucket ' + bucket)
        }
    

  10. Create a .zip file for cw-blog-remediation.py. Find the file in your local file manager, right-click the file, and select compress/zip. You will use this .zip file in the next section of the post. If you want to sanity-check the handler locally before packaging it, see the test sketch after this list.
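Before you package and upload the script, you can optionally exercise the handler locally with a stubbed S3 client so that no real AWS call is made. The following sketch is illustrative only: it assumes the handler above was saved as cw-blog-remediation.py in your working directory and that boto3 is installed locally, and the sample event is trimmed to just the fields the handler reads (the bucket name is a hypothetical example).

    import importlib.util
    import json
    from unittest import mock

    # Load the handler module from its hyphenated file name.
    spec = importlib.util.spec_from_file_location("remediation", "cw-blog-remediation.py")
    remediation = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(remediation)

    # Sample event shaped like the Security Hub custom action fields the handler uses.
    sample_event = {
        "detail": {
            "findings": [
                {"Resources": [{"Id": "arn:aws:s3:::amzn-s3-demo-bucket"}]}
            ]
        }
    }

    # Stub boto3.client so put_bucket_versioning is recorded instead of called.
    with mock.patch.object(remediation.boto3, "client") as mocked_client:
        result = remediation.lambda_handler(sample_event, None)
        mocked_client.return_value.put_bucket_versioning.assert_called_once_with(
            Bucket="amzn-s3-demo-bucket",
            VersioningConfiguration={"Status": "Enabled"},
        )

    print(json.dumps(result, indent=2))

If the assertion passes, the ARN parsing and the parameters passed to Amazon S3 behave as expected before you create the .zip file.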

Create the Lambda function

The next step is to use the automation script that you generated to create the Lambda function that will enable versioning on applicable S3 buckets.

To create the Lambda function

  1. Open the AWS Lambda console.
  2. In the left navigation pane, choose Functions, and then choose Create function.
  3. Select Author from scratch and provide the following configurations for the function:
    1. For Function name, enter sec_remediation_function.
    2. For Runtime, select Python 3.12.
    3. For Architecture, select x86_64.
    4. For Permissions, select Create a new role with basic Lambda permissions.
  4. Choose Create function.
  5. To upload your local code to Lambda, select Upload from and then .zip file, and then upload the file that you zipped.
  6. Verify that you created the Lambda function successfully. In the Code source section of Lambda, you should see the code from the automation script displayed in a new tab, as shown in Figure 4.
     
    Figure 4: Source code that was successfully uploaded

  7. Choose the Code tab.
  8. Scroll down to the Runtime settings pane and choose Edit.
  9. For Handler, enter cw-blog-remediation.lambda_handler for your function handler, and then choose Save, as shown in Figure 5.
     
    Figure 5: Updated Lambda handler

  10. To follow the principle of least privilege, you should also add an inline policy to the Lambda function’s role that grants only the permission needed to enable versioning on S3 buckets.
    1. In the Lambda console, navigate to the Configuration tab and then, in the left navigation pane, choose Permissions. Choose the Role name, as shown in Figure 6.
       
      Figure 6: Lambda role in the AWS console

    2. In the Add permissions dropdown, select Create inline policy.
       
      Figure 7: Create inline policy

    3. Choose JSON, add the following policy to the policy editor, and then choose Next.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "VisualEditor0",
                  "Effect": "Allow",
                  "Action": "s3:PutBucketVersioning",
                  "Resource": "*"
              }
          ]
      }

    4. Name the policy PutBucketVersioning and choose Create policy. If you prefer to script this step instead, see the sketch after this list.
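If you prefer to attach this least-privilege policy with a script rather than the console, the following boto3 sketch does the equivalent. It is illustrative only: the role name is a placeholder, because Lambda generates a unique basic execution role name that you can look up on the function's Configuration > Permissions tab.

    import json
    import boto3

    iam = boto3.client("iam")

    # Placeholder: replace with the execution role that Lambda created for the function.
    ROLE_NAME = "sec_remediation_function-role-example"

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "s3:PutBucketVersioning",
                "Resource": "*",
            }
        ],
    }

    # Attach the same inline policy shown above to the function's execution role.
    iam.put_role_policy(
        RoleName=ROLE_NAME,
        PolicyName="PutBucketVersioning",
        PolicyDocument=json.dumps(policy_document),
    )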

Create a custom action in Security Hub

In this step, you will create a custom action in Security Hub.

To create the custom action

  1. Open the Security Hub console.
  2. In the left navigation pane, choose Settings, and then choose Custom actions.
  3. Choose Create custom action.
  4. Provide the following information, as shown in Figure 8:
    • For Name, enter TurnOnS3Versioning.
    • For Description, enter Action that will turn on versioning for a specific S3 bucket.
    • For Custom action ID, enter TurnOnS3Versioning.
       
      Figure 8: Create a custom action in Security Hub

  5. Choose Create custom action.
  6. Make a note of the Custom action ARN. You will need this ARN when you create a rule to associate with the custom action in EventBridge. If you prefer to create the custom action programmatically, see the sketch after this list.
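If you want to script the custom action instead of using the console, Security Hub exposes the CreateActionTarget API (create_action_target in boto3). The sketch below is an illustrative alternative to the console steps above and prints the ARN you will need in the next section.

    import boto3

    securityhub = boto3.client("securityhub")

    # Create the same custom action described in the console steps above.
    response = securityhub.create_action_target(
        Name="TurnOnS3Versioning",
        Description="Action that will turn on versioning for a specific S3 bucket",
        Id="TurnOnS3Versioning",
    )

    # Save this ARN; the EventBridge rule in the next section filters on it.
    print(response["ActionTargetArn"])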

Create an EventBridge rule to target the Lambda function

The next step is to create an EventBridge rule to capture the custom action. You will define an EventBridge rule that matches events (in this case, findings) from Security Hub that were forwarded by the custom action that you defined previously.

To create the EventBridge rule

  1. Navigate to the EventBridge console.
  2. On the right side, choose Create rule.
  3. On the Define rule detail page, give your rule a name and description that represents the rule’s purpose—for example, you could use the same name and description that you used for the custom action. Then choose Next.
  4. Scroll down to Event pattern, and then do the following:
    1. For Event source, make sure that AWS services is selected.
    2. For AWS service, select Security Hub.
    3. For Event type, select Security Hub Findings – Custom Action.
    4. Select Specific custom action ARN(s) and enter the ARN for the custom action that you created earlier.
       
    Figure 9: Specify the EventBridge event pattern for the Security Hub custom action workflow

    As you provide this information, the Event pattern updates.

  5. Choose Next.
  6. On the Select target(s) step, in the Select a target dropdown, select Lambda function. Then from the Function dropdown, select sec_remediation_function.
  7. Choose Next.
  8. On the Configure tags step, choose Next.
  9. On the Review and create step, choose Create rule. A scripted equivalent of this rule setup is sketched after this list.
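If you would rather script the rule, the sketch below shows an equivalent setup with boto3. It is illustrative only: the custom action ARN and function ARN are placeholders that use an example account ID and Region, and the final add_permission call grants EventBridge permission to invoke the function, which the console configures for you automatically when you add a Lambda target.

    import json
    import boto3

    events = boto3.client("events")
    lambda_client = boto3.client("lambda")

    # Placeholders: substitute your own custom action ARN and Lambda function ARN.
    CUSTOM_ACTION_ARN = "arn:aws:securityhub:us-east-1:111122223333:action/custom/TurnOnS3Versioning"
    FUNCTION_ARN = "arn:aws:lambda:us-east-1:111122223333:function:sec_remediation_function"

    # Match only findings forwarded by the TurnOnS3Versioning custom action.
    event_pattern = {
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Custom Action"],
        "resources": [CUSTOM_ACTION_ARN],
    }

    rule_arn = events.put_rule(
        Name="TurnOnS3Versioning",
        Description="Invoke sec_remediation_function for the TurnOnS3Versioning custom action",
        EventPattern=json.dumps(event_pattern),
        State="ENABLED",
    )["RuleArn"]

    events.put_targets(
        Rule="TurnOnS3Versioning",
        Targets=[{"Id": "sec-remediation-target", "Arn": FUNCTION_ARN}],
    )

    # Allow EventBridge to invoke the Lambda function for this rule.
    lambda_client.add_permission(
        FunctionName="sec_remediation_function",
        StatementId="AllowEventBridgeInvoke",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule_arn,
    )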

Run the automation

Your automation is set up and you can now test the automation. This test covers a partial automation workflow, since you will manually select the finding and apply the remediation workflow to one or more selected findings.

Important: As we mentioned earlier, if you decide to make the automation end-to-end, you should assess the impact of the workflow in a non-production environment. Additionally, you may want to consider creating preventative controls if you want to minimize the risk of event occurrence across an entire environment.

To run the automation

  1. In the Security Hub console, on the Findings tab, add a filter by entering Title in the search box and selecting that filter. Select IS and enter S3 general purpose buckets should have versioning enabled (case sensitive). Choose Apply.
  2. In the filtered list, choose the Title of an active finding.
  3. Before you start the automation, check the current configuration of the S3 bucket to confirm that your automation works. Expand the Resources section of the finding.
  4. Under Resource ID, choose the link for the S3 bucket. This opens a new tab on the S3 console that shows only this S3 bucket.
  5. In your browser, go back to the Security Hub tab (don’t close the S3 tab—you will need to return to it), and on the left side, select this same finding, as shown in Figure 10.
     
    Figure 10: Filter out Security Hub findings to list only S3 bucket-related findings

  6. In the Actions dropdown list, choose the name of your custom action.
     
    Figure 11: Choose the custom action that you created to start the remediation workflow

  7. When you see a banner that displays Successfully started action…, go back to the S3 browser tab and refresh it. Verify that the S3 versioning configuration on the bucket has been enabled as shown in Figure 12.
     
    Figure 12: Versioning successfully enabled

Conclusion

In this post, you learned how to use CodeWhisperer to produce AI-generated code for custom remediations for a security use case. We encourage you to experiment with CodeWhisperer to create Lambda functions that remediate other Security Hub findings that might exist in your account, such as the enforcement of lifecycle policies on S3 buckets with versioning enabled, or using automation to remove multiple unused Amazon EC2 elastic IP addresses. The ability to automatically set public S3 buckets to private is just one of many use cases where CodeWhisperer can generate code to help you remediate Security Hub findings.

To sum up, CodeWhisperer can boost the productivity of security experts who have coding abilities by helping them swiftly write code to address security issues. However, security specialists should continue building their software development capabilities to implement robust solutions. Engineers should carefully review and test any generated code, since human oversight is still vital for security.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Brendan Jenkins

Brendan is a Solutions Architect at AWS who works with enterprise customers, providing them with technical guidance and helping them achieve their business goals. He specializes in DevOps and machine learning (ML) technology.

Chris Shea

Chris is an AWS Solutions Architect serving enterprise customers in the PropTech and AdTech industry verticals, providing guidance and the tools that customers need for success. His areas of interest include AI for DevOps and AI/ML technology.

Tim Manik

Tim is a Solutions Architect at AWS working with enterprise customers on migrations and modernizations. He specializes in cybersecurity and AI/ML and is passionate about bridging the gap between the two fields.

Angel Tolson

Angel is a Solutions Architect at AWS working with small to medium size businesses, providing them with technical guidance and helping them achieve their business goals. She is particularly interested in cloud operations and networking.

The curious case of faster AWS KMS symmetric key rotation

Post Syndicated from Jeremy Stieglitz original https://aws.amazon.com/blogs/security/the-curious-case-of-faster-aws-kms-symmetric-key-rotation/

Today, AWS Key Management Service (AWS KMS) is introducing faster options for automatic symmetric key rotation. We’re also introducing on-demand rotation, rotation visibility improvements, and a new price cap for all symmetric keys that have had two or more rotations (including existing keys). In this post, I discuss all those capabilities and changes. I also present a broader overview of how symmetric cryptographic key rotation came to be, and cover our recommendations on when you might need rotation and how often to rotate your keys. If you’ve ever been curious about AWS KMS automatic key rotation—why it exists, when to enable it, and when to use it on-demand—read on.

How we got here

There are longstanding reasons for cryptographic key rotation. If you were Caesar in Roman times and you needed to send messages with sensitive information to your regional commanders, you might use keys and ciphers to encrypt and protect your communications. There are well-documented examples of using cryptography to protect communications during this time, so much so that the standard substitution cipher, where you swap each letter for a different letter that is a set number of letters away in the alphabet, is referred to as Caesar’s cipher. The cipher is the substitution mechanism, and the key is the number of positions each letter is shifted in the alphabet to produce the ciphertext.

The challenge for Caesar in relying on this kind of symmetric key cipher is that both sides (Caesar and his field generals) needed to share keys and keep those keys safe from prying eyes. What happens to Caesar’s secret invasion plans if the key used to encipher his attack plan was secretly intercepted in transmission down the Appian Way? Caesar had no way to know. But if he rotated keys, he could limit the scope of which messages could be read, thus limiting his risk. Messages sent under a key created in the year 52 BCE wouldn’t automatically work for messages sent the following year, provided that Caesar rotated his keys yearly and the newer keys weren’t accessible to the adversary. Key rotation can reduce the scope of data exposure (what a threat actor can see) when some but not all keys are compromised. Of course, every time the key changed, Caesar had to send messengers to his field generals to communicate the new key. Those messengers had to ensure that no enemies intercepted the new keys without their knowledge – a daunting task.

Figure 1: The state of the art for secure key rotation and key distribution in 52 BCE.

Fast forward to the 1970s–2000s

In modern times, cryptographic algorithms designed for digital computer systems mean that keys no longer travel down the Appian Way. Instead, they move around digital systems, are stored in unprotected memory, and sometimes are printed for convenience. The risk of key leakage still exists; therefore, there is a need for key rotation. During this period, more significant security protections were developed that use both software and hardware technology to protect digital cryptographic keys and reduce the need for rotation. The highest-level protections offered by these techniques can limit keys to specific devices where they can never leave as plaintext. In fact, the US National Institute of Standards and Technology (NIST) has published a specific security standard, FIPS 140, which addresses the security requirements for these cryptographic modules.

Modern cryptography also has the risk of cryptographic key wear-out

Besides addressing risks from key leakage, key rotation has a second important benefit that becomes more pronounced in the digital era of modern cryptography—cryptographic key wear-out. A key can become weaker, or “wear out,” over time just by being used too many times. If you encrypt enough data under one symmetric key, and if a threat actor acquires enough of the resulting ciphertext, they can perform analysis against your ciphertext that will leak information about the key. Current cryptographic recommendations to protect against key wear-out can vary depending on how you’re encrypting data, the cipher used, and the size of your key. However, even a well-designed AES-GCM implementation with robust initialization vectors (IVs) and large key size (256 bits) should be limited to encrypting no more than 4.3 billion messages (2³²), where each message is limited to about 64 GiB under a single key.

Figure 2: Used enough times, keys can wear out.
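As a back-of-the-envelope illustration of that ceiling, the short snippet below multiplies the message limit by the per-message size from the guidance above. The arithmetic is only illustrative, but it shows how much data a single well-implemented AES-GCM key could protect before wear-out becomes a practical concern.

    # Rough wear-out ceiling for a single AES-GCM key, per the figures above.
    MAX_MESSAGES = 2 ** 32             # about 4.3 billion messages
    MAX_MESSAGE_BYTES = 64 * 2 ** 30   # about 64 GiB per message

    total_bytes = MAX_MESSAGES * MAX_MESSAGE_BYTES
    print(f"Messages under one key: {MAX_MESSAGES:,}")
    print(f"Total data under one key: {total_bytes // 2 ** 60} EiB")  # 256 EiB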

During the early 2000s, to help federal agencies and commercial enterprises navigate key rotation best practices, NIST formalized several of the best practices for cryptographic key rotation in the NIST SP 800-57 Recommendation for Key Management standard. It’s an excellent read overall and I encourage you to examine Section 5.3 in particular, which outlines ways to determine the appropriate length of time (the cryptoperiod) that a specific key should be relied on for the protection of data in various environments. According to the guidelines, the following are some of the benefits of setting cryptoperiods (and rotating keys within these periods):

5.3 Cryptoperiods

A cryptoperiod is the time span during which a specific key is authorized for use by legitimate entities or the keys for a given system will remain in effect. A suitably defined cryptoperiod:

  1. Limits the amount of information that is available for cryptanalysis to reveal the key (e.g. the number of plaintext and ciphertext pairs encrypted with the key);
  2. Limits the amount of exposure if a single key is compromised;
  3. Limits the use of a particular algorithm (e.g., to its estimated effective lifetime);
  4. Limits the time available for attempts to penetrate physical, procedural, and logical access mechanisms that protect a key from unauthorized disclosure;
  5. Limits the period within which information may be compromised by inadvertent disclosure of a cryptographic key to unauthorized entities; and
  6. Limits the time available for computationally intensive cryptanalysis.

Sometimes, cryptoperiods are defined by an arbitrary time period or maximum amount of data protected by the key. However, trade-offs associated with the determination of cryptoperiods involve the risk and consequences of exposure, which should be carefully considered when selecting the cryptoperiod (see Section 5.6.4).

(Source: NIST SP 800-57 Recommendation for Key Management, page 34).

One of the challenges in applying this guidance to your own use of cryptographic keys is that you need to understand the likelihood of each risk occurring in your key management system. This can be even harder to evaluate when you’re using a managed service to protect and use your keys.

Fast forward to the 2010s: Envisioning a key management system where you might not need automatic key rotation

When we set out to build a managed service in AWS in 2014 for cryptographic key management and help customers protect their AWS encryption workloads, we were mindful that our keys needed to be as hardened, resilient, and protected against external and internal threat actors as possible. We were also mindful that our keys needed to have long-term viability and use built-in protections to prevent key wear-out. These two design constructs—that our keys are strongly protected to minimize the risk of leakage and that our keys are safe from wear out—are the primary reasons we recommend you limit key rotation or consider disabling rotation if you don’t have compliance requirements to do so. Scheduled key rotation in AWS KMS offers limited security benefits to your workloads.

Specific to key leakage, AWS KMS keys in their unencrypted, plaintext form cannot be accessed by anyone, even AWS operators. Unlike Caesar’s keys, or even cryptographic keys in modern software applications, keys generated by AWS KMS never exist in plaintext outside of the NIST FIPS 140-2 Security Level 3 fleet of hardware security modules (HSMs) in which they are used. See the related post AWS KMS is now FIPS 140-2 Security Level 3. What does this mean for you? for more information about how AWS KMS HSMs help you prevent unauthorized use of your keys. Unlike many commercial HSM solutions, AWS KMS doesn’t even allow keys to be exported from the service in encrypted form. Why? Because an external actor with the proper decryption key could then expose the KMS key in plaintext outside the service.

This hardened protection of your key material is salient to the principal security reason customers want key rotation. Customers typically envision rotation as a way to mitigate a key leaking outside the system in which it was intended to be used. However, since KMS keys can be used only in our HSMs and cannot be exported, the possibility of key exposure becomes harder to envision. This means that rotating a key as protection against key exposure is of limited security value. The HSMs are still the boundary that protects your keys from unauthorized access, no matter how many times the keys are rotated.

If we decide the risk of plaintext keys leaking from AWS KMS is sufficiently low, don’t we still need to be concerned with key wear-out? AWS KMS mitigates the risk of key wear-out by using a key derivation function (KDF) that generates a unique, derived AES 256-bit key for each individual request to encrypt or decrypt under a 256-bit symmetric KMS key. Those derived encryption keys are different every time, even if you make an identical call for encrypt with the same message data under the same KMS key. The cryptographic details for our key derivation method are provided in the AWS KMS Cryptographic Details documentation, and KDF operations use the KDF in counter mode, using HMAC with SHA256. These KDF operations make cryptographic wear-out substantially different for KMS keys than for keys you would call and use directly for encrypt operations. A detailed analysis of KMS key protections for cryptographic wear-out is provided in the Key Management at the Cloud Scale whitepaper, but the important take-away is that a single KMS key can be used for more than a quadrillion (2⁵⁰) encryption requests without wear-out risk.
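To make the idea of per-request key derivation concrete, here is a conceptual sketch of a counter-mode KDF built on HMAC-SHA256 in the spirit of NIST SP 800-108. It is not AWS KMS's implementation, and the label and context encoding is an assumption made for illustration, but it shows how one long-lived 256-bit key plus per-request context yields a fresh derived key for every request.

    import hashlib
    import hmac

    def derive_key(root_key: bytes, label: bytes, context: bytes, length: int = 32) -> bytes:
        """Counter-mode KDF using HMAC-SHA256 (illustrative, NIST SP 800-108 style)."""
        output = b""
        counter = 1
        length_bits = (length * 8).to_bytes(4, "big")
        while len(output) < length:
            block = counter.to_bytes(4, "big") + label + b"\x00" + context + length_bits
            output += hmac.new(root_key, block, hashlib.sha256).digest()
            counter += 1
        return output[:length]

    # Two requests with different per-request context yield unrelated derived keys.
    root = bytes(32)  # stand-in for a 256-bit key that never leaves the HSM boundary
    k1 = derive_key(root, b"encrypt", b"request-0001")
    k2 = derive_key(root, b"encrypt", b"request-0002")
    assert k1 != k2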

In fact, within the NIST 800-57 guidelines is consideration that when the KMS key (key-wrapping key in NIST language) is used with unique data keys, KMS keys can have longer cryptoperiods:

“In the case of these very short-term key-wrapping keys, an appropriate cryptoperiod (i.e., which includes both the originator and recipient-usage periods) is a single communication session. It is assumed that the wrapped keys will not be retained in their wrapped form, so the originator-usage period and recipient-usage period of a key-wrapping key is the same. In other cases, a key-wrapping key may be retained so that the files or messages encrypted by the wrapped keys may be recovered later. In such cases, the recipient-usage period may be significantly longer than the originator-usage period of the key-wrapping key, and cryptoperiods lasting for years may be employed.”

Source: NIST 800-57 Recommendations for Key Management, section 5.3.6.7.

So why did we build key rotation in AWS KMS in the first place?

Although we advise that key rotation for KMS keys is generally not necessary to improve the security of your keys, you must consider that guidance in the context of your own unique circumstances. You might be required by internal auditors, external compliance assessors, or even your own customers to provide evidence of regular rotation of all keys. A short list of regulatory and standards groups that recommend key rotation includes the aforementioned NIST 800-57, Center for Internet Security (CIS) benchmarks, ISO 27001, System and Organization Controls (SOC) 2, the Payment Card Industry Data Security Standard (PCI DSS), COBIT 5, HIPAA, and the Federal Financial Institutions Examination Council (FFIEC) Handbook, just to name a few.

Customers in regulated industries must consider the entirety of all the cryptographic systems used across their organizations. Taking inventory of which systems incorporate HSM protections, which systems do or don’t provide additional security against cryptographic wear-out, or which programs implement encryption in a robust and reliable way can be difficult for any organization. If a customer doesn’t have sufficient cryptographic expertise in the design and operation of each system, it becomes a safer choice to mandate a uniform scheduled key rotation.

That is why we offer an automatic, convenient method to rotate symmetric KMS keys. Rotation allows customers to demonstrate this key management best practice to their stakeholders instead of having to explain why they chose not to.

Figure 3 details how KMS appends new key material within an existing KMS key during each key rotation.

Figure 3: KMS key rotation process

We designed the rotation of symmetric KMS keys to have low operational impact on both key administrators and builders using those keys. As shown in Figure 3, a keyID configured to rotate will append new key material on each rotation while still retaining the existing key material of previous versions. This append method achieves rotation without having to decrypt and re-encrypt existing data that used a previous version of a key. New encryption requests under a given keyID will use the latest key version, while decrypt requests under that keyID will use the appropriate version. Callers don’t have to name the version of the key they want to use for encrypt/decrypt; AWS KMS manages this transparently.

Some customers assume that a key rotation event should forcibly re-encrypt any data that was ever encrypted under the previous key version. This is not necessary when AWS KMS automatically rotates to use a new key version for encrypt operations. The previous versions of keys required for decrypt operations are still safe within the service.

We’ve offered the ability to automatically schedule an annual key rotation event for many years now. Lately, we’ve heard from some of our customers that they need to rotate keys more frequently than the fixed period of one year. We will address our newly launched capabilities to help meet these needs in the final section of this blog post.

More options for key rotation in AWS KMS (with a price reduction)

After learning how we think about key rotation in AWS KMS, let’s get to the new options we’ve launched in this space:

  • Configurable rotation periods: Previously, when using automatic key rotation, your only option was a fixed annual rotation period. You can now set a rotation period from 90 days to 2,560 days (just over seven years). You can adjust this period at any point to reset the time in the future when rotation will take effect. Existing keys set for rotation will continue to rotate every year.
  • On-demand rotation for KMS keys: In addition to more flexible automatic key rotation, you can now invoke on-demand rotation through the AWS Management Console for AWS KMS, the AWS Command Line Interface (AWS CLI), or the AWS KMS API using the new RotateKeyOnDemand API. You might occasionally need to use on-demand rotation to test workloads, or to verify and prove key rotation events to internal or external stakeholders. Invoking an on-demand rotation won’t affect the timeline of any upcoming rotation scheduled for this key. A scripting sketch of these rotation options follows this list.

    Note: We’ve set a default quota of 10 on-demand rotations for a KMS key. Although the need for on-demand key rotation should be infrequent, you can ask to have this quota raised. If you have a repeated need for testing or validating instant key rotation, consider deleting the test keys and repeating this operation for RotateKeyOnDemand on new keys.

  • Improved visibility: You can now use the AWS KMS console or the new ListKeyRotations API to view previous key rotation events. One of the challenges in the past is that it’s been hard to validate that your KMS keys have rotated. Now, every previous rotation for a KMS key that has had a scheduled or on-demand rotation is listed in the console and available via API.
     
    Figure 4: Key rotation history showing date and type of rotation

  • Price cap for keys with more than two rotations: We’re also introducing a price cap for automatic key rotation. Previously, each annual rotation of a KMS key added $1 per month to the price of the key. Now, for KMS keys that you rotate automatically or on-demand, the first and second rotation of the key adds $1 per month in cost (prorated hourly), but this price increase is capped at the second rotation. Rotations after your second rotation aren’t billed. Existing customers that have keys with three or more annual rotations will see a price reduction for those keys to $3 per month (prorated) per key starting in May 2024.
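These options map directly onto AWS KMS API operations. The following boto3 sketch is illustrative only: the key ID is a placeholder, and the method and response field names assume that boto3 mirrors the API names given above (EnableKeyRotation with a rotation period, RotateKeyOnDemand, and ListKeyRotations).

    import boto3

    kms = boto3.client("kms")
    KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder symmetric KMS key ID

    # Turn on automatic rotation with a custom period (90 to 2,560 days).
    kms.enable_key_rotation(KeyId=KEY_ID, RotationPeriodInDays=180)

    # Trigger an immediate, on-demand rotation (subject to the quota noted above).
    kms.rotate_key_on_demand(KeyId=KEY_ID)

    # Review the rotation history for the key.
    for rotation in kms.list_key_rotations(KeyId=KEY_ID)["Rotations"]:
        print(rotation.get("RotationDate"), rotation.get("RotationType"))

Because rotation appends new key material rather than replacing it, ciphertext produced before either rotation continues to decrypt under the same key ID.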

Summary

In this post, I highlighted the more flexible options that are now available for key rotation in AWS KMS and took a broader look into why key rotation exists. We know that many customers have compliance needs to demonstrate key rotation everywhere, and increasingly, to demonstrate faster or immediate key rotation. With the new reduced pricing and more convenient ways to verify key rotation events, we hope these new capabilities make your job easier.

Flexible key rotation capabilities are now available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about this new capability, see the Rotating AWS KMS keys topic in the AWS KMS Developer Guide.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Jeremy Stieglitz

Jeremy is the Principal Product Manager for AWS KMS, where he drives global product strategy and roadmap. Jeremy has more than 25 years of experience defining security products and platforms across large companies (RSA, Entrust, Cisco, and Imperva) and start-up environments (Dataguise, Voltage, and Centrify). Jeremy is the author or co-author of 23 patents in network security, user authentication, and network automation and control.

Detecting and remediating inactive user accounts with Amazon Cognito

Post Syndicated from Harun Abdi original https://aws.amazon.com/blogs/security/detecting-and-remediating-inactive-user-accounts-with-amazon-cognito/

For businesses, particularly those in highly regulated industries, managing user accounts isn’t just a matter of security but also a compliance necessity. In sectors such as finance, healthcare, and government, where regulations often mandate strict control over user access, disabling stale user accounts is a key compliance activity. In this post, we show you a solution that uses serverless technologies to track and disable inactive user accounts. While this process is particularly relevant for those in regulated industries, it can also be beneficial for other organizations looking to maintain a clean and secure user base.

The solution focuses on identifying inactive user accounts in Amazon Cognito and automatically disabling them. Disabling a user account in Cognito effectively restricts the user’s access to applications and services linked with the Amazon Cognito user pool. After their account is disabled, the user cannot sign in, their access tokens are revoked, and they are unable to perform API operations that require user authentication. However, the user’s data and profile within the Cognito user pool remain intact. If necessary, the account can be re-enabled, allowing the user to regain access and functionality.

While the solution focuses on the example of a single Amazon Cognito user pool in a single account, you also learn considerations for multi-user pool and multi-account strategies.

Solution overview

In this section, you learn how to configure an AWS Lambda function that captures the latest sign-in records of users authenticated by Amazon Cognito and writes this data to an Amazon DynamoDB table. A time-to-live (TTL) value is set on each of these records based on the user inactivity threshold parameter defined when deploying the solution. This TTL represents the maximum period a user can go without signing in before their account is disabled. As these items reach their TTL expiry in DynamoDB, a second Lambda function is invoked to process the expired items and disable the corresponding user accounts in Cognito. For example, if the user inactivity threshold is configured as 7 days, the accounts of users who don’t sign in within 7 days of their last sign-in are disabled. Figure 1 shows an overview of the process.

Note: This solution functions as a background process and doesn’t disable user accounts in real time. This is because DynamoDB Time to Live (TTL) is designed for efficiency and to remain within the constraints of the Amazon Cognito quotas. Set your users’ and administrators’ expectations accordingly, acknowledging that there might be a delay before changes take effect.

Figure 1: Architecture diagram for tracking user activity and disabling inactive Amazon Cognito users

As shown in Figure 1, this process involves the following steps:

  1. An application user signs in by authenticating to Amazon Cognito.
  2. Upon successful user authentication, Cognito initiates a post authentication Lambda trigger invoking the PostAuthProcessorLambda function.
  3. The PostAuthProcessorLambda function puts an item in the LatestPostAuthRecordsDDB DynamoDB table (see the sketch after this list) with the following attributes:
    1. sub: A unique identifier for the authenticated user within the Amazon Cognito user pool.
    2. timestamp: The time of the user’s latest sign-in, formatted in UTC ISO standard.
    3. username: The authenticated user’s Cognito username.
    4. userpool_id: The identifier of the user pool to which the user authenticated.
    5. ttl: The expiration time, expressed as a Unix epoch timestamp in seconds, after which DynamoDB deletes the record and account deactivation is initiated.
  4. Items in the LatestPostAuthRecordsDDB DynamoDB table are automatically purged upon reaching their TTL expiry, emitting delete events to DynamoDB Streams.
  5. DynamoDB Streams events are filtered to allow invocation of the DDBStreamProcessorLambda function only for TTL deleted items.
  6. The DDBStreamProcessorLambda function runs to disable the corresponding user accounts in Cognito.
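
For illustration, a minimal sketch of what the PostAuthProcessorLambda handler might look like in Python follows. The environment variable names (LATEST_POST_AUTH_TABLE and USER_INACTIVE_THRESHOLD_DAYS) are hypothetical, and the function deployed by the CloudFormation template may differ in its details:

import os
import time
from datetime import datetime, timezone

import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical environment variables; the deployed function may use different names
table = dynamodb.Table(os.environ["LATEST_POST_AUTH_TABLE"])
threshold_days = int(os.environ["USER_INACTIVE_THRESHOLD_DAYS"])

def lambda_handler(event, context):
    """Post authentication trigger: record the user's latest sign-in with a TTL."""
    table.put_item(
        Item={
            "sub": event["request"]["userAttributes"]["sub"],
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "username": event["userName"],
            "userpool_id": event["userPoolId"],
            # Epoch seconds after which DynamoDB TTL deletes the item and
            # account deactivation is initiated
            "ttl": int(time.time()) + threshold_days * 24 * 60 * 60,
        }
    )
    # A post authentication trigger must return the event object to Cognito
    return event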

Implementation details

In this section, you’re guided through deploying the solution, integrating it with your existing Amazon Cognito user pool, and exploring the solution in more detail.

Note: This solution begins tracking user activity from the moment of its deployment. It can’t retroactively track or manage user activities that occurred prior to its implementation. To make sure the solution disables currently inactive users within the first TTL period after deployment, do a one-time preload of those users into the DynamoDB table. If this isn’t done, currently inactive users won’t be detected, because users are detected only as they sign in. For the same reason, users who create accounts but never sign in won’t be detected either. To detect user accounts that sign up but never sign in, implement a post confirmation Lambda trigger that processes user sign-up records and writes them to the DynamoDB table.

Prerequisites

Before deploying this solution, you must have the following prerequisites in place:

  • An existing Amazon Cognito user pool. This user pool is the foundation upon which the solution operates. If you don’t have a Cognito user pool set up, you must create one before proceeding. See Creating a user pool.
  • The ability to launch an AWS CloudFormation template in your AWS environment. The template provisions the necessary AWS services, including Lambda functions, a DynamoDB table, and AWS Identity and Access Management (IAM) roles that are integral to the solution. The template simplifies the deployment process, allowing you to set up the entire solution with minimal manual configuration. You must have the necessary permissions in your AWS account to launch CloudFormation stacks and provision these services.

To deploy the solution

  1. Choose the following Launch Stack button to deploy the solution’s CloudFormation template:

    Launch Stack

    The solution deploys in the AWS US East (N. Virginia) Region (us-east-1) by default. To deploy the solution in a different Region, use the Region selector in the console navigation bar and make sure that the services required for this walkthrough are supported in your newly selected Region. For service availability by Region, see AWS Services by Region.

  2. On the Quick Create Stack screen, do the following:
    1. Specify the stack details.
      1. Stack name: The stack name is an identifier that helps you find a particular stack from a list of stacks. A stack name can contain only alphanumeric characters (case sensitive) and hyphens. It must start with an alphabetic character and can’t be longer than 128 characters.
      2. CognitoUserPoolARNs: A comma-separated list of Amazon Cognito user pool Amazon Resource Names (ARNs) to monitor for inactive users.
      3. UserInactiveThresholdDays: Time (in days) that the user account is allowed to be inactive before it’s disabled.
    2. Scroll to the bottom, and in the Capabilities section, select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
    3. Choose Create Stack.

Integrate with your existing user pool

With the CloudFormation template deployed, you can set up Lambda triggers in your existing user pool. This is a key step for tracking user activity.

Note: This walkthrough uses the new AWS Management Console experience. Alternatively, these steps can also be done using CloudFormation.

To integrate with your existing user pool

  1. Navigate to the Amazon Cognito console and select your user pool.
  2. Navigate to User pool properties.
  3. Under Lambda triggers, choose Add Lambda trigger. Select the Authentication radio button, then add a Post authentication trigger and assign the PostAuthProcessorLambda function.

Note: Amazon Cognito allows you to set up one Lambda trigger per event. If you already have a configured post authentication Lambda trigger, you can refactor the existing Lambda function, adding new features directly to minimize the cold starts associated with invoking additional functions (for more information, see Anti-patterns in Lambda-based applications). Keep in mind that when Cognito calls your Lambda function, the function must respond within 5 seconds. If it doesn’t and if the call can be retried, Cognito retries the call. After three unsuccessful attempts, the function times out. You can’t change this 5-second timeout value.

Figure 2: Add a post-authentication Lambda trigger and assign a Lambda function

When you add a Lambda trigger in the Amazon Cognito console, Cognito adds a resource-based policy to your function that permits your user pool to invoke the function. When you create a Lambda trigger outside of the Cognito console, including a cross-account function, you must add permissions to the resource-based policy of the Lambda function. Your added permissions must allow Cognito to invoke the function on behalf of your user pool. You can add permissions from the Lambda console or use the Lambda AddPermission API operation. To configure this in CloudFormation, you can use the AWS::Lambda::Permission resource.
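
For example, if you configure the trigger outside the console, a hedged Boto3 sketch of granting that permission might look like the following; the function name, statement ID, and user pool ARN are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Allow the Cognito user pool to invoke the post authentication Lambda function
lambda_client.add_permission(
    FunctionName="PostAuthProcessorLambda",  # placeholder function name
    StatementId="AllowCognitoInvoke",        # placeholder statement ID
    Action="lambda:InvokeFunction",
    Principal="cognito-idp.amazonaws.com",
    SourceArn="arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE",  # placeholder user pool ARN
)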

Explore the solution

The solution should now be operational. It’s configured to begin monitoring user sign-in activities and automatically disable inactive user accounts according to the user inactivity threshold. Use the following procedures to test the solution:

Note: When testing the solution, you can set the UserInactiveThresholdDays CloudFormation parameter to 0. This minimizes the time it takes for user accounts to be disabled.

Step 1: User authentication

  1. Create a user account (if one doesn’t exist) in the Amazon Cognito user pool integrated with the solution.
  2. Authenticate to the Cognito user pool integrated with the solution.
     
    Figure 3: Example user signing in to the Amazon Cognito hosted UI

Step 2: Verify the sign-in record in DynamoDB

Confirm the sign-in record was successfully put in the LatestPostAuthRecordsDDB DynamoDB table.

  1. Navigate to the DynamoDB console.
  2. Select the LatestPostAuthRecordsDDB table.
  3. Select Explore Table Items.
  4. Locate the sign-in record associated with your user.
     
Figure 4: Locating the sign-in record associated with the signed-in user

Step 3: Confirm user deactivation in Amazon Cognito

After the TTL expires, validate that the user account is disabled in Amazon Cognito.

  1. Navigate to the Amazon Cognito console.
  2. Select the relevant Cognito user pool.
  3. Under Users, select the specific user.
  4. Verify the Account status in the User information section.
     
Figure 5: Screenshot of the user that signed in with their account status set to disabled

Note: TTL typically deletes expired items within a few days. Depending on the size and activity level of a table, the actual delete operation of an expired item can vary. TTL deletes items on a best effort basis, and deletion might take longer in some cases.

The user’s account is now disabled. A disabled user account can’t be used to sign in, but still appears in the responses to GetUser and ListUsers API requests.

Design considerations

In this section, you dive deeper into the key components of this solution.

DynamoDB schema configuration

The DynamoDB schema has the Amazon Cognito sub attribute as the partition key. The Cognito sub is a globally unique user identifier within Cognito user pools that cannot be changed. This configuration ensures each user has a single entry in the table, even if the solution is configured to track multiple user pools. See Other considerations for more about tracking multiple user pools.
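
The CloudFormation template provisions the table for you, but for illustration, a minimal Boto3 sketch of an equivalent table definition (keyed on sub, with a stream and TTL enabled on the ttl attribute) might look like the following; the table name and stream view type are assumptions:

import boto3

dynamodb = boto3.client("dynamodb")

# Create a table keyed on the Cognito sub, so each user has exactly one item
dynamodb.create_table(
    TableName="LatestPostAuthRecordsDDB",  # placeholder table name
    AttributeDefinitions=[{"AttributeName": "sub", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "sub", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "OLD_IMAGE",  # deleted items are delivered with their previous attributes
    },
)

# Enable TTL on the ttl attribute after the table becomes active
dynamodb.get_waiter("table_exists").wait(TableName="LatestPostAuthRecordsDDB")
dynamodb.update_time_to_live(
    TableName="LatestPostAuthRecordsDDB",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "ttl"},
)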

Using DynamoDB Streams and Lambda to disable TTL deleted users

This solution uses DynamoDB TTL and DynamoDB Streams alongside Lambda to process user sign-in records. The TTL feature automatically deletes items past their expiration time without write throughput consumption. The deleted items are captured by DynamoDB Streams and processed using Lambda. You also apply event filtering within the Lambda event source mapping, ensuring that the DDBStreamProcessorLambda function is invoked exclusively for TTL-deleted items (see the following code example for the JSON filter pattern). This approach reduces invocations of the Lambda functions, simplifies code, and reduces overall cost.

{
    "Filters": [
        {
            "Pattern": { "userIdentity": { "type": ["Service"], "principalId": ["dynamodb.amazonaws.com"] } }
        }
    ]
}
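
The CloudFormation template configures this filter for you; as an illustration only, a Boto3 sketch of attaching it when creating the event source mapping might look like the following, where the stream ARN and function name are placeholders:

import json
import boto3

lambda_client = boto3.client("lambda")

# Connect the DynamoDB stream to DDBStreamProcessorLambda, filtering for TTL deletions
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:111122223333:table/LatestPostAuthRecordsDDB/stream/2024-05-01T00:00:00.000",  # placeholder stream ARN
    FunctionName="DDBStreamProcessorLambda",  # placeholder function name
    StartingPosition="LATEST",
    BatchSize=25,
    ParallelizationFactor=1,
    FilterCriteria={
        "Filters": [
            {
                "Pattern": json.dumps(
                    {"userIdentity": {"type": ["Service"], "principalId": ["dynamodb.amazonaws.com"]}}
                )
            }
        ]
    },
)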

Handling API quotas

The DDBStreamProcessorLambda function is configured to comply with the AdminDisableUser API’s quota limits. It processes messages in batches of 25, with a parallelization factor of 1. This makes sure that the solution remains within the nonadjustable 25 requests per second (RPS) limit for AdminDisableUser, avoiding potential API throttling. For more details on these limits, see Quotas in Amazon Cognito.
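
A minimal sketch of how the DDBStreamProcessorLambda handler might process the TTL-deleted records follows. It assumes the stream view type includes the old image and that the attribute names match the schema described earlier; error handling and DLQ integration are omitted:

import boto3

cognito_idp = boto3.client("cognito-idp")

def lambda_handler(event, context):
    """Disable the Cognito user for each TTL-deleted sign-in record in the batch."""
    for record in event["Records"]:
        # The deleted item's attributes arrive in the stream record's old image
        old_image = record["dynamodb"]["OldImage"]
        cognito_idp.admin_disable_user(
            UserPoolId=old_image["userpool_id"]["S"],
            Username=old_image["username"]["S"],
        )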

Dead-letter queues

Throughout the architecture, dead-letter queues (DLQs) are used to handle message processing failures gracefully. They make sure that unprocessed records aren’t lost but instead are queued for further inspection and retry.

Other considerations

The following considerations are important for scaling the solution in more complex environments while maintaining its integrity.

Multi-user pool and multi-account deployment

While this post focused on a single Amazon Cognito user pool in a single AWS account, the solution can also function in environments with multiple user pools. This involves deploying the solution and integrating it with each user pool as described in Integrate with your existing user pool. Because the AdminDisableUser API quota limits the maximum volume of requests in one AWS Region in one AWS account, consider deploying the solution separately in each Region of each AWS account to stay within the API limits.

Efficient processing with Amazon SQS

Consider using Amazon Simple Queue Service (Amazon SQS) to add a queue between the PostAuthProcessorLambda function and the LatestPostAuthRecordsDDB DynamoDB table to optimize processing. This approach decouples user sign-in actions from DynamoDB writes, and allows for batching writes to DynamoDB, reducing the number of write requests.

Clean up

Avoid unwanted charges by cleaning up the resources you’ve created. To decommission the solution, follow these steps:

  1. Remove the Lambda trigger from the Amazon Cognito user pool:
    1. Navigate to the Amazon Cognito console.
    2. Select the user pool you have been working with.
    3. Go to the Triggers section within the user pool settings.
    4. Manually remove the association of the Lambda function with the user pool events.
  2. Remove the CloudFormation stack:
    1. Open the CloudFormation console.
    2. Locate and select the CloudFormation stack that was used to deploy the solution.
    3. Delete the stack.
    4. CloudFormation will automatically remove the resources created by this stack, including Lambda functions, Amazon SQS queues, and DynamoDB tables.

Conclusion

In this post, we walked you through a solution to identify and disable stale user accounts based on periods of inactivity. While the example focuses on a single Amazon Cognito user pool, the approach can be adapted for more complex environments with multiple user pools across multiple accounts. For examples of Amazon Cognito architectures, see the AWS Architecture Blog.

Proper planning is essential for seamless integration with your existing infrastructure. Carefully consider factors such as your security environment, compliance needs, and user pool configurations. You can modify this solution to suit your specific use case.

Maintaining clean and active user pools is an ongoing journey. Continue monitoring your systems, optimizing configurations, and keeping up-to-date on new features. Combined with well-architected preventive measures, automated user management systems provide strong defenses for your applications and data.

For further reading, see the AWS Well-Architected Security Pillar and more posts like this one on the AWS Security Blog.

If you have feedback about this post, submit comments in the Comments section. If you have questions about this post, start a new thread on the Amazon Cognito re:Post forum or contact AWS Support.

Harun Abdi

Harun is a Startup Solutions Architect based in Toronto, Canada. Harun loves working with customers across different sectors, supporting them to architect reliable and scalable solutions. In his spare time, he enjoys playing soccer and spending time with friends and family.

Dylan Souvage

Dylan is a Partner Solutions Architect based in Austin, Texas. Dylan loves working with customers to understand their business needs and enable them in their cloud journey. In his spare time, he enjoys going out in nature and going on long road trips.