Tag Archives: Security, Identity & Compliance

Selecting and migrating a Facebook API version for Amazon Cognito

Post Syndicated from James Li original https://aws.amazon.com/blogs/security/selecting-and-migrating-a-facebook-api-version-for-amazon-cognito/

On May 1, 2020, Facebook will remove version 2.12 of the Facebook Graph API. This change impacts Amazon Cognito customers who are using version 2.12 of the Facebook Graph API in their identity federation configuration. In this post, I explain how to migrate your Amazon Cognito configuration to use the latest version of the Facebook API.

Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party, such as Facebook, Amazon, Google, or Apple.

An Amazon Cognito User Pool is a user directory that helps you manage identities. It’s also where users can sign into your web or mobile app. User pools support federation through third-party identity providers, such as Google, Facebook, and Apple, as well as Amazon’s own Login with Amazon. Additionally, federation can use identity providers that work with OpenID Connect (OIDC) or Security Assertion Markup Language (SAML) 2.0. Federating a user through the third-party identity provider streamlines the user experience, because users don’t need to sign up directly for your web or mobile app.

Amazon Cognito User Pools now enable you to select the version of the Facebook API used for federated login. Previously, version 2.12 of Facebook’s Graph API was automatically used for federated login and to retrieve user attributes from Facebook. By selecting a specific version of Facebook’s API, you can upgrade to a newer version, test the changes, and revert to an earlier version if necessary.

To help ease this transition for our customers, we are rolling out the mitigation in two phases. In the first phase, already underway, you can choose which Facebook API version to use for federated login. You can test the new API version, discover the impact upgrading has on your application, and revert to the older version if you need to make changes; you have until May 1, 2020 to perform updates. In the second phase, starting in April, we will automatically migrate customers who haven’t selected an API version to version 5.0.

There are benefits to having access to newer versions of Facebook APIs. For instance, customers who use version 5.0 and store a Facebook access token to call the Messenger API can use webhook events, which is useful when users react or reply to messages from businesses. You can also use business asset groups to manage a large number of assets with Facebook API v4.0 and the Facebook Marketing API.

How to use different Facebook API versions with Amazon Cognito

These instructions assume you’re familiar with Amazon Cognito User Pools and the User Pool clients. You also need a User Pool domain already set up with the appropriate settings for a hosted UI. If you haven’t set up a user pool yet, you can find the instructions in the Amazon Cognito Developer Guide. You need your User Pool domain information when you set up your Facebook app.

Set up the Facebook app

  1. Go to the Facebook for Developers website and sign in, or sign up if you do not have an account. Create a new Facebook app if needed, or reuse an existing one.
  2. Navigate to the App Dashboard and select your App.
  3. On the navigation menu, select Products, then Facebook Login, and then Settings.
  4. In the Valid OAuth Redirect URLs field, add your user pool domain with the endpoint /oauth2/idpresponse. As shown in Figure 1, it should look like https://<yourDomainPrefix>.auth.<region>.amazoncognito.com/oauth2/idpresponse.

    Figure 1

  5. In the navigation menu, select Settings, then choose Basic.
  6. Note your App ID and your App Secret for the next step.

Adding your Facebook app to your Amazon Cognito user pool

Next, you need to add your Facebook app to your user pool. You can do this either through the AWS Management Console or the AWS Command Line Interface (CLI); I will show you both methods.

Adding the Facebook app to a user pool using the AWS Management Console

    1. On the AWS Management Console, navigate to Amazon Cognito, then select Manage Pools. From the list that shows up, select your user pool.
    2. On the navigation menu, select Federation, then Identity Providers.
    3. Select Facebook. Enter the Facebook App ID and App Secret from step 6 above. Then, under Authorize Scopes, enter the appropriate scopes.
    4. In the navigation menu, select Federation and go to Attributes Mapping.
    5. Now select the version of the Facebook API you want to use. By default, the highest available version (v6.0) for newly created Facebook identity providers is pre-selected for you.
    6. After choosing your API version and attribute mapping, click Save.

 

Figure 2

Adding the Facebook app to a user pool through the CLI

The command below adds the Facebook app configuration to your user pool. Use the values for <USER_POOL_ID>, <FACEBOOK_APP_ID>, and <FACEBOOK_APP_SECRET> that you noted earlier:


aws cognito-idp create-identity-provider --cli-input-json '{
    "UserPoolId": "<USER_POOL_ID>",
    "ProviderName": "Facebook",
    "ProviderType": "Facebook",
    "ProviderDetails": {
        "client_id": "<FACEBOOK_APP_ID>",
        "client_secret": "<FACEBOOK_APP_SECRET>",
        "authorize_scopes": "email",
        "api_version": "v5.0"
    },
    "AttributeMapping": {
        "email": "email"
    }
}'

The command below updates the Facebook app configuration in your user pool. Use the values for <USER_POOL_ID>, <FACEBOOK_APP_ID>, and <FACEBOOK_APP_SECRET> that you noted earlier:


aws cognito-idp update-identity-provider --cli-input-json '{
    "UserPoolId": "<USER_POOL_ID>",
    "ProviderName": "Facebook",
    "ProviderType": "Facebook",
    "ProviderDetails": {
        "client_id": "<FACEBOOK_APP_ID>",
        "client_secret": "<FACEBOOK_APP_SECRET>",
        "authorize_scopes": "email",
        "api_version": "v5.0"
    },
    "AttributeMapping": {
        "email": "email"
    }
}'

You can verify that the create or update was successful by checking the version returned in the describe-identity-provider call:


aws cognito-idp describe-identity-provider --user-pool-id "<USER_POOL_ID>" --provider-name "Facebook"
{
    "IdentityProvider": {
        "UserPoolId": "<USER_POOL_ID>",
        "ProviderName": "Facebook",
        "ProviderType": "Facebook",
        "ProviderDetails": {
            "api_version": "v5.0",
            "attributes_url": "https://graph.facebook.com/v5.0/me?fields=",
            "attributes_url_add_attributes": "true",
            "authorize_scopes": "email",
            "authorize_url": "https://www.facebook.com/v5.0/dialog/oauth",
            "client_id": "<FACEBOOK_APP_ID>",
            "client_secret": "<FACEBOOK_APP_SECRET>",
            "token_request_method": "GET",
            "token_url": "https://graph.facebook.com/v5.0/oauth/access_token"
        },
        "AttributeMapping": {
            "email": "email",
            "username": "id"
        },
        ...
    }
}
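
If you only need to confirm the API version, you can optionally filter the output with the AWS CLI --query option. This is a minimal sketch that uses the same placeholder values as above:


# Optional: return only the configured Facebook API version (for example, v5.0)
aws cognito-idp describe-identity-provider \
    --user-pool-id "<USER_POOL_ID>" \
    --provider-name "Facebook" \
    --query "IdentityProvider.ProviderDetails.api_version" \
    --output text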

Use the updated configuration with the Cognito Hosted UI:

  1. In the AWS Management Console for Amazon Cognito, navigate to your user pool and go to the navigation menu. Under App integration, go to App client settings, find your app, and select the Facebook check box under Enabled Identity Providers.
  2. Select Launch Hosted UI.
  3. Select Continue with Facebook.
  4. If you aren’t automatically signed in at this point, the URL displays your selected version. For example, if v5.0 was selected, the URL starts with https://www.facebook.com/v5.0/dialog/oauth. If you would like to disable automatic sign-in, remove your app from Facebook so that the sign-in prompts for permissions again; follow these instructions to learn more. (An example of the full hosted UI authorization request appears after this list.)
  5. If sign-in succeeds, the browser returns to your redirect URL with an authorization code issued by Amazon Cognito.
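
If you want to start Facebook federation from your own client code rather than the Launch Hosted UI button, the request below is a minimal sketch of the hosted UI authorization request. The domain prefix, Region, app client ID, callback URL, and scopes are placeholders that depend on your own user pool and app client configuration.


# Hypothetical example: hosted UI authorization request that sends the user directly to Facebook
# Replace <yourDomainPrefix>, <region>, <APP_CLIENT_ID>, and <CALLBACK_URL> with your own values
# The scope parameter must match scopes that are enabled on your app client
https://<yourDomainPrefix>.auth.<region>.amazoncognito.com/oauth2/authorize?response_type=code&client_id=<APP_CLIENT_ID>&redirect_uri=<CALLBACK_URL>&identity_provider=Facebook&scope=email+openid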

Notes on testing

Facebook redirects your API call to a more recent version if your app is not allowed to call the version you requested. For example, if you created your Facebook app in November 2018, when the latest available version was 3.2, and you call the Graph API using version 3.0, the call is upgraded to version 3.2. You can tell which version was actually used by checking the facebook-api-version header in Facebook’s response headers.
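
For example, the following curl sketch dumps the response headers from a Graph API call so you can inspect the facebook-api-version header; <ACCESS_TOKEN> is a placeholder for a valid Facebook access token.


# Check which Graph API version actually served the request
curl -s -D - -o /dev/null \
  "https://graph.facebook.com/v2.12/me?access_token=<ACCESS_TOKEN>" \
  | grep -i "facebook-api-version"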

If an attribute was not marked as required and is missing from Facebook, federation still succeeds, but the attribute is empty in the user pool. Facebook has deprecated various fields since Facebook federation was launched for Amazon Cognito. For instance, the gender and birthday attributes now must be explicitly requested through their own separate permissions rather than being granted by default, and the cover attribute has been deprecated. You can confirm that an attribute federated successfully by checking the user’s details in the user pools page of the AWS Management Console for Amazon Cognito. As part of your migration, you should validate that the attributes you are working with are passed in the way you expect.

Summary

In this post, I explained how to select the version of Facebook’s Graph API for federated login. If you already use Amazon Cognito for federated login with Facebook, you should migrate to the most recent version as soon as possible. Use this process to make sure you get all the attributes you need for your application. New customers can immediately take advantage of the latest API version.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Cognito Forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

James Li

James is a Software Development Engineer at Amazon Cognito. He values operational excellence and security. James is from Toronto, Canada, where he has worked as a software developer for 4 years.

Amazon Detective – Rapid Security Investigation and Analysis

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-detective-rapid-security-investigation-and-analysis/

Almost five years ago, I blogged about a solution that automatically analyzes AWS CloudTrail data to generate alerts upon sensitive API usage. It was a simple and basic solution for security analysis and automation. But demanding AWS customers have multiple AWS accounts and collect data from multiple sources, and simple searches based on regular expressions are not enough to conduct in-depth analysis of suspected security-related events. Today, when a security issue is detected, such as compromised credentials or unauthorized access to a resource, security analysts cross-analyze several data logs to understand the root cause of the issue and its impact on the environment. In-depth analysis often requires scripting and ETL to connect the dots between data generated by multiple siloed systems. It requires skilled data engineers to answer basic questions such as “is this normal?” Analysts use Security Information and Event Management (SIEM) tools, third-party libraries, and data visualization tools to validate, compare, and correlate data to reach their conclusions. To further complicate matters, new AWS accounts and new applications are constantly introduced, forcing analysts to constantly reestablish baselines of normal behavior and to understand new patterns of activity every time they evaluate a new security issue.

Amazon Detective is a fully managed service that empowers users to automate the heavy lifting involved in processing large quantities of AWS log data to determine the cause and impact of a security issue. Once enabled, Detective automatically begins distilling and organizing data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud (VPC) Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment.

At re:Invent 2019, we announced a preview of Amazon Detective. Today, it is our pleasure to announce its availability for all AWS customers.

Amazon Detective uses machine learning models to produce graphical representations of your account behavior and helps you to answer questions such as “is this an unusual API call for this role?” or “is this spike in traffic from this instance expected?” You do not need to write code or to configure and tune your own queries.

To get started with Amazon Detective, I open the AWS Management Console, I type “detective” in the search bar and I select Amazon Detective from the provided results to launch the service. I enable the service and I let the console guide me to configure “member” accounts to monitor and the “master” account in which to aggregate the data. After this one-time setup, Amazon Detective immediately starts analyzing AWS telemetry data and, within a few minutes, I have access to a set of visual interfaces that summarize my AWS resources and their associated behaviors such as logins, API calls, and network traffic. I search for a finding or resource from the Amazon Detective Search bar and, after a short while, I am able to visualize the baseline and current value for a set of metrics.

I select the resource type and ID and start to browse the various graphs.

I can also investigate an Amazon GuardDuty finding by using the native integrations within the GuardDuty and AWS Security Hub consoles. I click the “Investigate” link from any finding from Amazon GuardDuty and jump directly into the Amazon Detective console, which provides related details, context, and guidance to investigate and to respond to the issue. In the example below, GuardDuty reports an unauthorized access that I decide to investigate:

The Amazon Detective console opens:

I scroll down the page to check the graph of failed API calls. I click a bar in the graph to get the details, such as the IP addresses where the calls originated:

Once I know the source IP addresses, I click New behavior: AWS role and observe where these calls originated from to compare with the automatically discovered baseline.

Amazon Detective works across your AWS accounts: it is a multi-account solution that aggregates data and findings from up to 1,000 AWS accounts into a single security-owned “master” account, making it easy to view behavioral patterns and connections across your entire AWS environment.

There are no agents, sensors, or additional software to deploy in order to use the service. Amazon Detective retrieves, aggregates, and analyzes data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud (VPC) Flow Logs. Amazon Detective collects existing logs directly from AWS without touching your infrastructure, thereby causing no impact to cost or performance.

Amazon Detective can be administered via the AWS Management Console or via the Amazon Detective management APIs. The management APIs enable you to build Amazon Detective into your standard account registration, enablement, and deployment processes.
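
As a minimal sketch of that automation, the AWS CLI calls below enable Amazon Detective in the administrator (“master”) account and invite a member account; the account ID and email address are placeholders, and your enablement workflow may differ.


# Enable Amazon Detective in the master account and capture the behavior graph ARN
GRAPH_ARN=$(aws detective create-graph --query GraphArn --output text)

# Invite a member account into the behavior graph (placeholder account ID and email address);
# the member account must still accept the invitation
aws detective create-members \
    --graph-arn "$GRAPH_ARN" \
    --accounts AccountId=111122223333,EmailAddress=security-team@example.com

# List the behavior graphs owned by this account
aws detective list-graphs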

Amazon Detective is a regional service. I activate the service in every AWS Region in which I want to analyze findings. All data is processed in the AWS Region where it is generated. Amazon Detective maintains data analytics and log summaries in the behavior graph for a 1-year rolling period from the date of log ingestion. This allows for visual analysis and deep dives over a large data set for a long period of time. When I disable the service, all data is expunged to ensure no data remains.

There are no additional charges or upfront commitments required to use Amazon Detective. We charge per GB of data ingested from AWS CloudTrail, Amazon Virtual Private Cloud (VPC) Flow Logs, and Amazon GuardDuty findings. Amazon Detective offers a 30-day free trial. As usual, check the pricing page for the details.

Amazon Detective is available in all commercial AWS Regions, except China. You can start to use it today.

— seb

TLS 1.2 to become the minimum for all AWS FIPS endpoints

Post Syndicated from Janelle Hopper original https://aws.amazon.com/blogs/security/tls-1-2-to-become-the-minimum-for-all-aws-fips-endpoints/

To improve security for data in transit, AWS will update all of our AWS Federal Information Processing Standard (FIPS) endpoints to a minimum of Transport Layer Security (TLS) version 1.2 over the next year. This update will deprecate the ability to use TLS 1.0 and TLS 1.1 on all FIPS endpoints across all AWS Regions by March 31, 2021. No other AWS endpoints are affected by this change.

As outlined in the AWS Shared Responsibility Model, security and compliance is a shared responsibility between AWS and our customers. When a customer makes a connection from their client application to an AWS service endpoint, the client provides its minimum and maximum TLS versions, and the AWS service endpoint selects the highest offered version that it supports.

What should customers do to prepare for this update?

Customers should confirm that their client applications support TLS 1.2 by verifying that TLS 1.2 falls within the client’s configured minimum and maximum TLS versions. We encourage customers to be proactive with security standards in order to avoid any impact to availability and to protect the integrity of their data in transit. We also recommend that configuration changes be tested in a staging environment before they are introduced into production workloads.
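
One quick way to verify a client environment is to force a TLS 1.2 handshake against a FIPS endpoint with a tool such as openssl; the AWS KMS FIPS endpoint below is only an example, so substitute the FIPS endpoint that your application actually uses.


# Confirm that a TLS 1.2 handshake succeeds (example endpoint shown; use your own)
openssl s_client -connect kms-fips.us-east-1.amazonaws.com:443 -tls1_2 < /dev/null 2>/dev/null \
  | grep "Protocol"

# A client restricted to TLS 1.0 or TLS 1.1 will fail the handshake in the command above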

When will these changes happen?

To minimize the impact to our customers who use TLS 1.0 and TLS 1.1, AWS is rolling out changes on a service-by-service basis between now and the end of March 2021. For each service, after a 30-day period during which no TLS 1.0 or TLS 1.1 connections are detected, AWS will deploy a configuration change that removes support for those versions. After March 31, 2021, AWS may update the endpoint configuration to remove TLS 1.0 and TLS 1.1, even if we detect customer connections. Additional reminders will be provided before these updates are final.

What are AWS FIPS endpoints?

All AWS services offer Transport Layer Security (TLS) 1.2 encrypted endpoints that can be used for all API calls. Some AWS services also offer FIPS 140-2 endpoints for customers that require use of FIPS validated cryptographic libraries.

What is Transport Layer Security (TLS)?

Transport Layer Security (TLS) is a cryptographic protocol designed to provide secure communication across a computer network. API calls to AWS services are secured using TLS.

Is there more assistance available to help verify or update client applications?

Customers using an AWS Software Development Kit (AWS SDK) can find information about how to properly configure their client’s minimum and maximum TLS versions in the documentation for their SDK. See Tools to Build on AWS, and browse by programming language to find the relevant SDK.

Additionally, AWS IQ enables customers to find, securely collaborate with, and pay AWS Certified third-party experts for on-demand project work. Visit the AWS IQ page for information about how to submit a request, get responses from experts, and choose the expert with the right skills and experience. Log into your console and select Get Started with AWS IQ to start a request.

The AWS Technical Support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support does not include code development for client applications.

If you have any questions or issues, please start a new thread on one of the AWS Forums, or contact AWS Support or your Technical Account Manager (TAM). If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Sincerely,
Amazon Web Services

Author

Janelle Hopper

Janelle Hopper is a Senior Technical Program Manager in AWS Security with over 15 years of experience in the IT security field. She works with AWS services, infrastructure, and administrative teams to identify and drive innovative solutions that improve AWS’ security posture.

Use AWS Lambda authorizers with a third-party identity provider to secure Amazon API Gateway REST APIs

Post Syndicated from Bryant Bost original https://aws.amazon.com/blogs/security/use-aws-lambda-authorizers-with-a-third-party-identity-provider-to-secure-amazon-api-gateway-rest-apis/

Note: This post focuses on Amazon API Gateway REST APIs used with OAuth 2.0 and custom AWS Lambda authorizers. API Gateway also offers HTTP APIs, which provide native OAuth 2.0 features. For more information about which is right for your organization, see Choosing Between HTTP APIs and REST APIs.

Amazon API Gateway is a fully managed AWS service that simplifies the process of creating and managing REST APIs at any scale. If you are new to API Gateway, check out Amazon API Gateway Getting Started to get familiar with core concepts and terminology. In this post, I will demonstrate how an organization using a third-party identity provider can use AWS Lambda authorizers to implement a standard token-based authorization scheme for REST APIs that are deployed using API Gateway.

In the context of this post, a third-party identity provider refers to an entity that exists outside of AWS and that creates, manages, and maintains identity information for your organization. This identity provider issues cryptographically signed tokens to users containing information about the user identity and their permissions. In order to use these non-AWS tokens to control access to resources within API Gateway, you will need to define custom authorization code using a Lambda function to “map” token characteristics to API Gateway resources and permissions.

Defining custom authorization code is not the only way to implement authorization in API Gateway and ensure resources can only be accessed by the correct users. In addition to Lambda authorizers, API Gateway offers several “native” options that use existing AWS services to control resource access and do not require any custom code. To learn more about the established practices and authorization mechanisms, see Controlling and Managing Access to a REST API in API Gateway.

Lambda authorizers are a good choice for organizations that use third-party identity providers directly (without federation) to control access to resources in API Gateway, or organizations requiring authorization logic beyond the capabilities offered by “native” authorization mechanisms.

Benefits of using third-party tokens with API Gateway

Using a Lambda authorizer with third-party tokens in API Gateway can provide the following benefits:

  • Integration of third-party identity provider with API Gateway: If your organization has already adopted a third-party identity provider, building a Lambda authorizer allows users to access API Gateway resources by using their third-party credentials without having to configure additional services, such as Amazon Cognito. This can be particularly useful if your organization is using the third-party identity provider for single sign-on (SSO).
  • Minimal impact to client applications: If your organization has an application that is already configured to sign in to a third-party identity provider and issue requests using tokens, then minimal changes will be required to use this solution with API Gateway and a Lambda authorizer. By using credentials from your existing identity provider, you can integrate API Gateway resources into your application in the same manner that non-AWS resources are integrated.
  • Flexibility of authorization logic: Lambda authorizers allow for the additional customization of authorization logic, beyond validation and inspection of tokens.

Solution overview

The following diagram shows the authentication/authorization flow for using third-party tokens in API Gateway:

Figure 1: Example Solution Architecture

  1. After a successful login, the third-party identity provider issues an access token to a client.
  2. The client issues an HTTP request to API Gateway and includes the access token in the HTTP Authorization header.
  3. The API Gateway resource forwards the token to the Lambda authorizer.
  4. The Lambda authorizer authenticates the token with the third-party identity provider.
  5. The Lambda authorizer executes the authorization logic and creates an identity management policy.
  6. API Gateway evaluates the identity management policy against the API Gateway resource that the user requested and either allows or denies the request. If allowed, API Gateway forwards the user request to the API Gateway resource.

Prerequisites

To build the architecture described in the solution overview, you will need the following:

  • An identity provider: Lambda authorizers can work with any type of identity provider and token format. The post uses a generic OAuth 2.0 identity provider and JSON Web Tokens (JWT).
  • An API Gateway REST API: You will eventually configure this REST API to rely on the Lambda authorizer for access control.
  • A means of retrieving tokens from your identity provider and calling API Gateway resources: This can be a web application, a mobile application, or any application that relies on tokens for accessing API resources.

For the REST API in this example, I use API Gateway with a mock integration. To create this API yourself, you can follow the walkthrough in Create a REST API with a Mock Integration in Amazon API Gateway.

You can use any type of client to retrieve tokens from your identity provider and issue requests to API Gateway, or you can consult the documentation for your identity provider to see if you can retrieve tokens directly and issue requests using a third-party tool such as Postman.

Before you proceed to building the Lambda authorizer, you should be able to retrieve tokens from your identity provider and issue HTTP requests to your API Gateway resource with the token included in the HTTP Authorization header. This post assumes that the identity provider issues OAuth JWT tokens, and the example below shows a raw HTTP request addressed to the mock API Gateway resource with an OAuth JWT access token in the HTTP Authorization header. This request should be sent by the client application that you are using to retrieve your tokens and issue HTTP requests to the mock API Gateway resource.


# Example HTTP Request using a Bearer token
GET /dev/my-resource/?myParam=myValue HTTP/1.1
Host: rz8w6b1ik2.execute-api.us-east-1.amazonaws.com
Authorization: Bearer eyJraWQiOiJ0ekgtb1Z5eEpPSF82UDk3...

Building a Lambda authorizer

When you configure a Lambda authorizer to serve as the authorization source for an API Gateway resource, the Lambda authorizer is invoked by API Gateway before the resource is called. Check out the Lambda Authorizer Authorization Workflow for more details on how API Gateway invokes and exchanges information with Lambda authorizers. The core functionality of the Lambda authorizer is to generate a well-formed identity management policy that dictates the allowed actions of the user, such as which APIs the user can access. The Lambda authorizer will use information in the third-party token to create the identity management policy based on “permissions mapping” documents that you define — I will discuss these permissions mapping documents in greater detail below.

After the Lambda authorizer generates an identity management policy, the policy is returned to API Gateway and API Gateway uses it to evaluate whether the user is allowed to invoke the requested API. You can optionally configure a setting in API Gateway to automatically cache the identity management policy so that subsequent API invocations with the same token do not invoke the Lambda authorizer, but instead use the identity management policy that was generated on the last invocation.

In this post, you will build your Lambda authorizer to receive an OAuth access token and validate its authenticity with the token issuer, then implement custom authorization logic to use the OAuth scopes present in the token to create an identity management policy that dictates which APIs the user is allowed to access. You will also configure API Gateway to cache the identity management policy that is returned by the Lambda authorizer. These patterns provide the following benefits:

  • Leverage third-party identity management services: Validating the token with the third party allows for consolidated management of services such as token verification, token expiration, and token revocation.
  • Cache to improve performance: Caching the token and identity management policy in API Gateway removes the need to call the Lambda authorizer for each invocation. Caching a policy can improve performance; however, this increased performance comes with addition security considerations. These considerations are discussed below.
  • Limit access with OAuth scopes: Using the scopes present in the access token, along with custom authorization logic, to generate an identity management policy and limit resource access is a familiar OAuth practice and serves as a good example of customizable authentication logic. Refer to Defining Scopes for more information on OAuth scopes and how they are typically used to control resource access.

The Lambda authorizer is invoked with the following object as the event parameter when API Gateway is configured to use a Lambda authorizer with the token event payload; refer to Input to an Amazon API Gateway Lambda Authorizer for more information on the types of payloads that are compatible with Lambda authorizers. Since you are using a token-based authorization scheme, you will use the token event payload. This payload contains the methodArn, which is the Amazon Resource Name (ARN) of the API Gateway resource that the request was addressed to. The payload also contains the authorizationToken, which is the third-party token that the user included with the request.


# Lambda Token Event Payload  
{   
 type: 'TOKEN',  
 methodArn: 'arn:aws:execute-api:us-east-1:2198525...',  
 authorizationToken: 'Bearer eyJraWQiOiJ0ekgt...'  
}

Upon receiving this event, your Lambda authorizer will issue an HTTP POST request to your identity provider to validate the token, and use the scopes present in the third-party token with a permissions mapping document to generate and return an identity management policy that contains the allowed actions of the user within API Gateway. Lambda authorizers can be written in any Lambda-supported language. You can explore some starter code templates on GitHub. The example function in this post uses Node.js 10.x.
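
The exact validation call depends on your identity provider. As an illustration only, the following sketch shows a standard OAuth 2.0 token introspection request (RFC 7662) against a hypothetical provider endpoint; your provider may instead offer an SDK or a different verification mechanism.


# Hypothetical OAuth 2.0 token introspection request (RFC 7662)
# The endpoint, client credentials, and token are placeholders
curl -s -X POST "https://idp.example.com/oauth2/introspect" \
  -u "<CLIENT_ID>:<CLIENT_SECRET>" \
  -d "token=<ACCESS_TOKEN>"

# A typical response indicates whether the token is active and lists its scopes
# (claim names vary by provider):
# { "active": true, "scope": "email", "exp": 1588291200 }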

The Lambda authorizer code in this post uses a static permissions mapping document. This document is represented by apiPermissions. For a complex or highly dynamic permissions document, this document can be decoupled from the Lambda authorizer and exported to Amazon Simple Storage Service (Amazon S3) or Amazon DynamoDB for simplified management. The static document contains the ARN of the deployed API, the API Gateway stage, the API resource, the HTTP method, and the allowed token scope. The Lambda authorizer then generates an identity management policy by evaluating the scopes present in the third-party token against those present in the document.

The fragment below shows an example permissions mapping. This mapping allows users issuing HTTP GET requests to the my-resource resource in the DEV stage of the API with ARN arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2 only if they provide a valid token that contains the email scope.


# Example permissions document  
{  
 "arn": "arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2",  
 "resource": "my-resource",  
 "stage": "DEV",  
 "httpVerb": "GET",  
 "scope": "email"  
}

The logic to create the identity management policy can be found in the generateIAMPolicy() method of the Lambda function. This method serves as a good general example of the extent of customization possible in Lambda authorizers. While the method in the example relies solely on token scopes, you can also use additional information such as request context, user information, source IP address, user agents, and so on, to generate the returned identity management policy.

Upon invocation, the Lambda authorizer below performs the following procedure:

  1. Receive the token event payload, and isolate the token string (trim “Bearer ” from the token string, if present).
  2. Verify the token with the third-party identity provider.

    Note: This Lambda function does not include this functionality. The method, verifyAccessToken(), will need to be customized based on the identity provider that you are using. This code assumes that the verifyAccessToken() method returns a Promise that resolves to the decoded token in JSON format.

  3. Retrieve the scopes from the decoded token. This code assumes these scopes can be accessed as an array at claims.scp in the decoded token.
  4. Iterate over the scopes present in the token and create identity and access management (IAM) policy statements based on entries in the permissions mapping document that contain the scope in question.
  5. Create a complete, well-formed IAM policy using the generated IAM policy statements. Refer to IAM JSON Policy Elements Reference for more information on programmatically building IAM policies.
  6. Return complete IAM policy to API Gateway.
    
    /*
     * Sample Lambda Authorizer to validate tokens originating from
     * 3rd Party Identity Provider and generate an IAM Policy
     */
    
    const apiPermissions = [
      {
        "arn": "arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2", // NOTE: Replace with your API Gateway API ARN
        "resource": "my-resource", // NOTE: Replace with your API Gateway Resource
        "stage": "dev", // NOTE: Replace with your API Gateway Stage
        "httpVerb": "GET",
        "scope": "email"
      }
    ];
    
    var generatePolicyStatement = function (apiName, apiStage, apiVerb, apiResource, action) {
      'use strict';
      // Generate an IAM policy statement
      var statement = {};
      statement.Action = 'execute-api:Invoke';
      statement.Effect = action;
      var methodArn = apiName + "/" + apiStage + "/" + apiVerb + "/" + apiResource + "/";
      statement.Resource = methodArn;
      return statement;
    };
    
    var generatePolicy = function (principalId, policyStatements) {
      'use strict';
      // Generate a fully formed IAM policy
      var authResponse = {};
      authResponse.principalId = principalId;
      var policyDocument = {};
      policyDocument.Version = '2012-10-17';
      policyDocument.Statement = policyStatements;
      authResponse.policyDocument = policyDocument;
      return authResponse;
    };
    
    var verifyAccessToken = function (accessToken) {
      'use strict';
      /*
      * Verify the access token with your Identity Provider here (check if your 
      * Identity Provider provides an SDK).
      *
      * This example assumes this method returns a Promise that resolves to 
      * the decoded token, you may need to modify your code according to how
      * your token is verified and what your Identity Provider returns.
      */
    };
    
    var generateIAMPolicy = function (scopeClaims) {
      'use strict';
      // Declare empty policy statements array
      var policyStatements = [];
      // Iterate over API Permissions
      for ( var i = 0; i < apiPermissions.length; i++ ) {
        if ( scopeClaims.indexOf(apiPermissions[i].scope) > -1 ) {
          // User token has appropriate scope, add API permission to policy statements
          policyStatements.push(generatePolicyStatement(apiPermissions[i].arn, apiPermissions[i].stage, apiPermissions[i].httpVerb,
                                                        apiPermissions[i].resource, "Allow"));
        }
      }
      // Check if no policy statements are generated, if so, create default deny all policy statement
      if (policyStatements.length === 0) {
        var policyStatement = generatePolicyStatement("*", "*", "*", "*", "Deny");
        policyStatements.push(policyStatement);
      }
      return generatePolicy('user', policyStatements);
    };
    
    exports.handler = async function(event, context) {
      // Declare Policy
      var iamPolicy = null;
      // Capture raw token and trim 'Bearer ' string, if present
      var token = event.authorizationToken.replace("Bearer ", "");
      // Validate token
      await verifyAccessToken(token).then(data => {
        // Retrieve token scopes
        var scopeClaims = data.claims.scp;
        // Generate IAM Policy
        iamPolicy = generateIAMPolicy(scopeClaims);
      })
      .catch(err => {
        console.log(err);
        // Generate default deny all policy statement if there is an error
        var policyStatements = [];
        var policyStatement = generatePolicyStatement("*", "*", "*", "*", "Deny");
        policyStatements.push(policyStatement);
        iamPolicy = generatePolicy('user', policyStatements);
      });
      return iamPolicy;
    };  
    

The following is an example of the identity management policy that is returned from your function.


# Example IAM Policy
{
  "principalId": "user",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2/get/DEV/my-resource/"
      }
    ]
  }
}

It is important to note that the Lambda authorizer above does not consider the method or resource that the user is requesting. This is because you want to generate a complete identity management policy that contains all the API permissions for the user, instead of a policy that only contains allow/deny for the requested resource. By generating a complete policy, this policy can be cached by API Gateway and used if the user invokes a different API while the policy is still in the cache. Caching the policy can reduce API latency from the user perspective, as well as the total number of Lambda invocations; however, it can also increase vulnerability to Replay Attacks and acceptance of expired/revoked tokens.

Shorter cache lifetimes introduce more latency to API calls (that is, the Lambda authorizer must be called more frequently), while longer cache lifetimes introduce the possibility of a token expiring or being revoked by the identity provider, but still being used to return a valid identity management policy. For example, the following scenario is possible when caching tokens in API Gateway:

  • Identity provider stamps access token with an expiration date of 12:30.
  • User calls API Gateway with access token at 12:29.
  • Lambda authorizer generates identity management policy and API Gateway caches the token/policy pair for 5 minutes.
  • User calls API Gateway with same access token at 12:32.
  • API Gateway evaluates access against policy that exists in the cache, despite original token being expired.

Since tokens are not re-validated by the Lambda authorizer or API Gateway once they are placed in the API Gateway cache, long cache lifetimes may also increase susceptibility to Replay Attacks. Longer cache lifetimes and large identity management policies can increase the performance of your application, but must be evaluated against the trade-off of increased exposure to certain security vulnerabilities.

Deploying the Lambda authorizer

To deploy your Lambda authorizer, you first need to create and deploy a Lambda deployment package containing your function code and dependencies (if applicable). Lambda authorizer functions behave the same as other Lambda functions in terms of deployment and packaging. For more information on packaging and deploying a Lambda function, see AWS Lambda Deployment Packages in Node.js. For this example, you should name your Lambda function myLambdaAuth and use a Node.js 10.x runtime environment.
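
As a sketch, assuming the handler code lives in index.js and you already have a basic Lambda execution role, you could package and create the function from the CLI as follows; the account ID and role name are placeholders.


# Package the authorizer code (assumes the handler is defined in index.js)
zip function.zip index.js

# Create the Lambda function (the execution role ARN is a placeholder)
aws lambda create-function \
    --function-name myLambdaAuth \
    --runtime nodejs10.x \
    --handler index.handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::<ACCOUNT_ID>:role/<LAMBDA_EXECUTION_ROLE>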

After the function is created, add the Lambda authorizer to API Gateway.

  1. Navigate to API Gateway and in the navigation pane, under APIs, select the API you configured earlier.
  2. Under your API name, choose Authorizers, then choose Create New Authorizer.
  3. Under Create Authorizer, do the following:
    1. For Name, enter a name for your Lambda authorizer. In this example, the authorizer is named Lambda-Authorizer-Demo.
    2. For Type, select Lambda.
    3. For Lambda Function, select the AWS Region you created your function in, then enter the name of the Lambda function you just created.
    4. Leave Lambda Invoke Role empty.
    5. For Lambda Event Payload choose Token.
    6. For Token Source, enter Authorization.
    7. For Token Validation, enter:
      
      ^(Bearer )[a-zA-Z0-9\-_]+?\.[a-zA-Z0-9\-_]+?\.([a-zA-Z0-9\-_]+)$
      			

      This represents a regular expression for validating that tokens match JWT format (more below).

    8. For Authorization Caching, select Enabled and enter a time to live (TTL) of 1 second.
  4. Select Save.

 

Figure 2: Create a new Lambda authorizer

This configuration passes the token event payload mentioned above to your Lambda authorizer, and is necessary since you are using tokens (Token Event Payload) for authentication, rather than request parameters (Request Event Payload). For more information, see Use API Gateway Lambda Authorizers.

In this solution, the token source is the Authorization header of the HTTP request. If you know the expected format of your token, you can include a regular expression in the Token Validation field, which automatically rejects any request that does not match the regular expression. Token validation is not mandatory. This example assumes the token is a JWT.


# Regex matching JWT Bearer Tokens  
^(Bearer )[a-zA-Z0-9\-_]+?\.[a-zA-Z0-9\-_]+?\.([a-zA-Z0-9\-_]+)$

Here, you can also configure how long the token/policy pair will be cached in API Gateway. This example enables caching with a TTL of 1 second.

In this solution, you leave the Lambda Invoke Role field empty. This field is used to provide an IAM role that allows API Gateway to execute the Lambda authorizer. If left blank, API Gateway configures a default resource-based policy that allows it to invoke the Lambda authorizer.
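
If you prefer to script this configuration rather than use the console, the AWS CLI call below is a rough equivalent of the settings described above; the REST API ID, Region, and account ID are placeholders for your own values.


# Hypothetical CLI equivalent of the console authorizer configuration
aws apigateway create-authorizer \
    --rest-api-id <REST_API_ID> \
    --name Lambda-Authorizer-Demo \
    --type TOKEN \
    --authorizer-uri "arn:aws:apigateway:<REGION>:lambda:path/2015-03-31/functions/arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:myLambdaAuth/invocations" \
    --identity-source "method.request.header.Authorization" \
    --identity-validation-expression '^(Bearer )[a-zA-Z0-9\-_]+?\.[a-zA-Z0-9\-_]+?\.([a-zA-Z0-9\-_]+)$' \
    --authorizer-result-ttl-in-seconds 1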

The final step is to point your API Gateway resource to your Lambda authorizer. Select the configured API Resource and HTTP method.

  1. Navigate to API Gateway and in the navigation pane, under APIs, select the API you configured earlier.
  2. Select the GET method.

    Figure 3: GET Method Execution

  3. Select Method Request.
  4. Under Settings, edit Authorization and select the authorizer you just configured (in this example, Lambda-Authorizer-Demo).

    Figure 4: Select your API authorizer

Deploy the API to an API Gateway stage that matches the stage configured in the Lambda authorizer permissions document (apiPermissions variable).

  1. Navigate to API Gateway and in the navigation pane, under APIs, select the API you configured earlier.
  2. Select the / resource of your API.
  3. Select Actions, and under API Actions, select Deploy API.
  4. For Deployment stage, select [New Stage] and for the Stage name, enter dev. Leave Stage description and Deployment description blank.
  5. Select Deploy.

    Figure 5: Deploy your API stage

Testing the results

With the Lambda authorizer configured as your authorization source, you are now able to access the resource only if you provide a valid token that contains the email scope.

The following example shows how to issue an HTTP request with curl to your API Gateway resource using a valid token that contains the email scope passed in the HTTP Authorization header. Here, you are able to authenticate and receive an appropriate response from API Gateway.


# HTTP Request (including valid token with "email" scope)  
$ curl -X GET \  
> 'https://rz8w6b1ik2.execute-api.us-east-1.amazonaws.com/dev/my-resource/?myParam=myValue' \  
> -H 'Authorization: Bearer eyJraWQiOiJ0ekgtb1Z5eE...'  
  
{  
 "statusCode" : 200,  
 "message" : "Hello from API Gateway!"  
}

The following JSON object represents the decoded JWT payload used in the previous example. The JSON object captures the token scopes in scp, and you can see that the token contained the email scope.

Figure 6: JSON object that contains the email scope

If you provide a token that is expired, is invalid, or that does not contain the email scope, then you are not able to access the resource. The following example shows a request to your API Gateway resource with a valid token that does not contain the email scope. In this example, the Lambda authorizer rejects the request.


# HTTP Request (including token without "email" scope)  
$ curl -X GET \  
> 'https://rz8w6b1ik2.execute-api.us-east-1.amazonaws.com/dev/my-resource/?myParam=myValue' \  
> -H 'Authorization: Bearer eyJraWQiOiJ0ekgtb1Z5eE...'  
  
{  
 "Message" : "User is not authorized to access this resource with an explicit deny"  
}

The following JSON object represents the decoded JWT payload used in the above example; it does not include the email scope.

Figure 7: JSON object that does not contain the email scope

If you provide no token, or you provide a token that does not match the provided regular expression, then you are immediately rejected by API Gateway without invoking the Lambda authorizer. API Gateway only forwards tokens to the Lambda authorizer that are present in the HTTP Authorization header and that pass the token validation regular expression, if a regular expression was provided. If the request does not pass token validation or does not have an HTTP Authorization header, API Gateway rejects it with a default HTTP 401 response. The following example shows how to issue a request to your API Gateway resource using a token that does not match the regular expression you configured on your authorizer. In this example, API Gateway rejects your request automatically without invoking the authorizer.


# HTTP Request (including a token that is not a JWT)  
$ curl -X GET \  
> 'https://rz8w6b1ik2.execute-api.us-east-1.amazonaws.com/dev/my-resource/?myParam=myValue' \  
> -H 'Authorization: Bearer ThisIsNotAJWT'  
  
{  
 "Message" : "Unauthorized"  
}

These examples demonstrate how your Lambda authorizer allows and denies requests based on the token format and the token content.

Conclusion

In this post, you saw how Lambda authorizers can be used with API Gateway to implement a token-based authentication scheme using third-party tokens.

Lambda authorizers can provide a number of benefits:

  • Leverage third-party identity management services directly, without identity federation.
  • Implement custom authorization logic.
  • Cache identity management policies to improve performance of authorization logic (while keeping in mind security implications).
  • Minimally impact existing client applications.

For organizations seeking an alternative to Amazon Cognito User Pools and Amazon Cognito identity pools, Lambda authorizers can provide complete, secure, and flexible authentication and authorization services to resources deployed with Amazon API Gateway. For more information about Lambda authorizers, see API Gateway Lambda Authorizers.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Bryant Bost

Bryant Bost is an Application Consultant for AWS Professional Services based out of Washington, DC. As a consultant, he supports customers with architecting, developing, and operating new applications, as well as migrating existing applications to AWS. In addition to web application development, Bryant specializes in serverless and container architectures, and has authored several posts on these topics.

Top 10 security items to improve in your AWS account

Post Syndicated from Nathan Case original https://aws.amazon.com/blogs/security/top-10-security-items-to-improve-in-your-aws-account/

If you’re looking to improve your cloud security, a good place to start is to follow the top 10 most important cloud security tips that Stephen Schmidt, Chief Information Security Officer for AWS, laid out at AWS re:Invent 2019. Below are the tips, expanded to help you take action.

10 most important security tips

1) Accurate account information

When AWS needs to contact you about your AWS account, we use the contact information defined in the AWS Management Console, including the email address used to create the account and those listed under Alternate Contacts. All email addresses should be set up to go to aliases that are not dependent on a single person. You should also have a process for regularly checking that these email addresses work, and that you are responding to emails—especially security notifications you might receive from [email protected]. Learn how to set the alternate contacts to help ensure someone is receiving important messages, even when you are unavailable.

Alternate Contacts user interface

2) Use multi-factor authentication (MFA)

MFA is the best way to protect accounts from inappropriate access. Always set up MFA on your Root user and AWS Identity and Access Management (IAM) users. If you use AWS Single Sign-On (SSO) to control access to AWS or to federate your corporate identity store, you can enforce MFA there. Implementing MFA at the federated identity provider (IdP) means that you can take advantage of existing MFA processes in your organization. To get started, see Using Multi-Factor Authentication (MFA) in AWS.

3) No hard-coding secrets

When you build applications on AWS, you can use AWS IAM roles to deliver temporary, short-lived credentials for calling AWS services. However, some applications require longer-lived credentials, such as database passwords or other API keys. If this is the case, you should never hard code these secrets in the application or store them in source code.

You can use AWS Secrets Manager to control the information in your application. Secrets Manager allows you to rotate, manage, and retrieve database credentials, API keys, and other secrets through their lifecycle. Users and applications can retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hard code sensitive information in plain text.

You should also learn how to use AWS IAM roles for applications running on Amazon EC2. Also, for best results, learn how to securely provide database credentials to AWS Lambda functions by using AWS Secrets Manager.
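
For example, rather than hard-coding a database password, an application or administrator can retrieve it at runtime; the secret name below is a placeholder.


# Retrieve a secret value at runtime instead of hard-coding it (placeholder secret name)
aws secretsmanager get-secret-value \
    --secret-id prod/myapp/db-credentials \
    --query SecretString \
    --output text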

4) Limit security groups

Security groups are a key way that you can enable network access to resources you have provisioned on AWS. Ensuring that only the required ports are open and that connections are enabled only from known network ranges is a foundational approach to security. You can use services such as AWS Config or AWS Firewall Manager to programmatically ensure that the virtual private cloud (VPC) security group configuration is what you intended. The Amazon Inspector Network Reachability rules package analyzes your Amazon Virtual Private Cloud (Amazon VPC) network configuration to determine whether your Amazon EC2 instances can be reached from external networks, such as the internet, a virtual private gateway, or AWS Direct Connect. AWS Firewall Manager can also be used to automatically apply AWS WAF rules to internet-facing resources across your AWS accounts. Learn more about detecting and responding to changes in VPC Security Groups.
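
As a quick audit sketch, the CLI call below lists security groups that allow inbound access from anywhere (0.0.0.0/0) so that you can review them against your intended configuration.


# List security groups that have an inbound rule open to the entire internet
aws ec2 describe-security-groups \
    --filters Name=ip-permission.cidr,Values='0.0.0.0/0' \
    --query 'SecurityGroups[].[GroupId,GroupName]' \
    --output table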

5) Intentional data policies

Not all data is created equal, which means classifying data properly is crucial to its security. It’s important to accommodate the complex tradeoffs between a strict security posture and a flexible agile environment. A strict security posture, which requires lengthy access-control procedures, creates stronger guarantees about data security. However, such a security posture can work counter to agile and fast-paced development environments, where developers require self-service access to data stores. Design your approach to data classification to meet a broad range of access requirements.

How you classify data doesn’t have to be as binary as public or private. Data comes in various degrees of sensitivity and you might have data that falls in all of the different levels of sensitivity and confidentiality. Design your data security controls with an appropriate mix of preventative and detective controls to match data sensitivity appropriately. In the suggestions below, we deal mostly with the difference between public and private data. If you have no classification policy currently, public versus private is a good place to start.

To protect your data once it has been classified, or while you are classifying it:

  1. If you have Amazon Simple Storage Service (Amazon S3) buckets that are for public usage, move all of that data into a separate AWS account set aside for public access. Set up policies to allow only processes — not humans — to move data into those buckets. This lets you block the ability to make a public Amazon S3 bucket in any other AWS account.
  2. Use Amazon S3 to block public access in any account that should not be able to share data through Amazon S3.
  3. Use two different IAM roles for encryption and decryption with KMS. This lets you separate the data entry (encryption) and data review (decryption), and it allows you to do threat detection on the failed decryption attempts by analyzing that role.

6) Centralize CloudTrail logs

Logging and monitoring are important parts of a robust security plan. Being able to investigate unexpected changes in your environment or perform analysis to iterate on your security posture relies on having access to data. AWS recommends that you write logs, especially AWS CloudTrail, to an S3 bucket in an AWS account designated for logging (Log Archive). The permissions on the bucket should prevent deletion of the logs, and they should also be encrypted at rest. Once the logs are centralized, you can integrate with SIEM solutions or use AWS services to analyze them. Learn how to use AWS services to visualize AWS CloudTrail logs. Once you have CloudTrail logs centralized, you can also use the same Log Archive account to centralize logs from other sources, such as CloudWatch Logs and AWS load balancers.
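
The following is a minimal sketch of that setup from the CLI; the trail and bucket names are placeholders, and the destination bucket in the Log Archive account must already have a bucket policy that allows CloudTrail to write to it.


# Create a multi-Region trail that delivers logs to a central Log Archive bucket (placeholder names)
aws cloudtrail create-trail \
    --name org-wide-trail \
    --s3-bucket-name central-log-archive-bucket \
    --is-multi-region-trail

# Start delivering events for the trail
aws cloudtrail start-logging --name org-wide-trail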

7) Validate IAM roles

As you operate your AWS accounts to iterate and build capability, you may end up creating multiple IAM roles that you discover later you don’t need. Use AWS IAM Access Analyzer to review access to your internal AWS resources and determine where you have shared access outside your AWS accounts. Routinely reevaluating AWS IAM roles and permissions with Security Hub or open source products such as Prowler will give you the visibility needed to validate compliance with your Governance, Risk, and Compliance (GRC) policies. If you’re already past this point, and have already created multiple roles, you can search for unused IAM roles and remove them.
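
Before removing a role, you can check when it was last used; the role name below is a placeholder.


# Show when a role last made an AWS request (placeholder role name)
aws iam get-role \
    --role-name my-unused-role \
    --query 'Role.RoleLastUsed'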

8) Take actions on findings (This isn’t just GuardDuty anymore!)

AWS Security Hub, Amazon GuardDuty, and AWS Identity and Access Management Access Analyzer are managed AWS services that provide you with actionable findings in your AWS accounts. They are easy to turn on and can integrate across multiple accounts. Turning them on is the first step. You also need to take action when you see findings. The action(s) to take are determined by your own incident response policy. For each finding, ensure that you have determined what your required response actions should be.

Action can be notifying a human to respond, but as you get more experienced in AWS services, you will want to automate the response to the findings generated by Security Hub or GuardDuty. Learn more about how to automate your response and remediation from Security Hub findings.

9) Rotate keys

One of the things that Security Hub provides is a view of the compliance posture of your AWS accounts using the CIS Benchmarks. One of these checks is to look for IAM users with access keys more than 90 days old. If you need to use access keys rather than roles, you should rotate them regularly. Review best practices for managing AWS access keys for more guidance. If your users access AWS via federation, then you can remove the need to issue AWS access keys for your users. Users authenticate to the IdP and assume an IAM role in the target AWS account. The result is that long-term credentials are not needed, and your user will have short-term credentials associated with an IAM role.
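
One simple way to audit key age from the CLI is to generate and download the IAM credential report, which includes the rotation date of each access key; the report can take a few seconds to become available after you request it.


# Generate the account credential report, then download and decode it
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode > credential-report.csv

# Review the access_key_1_last_rotated and access_key_2_last_rotated columns for keys older than 90 days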

10) Be involved in the dev cycle

All of the guidance to this point has been focused on the technology configuration that you can implement. The last piece of advice, “be involved in the dev cycle,” is about people, and can be broadly summarized as “raise the security culture of your organization.” The role of people in all parts of the organization is to help the business launch their solutions securely. As people focused on security, we can guide and educate the rest of our organization to understand what they need to do to raise the bar for security in everything they build. Security is everyone’s job — not just for those folks with it in their job title.

What the security people in every organization can do is make security easier, by shaping the process so that the easiest and most desirable action is also the most secure one. For example, each team should not build its own identity federation or logging solution. We are stronger when we work together, and this applies to securing the cloud as well. The goal is to make security more approachable so that co-workers want to talk to the security team because they know it is the place to get help. For more about creating this type of security team, read Cultivating Security Leadership.

Now that you’ve revisited the top 10 things to make your cloud more secure, make sure you have them set up in your AWS accounts — and go build securely!

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Nathan Case

Nathan is a Security Strategist, Geek. He joined AWS in 2016. You can learn more about him here.

15 additional AWS services authorized at DoD Impact Level 6 for the AWS Secret Region

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/15-additional-aws-services-authorized-dod-impact-level-6-aws-secret-region/

The Defense Information Systems Agency (DISA) has authorized 15 additional AWS services in the AWS Secret Region for production workloads at the Department of Defense (DoD) Impact Level (IL) 6 under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG). The authorization at DoD IL 6 allows DoD Mission Owners to process classified and mission-critical workloads for National Security Systems in the AWS Secret Region. The AWS Secret Region was built as part of the Commercial Cloud Services (C2S) contract and is available to the DoD on the AWS GSA IT70 schedule.

The AWS services successfully completed an independent evaluation by members of the Intelligence Community (IC), which confirmed that the AWS services effectively implemented 859 security controls using applicable criteria from NIST SP 800-53 Rev 4, the DoD CC SRG, and the Committee on National Security Systems Instruction No. 1253 at the Moderate Confidentiality, Moderate Integrity, and Moderate Availability impact levels.

The 15 AWS services newly authorized by DISA at IL 6 provide additional choices for DoD Mission Owners to leverage the capabilities of the AWS Cloud in service areas such as compute, storage, database, networking, and security, bringing our total IL 6 authorizations to 26 services as listed below.

Authorized AWS services and features at DoD Impact Level 6

  1. Amazon CloudWatch
  2. Amazon DynamoDB
  3. Amazon Elastic Block Store
  4. Amazon Elastic Compute Cloud (including VM Import/Export)
  5. Amazon EC2 Auto Scaling
  6. Amazon ElastiCache
  7. Amazon Kinesis Data Streams
  8. Amazon Redshift
  9. Amazon Relational Database Service (including MariaDB, MySQL, Oracle, PostgreSQL, and SQL Server)
  10. Amazon S3 Glacier
  11. Amazon Simple Notification Service
  12. Amazon Simple Queue Service
  13. Amazon Simple Storage Service
  14. Amazon Simple Workflow
  15. Amazon Virtual Private Cloud
  16. AWS CloudFormation
  17. AWS CloudTrail
  18. AWS Config
  19. AWS Database Migration Service
  20. AWS Direct Connect
  21. AWS Identity and Access Management
  22. AWS Key Management Service
  23. AWS Snowball
  24. AWS Step Functions
  25. AWS Trusted Advisor
  26. Elastic Load Balancing (Classic and Application Load Balancer)

To learn more about AWS solutions for DoD, please see our AWS solution offerings. Follow the AWS Security Blog for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tyler Harding

Tyler Harding is the DoD Compliance Program Manager within AWS Security Assurance. He has over 20 years of experience providing information security solutions to federal civilian, DoD, and intelligence agencies.

How financial institutions can approve AWS services for highly confidential data

Post Syndicated from Ilya Epshteyn original https://aws.amazon.com/blogs/security/how-financial-institutions-can-approve-aws-services-for-highly-confidential-data/

As a Principal Solutions Architect within the Worldwide Financial Services industry group, one of the most frequently asked questions I receive is whether a particular AWS service is financial-services-ready. In a regulated industry like financial services, moving to the cloud isn’t a simple lift-and-shift exercise. Instead, financial institutions use a formal service-by-service assessment process, often called whitelisting, to demonstrate how cloud services can help address their regulatory obligations. When this process is not well defined, it can delay efforts to migrate data to the cloud.

In this post, I will provide a framework consisting of five key considerations that financial institutions should focus on to help streamline the whitelisting of cloud services for their most confidential data. I will also outline the key AWS capabilities that can help financial services organizations during this process.

Here are the five key considerations:

  1. Achieving compliance
  2. Data protection
  3. Isolation of compute environments
  4. Automating audits with APIs
  5. Operational access and security

For many of the business and technology leaders that I work with, agility and the ability to innovate quickly are the top drivers for their cloud programs. Financial services institutions migrate to the cloud to help develop personalized digital experiences, break down data silos, develop new products, drive down margins for existing products, and proactively address global risk and compliance requirements. AWS customers who use a wide range of AWS services achieve greater agility as they move through the stages of cloud adoption. Using a wide range of services enables organizations to offload undifferentiated heavy lifting to AWS and focus on their core business and customers.

My goal is to guide financial services institutions as they move their company’s highly confidential data to the cloud — in both production environments and mission-critical workloads. The following considerations will help financial services organizations determine cloud service readiness and achieve success in the cloud.

1. Achieving compliance

For financial institutions that use a whitelisting process, the first step is to establish that the underlying components of the cloud service provider’s (CSP’s) services can meet baseline compliance needs. A key prerequisite to gaining this confidence is to understand the AWS shared responsibility model. Shared responsibility means that the secure functioning of an application on AWS requires action on the part of both the customer and AWS as the CSP. AWS customers are responsible for their security in the cloud. They control and manage the security of their content, applications, systems, and networks. AWS manages security of the cloud, providing and maintaining proper operations of services and features, protecting AWS infrastructure and services, maintaining operational excellence, and meeting relevant legal and regulatory requirements.

In order to establish confidence in the AWS side of the shared responsibility model, customers can regularly review the AWS System and Organization Controls 2 (SOC 2) Type II report prepared by an independent, third-party auditor. The AWS SOC 2 report contains confidential information that can be obtained by customers under an AWS non-disclosure agreement (NDA) through AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

Key takeaway: Currently, 116 AWS services are in scope for SOC compliance, which will help organizations streamline their whitelisting process. For more information about which services are in scope, see AWS Services in Scope by Compliance Program.

2. Data protection

Financial institutions use comprehensive data loss prevention strategies to protect confidential information. Customers using AWS data services can employ encryption to mitigate the risk of disclosure, alteration of sensitive information, or unauthorized access. The AWS Key Management Service (AWS KMS) allows customers to manage the lifecycle of encryption keys and control how they are used by their applications and AWS services. Allowing encryption keys to be generated and maintained in the FIPS 140-2 validated hardware security modules (HSMs) in AWS KMS is the best practice and most cost-effective option.

For AWS customers who want added flexibility for key generation and storage, AWS KMS allows them to either import their own key material into AWS KMS and keep a copy in their on-premises HSM, or generate and store keys in dedicated AWS CloudHSM instances under their control. For each of these key material generation and storage options, AWS customers can control all the permissions to use keys from any of their applications or AWS services. In addition, every use of a key or modification to its policy is logged to AWS CloudTrail for auditing purposes. This level of control and audit over key management is one of the tools organizations can use to address regulatory requirements for using encryption as a data privacy mechanism.
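As a hedged illustration of that lifecycle (the alias and file names are placeholders), a customer-managed key can be created in AWS KMS and used to encrypt a small payload directly from the AWS CLI; the corresponding CreateKey and Encrypt calls will appear in CloudTrail.

# Create a customer-managed KMS key and give it a friendly alias
$ KEY_ID=$(aws kms create-key --description "Example key for service whitelisting tests" \
    --query KeyMetadata.KeyId --output text)
$ aws kms create-alias --alias-name alias/example-data-key --target-key-id $KEY_ID

# Encrypt a small file under that key (the output is base64-encoded ciphertext)
$ aws kms encrypt --key-id alias/example-data-key \
    --plaintext fileb://sample.txt \
    --query CiphertextBlob --output text > sample.txt.encrypted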

All AWS services offer encryption features, and most AWS services that financial institutions use integrate with AWS KMS to give organizations control over their encryption keys used to protect their data in the service. AWS offers customer-controlled key management features in twice as many services as any other CSP.

Financial institutions also encrypt data in transit to ensure that it is accessed only by the intended recipient. Encryption in transit must be considered in several areas, including API calls to AWS service endpoints, encryption of data in transit between AWS service components, and encryption in transit within applications. The first two considerations fall within the AWS scope of the shared responsibility model, whereas the latter is the responsibility of the customer.

All AWS services offer Transport Layer Security (TLS) 1.2 encrypted endpoints that can be used for all API calls. Some AWS services also offer FIPS 140-2 endpoints in selected AWS Regions. These FIPS 140-2 endpoints use a cryptographic library that has been validated under the Federal Information Processing Standards (FIPS) 140-2 standard. For financial institutions that operate workloads on behalf of the US government, using FIPS 140-2 endpoints helps them to meet their compliance requirements.

To simplify configuring encryption in transit within an application, which falls under the customer’s responsibility, customers can use the AWS Certificate Manager (ACM) service. ACM enables easy provisioning, management, and deployment of X.509 certificates used for TLS to critical application endpoints hosted in AWS. These integrations provide automatic certificate and private key deployment and automated rotation for Amazon CloudFront, Elastic Load Balancing, Amazon API Gateway, AWS CloudFormation, and AWS Elastic Beanstalk. ACM offers both publicly trusted and private certificate options to meet the trust model requirements of an application. Organizations may also import their existing public or private certificates to ACM to make use of existing public key infrastructure (PKI) investments.
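For example, a public certificate for an application endpoint can be requested through ACM with DNS validation (the domain name is a placeholder); after the validation DNS record is created and the certificate is issued, ACM handles renewal automatically.

# Request a public TLS certificate and validate ownership through DNS
$ aws acm request-certificate \
    --domain-name app.example.com \
    --validation-method DNS \
    --query CertificateArn --output text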

Key takeaway: AWS KMS allows organizations to manage the lifecycle of encryption keys and control how encryption keys are used for over 50 services. For more information, see AWS Services Integrated with AWS KMS. AWS ACM simplifies the deployment and management of PKI as compared to self-managing in an on-premises environment.

3. Isolation of compute environments

Financial institutions have strict requirements for isolation of compute resources and network traffic control for workloads with highly confidential data. One of the core competencies of AWS as a CSP is to protect and isolate customers’ workloads from each other. Amazon Virtual Private Cloud (Amazon VPC) allows customers to control their AWS environment and keep it separate from other customers’ environments. Amazon VPC enables customers to create a logically separate network enclave within the Amazon Elastic Compute Cloud (Amazon EC2) network to house compute and storage resources. Customers control the private environment, including IP addresses, subnets, network access control lists, security groups, operating system firewalls, route tables, virtual private networks (VPNs), and internet gateways.

Amazon VPC provides robust logical isolation of customers’ resources. For example, every packet flow on the network is individually authorized to validate the correct source and destination before it is transmitted and delivered. It is not possible for information to pass between multiple tenants without specifically being authorized by both the transmitting and receiving customers. If a packet is being routed to a destination without a rule that matches it, the packet is dropped. AWS has also developed the AWS Nitro System, a purpose-built hypervisor with associated custom hardware components that allocates central processing unit (CPU) resources for each instance and is designed to protect the security of customers’ data, even from operators of production infrastructure.

For more information about the isolation model for multi-tenant compute services, such as AWS Lambda, see the Security Overview of AWS Lambda whitepaper. When Lambda executes a function on a customer’s behalf, it manages both provisioning and the resources necessary to run code. When a Lambda function is invoked, the data plane allocates an execution environment to that function or chooses an existing execution environment that has already been set up for that function, then runs the function code in that environment. Each function runs in one or more dedicated execution environments that are used for the lifetime of the function and are then destroyed. Execution environments run on hardware-virtualized lightweight micro-virtual machines (microVMs). A microVM is dedicated to an AWS account, but can be reused by execution environments across functions within an account. Execution environments are never shared across functions, and microVMs are never shared across AWS accounts. AWS continues to innovate in the area of hypervisor security, and resource isolation enables our financial services customers to run even the most sensitive workloads in the AWS Cloud with confidence.

Most financial institutions require that traffic stay private whenever possible and not leave the AWS network unless specifically required (for example, in internet-facing workloads). To keep traffic private, customers can use Amazon VPC to carve out an isolated and private portion of the cloud for their organizational needs. A VPC allows customers to define their own virtual networking environments with segmentation based on application tiers.

To connect to regional AWS services outside of the VPC, organizations may use VPC endpoints, which allow private connectivity between resources in the VPC and supported AWS services. Endpoints are managed virtual devices that are highly available, redundant, and scalable. Endpoints enable private connection between a customer’s VPC and AWS services using private IP addresses. With VPC endpoints, Amazon EC2 instances running in private subnets of a VPC have private access to regional resources without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Furthermore, when customers create an endpoint, they can attach a policy that controls the use of the endpoint to access only specific AWS resources, such as specific Amazon Simple Storage Service (Amazon S3) buckets within their AWS account. Similarly, by using resource-based policies, customers can restrict access to their resources to only allow access from VPC endpoints. For example, by using bucket policies, customers can restrict access to a given Amazon S3 bucket only through the endpoint. This ensures that traffic remains private and only flows through the endpoint without traversing public address space.
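A minimal sketch of that pattern (the VPC ID, route table ID, Region, and bucket name are placeholders) creates a gateway endpoint for Amazon S3 whose endpoint policy only allows access to a single bucket:

# Create a gateway VPC endpoint for Amazon S3, scoped to one bucket
$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234def567890 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0abc1234def567890 \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
          "arn:aws:s3:::example-confidential-bucket",
          "arn:aws:s3:::example-confidential-bucket/*"
        ]
      }]
    }'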

Key takeaway: To help customers keep traffic private, more than 45 AWS services have support for VPC Endpoints.

4. Automating audits with APIs

Visibility into user activities and resource configuration changes is a critical component of IT governance, security, and compliance. On-premises logging solutions require installing agents, setting up configuration files and log servers, and building and maintaining data stores to store the data. This complexity may result in poor visibility and fragmented monitoring stacks, which in turn increases the time it takes to troubleshoot and resolve issues. CloudTrail provides a simple, centralized solution to record AWS API calls and resource changes in the cloud that helps alleviate this burden.

CloudTrail provides a history of activity in a customer’s AWS account to help them meet compliance requirements for their internal policies and regulatory standards. CloudTrail helps identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help customers analyze and respond to activity in their AWS account. CloudTrail management events provide insights into the management (control plane) operations performed on resources in an AWS account. For example, customers can log administrative actions, such as creation, deletion, and modification of Amazon EC2 instances. For each event, they receive details such as the AWS account, IAM user role, and IP address of the user that initiated the action as well as time of the action and which resources were affected.

CloudTrail data events provide insights into the resource (data plane) operations performed on or within the resource itself. Data events are often high-volume activities and include operations, such as Amazon S3 object-level APIs, and AWS Lambda function Invoke APIs. For example, customers can log API actions on Amazon S3 objects and receive detailed information, such as the AWS account, IAM user role, IP address of the caller, time of the API call, and other details. Customers can also record activity of their Lambda functions and receive details about Lambda function executions, such as the IAM user or service that made the Invoke API call, when the call was made, and which function was executed.
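As a hedged example (the trail and bucket names are placeholders), object-level data events for a single bucket can be turned on for an existing trail, and recent management events can be queried with the LookupEvents API:

# Record S3 object-level (data plane) events for one bucket on an existing trail
$ aws cloudtrail put-event-selectors --trail-name example-trail \
    --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,"DataResources":[{"Type":"AWS::S3::Object","Values":["arn:aws:s3:::example-confidential-bucket/"]}]}]'

# Query recent management events, for example EC2 instance launches
$ aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances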

To help customers simplify continuous compliance and auditing, AWS uniquely offers the AWS Config service to help them assess, audit, and evaluate the configurations of AWS resources. AWS Config continuously monitors and records AWS resource configurations, and allows customers to automate the evaluation of recorded configurations against internal guidelines. With AWS Config, customers can review changes in configurations and relationships between AWS resources and dive into detailed resource configuration histories.

Key takeaway: Over 160 AWS services are integrated with CloudTrail, which helps customers ensure compliance with their internal policies and regulatory standards by providing a history of activity within their AWS account. For more information about how to use CloudTrail with specific AWS services, see AWS Service Topics for CloudTrail in the CloudTrail user guide. For more information on how to enable AWS Config in an environment, see Getting Started with AWS Config.

5. Operational access and security

In our discussions with financial institutions, they’ve told AWS that they are required to have a clear understanding of access to their data. This includes knowing what controls are in place to ensure that unauthorized access does not occur. AWS has implemented layered controls that use preventative and detective measures to ensure that only authorized individuals have access to production environments where customer content resides. For more information about access and security controls, see the AWS SOC 2 report in AWS Artifact.

One of the foundational design principles of AWS security is to keep people away from data to minimize risk. As a result, AWS created an entirely new virtualization platform called the AWS Nitro System. This highly innovative system combines new hardware and software that dramatically increases both performance and security. The AWS Nitro System enables enhanced security with a minimized attack surface because virtualization and security functions are offloaded from the main system board where customer workloads run to dedicated hardware and software. Additionally, the locked-down security model of the AWS Nitro System prohibits all administrative access, including that of Amazon employees, which eliminates the possibility of human error and tampering.

Key takeaway: Review third-party auditor reports (including SOC 2 Type II) available in AWS Artifact, and learn more about the AWS Nitro System.

Conclusion

AWS can help simplify and expedite the whitelisting process for financial services institutions moving to the cloud. When organizations take advantage of a wide range of AWS services, they can use the security and compliance measures already built into those services to complete whitelisting faster, maximize their agility, and focus on their core business and customers.

After organizations have completed the whitelisting process and determined which cloud services can be used as part of their architecture, the AWS Well-Architected Framework can then be implemented to help build and operate secure, resilient, performant, and cost-effective architectures on AWS.

AWS also has a dedicated team of financial services professionals to help customers navigate a complex regulatory landscape, as well as other resources to guide them in their migration to the cloud – no matter where they are in the process. For more information, see the AWS Financial Services page, or fill out this AWS Financial Services Contact form.

Additional resources

  • AWS Security Documentation
    The security documentation repository shows how to configure AWS services to help meet security and compliance objectives. Cloud security at AWS is the highest priority. AWS customers benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations.
  • AWS Compliance Center
    The AWS Compliance Center is an interactive tool that provides customers with country-specific requirements and any special considerations for cloud use in the geographies in which they operate. The AWS Compliance Center has quick links to AWS resources to help with navigating cloud adoption in specific countries, and includes details about the compliance programs that are applicable in these jurisdictions. The AWS Compliance Center covers many countries, and more countries continue to be added as they update their regulatory requirements related to technology use.
  • AWS Well-Architected Framework and AWS Well-Architected Tool
    The AWS Well-Architected Framework helps customers understand the pros and cons of decisions they make while building systems on AWS. The AWS Well-Architected Tool helps customers review the state of their workloads and compares them to the latest AWS architectural best practices. For more information about the AWS Well-Architected Framework and security, see the Security Pillar – AWS Well-Architected Framework whitepaper.

If you have feedback about this blog post, submit comments in the Comments section below.

Author

Ilya Epshteyn

Ilya is a solutions architect with AWS. He helps customers to innovate on the AWS platform by building highly available, scalable, and secure architectures. He enjoys spending time outdoors and building Lego creations with his kids.

How to run AWS CloudHSM workloads on AWS Lambda

Post Syndicated from Mohamed AboElKheir original https://aws.amazon.com/blogs/security/how-to-run-aws-cloudhsm-workloads-on-aws-lambda/

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM also automatically manages synchronization, high availability and failover within a cluster.

When the service first launched, many customers ran CloudHSM workloads on Amazon Elastic Compute Cloud (Amazon EC2), which required the CloudHSM client to be installed on the Amazon EC2 instance in order to communicate with the CloudHSM cluster. Today, we see customers who are interested in leveraging CloudHSM for serverless workloads using AWS Lambda, but when using Lambda there is no “instance” to install the CloudHSM client on. This blog post shows a workaround that satisfies the CloudHSM client installation requirement on Lambda, so that you can run CloudHSM workloads within your Lambda functions.

The workaround is performed by first packaging the CloudHSM client and its requirements in a Lambda layer, and then running the CloudHSM client in a child process from within the Lambda function code to allow communication with the HSMs in your CloudHSM cluster. By leveraging this approach, you gain the benefits of serverless computing (such as increased scalability and decreased admin overhead), as well as the ability to integrate with other AWS services like Amazon CloudWatch Events, Amazon Simple Storage Service (Amazon S3) and AWS Config.

Why would I want to run CloudHSM workloads on Lambda?

Below are some specific use cases enabled by this solution:

  1. When a file is added to an Amazon S3 bucket, you can trigger a Lambda function to encrypt or decrypt the file using keys stored in CloudHSM.
  2. When a file is added to an Amazon S3 bucket, you can trigger a Lambda function to create a digital signature for the file using a private key stored in CloudHSM. This digital signature can then be used to ensure file integrity.
  3. You can create a custom AWS Config rule that checks to ensure files in a directory or a bucket have not been tampered with by verifying their digital signatures using keys stored in CloudHSM.

Solution overview

This solution shows you how to package the CloudHSM client binary and its dependencies (configuration files and libraries) as well as the CloudHSM Java JCE library to a Lambda layer which is attached to the Lambda function. This enables the function to run the CloudHSM client daemon in the background as a child process, allowing it to connect to the CloudHSM cluster and to perform cryptographic tasks such as encryption and decryption operations.

Using a Lambda layer decouples the code of the Lambda function from the CloudHSM client and the CloudHSM Java JCE library. This way, when a new version of the CloudHSM client and the CloudHSM Java JCE library is released, it can be included in a new Lambda layer version and attached to the Lambda function without needing to rebuild the Lambda function package.

The example solution below includes a complete Java sample for the Lambda function. It uses the CloudHSM Java JCE library to generate a symmetric key on the HSM and, after starting the CloudHSM client, uses this key to encrypt and decrypt sample data. Maven (a build automation tool) will be used to build the Lambda function package.

The solution uses AWS Secrets Manager to store and retrieve the crypto user (CU) credentials that are needed to perform cryptographic operations. If the HSM IPs of the CloudHSM cluster are changed (for example, if the HSMs are deleted and re-created), the Lambda function will automatically update the configuration during runtime.

Note:

  1. The solution only works with version 2.0.4 or later of the CloudHSM client and CloudHSM Java JCE library.
  2. In this workaround, the client is started at the beginning of each Lambda invocation, and is stopped at the end of the invocation. Due to the way Lambda works, the client can’t persist through multiple invocations.
  3. Secrets Manager uses AWS Key Management Service to secure its data. If your workload requires that all data be secured using HSMs under your sole control, without reliance on IAM credentials, this solution may not be appropriate. You should work with your security or compliance officer to ensure you’re using a method of securing HSM login credentials that meets your application and security needs.

Prerequisites

Figure 1: Architectural diagram

Here are the resources you’ll need in order to follow along with the example in Figure 1:

  1. An Amazon Virtual Private Cloud (Amazon VPC) with the following components:
    1. Private subnets in multiple Availability Zones to be used for the HSM’s elastic network interfaces (ENIs).
    2. A public subnet that contains a network address translation (NAT) gateway.
    3. A private subnet with a route table that routes internet traffic (0.0.0.0/0) to the NAT gateway. You’ll use this subnet to run the Lambda function. The NAT gateway allows you to connect to the CloudHSM, CloudWatch Logs and Secrets Manager endpoints.

    Note: For high availability, you can add multiple instances of the public and private subnets mentioned in Prerequisites 1.b and 1.c. For more information about how to create an Amazon VPC with public and private subnets as well as a NAT gateway, refer to the Amazon VPC user guide.

  2. An active CloudHSM cluster with at least one active HSM. The HSMs should be created in the private subnets mentioned in Prerequisite 1.a. You can follow the Getting Started with AWS CloudHSM guide to create and initialize the CloudHSM cluster.
  3. An Amazon Linux 2 EC2 instance with the CloudHSM client installed and configured to connect to the CloudHSM cluster. The client instance should be launched in the public subnet mentioned in Prerequisite 1.b. You can again refer to Getting Started With AWS CloudHSM to configure and connect the client instance.

    Note: You only need the client instance to build the Lambda function package. You can terminate the instance after the package has been created.

  4. CU credentials. You can create a CU by following the steps in the user guide.
  5. A server/machine with AWS Command Line Interface (AWS CLI) installed and configured. You’ll need this to follow along, as the example uses AWS CLI to create and configure the necessary AWS resources. The IAM user/role should have at minimum the permissions in the below policy attached to it to follow this example. Make sure you replace the <REGION> and <ACCOUNT-ID> tags below with the actual Region and account ID you are using.
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "secretsmanager:CreateSecret",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {
                        "secretsmanager:Name": "CloudHSM_CU"
                    }
                }
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": [
                    "ec2:AuthorizeSecurityGroupEgress",
                    "lambda:CreateFunction",
                    "lambda:InvokeFunction",
                    "lambda:GetLayerVersion",
                    "lambda:PublishLayerVersion",
                    "iam:GetRole",
                    "iam:CreateRole",
                    "iam:AttachRolePolicy",
                    "iam:PutRolePolicy",
                    "iam:PassRole",
                    "secretsmanager:DescribeSecret",
                    "secretsmanager:GetResourcePolicy",
                    "secretsmanager:GetSecretValue",
                    "secretsmanager:PutResourcePolicy",
                    "logs:FilterLogEvents"
                ],
                "Resource": [
                    "arn:aws:ec2:<REGION>:<ACCOUNT-ID>:security-group/outbound-443",
                    "arn:aws:lambda:<REGION>:<ACCOUNT-ID>:function:cloudhsm_lambda_example",
                    "arn:aws:lambda:<REGION>:<ACCOUNT-ID>:layer:cloudhsm-client-layer",
                    "arn:aws:lambda:<REGION>:<ACCOUNT-ID>:layer:cloudhsm-client-layer:*",
                    "arn:aws:iam::<ACCOUNT-ID>:role/cloudhsm_lambda_example_role",
                    "arn:aws:secretsmanager:<REGION>:<ACCOUNT-ID>:secret:CloudHSM_CU*",
                    "arn:aws:logs:<REGION>:<ACCOUNT-ID>:log-group:/aws/lambda/cloudhsm_lambda_example:log-stream:"
                ]
            },
            {
                "Sid": "VisualEditor3",
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVpcs",
                    "ec2:CreateSecurityGroup",
                    "ec2:DescribeSubnets",
                    "cloudhsm:DescribeClusters",
                    "ec2:DescribeSecurityGroups",
                    "ec2:AuthorizeSecurityGroupEgress"
                ],
                "Resource": "*"
            }
        ]
    }
    	

Step 1: Build the Lambda function package

In this step, you’ll build the Lambda function package using Maven. For more information about using Maven to build an AWS Lambda Java package, refer to the AWS Lambda developer guide.

  1. On your CloudHSM client instance, install the CloudHSM Java JCE library by following the steps in the user guide.
  2. Install OpenJDK 8 and Maven:
    
    $ sudo yum install -y java maven
    	

  3. Download the sample code, unzip it and move to the created directory. The directory will have the name aws-cloudhsm-on-aws-lambda-sample-master and will include:
    • A file with the name pom.xml that contains the Maven project configuration.
    • A file with the name SymmetricKeys.java which is also available on the AWS CloudHSM Java JCE samples repo. This file contains the function that you’ll use to generate the advanced encryption standard (AES) key.
    • A file with the name AESGCMEncryptDecryptLambda.java, which will run when the Lambda function is invoked:
      
      $ wget https://github.com/aws-samples/aws-cloudhsm-on-aws-lambda-sample/archive/master.zip
      $ unzip master.zip
      $ cd aws-cloudhsm-on-aws-lambda-sample-master/
      	

  4. Create a Java Archive (JAR) package by running the below commands. This will create the JAR file under the target/ directory with the name cloudhsm_lambda_project-1.0-SNAPSHOT.jar.

    
    $ export CLOUDHSM_VER=$(ls /opt/cloudhsm/java/ | grep "cloudhsm-[0-9\.]\+.jar" | grep -o "[0-9\.]\+[0-9]")
    $ export LOG4JCORE_VER=$(ls /opt/cloudhsm/java/ | grep "log4j-core-[0-9\.]\+.jar" | grep -o "[0-9\.]\+[0-9]")
    $ export LOG4JAPI_VER=$(ls /opt/cloudhsm/java/ | grep "log4j-api-[0-9\.]\+.jar" | grep -o "[0-9\.]\+[0-9]")
    $ mvn validate && mvn clean package 
    	

Step 2: Create the Lambda layer

In this step, you’ll create the Lambda layer that contains the CloudHSM client and its dependencies and the CloudHSM Java library JARs.

  1. On your CloudHSM client instance, create a directory called “layer” and change directories to it:
    
    $ mkdir ~/layer && cd ~/layer
    	

  2. Create the following directories, which you’ll use in the next steps to hold the CloudHSM binary and its prerequisites such as configuration files and libraries, and the CloudHSM Java JCE JARs:
    
    $ mkdir -p lib cloudhsm/bin cloudhsm/etc java/lib
    	

  3. Copy the cloudhsm_client binary and the needed configuration files to the directories you created in the previous step.
    
    $ cp /opt/cloudhsm/bin/cloudhsm_client cloudhsm/bin
    $ cp -r /opt/cloudhsm/etc/{cloudhsm_client.cfg,customerCA.crt,client.crt,client.key,certs} cloudhsm/etc
    	

  4. Add the necessary libraries by running the commands below. These libraries are needed by the Lambda function to be able to run the cloudhsm_client binary.
    
    $ cp /opt/cloudhsm/lib/libcaviumjca.so lib/
    $ ldd /opt/cloudhsm/bin/cloudhsm_client | awk '{print $3}' | grep "^/" | xargs -I{} cp {} lib/
    	

  5. Add the CloudHSM Java JCE Jars by running the commands below. These JARs include the classes needed by the Lambda function code to run.
    
    $ cp /opt/cloudhsm/java/{cloudhsm-[0-9]*.jar,log4j-*-*.jar} java/lib/
    	

  6. Create the Lambda layer ZIP archive by running the command below. This will create the archive with the name layer.zip in the home directory.
    
    $ zip -r ~/layer.zip * 
    	

  7. Move the ZIP archive (layer.zip) to the server/machine with AWS CLI installed and configured, and run the below command to create the Lambda layer with the name cloudhsm-client-layer.
    
    $ aws lambda publish-layer-version --layer-name cloudhsm-client-layer --zip-file fileb://layer.zip --compatible-runtimes java8
    	

Step 3: Create a secret to store the CU credentials

In this step, you will use Secrets Manager to create a secret to store your CU credentials. You must perform this step on your server/machine that has AWS CLI installed and configured.

Run the following command to create a secret with the name CloudHSM_CU that contains your CU user name and password (Prerequisite 4). Make sure to replace the user name and password below with your actual CU user name and password.


$ export HSM_USER=<user>
$ export HSM_PASSWORD=<password>
$ aws secretsmanager create-secret --name CloudHSM_CU --secret-string "{ \"HSM_USER\": \"$HSM_USER\", \"HSM_PASSWORD\": \"$HSM_PASSWORD\"}"

Step 4: Create an IAM role for the Lambda function

In this step, you’ll create an IAM role that has the permissions necessary for it to be assumed by the Lambda function.

  1. On the server/machine with AWS CLI installed and configured, create a new file with the name trust.json.
    
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    	

  2. Create a role named cloudhsm_lambda_example_role using the following AWS CLI command:

    
    $ aws iam create-role --role-name cloudhsm_lambda_example_role --assume-role-policy-document file://trust.json
    	

  3. Run the commands below to create a new file named policy.json. The policy in this file allows the IAM role to perform the following actions:
    • Writing to CloudWatch Logs. This permission allows the IAM role to write to the CloudWatch Logs of the Lambda function. You can then use the logs for troubleshooting. For more information about accessing CloudWatch Logs for Lambda, refer to this guide.
    • Retrieving the CU secret value from Secrets Manager. The CU credentials stored in the CU secret are needed by the Lambda function to be able to log-in to the CloudHSM cluster.
    • Describing CloudHSM clusters. This permission allows the Lambda function to check the current HSM IPs and update its configuration if the IPs have changed.
    
    $ export SECRET_ARN=$(aws secretsmanager describe-secret --secret-id "CloudHSM_CU" --query "ARN" --output text)
    
    $ cat <<EOF> policy.json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "CWLogs",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "*"
            },
            {
                "Sid": "SecretsManager",
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "$SECRET_ARN"
            },
            {
                "Sid": "CloudHSM",
                "Effect": "Allow",
                "Action": "cloudhsm:DescribeClusters",
                "Resource": "*"
            }
        ]
    }
    EOF
    	

  4. Attach the policy to the IAM role created in step 2 of this section by running the following command:
    
    $ aws iam put-role-policy --role-name cloudhsm_lambda_example_role --policy-name cloudhsm_lambda_example_policy --policy-document file://policy.json
    	

  5. Attach the AWS managed policy AWSLambdaVPCAccessExecutionRole to the created role by running the command below. This policy allows the IAM role to access the VPC, which is necessary in order to run the Lambda function in a VPC and a subnet.
    
    $ aws iam attach-role-policy --role-name cloudhsm_lambda_example_role --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
    	

  6. To make sure the CU secret is only accessible to the Lambda function role, run the below commands to attach a resource-based policy to the secret:
    
    $ export ROLE_ARN=$(aws iam get-role --role-name cloudhsm_lambda_example_role --query Role.Arn --output text)
    $ export ASSUMED_ROLE_ARN=$(echo $ROLE_ARN | sed -e "s/:iam:/:sts:/" -e "s/:role/:assumed-role/" -e "s/$/\/cloudhsm_lambda_example/")
    $ export ROOT_ARN=$(echo $ROLE_ARN | sed "s/:role.*/:root/")
    $ cat <<EOF> sm_policy.json
    { "Version": "2012-10-17",
    	"Statement": [
    		{
    			"Effect": "Deny",
    			"Action": "secretsmanager:GetSecretValue",
    			"NotPrincipal": {"AWS": [
    				"$ASSUMED_ROLE_ARN",
    				"$ROLE_ARN",
    				"$ROOT_ARN"
    			]},
    				"Resource": "*"
    		}
    	]
    }
    EOF
    
    $ aws secretsmanager put-resource-policy --resource-policy file://sm_policy.json --secret-id CloudHSM_CU
    	

Step 5: Create the Lambda function

In this step, you will create a Lambda function with the necessary settings.

  1. On the server/machine with AWS CLI installed and configured, run the command below to create a security group with the name outbound-443. This security group will be attached to the Lambda function to allow it to connect to the CloudWatch Logs, Secrets Manager and CloudHSM endpoints. Make sure to replace the CLUSTER_ID below with the actual CloudHSM cluster ID of your environment.
    
    $ export CLUSTER_ID=<cluster-xxxxxxxxxx>
    $ export CLUSTER_VPC=$(aws cloudhsmv2 describe-clusters --filters clusterIds=$CLUSTER_ID --query Clusters[0].VpcId --output text)
    $ export OUTBOUND_SG=$(aws ec2 create-security-group --group-name outbound-443 --description "Allow outbound access to port 443" --vpc-id $CLUSTER_VPC --output text)
    $ aws ec2 authorize-security-group-egress --group-id $OUTBOUND_SG --protocol tcp --port 443 --cidr 0.0.0.0/0
    	

  2. Move the JAR package generated in step 4 of the Step 1 section to the current directory on the server/machine that has AWS CLI installed and configured (The file was generated on the CloudHSM client instance under ~/aws-cloudhsm-on-aws-lambda-sample-master/target/cloudhsm_lambda_project-1.0-SNAPSHOT.jar).
  3. Replace the cluster ID and subnet ID below with the CloudHSM cluster ID of your environment, and the ID of the private Lambda subnet in your environment (Prerequisite 1.c), then run the commands below. These commands set environment variables that you’ll need for the next command.
    
    $ export CLUSTER_ID=<cluster-xxxxxxxxxx>
    $ export SUBNET_ID=<subnet-xxxxxxxx>
    $ export CLUSTER_VPC=$(aws cloudhsmv2 describe-clusters --filters clusterIds=$CLUSTER_ID --query Clusters[0].VpcId --output text)
    $ export OUTBOUND_SG=$(aws ec2 describe-security-groups --filters Name=group-name,Values=outbound-443  --query SecurityGroups[0].GroupId --output text)
    $ export CLUSTER_SG=$(aws cloudhsmv2 describe-clusters --filters clusterIds=$CLUSTER_ID --query Clusters[0].SecurityGroup --output text)
    $ export ROLE_ARN=$(aws iam get-role --role-name cloudhsm_lambda_example_role --query Role.Arn --output text)
    $ export LAYER_ARN=$(aws lambda get-layer-version --layer-name cloudhsm-client-layer --version-number 1 --query LayerVersionArn --output text)
    	

  4. Create a Lambda function with the name cloudhsm_lambda_example by running the below command:
    
    $ aws lambda create-function --function-name "cloudhsm_lambda_example" \
    --runtime java8 \
    --role $ROLE_ARN \
    --handler "com.amazonaws.cloudhsm.examples.AESGCMEncryptDecryptLambda::myhandler" \
    --timeout 600 \
    --memory-size 512 \
    --vpc-config SubnetIds=$SUBNET_ID,SecurityGroupIds=$CLUSTER_SG,$OUTBOUND_SG \
    --environment "Variables={CLUSTER_ID=$CLUSTER_ID, SECRET_ID=CloudHSM_CU,liquidsecurity_daemon_id=1}" \
    --layers $LAYER_ARN \
    --zip-file fileb://cloudhsm_lambda_project-1.0-SNAPSHOT.jar
    	

The command will create a Lambda function with the following configuration:

  • Runtime: Java8
  • Execution Role: The role you created in the Step 4 section.
  • Handler: The name of the class and the function in the package created in the Step 1 section.
  • Timeout: 10 minutes.
  • Memory size: 512 MB.
  • Subnet: The private Lambda subnet in your environment (Prerequisite 1.c).
  • Security Groups: The CloudHSM cluster security group AND the security group created in step 1 of the Step 5 section for outbound access to port 443 (outbound-443).
  • Code/Package: The JAR package you created in step 4 of the Step 1 section.
  • Layer: The layer created in the Step 2 section.
  • Environmental Variables:
    • CLUSTER_ID = the CloudHSM cluster ID in your environment
    • SECRET_ID = the ID of the secret you created in the Step 3 section
    • liquidsecurity_daemon_id = 1 (this is needed by the cloudhsm_client binary)

Step 6: Run the Lambda function

In this step, you will invoke the Lambda function and check the logs to view the output.

  1. You can invoke the Lambda function using the following command. This will execute the code in the package you created in Step 1.
    
    $ aws lambda invoke --function-name cloudhsm_lambda_example out.txt
    	

  2. You can check the function’s CloudWatch Log group with a command like this one:
    
    $ aws logs filter-log-events --log-group-name "/aws/lambda/cloudhsm_lambda_example" --start-time "`date -d "now -5min" +%s`000" --query events[*].message --output text | sed "s/\t/\n/g" 
    	

    If the Lambda function was successful, the output of the function should look something like the example below:

    
    START RequestId: 39c627f2-3908-4424-97ef-038c28a72f9a Version: $LATEST
    
    * Running GetSecretValue to get the CU credentials ...
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    
    * Running DescribeClusters to get the HSM IP ...
    DescribeClusters returned the HSM IP = 1.2.3.4
    * Getting the HSM IP inf the configuration file ...
    The configuration file has the HSM IP = 1.2.3.4
    * Starting the cloudhsm client ...
    * Waiting for the cloudhsm client to start ...
    * cloudhsm client started ...
    * Adding the Cavium provider ...
    ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
    
    * Using credentials to Login to the CloudHSM Cluster ...
    Login successful!
    * Generating AES Key ...
    * Generating Random data to encrypt ...
    Plain Text data = 3B0566E9A3FADA8FED7D6C88FE92ECBE8526922E84489AB48F1F3F3116235E69
    * Encrypting data ...
    Cipher Text data = CA6D80AD34BBADEF34275743F309E6730ABC66BA19C2EADC731899B0FB86564EDDB9F7FC103E1C9C2A6A1E64BF2D2C48
    * Decrypting ciphertext ...
    Decrypted Text data = 3B0566E9A3FADA8FED7D6C88FE92ECBE8526922E84489AB48F1F3F3116235E69
     * Successful decryption
    * Logging out the CloudHSM Cluster
    * Closing client ...
    END RequestId: 39c627f2-3908-4424-97ef-038c28a72f9a
    
    REPORT RequestId: 39c627f2-3908-4424-97ef-038c28a72f9a
    Duration: 11990.69 ms
    Billed Duration: 12000 ms
    Memory Size: 512 MB
    Max Memory Used: 103 MB
    	

Note: The StatusLogger No log4j2 configuration file found error above is normal and can be ignored. It is caused by a missing log4j configuration file, which is normally used to configure logging, but it is not needed in this case because the log messages are written to CloudWatch Logs by default.

Conclusion

This solution demonstrates how to run CloudHSM workloads on Lambda, which allows you to not only leverage the flexibility of serverless computing, but also helps you meet security and compliance requirements by performing cryptographic tasks such as encryption and decryption operations. This approach also allows you to integrate with other AWS services like Amazon CloudWatch Events, Amazon Simple Storage Service (Amazon S3), or AWS Config for a seamless experience across your environment.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Mohamed AboElKheir

Mohamed AboElKheir is an Application Security Engineer who works with different teams to ensure AWS services, applications, and websites are designed and implemented to the highest security standards. He is a subject matter expert for CloudHSM and is always enthusiastic about assisting CloudHSM customers with advanced issues and use cases. Mohamed is passionate about InfoSec, specifically cryptography, penetration testing (he’s OSCP certified), application security, and cloud security (he’s AWS Security Specialty certified).

Continuous compliance monitoring with Chef InSpec and AWS Security Hub

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/continuous-compliance-monitoring-with-chef-inspec-and-aws-security-hub/

In this post, I will show you how to run a Chef InSpec scan with AWS Systems Manager and Systems Manager Run Command across your managed instances. InSpec is an open-source runtime framework that lets you create human-readable profiles to define security, compliance, and policy requirements and then test your Amazon Elastic Compute Cloud (Amazon EC2) instances against those profiles. InSpec profiles can also be used to make sure certain network ports aren’t reachable, to verify that certain packages are not installed, and/or to confirm that certain processes are running on your instances.

InSpec is integrated within AWS Systems Manager, an AWS service that you can use to view and control your infrastructure on AWS. InSpec compliance scans are run by using an AWS Systems Manager document (SSM document), which installs InSpec on your servers and removes InSpec after scans are completed.

In this post, you will create the supporting infrastructure to collect non-compliant findings from AWS Systems Manager Configuration Compliance and send them to AWS Security Hub via a Lambda function. You will also explore methods to correlate finding information in Security Hub for non-compliant resources.

Note: AWS Systems Manager (Systems Manager) was formerly known as “Amazon Simple Systems Manager (SSM)” and “Amazon EC2 Systems Manager (SSM).” The original abbreviated name of the service, “SSM,” is still reflected in various AWS resources. For more information, see Systems Manager Service Name History.

Solution overview

The following diagram shows the flow of events in the solution I describe in this post.
 

Figure 1: Architecture diagram

  1. Invoke an AWS-RunInspecChecks document on-demand by using Run Command against your target instances (State Manager is another option for scheduling InSpec scans, but is not covered in this post).
  2. Systems Manager downloads the InSpec Ruby files from Amazon Simple Storage Service (Amazon S3), installs InSpec on your server, runs the scan, and removes InSpec when complete.
  3. AWS Systems Manager pushes scan results to the Compliance API and presents the information in the Systems Manager Compliance console, including severity and compliance state.
  4. A CloudWatch Event is emitted for Compliance state changes.
  5. A CloudWatch Event Rule listens for these state changes and when detected, invokes a Lambda function.
  6. Lambda calls the Compliance APIs for additional data about which InSpec check failed.
  7. Lambda calls the EC2 APIs to further enrich the data about the non-compliant instance.
  8. Lambda maps these details to the AWS Security Finding Format and sends them to Security Hub.

To support the steps above, you will deploy a CloudFormation template that creates a CloudWatch Event Rule and a placeholder Lambda function. You will then create an InSpec profile, upload it to Amazon S3, and use Run Command to invoke an InSpec compliance scan.

When the scan completes, you can then search for the findings in Security Hub. You can create saved searches with insights in AWS Security Hub, and use different filtering to correlate InSpec compliance failures with other information from Amazon Inspector and Amazon GuardDuty.

Prerequisites

Getting started

Before creating your CloudFormation stack, you’ll upload your InSpec profile and Lambda function package to an S3 bucket.

To upload the Lambda package:

  1. Download the Lambda function ZIP file from GitHub.
  2. Save the file to your workstation. In your S3 bucket, choose Upload to upload the InSpecToSecurityHub.zip file.

The next step is to create and upload an InSpec profile to Amazon S3.

To create and upload an InSpec profile:

  1. From your workstation, create a file named Inspec_SSH.rb and paste in the code that follows:
    
    	control 'Linux instance check' do
        title 'SSH access'
        desc 'SSH port should not be open to the world'
        impact 0.9
        require 'rbconfig'
        is_windows = (RbConfig::CONFIG['host_os'] =~ /mswin|mingw|cygwin/)
        if ! is_windows
          describe port(22) do
            it { should be_listening }
            its('addresses') {should_not include '0.0.0.0'}
          end
        end
      end
    	

  2. Save your file, and in your S3 bucket, choose Upload to upload the Inspec_SSH.rb file.

Note: The InSpec profile in the example code above ensures that SSH (Port 22) is listening on your instance and that SSH access is not publicly available, as noted by {should_not include ‘0.0.0.0’}. For other InSpec profiles, see the DevSec chef-os-hardening project on GitHub for profiles to help you secure your instances and servers, or you can learn more about Compliance Automation with InSpec on the Chef website.
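If you prefer the AWS CLI to the console for the two uploads above, both objects can be copied to your bucket as shown below; the bucket name is a placeholder.

# Upload the Lambda deployment package and the InSpec profile to your bucket
$ aws s3 cp InSpecToSecurityHub.zip s3://YOUR_BUCKET_NAME/
$ aws s3 cp Inspec_SSH.rb s3://YOUR_BUCKET_NAME/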

You will now deploy the CloudFormation template to finish setting up the solution.

To deploy the CloudFormation template:

  1. Download the CloudFormation template from GitHub and create a CloudFormation stack.

    Note: For more information about how to create a CloudFormation stack, see Getting Started with AWS CloudFormation in the AWS CloudFormation User Guide.

  2. Under Parameters, enter the name of the bucket you uploaded the package to, as shown in Figure 2, and finish creating your stack. (An equivalent AWS CLI command is sketched after these steps.)
     
    Figure 2: CloudFormation parameters
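As an alternative to the console, the stack can be created from the AWS CLI. This is only a sketch: the local file name, stack name, and parameter key below are placeholders, so check the downloaded template for the exact parameter name it expects.

# Create the stack; replace BucketParameterKey with the parameter name defined in the template
$ aws cloudformation create-stack \
    --stack-name inspec-to-securityhub \
    --template-body file://InSpecToSecurityHub.yml \
    --parameters ParameterKey=BucketParameterKey,ParameterValue=YOUR_BUCKET_NAME \
    --capabilities CAPABILITY_IAM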

After your stack is successfully created, you will run an InSpec scan.

Run an InSpec scan

Now that you have your Lambda function and InSpec profile ready, you will run your InSpec scan against one or more managed instances with Run Command.

  1. Navigate to the AWS Systems Manager Console and in the navigation pane, choose Run Command.
  2. In the search box, enter AWS-RunInspecChecks. In the search results, select AWS-RunInspecChecks.
  3. For Source Type, select S3. The Source Info format is:
    
    	{"path":"https://s3.amazonaws.com/BUCKET_NAME/PATH/ Inspec_SSH.rb"}
    	

    Change BUCKET_NAME, PATH, and the InSpec profile name as appropriate, and enter it in the Source Info box, as shown in Figure 3.

  4. Under Targets, select Choose instances manually and select one or more instances, as shown in Figure 3.
     
    Figure 3: Run Command interface

  5. Scroll down to Output options, and select S3, CloudWatch Logs, or both, then select Run.
  6. In the Run Command console for the specific Command invocation you just sent, you will see your instances and their status, as shown in Figure 4. Refresh the page after a few minutes to see if the invocation was successful.
     
    Figure 4: Run Command invocation status

  7. If you see anything other than Success, you can find information about different statuses in Understanding Command Statuses. You can also see Troubleshooting Systems Manager Run Command.
  8. In the AWS Systems Manager Console, navigate to the Compliance menu to see your full Compliance resources summary, as shown in Figure 5. You may also see additional compliance types and statuses. Only failed Custom:InSpec compliance types will make their way to Security Hub.
     
    Figure 5: SSM Compliance dashboard
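
If you want to script the scan instead of using the console, the same invocation can be made through the Systems Manager SendCommand API. The following boto3 sketch is illustrative only; the bucket, path, and instance ID are placeholders, and you should confirm the parameter names against the AWS-RunInspecChecks document in your Region.

import boto3

ssm = boto3.client("ssm")

# Run the AWS-RunInspecChecks document against a managed instance,
# pulling the InSpec profile from S3 (bucket, path, and instance ID are placeholders)
response = ssm.send_command(
    DocumentName="AWS-RunInspecChecks",
    Targets=[{"Key": "InstanceIds", "Values": ["i-0123456789abcdef0"]}],
    Parameters={
        "sourceType": ["S3"],
        "sourceInfo": ['{"path":"https://s3.amazonaws.com/BUCKET_NAME/PATH/Inspec_SSH.rb"}'],
    },
)
print("Command ID:", response["Command"]["CommandId"])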

Analyze with Security Hub

After Lambda has sent your non-compliant InSpec results to Security Hub, you can create saved searches by using Security Hub insights to do basic correlation.

  1. Navigate to the Security Hub console and in the navigation pane, choose Findings.
  2. To find InSpec-related findings, select the search bar, and choose Product fields.
  3. For the Key field, enter Provider Name and for the Value field enter AWS Systems Manager Compliance, then choose Apply, as shown in Figure 6.
     
    Figure 6: User defined field - filter

    This will give you your InSpec-related findings, as shown in Figure 7, but there is not yet enough context to correlate them with other findings from GuardDuty or Amazon Inspector.
     

    Figure 7: Security Hub InSpec findings

  4. Navigate to the Insights menu in the navigation pane and choose Create insight at the top right.
  5. Select the search bar, choose Resource type, enter AwsEc2Instance, and select Apply, as shown in Figure 8. This will group all findings related to EC2 instances together.
     
    Figure 8: Filter insights for EC2 instances

  6. Select the search bar again, select Group by, scroll down to select Severity label, and select Apply. We can now see the total findings by severity across all EC2 instances, as shown in Figure 9.
     
    Figure 9: Insights grouped by severity

  7. At the top right, select Create insight, enter a name such as EC2 Instances by Severity, and choose Create insight to save it.
  8. Navigate back to the main Insights menu and select your new Insight for more detailed graphics to appear on the right-hand side, as shown in Figure 10.
     
    Figure 10: Detailed Insight menu

    You can hover over the values to get the exact count and select them to filter down further on details such as the ARN of the resource, the name of the product, or what account the finding originated from. This view can be used by security analysts who are looking to do fast correlation of findings for a resource or to take action on high severity findings first.
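
The Provider Name filter used in steps 2 and 3 can also be applied programmatically, which is handy if you want to feed failed InSpec checks into your own reporting or ticketing workflow. Here is a minimal boto3 sketch using the same filter values shown above:

import boto3

securityhub = boto3.client("securityhub")

# Retrieve active findings whose "Provider Name" product field is
# "AWS Systems Manager Compliance" (the same filter used in the console)
paginator = securityhub.get_paginator("get_findings")
pages = paginator.paginate(
    Filters={
        "ProductFields": [{
            "Key": "Provider Name",
            "Value": "AWS Systems Manager Compliance",
            "Comparison": "EQUALS",
        }],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    }
)

for page in pages:
    for finding in page["Findings"]:
        print(finding["Title"], finding["Resources"][0]["Id"])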

Conclusion

In this post, I showed you how to run InSpec scans to monitor the compliance of your instances against your policy requirements, as defined by InSpec profiles. InSpec can help identify when certain ports are improperly configured or publicly accessible. By using Systems Manager, you can continuously monitor compliance against these profiles with State Manager, and run checks on demand by using Run Command. Systems Manager allows you to rapidly scale across your managed instances, and enriches instance data through the SSM Agent.

You also learned how to use Security Hub to perform correlation within the findings menu, so you can quickly search based on resource type, or the VPC and subnet that the EC2 instances are in. When you have an instance that fails compliance against an InSpec profile, you can look for any threat-related findings from GuardDuty, or vulnerability information from Amazon Inspector or partner integrations.

To avoid incurring additional charges from the resources created in this blog, delete the CloudFormation stack you deployed. For more information on Security Hub pricing, see the AWS Security Hub Pricing page.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

How to set case sensitivity in the Amazon Cognito console

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-set-case-sensitivity-in-the-amazon-cognito-console/

AWS recently updated how Amazon Cognito user pools are created so that new user pools are case insensitive by default. An Amazon Cognito user pool is a user directory that helps you manage end-user identities. With this new feature, the native user name, email alias, and preferred user name alias are marked as case insensitive when a new user pool is created. For example, user names or email aliases that differ only in capitalization are now treated as the same user.

If you want to create a user pool that is case sensitive, you can change the default setting.

Note: This new feature does not change the behavior of existing user pools, which remain case sensitive.

When you create a new user pool, enabling case insensitivity is selected by default, creating a user pool that is case insensitive (see Figure 1). To create a user pool that is case sensitive, clear the case-insensitive option.

Note: Case sensitivity can’t be changed after the user pool has been created.

 

Figure 1: The case-insensitive user pool is selected by default
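
If you create user pools programmatically rather than in the console, the same choice is exposed through the CreateUserPool API's UsernameConfiguration setting. The following boto3 sketch shows the idea; the pool name is a placeholder and all other pool settings are omitted.

import boto3

cognito_idp = boto3.client("cognito-idp")

# Create a user pool whose user names and aliases are case insensitive,
# which mirrors the console default for new pools
response = cognito_idp.create_user_pool(
    PoolName="example-user-pool",
    UsernameConfiguration={"CaseSensitive": False},
)
print("User pool ID:", response["UserPool"]["Id"])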

How to migrate to a new user pool

Case-sensitive user pools can have conflicting identities, so there is no automated migration path to change user pools from case-sensitive to case-insensitive. Migration to a new user pool requires scenario-based logic to handle conflicts. To make an existing user pool case insensitive, you can create a new user pool that is case insensitive, and then use the Migrate User Lambda Trigger to migrate existing users to the new pool. The trigger will allow you to migrate users at the time of sign-in or during the “forgot-password” flow. It will also allow you to handle conflicts. For more details, see the documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Cognito forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

How to define least-privileged permissions for actions called by AWS services

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/how-to-define-least-privileged-permissions-for-actions-called-by-aws-services/

When you perform certain actions in AWS, the service you called sometimes takes additional actions in other AWS services on your behalf. AWS Identity and Access Management (IAM) now includes condition keys to make it easier to grant only the minimum level of access necessary for IAM principals (users and roles) and AWS services to take those actions. Using the aws:CalledVia condition key, you can create distinct access rules for the actions performed by your IAM principals, and for the subsequent actions taken by AWS services on your behalf.

For example, if you use AWS CloudFormation to launch an Amazon Elastic Compute Cloud (EC2) instance, the CloudFormation service independently uses your credentials to launch the instance in EC2. Your principals need permissions from both services. Based on the principle of granting least privileged permissions, you might want to prevent your principals from taking each of those actions independently. Using the new condition, you can grant your IAM principals the ability to launch EC2 instances only by using CloudFormation, without granting them direct access to EC2.

You can also use the aws:CalledVia condition key to define rules for the initial call made to AWS by your principals, without impacting the additional calls the service makes. For example, you can require that all initial calls to AWS services come from inside your Virtual Private Cloud (VPC) or your private IP subnet, but you will not impose the same rule for downstream requests to other services, since those requests come from AWS rather than your private network.

In this post, I explain the aws:CalledVia condition key and outline the context it provides during authorization. Then, I walk through a detailed use case with examples that show you how to secure access to a database managed in Amazon Athena behind a VPC. In the examples, I’ll cover how to grant access to execute queries in Athena without granting direct access to dependent services such as Amazon Simple Storage Service (S3). I will also explain how you can use the aws:CalledVia condition key to prevent access to your databases from outside your private networks.

How to use the aws:CalledVia condition key

The aws:CalledVia condition key contains an ordered list of each service principal that triggers the AWS action in another service using your credentials. It is a global condition key, meaning you can use it in combination with any AWS service action. For example, say that you use CloudFormation to read and write from an Amazon DynamoDB table that uses encryption supplied by the AWS Key Management Service (AWS KMS). When you call CloudFormation, it calls Amazon DynamoDB to read from the table, then Amazon DynamoDB calls AWS KMS to decrypt the data. Each call gets its own authorization check, and the aws:CalledVia key keeps track of who called whom.

  • For the call to CloudFormation, aws:CalledVia will be empty.
  • For the call from CloudFormation to Amazon DynamoDB, aws:CalledVia will contain [ “cloudformation.amazonaws.com” ]
  • For the call from Amazon DynamoDB to AWS KMS, aws:CalledVia will contain [ “cloudformation.amazonaws.com” , “dynamodb.amazonaws.com” ]

    Important: The aws:CalledVia context is only available when AWS reuses your credentials after you make a request. For example, if you provided a service role to CloudFormation instead of triggering the action directly, the aws:CalledVia context would be empty for a call from CloudFormation to Amazon DynamoDB.

To identify the service principal that aws:CalledVia returns for each AWS service, see the CalledVia Services table in the documentation. AWS will update this list as more services add support for this key.

You can use condition operators, such as StringLike and StringEquals, in your policies to check the contents of the key during authorization. Because aws:CalledVia is a multivalued condition key, you also need to use the ForAnyValue or ForAllValues set operators along with your comparison operators to check the content of the key. For more information, see Creating a Condition with Multiple Keys or Values in the IAM documentation.

Here’s an example policy that follows this use case.


{ 
   "Version":"2012-10-17",
   "Statement":[ 
      { 
         "Sid":"AllowCFNActions",
         "Effect":"Allow",
         "Action":[ 
            "cloudformation:CreateStack*"
         ],
         "Resource":"*"
      },
      { 
         "Sid":"AllowDDBActionsViaCFN",
         "Effect":"Allow",
         "Action":[ 
            "dynamodb:GetItem",
            "dynamodb:BatchGetItem",
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:DeleteItem"
         ],
         "Resource":"arn:aws:dynamodb:region:111122223333:table",
         "Condition":{ 
            "ForAnyValue:StringEquals":{ 
               "aws:CalledVia":[ 
                  "cloudformation.amazonaws.com"
               ]
            }
         }
      },
      { 
         "Sid":"AllowKMSActionsViaDDB",
         "Effect":"Allow",
         "Action":[ 
            "kms:Encrypt",
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:GenerateDataKey",
            "kms:DescribeKey"
         ],
         "Resource":[
            "arn:aws:kms:region:111122223333:key/example"
         ],
         "Condition":{ 
            "ForAnyValue:StringEquals":{ 
               "aws:CalledVia":[ 
                  "dynamodb.amazonaws.com"
               ]
            }
         }
      }
   ]
}

I’ll walk you through each statement in this policy. In the first statement, AllowCFNActions, I grant access to use CloudFormation to create stacks and stack sets. The second statement, AllowDDBActionsViaCFN, grants access to read and write from a specific DynamoDB table, but only if CloudFormation took the action. Finally, the third statement, AllowKMSActionsViaDDB, allows AWS KMS encrypt and decrypt operations that are triggered through Amazon DynamoDB, using the customer master key (CMK) specified in the Resource element. Each condition statement uses the ForAnyValue:StringEquals combination of operators to check for the existence of the CloudFormation or DynamoDB service principals. Putting it all together, the policy allows an IAM principal to do the following:

  1. Create stacks and stack sets by using CloudFormation.
  2. Read and write to Amazon DynamoDB tables via CloudFormation.
  3. Encrypt and decrypt by using AWS KMS actions via Amazon DynamoDB.

Controlling access based on the first and last requesting services

Along with aws:CalledVia, AWS has introduced two companion keys to make it easy to retrieve the first and last services in the chain of requests. The aws:CalledViaFirst condition key returns the first service principal in the chain, and aws:CalledViaLast returns the last service principal in the chain. For example, if aws:CalledVia contained [ “cloudformation.amazonaws.com” , “dynamodb.amazonaws.com” ], then aws:CalledViaFirst would contain “cloudformation.amazonaws.com” and aws:CalledViaLast would contain “dynamodb.amazonaws.com”.

Still following the previous example, here’s a policy that uses aws:CalledViaFirst to allow access to Amazon DynamoDB and AWS KMS, as long as the entry point is CloudFormation. You can use this policy to ensure that Amazon DynamoDB tables in your organization are accessed according to the best practices you’ve defined in your CloudFormation templates.


{ 
   "Version":"2012-10-17",
   "Statement":[ 
      { 
         "Sid":"AllowCFNActions",
         "Effect":"Allow",
         "Action":[ 
            "cloudformation:CreateStack*"
         ],
         "Resource":"*"
      },
      { 
         "Sid":"AllowDDBAndKMSActionsViaCFN",
         "Effect":"Allow",
         "Action":[ 
            "dynamodb:GetItem",
            "dynamodb:BatchGetItem",
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:DeleteItem",
            "kms:Encrypt",
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:GenerateDataKey",
            "kms:DescribeKey"
         ],
         "Resource":[
            "arn:aws:dynamodb:region:111122223333:table"
            "arn:aws:kms:region:111122223333:key/example"
         ],
         "Condition":{ 
            "StringEquals":{ 
               "aws:CalledViaFirst":[ 
                  "cloudformation.amazonaws.com"
               ]
            }
         }
      }
   ]
}

I’ll walk you through each statement in this policy. The first statement of the policy, AllowCFNActions, enables the principal to create stacks and stack sets through CloudFormation. The second statement, AllowDDBAndKMSActionsViaCFN, allows Amazon DynamoDB and AWS KMS actions for a specific CMK and table, but only under the condition that the first request made by the principal was to the CloudFormation service.

Now that I’ve described the conditions and their usage, I’ll share detailed examples in the use case that follows.

Use case: Permissions to use Athena inside your Virtual Private Cloud

Amazon Athena is a service you can use to analyze data in Amazon S3 by using standard SQL. Athena is serverless, which makes it cost-effective for operating a data lake. In this example, I define the IAM permissions for a role that manages and executes Athena queries from behind a secure network perimeter.

In this use case, my business metrics team needs an IAM role to execute queries against my data lake that is stored in Amazon S3. They’ll use the IAM role to manage the BizMetrics workgroup in Athena, which is responsible for managing exploratory queries and generating regular reports for key performance indicators within my company. Because my business metrics are internal to my organization, I want to ensure the data is only accessible inside my Amazon Virtual Private Cloud (VPC). VPCs let you create a logically-isolated section of the AWS Cloud and connect it to your private network. For more information about VPCs, see What is Amazon VPC? in the VPC documentation.

When an IAM role makes a call to Athena to execute a query inside a VPC, Athena makes subsequent calls to Amazon S3 and other services to complete the task. You can see the interaction on the following diagram.
 

Figure 1: IAM role makes a call to Athena to execute a query inside a VPC

The calls from the IAM role to Athena, and from Athena to Amazon S3, use the same role credentials. This means that the principal needs permissions for both Athena and Amazon S3 actions to accomplish the query. You can also see that the IAM role calls Athena through the VPC endpoint, rather than the public AWS endpoint. VPC endpoints allow you to communicate with AWS from your private network without a connection to the public internet. For more information about VPC endpoints, see Interface VPC Endpoints (AWS PrivateLink) in the VPC documentation.

The subsequent calls between Athena and Amazon S3 don’t use the VPC endpoint. If I want to require that each call to Athena must use the VPC endpoint, I cannot apply the same restriction to Athena’s calls to Amazon S3. I will need to use aws:CalledVia to define distinct permissions for the initial call to Athena, and the call to Amazon S3 from Athena. I’ll apply these permissions to the IAM role I create, as well as the Amazon S3 bucket that contains the data.

Step 1: Define permissions for the IAM Role

For the purposes of this use case, my organization’s data lake has one database, revenuedata, in the bucket called examplecorp-business-data. When you follow along with these examples, you should replace the bucket and database names in the policies with your own resources.

  1. First, I create the IAM role BizMetricsQuery. I can create the role in the IAM console without any permission policies attached. If you haven’t created a role before, see Creating IAM Roles in the IAM documentation.
  2. Next, I create a managed policy that defines permissions for the BizMetricsQuery role. For more information about creating a managed policy or attaching it to the BizMetricsQuery role, see Managing IAM Policies in the IAM documentation.

    The following policy grants the permissions I need:

    
    	{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAthenaReadActions",
                "Effect": "Allow",
                "Action": [
                    "athena:ListWorkGroups",
                    "athena:GetExecutionEngine",
                    "athena:GetExecutionEngines",
                    "athena:GetNamespace",
                    "athena:GetCatalogs",
                    "athena:GetNamespaces",
                    "athena:GetTables",
                    "athena:GetTable"
                ],
                "Resource": "*",
                "Condition":{ 
                   "StringEquals":{ 
                      "aws:SourceVpce":[ 
                         "vpce-0e880cb0a9EXAMPLE"
                      ]
                   }
                }
            },
            {
                "Sid": "AllowAthenaWorkgroupActions",
                "Effect": "Allow",
                "Action": [
                    "athena:StartQueryExecution",
                    "athena:GetQueryResults",
                    "athena:DeleteNamedQuery",
                    "athena:GetNamedQuery",
                    "athena:ListQueryExecutions",
                    "athena:StopQueryExecution",
                    "athena:GetQueryResultsStream",
                    "athena:ListNamedQueries",
                    "athena:CreateNamedQuery",
                    "athena:GetQueryExecution",
                    "athena:BatchGetNamedQuery",
                    "athena:BatchGetQueryExecution",
                    "athena:GetWorkGroup"
                ],
                "Resource": [
                    "arn:aws:athena:us-east-1:111122223333:workgroup/BizMetrics"
                ],
                "Condition":{ 
                   "StringEquals":{ 
                      "aws:SourceVpce":[ 
                         "vpce-0e880cb0a9EXAMPLE"
                      ]
                   }
                }
            },
            {
                "Sid": "AllowGlueActionsViaVPCE",
                "Effect": "Allow",
                "Action": [
                    "glue:GetDatabase",
                    "glue:GetDatabases",
                    "glue:CreateDatabase",
                    "glue:GetTables",
                    "glue:GetTable"
                ],
                "Resource": [
                    "arn:aws:glue:us-east-1:111122223333:catalog",
                    "arn:aws:glue:us-east-1:111122223333:database/default",
                    "arn:aws:glue:us-east-1:111122223333:database/revenuedata",
                    "arn:aws:glue:us-east-1:111122223333:table/revenuedata/*"
                ],
                "Condition":{ 
                   "StringEquals":{ 
                      "aws:SourceVpce":[ 
                         "vpce-0e880cb0a9EXAMPLE"
                      ]
                   }
                }
            },
            {
                "Sid": "AllowGlueActionsViaAthena",
                "Effect": "Allow",
                "Action": [
                    "glue:GetDatabase",
                    "glue:GetDatabases",
                    "glue:CreateDatabase",
                    "glue:GetTables",
                    "glue:GetTable"
                ],
                "Resource": [
                    "arn:aws:glue:us-east-1:111122223333:catalog",
                    "arn:aws:glue:us-east-1:111122223333:database/default",
                    "arn:aws:glue:us-east-1:111122223333:database/revenuedata",
                    "arn:aws:glue:us-east-1:111122223333:table/revenuedata/*"
                ],
                "Condition":{ 
                   "ForAnyValue:StringEquals":{ 
                      "aws:CalledVia":[ 
                         "athena.amazonaws.com"
                      ]
                   }
                }
            },
    
            {
                "Sid": "AllowS3ActionsViaAthena",
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketLocation",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:ListBucketMultipartUploads",
                    "s3:ListMultipartUploadParts",
                    "s3:AbortMultipartUpload",
                    "s3:CreateBucket",
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::examplecorp-business-data/*",
                    "arn:aws:s3:::athena-examples-*"
                ],
                "Condition":{ 
                   "ForAnyValue:StringEquals":{ 
                      "aws:CalledVia":[ 
                         "athena.amazonaws.com"
                      ]
                   }
                }
            }
        ]
    }
    	

    I’ll walk you through each statement in this policy. First, the AllowAthenaReadActions and AllowAthenaWorkgroupActions statements allow the role to use Athena actions within the BizMetrics workgroup. The role calls Athena directly from inside the VPC, so there is a condition on these statements that requires that the calls must come from my VPC endpoint.

    Next, the AllowGlueActionsViaVPCE and AllowGlueActionsViaAthena statements allow the role to access the AWS Glue Data Catalog and view the tables within my data lake. AWS Glue makes it easy to catalog your data and make it searchable, queryable, and available for ETL operations. To view the database and tables in the Athena console, I need access to these AWS Glue actions. The role calls AWS Glue directly, and allows Athena to call AWS Glue, so the policy has two statements that allow both paths of communication respectively. For more information about required permissions for Athena and AWS Glue, see Fine-Grained Access to Databases and Tables in the AWS Glue Data Catalog in the Amazon Athena documentation.

    Finally, the AllowS3ActionsViaAthena statement enables Athena to call Amazon S3 on my behalf. I use the aws:CalledVia condition to require that these S3 actions are only available through Athena. In the resource section of the policy, I specify that the access is limited to the examplecorp-business-data bucket, as well as the Athena examples repository, which is required for using Athena with the AWS console.

  3. Next, I attach the policy to the BizMetricsQuery role. With this policy attached, I ensure the role can use Athena to access data in the Amazon S3 bucket without granting permissions to access the bucket directly. I also ensure that the role can only access the data through the VPC endpoint located within my private network.

Step 2: Define permissions on the S3 bucket

I need to make sure the BizMetricsQuery role is the only IAM identity that has access to the data in the examplecorp-business-data S3 bucket. To do this, I need to update the bucket policy that is attached to the bucket.

The following bucket policy grants the BizMetricsQuery role access to the data through the VPC endpoint.


{
	"Version": "2012-10-17",
	"Statement": [
    {
      "Sid": "AllowS3ActionsThroughAthena",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/BizMetricsQuery"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::examplecorp-business-data/*",
        "arn:aws:s3:::examplecorp-business-data"
      ],
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:CalledVia": [
            "athena.amazonaws.com"
          ]
        }
      }
    }
  ]
}

This policy is very similar to the statement in the role policy that grants permissions to Amazon S3. In this policy, I allow the role to perform Amazon S3 operations, but only if the aws:CalledVia context includes the Athena service. In the Principal element of the policy, I include the BizMetricsQuery role’s ARN. This is an important step because, without this requirement, anyone could interact with the data in the bucket as long as they made the call using Athena. I want to require that the only role that can reach the data is BizMetricsQuery, which is also subject to the VPC rules.

With this policy attached to the examplecorp-business-data bucket, I ensure that access to the databases is only available through the BizMetricsQuery role. Because the BizMetricsQuery role can only access the data through Athena, and all calls to Athena must use my VPC endpoint, this also ensures the Amazon S3 data is only accessible from within my VPC.
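
If you manage bucket policies through automation, the same policy can be applied with a single API call. A minimal boto3 sketch, assuming the policy JSON above has been saved locally as bucket_policy.json:

import boto3

s3 = boto3.client("s3")

# Attach the CalledVia bucket policy to the data lake bucket
with open("bucket_policy.json") as f:
    policy_json = f.read()

s3.put_bucket_policy(
    Bucket="examplecorp-business-data",
    Policy=policy_json,
)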

Step 3: Run the Athena queries

I’ll try a simple query in Athena from inside the VPC to make sure everything works as expected. I use the BizMetricsQuery role to view the service_revenue table in the revenuedata database. If you are following along with this example, you should replace the database and table in the query with your own resource names.

  1. In the Athena console, on the Query Editor page, enter the following query in a new tab:
     
    SELECT * FROM "revenuedata"."service_revenue" limit 10;
     
    Then select Run query.
     
    Figure 2: Run an Athena test query

    When I run this test query, my role performs the StartQueryExecution action. For this request, the aws:CalledVia context is empty because the role takes the action directly. The role is allowed to use Athena actions, so it passes the first authorization check.

    To perform the query, Athena subsequently calls the Amazon S3 service. These requests have the aws:CalledVia context athena.amazonaws.com. The role has access to the Amazon S3 actions when aws:CalledVia is equal to this value, so it passes the second and final authorization check.

  2. When I try to access the data directly by using Amazon S3, I get an Access Denied error message.
     
    Figure 3: Error message attempting to access directly from Amazon S3

    In this case, the aws:CalledVia context is empty because I made the call directly to the Amazon S3 service. The policy requires me to call Amazon S3 only through Athena, so this request is denied.

  3. When I call Athena by using the same query from outside my VPC, I get the following error message.
     
    Figure 4: Error message attempting to call Athena from outside my VPC

In the Athena case, the aws:SourceVpce context is not present because I didn’t make the call through the VPC endpoint. As in the Amazon S3 case, the policy requires that these calls come from inside my private network, so I get an access denied message here as well. With this policy and example, I’ve demonstrated that I can only call Athena from within the VPC, and shown that calling the dependent services directly is not possible.
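
If you run the same test from code inside the VPC, for example from a scheduled reporting job that assumes the BizMetricsQuery role, you can start the query through the Athena API. The following boto3 sketch reuses the names from the earlier examples; the results output location is a placeholder, and your workgroup may already define one.

import time
import boto3

athena = boto3.client("athena")

# Start the test query in the BizMetrics workgroup
query = athena.start_query_execution(
    QueryString='SELECT * FROM "revenuedata"."service_revenue" limit 10;',
    QueryExecutionContext={"Database": "revenuedata"},
    WorkGroup="BizMetrics",
    ResultConfiguration={"OutputLocation": "s3://examplecorp-business-data/athena-results/"},
)
query_id = query["QueryExecutionId"]

# Wait for the query to finish, then fetch the first page of results
state = "RUNNING"
while state in ("QUEUED", "RUNNING"):
    time.sleep(1)
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]

results = athena.get_query_results(QueryExecutionId=query_id)
print(len(results["ResultSet"]["Rows"]), "rows returned")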

Next steps

By using the aws:CalledVia condition key, you can now define specific permissions for when AWS services make calls to other services on your behalf. This makes it easier to ensure that your company’s guidelines are followed consistently. For more information about aws:CalledVia and other IAM conditions, see AWS Global Condition Context Keys in the IAM documentation. If you have any questions, comments, or concerns, please reach out to AWS Support or our Identity and Access Management Forums.

If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.

How to create certificates with custom extensions using AWS Certificate Manager Private CA

Post Syndicated from Josh Rosenthol original https://aws.amazon.com/blogs/security/how-to-create-certificates-with-custom-extensions-using-aws-certificate-manager-private-ca/

Digital certificates, also known as X.509 or TLS/SSL certificates, are used to prove the identity of entities like web servers or VPN users and to establish secure communication channels between them. In this blog post, I’ll discuss certificate extensions. You can use certificate extensions for applications beyond the common use case of identifying TLS server endpoints. These additional applications include code signing, signing Online Certificate Status Protocol (OCSP) responses, TLS clients for establishing two-way (mutual) authentication, and custom applications that include extensions you specify. I’ll also discuss how you can use AWS Identity and Access Management (IAM) policies to selectively control who can issue each kind of certificate. Finally, I’ll describe two use cases for certificate templates: OCSP signing and two-way TLS. A certificate template allows CA administrators and public key infrastructure (PKI) operators a way to control and specify X.509 certificate extensions for the certificates they issue with AWS Certificate Manager Private Certificate Authority (ACM Private CA).

What are certificate extensions?

Extension fields (“extensions”) define the usage of the certificate. The values of these fields — and the corresponding allowed usage of the certificates — can be very different. Certificates bind a subject name and public key for the named subject with a digital signature from a certificate authority. What makes one certificate useful for signing code, while another certificate is useful for terminating TLS are the extension fields in the certificate.

Digital certificates used on the internet (and with other modern applications) are defined in the internet standards specification RFC 5280. Extensions were introduced in v3 of the X.509 specification as a way to give certificates more flexibility. There are 19 extension types defined in RFC 5280, but the extensions widely used for defining certificate usage are basic constraints, key usage, and extended key usage.

  • Basic constraints indicate whether the certificate can be used to identify a certificate authority versus an end entity, such as a web server. Certificates that include basic constraints with a value of “CA=true” are CA certificates and are allowed to issue other certificates.
  • Key usage refers to the purpose of the public key contained in the certificate and the actions that can be performed with the key. An example of a key usage value is “digital signature.” If you use this value, it allows your public key to be used for verifying digital signatures (other than signatures on certificates and CRLs), such as those used for authentication and integrity, like when a TLS connection is established. The key usage extension is limited to nine key usage values that define the specific cryptographic functions the associated key pair can perform. Those values are: decipherOnly, encipherOnly, cRLSign, keyCertSign, keyAgreement, dataEncipherment, keyEncipherment, contentCommitment, nonRepudiation, and digitalSignature.
  • In contrast to key usage, the extended key usage extension defines the specific protocols and functions that the certificate can be used with. For example, one extended key usage value is “TLS web server authentication,” which indicates the public key can be used to terminate TLS as a server. There’s also “TLS web client authentication,” which indicates the key can be used to terminate TLS as a client, and “code signing,” which means the key can be used to validate digital signatures on software. RFC5246 defines the TLS protocol and the use of server and client certificates.
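
If you want to see these extensions on a certificate you already have, the following short sketch uses the Python cryptography library (not something this post otherwise relies on) to print them. The file name is a placeholder, and the sketch assumes all three extensions are present.

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

# Load a PEM-encoded certificate and print the extensions that
# determine how it can be used
with open("certificate.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

basic_constraints = cert.extensions.get_extension_for_oid(ExtensionOID.BASIC_CONSTRAINTS).value
key_usage = cert.extensions.get_extension_for_oid(ExtensionOID.KEY_USAGE).value
ext_key_usage = cert.extensions.get_extension_for_oid(ExtensionOID.EXTENDED_KEY_USAGE).value

print("CA certificate:", basic_constraints.ca)
print("Digital signature allowed:", key_usage.digital_signature)
print("Extended key usage OIDs:", [oid.dotted_string for oid in ext_key_usage])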

Available templates

ACM Private CA offers ten templates so you can control and specify the X.509 certificate extensions for the certificates you issue with ACM Private CA. The templates allow CA administrators to maintain control of specific X.509 certificate extensions — including key usage, extended key usage, and basic constraints — while allowing users to customize certificates with additional extensions.

Template Details

End-entity certificates

Use the end-entity certificate templates to create certificates for resources that are TLS clients or servers, such as web and application servers, API endpoints, or TLS clients like IoT (internet of things) devices and API users. All of the certificates issued by these template types have the key usage values of “digital signature” and “key encipherment.”

  • EndEntityCertificate — This template generates certificates that can be used as the server certificate in a one-way TLS session or as the server or client certificate in a two-way TLS handshake. This certificate has the extended key usage values of “TLS web server authentication” and “TLS web client authentication.”
  • EndEntityClientAuthCertificate — This template only generates certificates used by the client in two-way TLS. This certificate has an extended key usage value of “TLS web client authentication.”
  • EndEntityServerAuthCertificate — This template only generates server certificates used for one-way or two-way TLS. This certificate has an extended key usage value of “TLS Web Server Authentication.”

OCSP signing certificates

An Online Certificate Status Protocol (OCSP) responder provides a mechanism to check if a certificate has been revoked. Clients that check certificate revocation status with OCSP require a response signed either by the CA or a certificate issued by the CA. Creating an OCSP signing certificate to sign OCSP responses allows you to offload response generation and signing from the CA to the responder or to another server. The ACM Private CA OCSPSigningCertificate template allows you to issue OCSP signing certificates that you can use to sign OCSP responses and operate your own OCSP responder. The key usage value for an OCSP certificate is “digital signature” and the extended key usage value is “OCSP Signing.”

Code signing certificates

Software developers use code signing certificates to digitally sign software, apps, drivers, and programs. Operating systems check code signatures before running the code to ensure it was signed by a trusted entity and has not been modified by an unauthorized third-party. The ACM Private CA CodeSigningCertificate template allows you to issue code signing certificates. The key usage value of a code signing certificate is “digital signatures” and the extended key usage value is “code signing.”

CSR passthrough certificates

A certificate signing request (CSR) is an identity document that contains a public key and a subject. You can also configure the CSR to contain a set of extension requests that the requested certificate should include. Normally, the CA ignores certificate extension requests included in the CSR rather than including them in the certificate it issues. In many cases, “ignore” is the desired behavior because it allows the CA to remove unnecessary extensions and enforce the CA configurations. The templates described so far work this way, but there is another set of templates for which extensions from the CSR are passed through to the certificate.

Private CA users can customize certificates by adding any X.509 extension to the certificate signing request (CSR) when they issue the certificate. If the CSR includes extensions that overlap with the extensions in the template, the value from the template will be used. The CA administrator can thus control the extensions in certificates by controlling access to the templates. One common example is for code signing certificates. For example, the subject directory attributes extension can be used to convey identification attributes such as the nationality of the certificate subject. If a customer is issuing certificates for different geographic locations from a single CA, they can include the SubjectDirectoryAttributes value in each individual CSR and have it correspond to the location of the domain for the certificate.

The ten certificate templates can be broken up into five pairs of templates, as shown in Table 1 below. For example, one pair would be EndEntityCertificate and EndEntityCertificate_CSRPassthrough. Each member of the pair issues the same certificate extension but with one major difference: the first template in the pair utilizes the CA values as the override for all the X509v3 fields. The second template (the CSRPassthrough template) allows CA administrators to maintain control of the most important and fundamental X.509 certificate extensions that determine how the certificate can be used, including key usage, extended key usage, and basic constraints.

The CSRPassthrough templates allow users making certificate requests to customize their certificates by including additional extensions when necessary. For example, CA users who want to build their own online certificate status protocol (OCSP) responders can include the location of the responder in certificates they issue by including a special authority information access (AIA) extension that clients use to identify the OCSP responder endpoint.

Table 1: The full list of supported templates

Base template | CSR passthrough template
EndEntityCertificate | EndEntityCertificate_CSRPassthrough
EndEntityClientAuthCertificate | EndEntityClientAuthCertificate_CSRPassthrough
EndEntityServerAuthCertificate | EndEntityServerAuthCertificate_CSRPassthrough
OCSPSigningCertificate | OCSPSigningCertificate_CSRPassthrough
CodeSigningCertificate | CodeSigningCertificate_CSRPassthrough

How to issue a certificate with a template

You can use ACM Private CA to issue a certificate with a template by using the ACM Private CA Command Line Interface (CLI). Here’s an example of a call to create a Server-Only TLS certificate:


aws acm-pca issue-certificate --certificate-authority-arn arn:aws:acm-pca:us-west-2:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012 --csr file://C:\cert_1.csr --signing-algorithm "SHA256WITHRSA" --template-arn "arn:aws:acm-pca:::template/EndEntityServerAuthCertificate/V1" --validity Value=365,Type="DAYS"

Here are the pieces of this command:

  • The command is <issue-certificate>
  • The certificate authority ARN identifies the private CA that will be used to issue the certificate.
  • The CSR is a pointer to the file which contains the CSR.
  • The signing algorithm is used by the CA to sign the final certificate; in this example it’s SHA-256 with RSA.
  • The template ARN is for a server-only TLS certificate. The ACM Private CA documentation has the full list of templates.
  • The validity period of this certificate is set to 365 days.

This command creates a certificate whose extended key usage value is limited to TLS web server authentication (serverAuth). Other than that restriction, it’s identical to a certificate issued with the EndEntityCertificate template, as specified in the ACM Private CA User Guide.
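
The same request can be made from code. Here is a minimal boto3 sketch of the server-only issuance above; the CA ARN matches the placeholder in the CLI example, and the CSR file name is also a placeholder.

import boto3

acmpca = boto3.client("acm-pca")

# Read the CSR and issue a server-only TLS certificate using the
# EndEntityServerAuthCertificate template
with open("cert_1.csr", "rb") as f:
    csr = f.read()

response = acmpca.issue_certificate(
    CertificateAuthorityArn="arn:aws:acm-pca:us-west-2:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012",
    Csr=csr,
    SigningAlgorithm="SHA256WITHRSA",
    TemplateArn="arn:aws:acm-pca:::template/EndEntityServerAuthCertificate/V1",
    Validity={"Value": 365, "Type": "DAYS"},
)
print("Certificate ARN:", response["CertificateArn"])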

How to configure user-based permissions

You can combine templates with Identity and Access Management (IAM) permissions to create roles with fine-grained access controls that are assigned to individuals or systems. In AWS, an IAM role has specific permissions that determine what the identity can and cannot do in AWS. For example, you might have a team in charge of building and maintaining OCSP infrastructure. The account this team uses could be configured so that it only has permission to use the OCSP signing template, as opposed to being able to issue any type of certificate. In other words, you can restrict their issuance of certificates to align with their job responsibilities. You could also use IAM policies to grant or deny the team permission to use the CSR Passthrough Template.

Here’s a sample IAM policy statement that denies the use of CSR passthrough templates. You might use this policy to ensure that no certificate requestor can insert custom values through a CSR passthrough template variant.


{
    "Effect": "Deny",
    "Action": [
        "acm-pca:IssueCertificate"
    ],
    "Resource": "arn:aws:acm-pca:*:*:certificate-authority/*",
    "Condition": {
        "StringLike": {
            "acm-pca:TemplateArn": [
                "arn:aws:acm-pca:::template/*CSRPassthrough*/V*"
            ]
        }
    }
},

Two use cases for certificate extensions

IoT devices with client-only TLS certificates

To authenticate and allow for secure communications with an IoT device, manufacturers frequently install TLS certificates on their devices. The production system that installs TLS certificates requires a CA in order to issue certificates with the appropriate extensions. Let’s say that a device manufacturer only wants their IoT devices to be able to initiate a TLS connection with the server. Using certificates with the client-only TLS template would prevent someone from trying to establish a connection with the device to start an inappropriate communication channel. The manufacturer also wants to enforce a fixed set of certificate extensions for the production environment, while allowing more flexibility in the extensions in locations that are considered more physically secure.

To achieve this, the manufacturer can restrict the issuing CA via IAM policies, enforcing the use of an End Entity Client Certificate (EndEntityClientAuthCertificate) that includes specific and fixed certificate extensions that will prevent the use of CSR Passthrough in the production environment. This configuration will prevent the certificate from being used as the server of a TLS transaction for inappropriate access from a third party. Not enabling CSR passthrough ensures the certificate fields are set to defined/required values.

This is the command to create an IoT certificate with client-only TLS:


aws acm-pca issue-certificate --certificate-authority-arn arn:aws:acm-pca:us-west-2:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012 --csr file://C:\cert_2.csr --signing-algorithm "SHA256WITHRSA" --template-arn "arn:aws:acm-pca:::template/EndEntityClientAuthCertificate/V1" --validity Value=40,Type="YEARS"

This certificate is similar to the preceding example, with two major changes: it works only as a client certificate, and the validity period is 40 years instead of one year. The 40-year validity accounts for the fact that the IoT devices won’t return to the production floor to have a new certificate issued. After 40 years (which is beyond the expected lifespan of the device), the certificate would expire and the device would no longer be able to communicate.

In this example, the value of the ExtendedKeyUsage field is configured to “clientAuth.”

OCSP responder

One element of the construction of an OCSP responder is the OCSP signing certificate. (I’ve described the details of how the elements of an OCSP are built in the “OCSP signing certificates” section of this post.) When configuring an OCSP service, the administrator responsible for renewing that certificate should be given permission for that template. The OCSP signing certificate normally has a short duration, so the certificate needs to be renewed frequently. An important note is that an OCSP signing certificate generated by ACM Private CA only supports certificates issued by the same CA that created the OCSP signing certificate. It cannot be used to handle certificates up or down the CA hierarchy.

The command to generate a certificate for this use case is:


aws acm-pca issue-certificate --certificate-authority-arn arn:aws:acm-pca:us-west-2:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012 --csr file://C:\cert_3.csr --signing-algorithm "SHA256WITHRSA" --template-arn "arn:aws:acm-pca:::template/OCSPSigningCertificate_CSRPassthrough/V1" --validity Value=365,Type="DAYS" --idempotency-token 1234

This certificate must use a CSR passthrough template because the Authority Information Access (AIA) extension will have the value ocsp in the accessMethod field and the URI of the OCSP responder in the accessLocation field. Setting these two values requires CSR passthrough.

Conclusion

Certificates issued by Private CA can be used for a variety of use cases. TLS certificates are the most common. You can also generate certificates for other uses such as code signing. I’ve shown some examples of how to use templates in ACM to ensure that the certificates are created with the correct extensions. As your organization looks beyond TLS, or expands to the point where privilege for use matters, certificate templates can support the design and security guidelines of your environment.

To get started with ACM Private CA for TLS, OCSP or other use cases, read the getting started pages of the ACM Private CA documentation. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Josh Rosenthol

Josh is a Product Manager who helps solve customer problems with public and private certificate and CAs from AWS. He enjoys listening to customers describe their use cases and translate them into improvements to AWS Certificate Manager and ACM Private CA.

How to use the AWS Security Hub PCI DSS v3.2.1 standard

Post Syndicated from Rima Tanash original https://aws.amazon.com/blogs/security/how-to-use-the-aws-security-hub-pci-dss-v3-2-1-standard/

On February 13, 2020, AWS added partial support for the Payment Card Industry Data Security Standard (PCI DSS) version 3.2.1 requirements to AWS Security Hub.

This update enables you to validate a subset of PCI DSS’s requirements and helps with ongoing PCI DSS security activities by conducting continuous and automated checks. The new Security Hub standard also makes it easier to proactively monitor AWS resources, which is critical for any company involved with the storage, processing, or transmission of cardholder data. There’s also a Security score feature for the Security Hub standard, which can help support preparations for PCI DSS assessment.

Use this post to learn how to:

  • Enable the AWS Security Hub PCI DSS v3.2.1 standard and navigate results
  • Interpret your security score
  • Remediate failed security checks
  • Understand requirements related to findings

Enable Security Hub’s PCI DSS v3.2.1 standard and navigate results

Note: This section assumes that you have Security Hub enabled in one or more accounts. To learn how to enable Security Hub, follow these instructions. If you don’t have Security Hub enabled, the first time you enable Security Hub you will be given the option to enable PCI DSS v3.2.1.

To enable the PCI DSS v3.2.1 security standard in Security Hub:

  1. Open Security Hub and enable PCI DSS v3.2.1 Security standards.
    (Once enabled, Security Hub will begin evaluating related resources in the current AWS account and region against the AWS controls within the standard. The scope of the assessment is the current AWS account).
  2. When the evaluation completes, select View results.
  3. Now you are on the PCI DSS v3.2.1 page (Figure 1). You can see all 32 currently-implemented security controls in this standard, their severities, and their status for this account and region. Use search and filters to narrow down the controls by status, severity, title, or related requirement.

    Figure 1: PCI DSS v3.2.1 standard results page

  4. Select the name of the control to review detailed information about it. This action will take you to the control’s detail page (Figure 2), which gives you related findings.

    Figure 2: Detailed control information

  5. If a specific control is not relevant for you, you can disable the control by selecting Disable and providing a Reason for disabling. (See Disabling Individual Compliance Controls for instructions).
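
If you enable standards across many accounts or Regions, you can also enable PCI DSS v3.2.1 programmatically. The following boto3 sketch looks up the standard’s ARN with DescribeStandards rather than hardcoding it, since standards ARNs vary by Region; treat it as illustrative.

import boto3

securityhub = boto3.client("securityhub")

# Find the PCI DSS standard ARN for the current Region, then subscribe to it
standards = securityhub.describe_standards()["Standards"]
pci_arn = next(s["StandardsArn"] for s in standards if "pci-dss" in s["StandardsArn"])

securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[{"StandardsArn": pci_arn}]
)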

How to interpret and improve your “Security score”

After enabling the PCI DSS v3.2.1 standard in Security Hub, you will notice a Security score appear for the standard itself, and for your account overall. These scores range between 0% and 100%.

Figure 3: Security score for PCI DSS standard (left) and overall (right)

The PCI DSS standard’s Security score represents the proportion of passed PCI DSS controls over enabled PCI DSS controls. The score is displayed as a percentage. Similarly, the overall Security score represents the proportion of passed controls over enabled controls, including controls from every enabled Security Hub standard, displayed as a percentage.

Your aim should be to pass all enabled security checks to reach a score of 100%. Reaching a 100% security score for the AWS Security Hub PCI DSS standard will help you prepare for a PCI DSS assessment. The PCI DSS Compliance Standard in Security Hub is designed to help you with your ongoing PCI DSS security activities.

An important note: the controls cannot verify whether your systems are compliant with the PCI DSS standard. They can neither replace internal efforts nor guarantee that you will pass a PCI DSS assessment.

Remediating failed security checks

To remediate a failed control, you need to remediate every failed finding for that control.

  1. To prioritize remediation, we recommend filtering by Failed controls and then remediating issues starting with critical-severity controls and ending with low-severity controls.
  2. Identify a control you want to remediate and visit the control detail page.
  3. Follow the Remediation instructions link, and then follow the step-by-step remediation instructions, applying them for every failed finding.

    Figure 4: The control detail page, with a link to the remediation instructions

How to interpret “Related requirements”

Every control displays Related requirements in the control card and in the control’s detail page. For PCI DSS, the Related requirements show which PCI DSS requirements are related to the Security Hub PCI DSS control. A single AWS control might relate to multiple PCI DSS requirements.

Figure 5: Related requirements in the control detail page

The user guide lists the related PCI DSS requirements and explains how the specific Security Hub PCI DSS control is related to the requirement.

For example, the AWS Config rule cmk-backing-key-rotation-enabled checks that key rotation is enabled for each customer master key (CMK), but it doesn’t check for CMKs that are using key material imported with the AWS Key Management Service (AWS KMS) BYOK mechanism. The related PCI DSS requirement that is mapped to this rule is PCI DSS 3.6.4 – “Cryptographic keys should be changed once they have reached the end of their cryptoperiod.” Although PCI DSS doesn’t specify the time frame for cryptoperiods, this rule is mapped because, if key rotation is enabled, rotation occurs annually by default with a customer-managed CMK.
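
As a concrete remediation example for that control, you can turn on automatic annual rotation for a customer managed CMK with a single API call. A minimal boto3 sketch; the key ID is a placeholder.

import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Enable automatic annual rotation for the customer managed CMK
kms.enable_key_rotation(KeyId=key_id)

# Confirm that rotation is now enabled
status = kms.get_key_rotation_status(KeyId=key_id)
print("Rotation enabled:", status["KeyRotationEnabled"])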

Conclusion

The new AWS Security Hub PCI DSS v3.2.1 standard is fundamental for any company involved with storing, processing, or transmitting cardholder data. In this post, you learned how to enable the standard to begin proactively monitoring your AWS resources against the Security Hub PCI DSS controls. You also learned how to navigate the PCI DSS results within Security Hub. By frequently reviewing failed security checks, prioritizing their remediation, and aiming to achieve a 100% security score for PCI DSS within Security Hub, you’ll be better prepared for a PCI DSS assessment.

Further reading

If you have feedback about this post, submit comments in the Comments section below. If you have questions, please start a new thread on the Security Hub forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Rima Tanash

Rima Tanash is the Lead Security Engineer on the Amazon Security Hub service team. At Amazon Web Services, she applies automated technologies to audit various access and security configurations. She has a research background in data privacy using graph properties and machine learning.

Author

Michael Guzman

Michael is a Security Assurance Consultant with AWS Security Assurance Services. He is a current Qualified Security Assessor (QSA), certified by the PCI SSC. Michael has 20+ years of IT experience in the financial, professional services, and retail industries. He helps customers migrate critical workloads to the AWS Cloud in a PCI DSS compliant manner.

Author

Logan Culotta

Logan Culotta is a Security Assurance Consultant on the AWS Security Assurance team. He is also a current Qualified Security Assessor (QSA), certified by the PCI SSC. Logan enjoys finding ways to automate compliance and security in the AWS cloud. In his free time, you can find him spending time with family, road cycling, and cooking.

Author

Avik Mukherjee

Avik is a Security Architect with over a decade of experience in IT governance, security, risk, and compliance. He’s been a Qualified Security Assessor for PCI DSS and Point-to-Point-Encryption and has deep knowledge of security advisory and assessment work in various industries, including retail, financial, and technology. He loves spending time with family and working on his culinary skills.

Manage your AWS KMS API request rates using Service Quotas and Amazon CloudWatch

Post Syndicated from Raj Copparapu original https://aws.amazon.com/blogs/security/manage-your-aws-kms-api-request-rates-using-service-quotas-and-amazon-cloudwatch/

AWS Key Management Service (KMS) publishes API usage metrics to Amazon CloudWatch and Service Quotas, allowing you to both monitor and manage your AWS KMS API request rate quotas. This functionality helps you understand trends in your usage of AWS KMS and can help prevent API request throttling as you grow your use of AWS KMS.

When you surpass your AWS KMS API request rate quotas, you receive an error “You have exceeded the rate at which you may call KMS. Reduce the frequency of your calls.” Such errors can also be caused by an increased use of AWS services that encrypt your data under keys managed in AWS KMS. For example, if you are using Amazon Redshift Spectrum, you might encounter this error – “HTTP response error code: 503 Message: SlowDown. Please reduce your request rate for operations involving AWS KMS.” Historically, in order to understand how close to a request rate quota you were, you had to perform three tasks: (i) send AWS CloudTrail events generated by AWS KMS to Amazon CloudWatch Logs; (ii) write queries in Amazon CloudWatch Logs Insights to track your API request usage; and (iii) submit an AWS Support case to request a quota increase. Now, you can view your AWS KMS API usage and request quota increases within the AWS Service Quotas console itself without doing any special configuration.

In this post, we will show you how to 1) view your KMS API utilization within Service Quotas, and 2) create a CloudWatch alarm that alerts you when you approach a quota so you can request a quota increase before you are throttled.

View your AWS KMS API utilization

Background

API utilization is the percentage rate at which you are calling a particular API compared to that API’s request rate quota in your account. For AWS KMS, the default request rate for cryptographic operations using symmetric keys is 10,000 requests per second in 6 specific AWS Regions*, aggregated across all requesting clients in an account. AWS KMS aggregates your API requests every minute and sends the count to CloudWatch, where it is consumed by AWS Service Quotas for you to see. Because quota usage is aggregated by the minute, your effective quota would be 600,000 requests per minute.

*See Request Quotas for Each AWS KMS API Operation for the specific quotas in the AWS Region in which you operate.

Scenario

Imagine that all the applications in your account using AWS KMS collectively made 100,000 requests to the Decrypt API, 100,000 requests to the GenerateDataKey API, and 100,000 requests to the Encrypt API in a minute. AWS KMS sends a count of 300,000 requests to Amazon CloudWatch for that particular minute. Your utilization for that minute will be 50% of your quota (300,000 divided by 600,000, which is 60 seconds times your quota of 10,000 requests per second). Within the Service Quotas console, you can view utilization across several time frames, from the most recent hour up to a week.

Here are the steps to view your AWS KMS API Utilization within Service Quotas:

  1. Sign in to the AWS Management Console.
  2. Click the Services dropdown in the top left corner, search for “Service Quotas”, and select it from the results.
  3. Click on the AWS Key Management Service (AWS KMS) tile on the Service Quotas dashboard.
  4. Search for “symmetric” and click on the link for “Cryptographic operations (symmetric) request rate”.
  5. The Monitoring section will display the combined utilization percentage for the following APIs – Decrypt, Encrypt, GenerateDataKey, GenerateDataKeyWithoutPlaintext, GenerateRandom, and ReEncrypt. All these APIs are grouped under the shared “Cryptographic operations (symmetric) request rate”.
  6. Adjust the graph to view the utilization trend over a week by selecting “1w” from the top right corner of the graph.

You can view the utilization for any of the other available AWS KMS APIs from the Service Quotas dashboard in a similar fashion.
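
You can also inspect the same quotas from the AWS CLI. The sketch below lists the AWS KMS quotas that apply in your account and Region; the quota code in the second command is a placeholder that you would replace with the QuotaCode value returned by the first command.

# List the AWS KMS quotas that apply in the current account and Region
aws service-quotas list-service-quotas --service-code kms

# Retrieve a single quota once you know its quota code (placeholder shown)
aws service-quotas get-service-quota --service-code kms --quota-code L-EXAMPLE123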

The API utilization shows you the overall trend of your API usage. Because the requests sent from AWS KMS are aggregated per minute, you could still experience throttling errors at less than 100% utilization, especially if your usage is spiky and you do not have exponential backoff built into your applications’ error-handling logic. For example, you might have surpassed the requests per second quota between the 12th second and the 15th second of the minute, but you were below the quota for the other 57 seconds of that minute.
 
Customizable CloudWatch graph

The utilization shown is across your entire AWS account in a given region, so if you are introducing a new application, you can monitor and see how it impacts your overall utilization. If you need a request rate quota increase before deploying your new application to production, you can request a quota increase at the top right portion of the Details section of the AWS Service Quotas page.

Create a CloudWatch Alarm

In the previous section we described how you can view historical utilization of API request rates from the Monitoring section of the AWS Service Quotas console. What if you want to be alerted when you have reached a predetermined utilization percentage so you can request a quota increase before you begin to experience extended throttling?

Here are the steps to do so:

  1. Click on the API of your interest from the Service Quotas console. In this example, let’s select Cryptographic operations (symmetric) request rate.
  2. In the Amazon CloudWatch alarms section (under the Monitoring section), click Create in the right-hand corner.
  3. From the Alarm threshold dropdown select “80% of applied quota value”.
  4. Enter “80threshold” as the Alarm name and click the orange Create button on the right side.
  5. Click on the “80threshold” link that now appears in the table. A new browser window will appear that takes you to the Amazon CloudWatch console.
  6. Click Edit on the top right corner.
  7. Leave all the default values selected on the Specify metrics and condition page and click Next on the bottom right.
  8. Click Add notification and select Create new topic under the Select an SNS topic section. Enter “SNS-Topic” as the topic name. Add your email address to receive notifications when the alarm is set. Click Create topic.
  9. Click Update alarm.
  10. Confirm your SNS subscription by clicking on View SNS Subscriptions.
  11. Select your email address endpoint and click Request confirmation.
  12. You will receive an email to confirm your subscription. Once you confirm the subscription, you are all set to receive email notifications on the new alarm.

    User interface after CloudWatch alarm created

Here are more details on creating CloudWatch alarms if you want to make additional modifications to your alarms. We recommend 80% as a good threshold to set your alarm to begin with. When you are testing a new application, you can start with this threshold, run your application for a period of time, and monitor its utilization. When an alarm fires, you can proactively request a quota increase at the top right portion of the Details section of the AWS Service Quotas page.
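
If you would rather create a comparable alarm from the CLI than from the Service Quotas console, a sketch like the following should work. The AWS/Usage dimension values, particularly Resource, are assumptions; confirm them against the metrics you actually see in CloudWatch before using this, and replace the SNS topic ARN and the threshold (80 percent of a 600,000-requests-per-minute quota) with your own values.

# Alarm when aggregated symmetric KMS API calls exceed 80% of the per-minute quota
# (dimension values are assumptions -- verify them in the AWS/Usage namespace first)
aws cloudwatch put-metric-alarm \
  --alarm-name kms-symmetric-80-percent \
  --namespace AWS/Usage \
  --metric-name CallCount \
  --dimensions Name=Service,Value=KMS Name=Type,Value=API Name=Resource,Value=CryptographicOperationsSymmetric Name=Class,Value=None \
  --statistic Sum \
  --period 60 \
  --threshold 480000 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:SNS-Topic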

Conclusion

We’ve explored how to view your AWS KMS API request usage, how to add alarms on the most critical items in your application’s use of AWS KMS, and how to request quota increases. These items provide visibility and control over how your applications interact with AWS KMS.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the AWS Key Management Service forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Raj Copparapu

Raj Copparapu is a Senior Product Manager Technical on the AWS KMS team who focuses on defining the product roadmap to satisfy customer requirements. Raj has spent over 5 years innovating to deliver products that help customers secure their data in the cloud. In his spare time, he enjoys yoga and spending time with family.

How to improve LDAP security in AWS Directory Service with client-side LDAPS

Post Syndicated from Dave Martinez original https://aws.amazon.com/blogs/security/how-to-improve-ldap-security-in-aws-directory-service-with-client-side-ldaps/

You can now better protect your organization’s identity data by encrypting Lightweight Directory Access Protocol (LDAP) communications between AWS Directory Service products (AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, and AD Connector) and self-managed Active Directory. Client-side secure LDAP (LDAPS) support enables applications that integrate with AWS Directory Service, such as Amazon WorkSpaces and AWS Single Sign-On, to connect to AD using Secure Sockets Layer/Transport Layer Security (SSL/TLS).

Note: In 2017, AWS Directory Service released server-side LDAPS support in AWS Managed Microsoft AD. This update adds client-side LDAPS support to both AWS Managed Microsoft AD and AD Connector.

In this post, I’ll step through configuring client-side LDAPS to enable encrypted communications between Amazon WorkSpaces and an Amazon Elastic Compute Cloud (Amazon EC2)-based self-managed AD.

Solution architecture

When you have completed the steps outlined in this post, your solution will look like Figure 1:

Figure 1: Solution architecture

To build the solution, you will follow a three step process:

  1. Prepare all prerequisites, including the setup of certificate-based security in the self-managed AD environment.
  2. Register your certificate authority (CA) certificate into AWS Directory Service and enable client-side LDAPS (purple arrow in diagram above).
  3. Test client-side LDAPS using Amazon WorkSpaces and AWS Directory Service (yellow arrows in diagram above).

Step one: Set up prerequisites

To follow the steps described in this blog, you will need:

  1. A self-managed AD deployment to store your user identities. You can find setup guidance in “Step 1: Set Up Your Environment for Trusts” of the Tutorial: Creating a Trust from AWS Managed Microsoft AD to a Self-Managed Active Directory Installation on Amazon EC2.
  2. A server authentication certificate installed on your self-managed AD domain controller. Creating the certificate is typically done in one of two ways:
    1. Using Active Directory Certificate Services (AD CS) in Windows Server to deploy an in-house CA for issuing server certificates. For help with setting up an AD CS deployment that supports LDAPS, see Microsoft’s LDAP over SSL (LDAPS) Certificate.
    2. Purchasing SSL certificates from a commercial CA like Verisign or AWS Certificate Manager. For help using commercial certificates with AD, see How to enable LDAP over SSL with a third-party certification authority.
  3. An AWS Directory Service directory, either AWS Managed Microsoft AD or AD Connector, to act as a bridge from AWS to your self-managed AD. See the documentation for AWS Managed Microsoft AD or AD Connector for detailed steps and tutorials. If you’re using AWS Managed Microsoft AD, also set up a two-way trust with your self-managed AD using Tutorial: Creating a Trust from AWS Managed Microsoft AD to a Self-Managed Active Directory Installation on Amazon EC2.
  4. Amazon WorkSpaces connected to your AWS Directory Service directory to look up and authenticate users. See the WorkSpaces documentation for detailed steps on using AWS Managed Microsoft AD with a Trusted Domain or AD Connector.

The remainder of this post assumes you have:

  1. Created an AWS Managed Microsoft AD instance called corp.example.com
  2. Connected corp.example.com via two-way trust to an EC2-based self-managed AD called example.local
  3. Deployed an AD CS enterprise root certificate authority in example.local with the common name Example SelfManaged CA.

When you perform the steps described below, you should replace these names with the names you selected.

Step two: Configure client-side LDAPS in AWS Directory Service

Now, you’ll retrieve the CA certificate — which represents the issuing certificate authority — from your self-managed AD and use it to enable client-side LDAPS in AWS Directory Service. To review CA certificate requirements for AWS Directory Service, see the client-side LDAPS documentation for AWS Managed Microsoft AD or AD Connector.

  1. Export the CA certificate from the example.local CA:
    1. To open the Certification Authority MMC snap-in, on the example.local server hosting AD CS, right-click the Windows icon, select Run, type certsrv.msc, and select OK.
    2. Right-click the name of the CA (in this case, Example SelfManaged CA) and select Properties.
    3. In the Properties window, on the General tab, under CA certificates, select the CA certificate listed, and then select View Certificate.
       
      Figure 2: View the CA certificate

    4. In the Certificate window, on the Details tab, select Copy to File.
    5. In the Certificate Export Wizard, select Next.
    6. In the Export File Format screen, select Base-64 encoded X.509 (.CER), and then select Next. This saves the file in the format required by AWS.
       
      Figure 3: Select the base-64 encoded export file format

    7. Select Browse, and then select a file name and save location for the CA certificate.
    8. Select Save, and then click Next.
    9. Select Finish, then select OK to complete the export process.
    10. Copy the file to a location accessible by the machine where you will be performing the AWS Directory Service configuration.
  2. Register the example.local CA certificate in AWS Directory Service:
    1. In the AWS Management Console, select Directory Service, and then select the Directory ID link for the AWS Directory Service directory connected to example.local (in this case, corp.example.com).
       
      Figure 4: Select the Directory ID

    2. On the Directory details page, in the Networking & security tab, in the Client-side LDAPS section (shown in Figure 5), select the Actions menu, and then select Register certificate.
       
      Figure 5: Select “Register certificate”

    3. In the Register a CA certificate dialog box, select Browse, navigate to the location where you stored the CA certificate for your AD CS certificate authority, select Open, and then select Register certificate.
       
      Figure 6: Register a CA certificate

  3. Enable client-side LDAPS in AWS Directory Service:
    1. In the Client-side LDAPS section, once the Registration status field for the certificate reads Registered, select the Enable button. Click the Refresh button for updated status.
       
      Figure 7: Check the “Registration status” and then select “Enable”

    2. In the Enable client-side LDAPS dialog box, select Enable.
    3. In the Client-side LDAPS section, under Status, when the status field changes to Enabled, LDAPS is successfully configured. Click the Refresh button for updated status.
       
      Figure 8: LDAPS successfully configured
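
As an alternative to the console steps above, you can register the certificate and enable client-side LDAPS from the AWS CLI. The following is a sketch; the directory ID and file name are placeholders, and it's worth confirming the parameter names against the current aws ds command reference.

# Register the CA certificate with the directory (directory ID and file are placeholders)
aws ds register-certificate \
  --directory-id d-1234567890 \
  --certificate-data file://example-selfmanaged-ca.cer

# Once the certificate shows as Registered, enable client-side LDAPS
aws ds enable-ldaps --directory-id d-1234567890 --type Client

# Check the status
aws ds describe-ldaps-settings --directory-id d-1234567890 --type Client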

Step three: Test client-side LDAPS with Amazon WorkSpaces

The last step is to test client-side LDAPS with an AWS application. Now that client-side LDAPS has been configured, all LDAP traffic to the self-managed AD will be encrypted and travel over port 636.

Note: Ensure that AWS security group, network firewall, and Windows firewall settings applied to the AWS Directory Service directory (outbound) and self-managed AD (inbound) allow TCP communications on port 636.
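
Before testing through WorkSpaces, you can optionally confirm that a self-managed domain controller is serving LDAPS on port 636 from any host that can reach it. The hostname and CA file below are placeholders for your environment.

# Verify the TLS handshake and certificate chain on the LDAPS port
openssl s_client -connect dc1.example.local:636 -CAfile example-selfmanaged-ca.cer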

To test your client-side LDAPS configuration, perform a WorkSpaces user look up:

  1. In the AWS Management Console, choose WorkSpaces, and then click Launch WorkSpaces.
  2. On the Select a Directory screen, pick corp.example.com and then select Next Step.
  3. On the Identify Users screen, in the Select trust from forest menu, select example.local, and then select Show All Users (see Figure 9 for an example). This search will be executed over LDAPS.
     
    Figure 9: Searching users from a trusted domain with client-side LDAPS

Summary

In this post, we’ve explored how client-side LDAPS support in AWS Managed Microsoft AD and AD Connector improves LDAP security for AWS applications and services like Amazon WorkSpaces, AWS Single Sign-On, and Amazon QuickSight by encrypting sensitive network traffic between AWS and Active Directory.

To learn more about using AWS Managed Microsoft AD or AD Connector, visit the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page. If you have comments about this blog post, submit a comment in the Comments section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dave Martinez

Dave is a Senior Product Manager working on AWS Directory Service. Outside of work he enjoys Seattle sports and coaching his son’s Little League baseball team.

How to use KMS and IAM to enable independent security controls for encrypted data in S3

Post Syndicated from Paco Hope original https://aws.amazon.com/blogs/security/how-to-use-kms-and-iam-to-enable-independent-security-controls-for-encrypted-data-in-s3/

Typically, when you protect data in Amazon Simple Storage Service (Amazon S3), you use a combination of Identity and Access Management (IAM) policies and S3 bucket policies to control access, and you use the AWS Key Management Service (AWS KMS) to encrypt the data. This approach is well-understood, documented, and widely implemented. However, many customers want to extend the value of encryption beyond basic protection against unauthorized access to the storage layer where the data resides. They want to enforce a separation of duties between which team manages access to the storage layer and which team manages access to the encryption keys. This model ensures that configuration errors made by only one of these teams won’t compromise the data in ways that grant unauthorized access to plaintext data. For example, if the team that owns permissions to the S3 bucket mistakenly grants access to unauthorized users, when those users attempt to access objects in S3 they will fail. Why? Because the separate team who manages access to the keys didn’t grant those users access to use the keys for decryption.

You can create this kind of independent access control by combining KMS encryption with IAM policies and S3 bucket policies. When data is encrypted with a customer-managed KMS customer master key (CMK), the key’s policy acts as an independent access control. Users can be prevented from accessing the data, even though the IAM permissions and the S3 bucket policy would permit the access. Figure 1 shows a Venn diagram of the access that is required. The bucket policy, the IAM policy, and the KMS key policy all play a role. Users have permission for the data only when they are granted permissions in all three policies.
 

Figure 1: Venn diagram showing the required permissions for access

This exercise builds the resources shown in Figure 2:

  • Three AWS IAM roles
    1. A role (1) with permission to create and manage permissions on an S3 bucket (secure-bucket-admin)
    2. A role (2) with permission to create and manage permissions on a KMS master key (secure-key-admin)
    3. A role (3) with permissions to access (but not manage) a specific S3 bucket and to use (but not manage) a specific AWS KMS customer master key (authorized-users).
  • An S3 bucket (4) with a custom bucket policy (5) that only allows data to be stored if that data is encrypted with a specific KMS key. The ability to write to or read from this bucket will be restricted to the IAM role authorized-users.
  • A KMS key (6) with a specific key policy (7) that can only be used by the IAM role authorized-users and only managed by the IAM role secure-key-admin.

 

Figure 2: Architecture diagram

When you have completed this exercise, you will have:

  • Created an S3 bucket protected by IAM policies, and a bucket policy that enforces encryption.
  • Attached the IAM role authorized-users to an EC2 instance so your applications in that instance can assume that role and access encrypted data in the S3 bucket.
  • Uploaded and downloaded data from the bucket that is protected by the KMS key.
  • Demonstrated that when the KMS key policy is modified, removing access for the IAM role authorized-users, the applications on the EC2 instance no longer have access to the data in the S3 bucket.

Set things up

For simplicity, I create the S3 bucket, KMS keys, and EC2 instances all in the same region and in the same AWS account. It’s possible to use KMS keys that are owned by a different AWS account, to assume roles across accounts, and to have instances in different regions from the buckets and the keys. I discuss those variations at the end.

I assume you have at least one administrator identity available to you already: one that has broad rights for creating users, creating roles, managing KMS keys, and launching EC2 instances. I will refer to this as your “Admin identity” throughout these instructions. This can be a federated identity (for example, from your corporate identity provider or from a social identity), or it can be an AWS IAM user.

Assuming Roles

Throughout this exercise I will use IAM roles to acquire and release privileges. If you’re working from the AWS command line, you’ll need to configure your command line environment to use profiles. If you’re working from the AWS Management Console, then you’ll follow these instructions to switch roles. If you haven’t worked with roles before, take a minute to follow those instructions and become familiar with the process before continuing.
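
If you're working from the AWS CLI, one way to set up such a profile is with aws configure set; the account ID and profile names below are placeholders for your own values.

# Create a named profile that assumes the secure-bucket-admin role
aws configure set role_arn arn:aws:iam::111122223333:role/secure-bucket-admin --profile secure-bucket-admin
aws configure set source_profile default --profile secure-bucket-admin

# Use the profile on any command, for example:
aws s3api get-bucket-location --bucket secure-demo-bucket --profile secure-bucket-admin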

Step 1: Create IAM policies

First, I will create 3 policies that grant very specific sets of rights. Then, I will attach those policies to roles: two roles for administrators, and one for software running on EC2 instances. You’re going to create an S3 bucket in Step 3. That bucket, like all S3 buckets, needs a globally unique name. You will reference that bucket’s name in these policies, even though you will create the bucket later. Decide the name of your bucket now. When you reach steps that require you to type or paste a JSON policy document for your bucket policy, remember to use the name of your bucket where I have written secure-demo-bucket.

Step 1a: Create the S3 bucket management policy

While logged in to the console as your Admin user, create an IAM policy in the web console using the JSON tab. Name the policy secure-bucket-admin. When you reach the step to type or paste a JSON policy document, paste the JSON from Listing 1 below. This policy allows broad S3 administration rights (creating, deleting, and modifying policies), so it is a high privilege policy. In an effort to be concise, it grants all permissions to S3 and then takes a few away by explicitly denying them. The intention is to permit managing all aspects of the bucket’s operation, while denying all access to the contents of the bucket. The explicit deny mechanism is important because, due to IAM’s policy evaluation logic, an explicit deny cannot be overridden by subsequent “allow” statements or by attaching additional policies. As the S3 service evolves over time and new features are added, the policy will permit using those new features, without any change to this policy. If you prefer to enable features explicitly, you’ll need to rewrite this policy to explicitly allow only the features you want, and then come back and revise the policy every so often, as S3 features are added that your role needs to use.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAllActions",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "DenyObjectAccess",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectVersionAcl"
      ],
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::secure-demo-bucket"
    }
  ]
}

Listing 1: secure-bucket-admin IAM policy
 
Your policy will have an ARN (it will look something like arn:aws:iam::111122223333:policy/secure-bucket-admin). Make a note of this ARN. You will use it later to attach to the secure-bucket-admin role you’ll create in step 2.

Step 1b: Create the KMS administrator policy

While logged in to the console as your Admin user, create an IAM policy in the web console using the JSON tab. Name the policy secure-key-admin. When you reach the step to type or paste a JSON policy document, paste the JSON from Listing 2 below. Be sure to add your own 12-digit AWS account number where I have written 111122223333. This policy allows broad KMS administration rights (creating keys, granting access to keys, and modifying key policies), so it is a high privilege policy. In an effort to be concise, this policy grants all permissions to the KMS service and then denies certain rights through an explicit deny statement. The intention is to permit managing all aspects of KMS keys, while denying all access to perform encryption and decryption using KMS keys. As the KMS service evolves over time and new features are added, the policy will permit using those new features, without any change to this policy. If you prefer to enable features explicitly, you’ll need to rewrite this policy to explicitly allow only the features you want, and then come back and revise the policy every so often, as KMS features are added that your role needs to use.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAllKMS",
      "Action": "kms:*",
      "Effect": "Allow",
      "Resource": " arn:aws:kms:*:111122223333:key/*"
    },
    {
      "Sid": "DenyKMSKeyUsage",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey",
        "kms:ReEncryptFrom",
        "kms:ReEncryptTo"
      ],
      "Effect": "Deny",
      "Resource": " arn:aws:kms:*:111122223333:key/*"
    }
  ]
}

Listing 2: secure-key-admin IAM policy
 
Your policy will have an ARN (it will look something like arn:aws:iam::111122223333:policy/secure-key-admin). Make a note of this ARN. You will use it later to attach to the secure-key-admin role you’ll create in step 2.

Step 1c: Create the S3 bucket usage policy

This final policy grants access to read and write encrypted data in the target S3 bucket. This is a narrowly-scoped policy that only grants rights to a single bucket. While logged in to the console as your Admin user, create an IAM policy in the web console using the JSON tab. Name the policy secure-bucket-access.

When you reach the step to type or paste a JSON policy document for your bucket policy, paste the JSON from Listing 3 below, substituting the name of your bucket on the two lines where I have secure-demo-bucket.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BasicList",
            "Effect": "Allow",
            "Action": [ "s3:ListAllMyBuckets", "s3:HeadBucket" ],
            "Resource": "*"
        },
        {
            "Sid": "AllowSecureBucket",
            "Effect": "Allow",
            "Action": [ "s3:PutObject", "s3:GetObjectAcl",
                "s3:GetObject", "s3:DeleteObjectVersion",
                "s3:DeleteObject", "s3:GetBucketLocation",
                "s3:GetObjectVersion" ],
            "Resource": [
                "arn:aws:s3:::secure-demo-bucket/*",
                "arn:aws:s3:::secure-demo-bucket"
            ]
        }
    ]
}

Listing 3: secure-bucket-access IAM policy

Note: In an effort to grant a minimal, but realistic, set of permissions, this IAM policy only grants access to basic get, put, and delete operations. You might have a use for other features, like tagging objects. If so, you will need to change the policy to enable the features you want to use.

Your policy will have an ARN (it will look something like arn:aws:iam::111122223333:policy/secure-bucket-access). Make a note of this ARN. You will use it later to attach to the authorized-users role you’ll create in step 2.

You might ask why this policy designed to control access to encrypted objects has no KMS permissions in it. Wouldn’t that prevent the users that assume this IAM role from using the encryption keys? It would normally prevent them, except you have the ability to list the authorized-users IAM role within the resource policy attached to the KMS key you’re about to create. By placing the authorized-users role in the KMS key resource policy, it further enforces the separation of duties so administrators in the account with an ability to modify IAM policies don’t inadvertently escalate privilege to other IAM users/roles and give them permissions to use KMS keys for decryption.

Step 2: Create IAM roles

An AWS IAM role is an identity that you can create in an AWS account that has specific permissions. An IAM role is similar to an IAM user, because it has permission policies that determine what the identity can and cannot do in AWS. It’s different from an IAM user because it’s not associated with a single person. A role can be used by users, by EC2 instances, by AWS services, or by other entities like AWS Lambda functions that you allow to use it. The IAM policies we created in step 1 do not grant permissions until we assign them to roles and assign the roles to users or entities.

Step 2a: Create the S3 bucket management role

This role will be used by administrators who need to manage the properties of the bucket.

  1. Follow the online instructions for creating an IAM role.
  2. Choose Another AWS account under the section labeled Select type of trusted entity.
  3. For the authorized AWS account ID, enter the 12-digit account number for the account that you’re working in. If you intend to authorize AWS IAM users that are defined in a different AWS IAM account to access the S3 bucket and decrypt objects, then you would include that AWS account’s ID number, instead.
  4. Name the IAM role secure-bucket-admin and import the customer managed policy named secure-bucket-admin that you created in step 1a to the role that you have created.

    Your AWS IAM role will have an ARN (it will look something like arn:aws:iam::111122223333:role/secure-bucket-admin). Make a note of this ARN. You will use it in the step 3 when you create your S3 bucket.

Step 2b: Create the KMS key management role

This role will be used by administrators who need to manage the KMS customer master keys that protect the data. The actions you take to manage the keys will be authorized by this role. Importantly, this role has no ability to modify the bucket, grant access to the bucket, or access any of the data in the bucket.

  1. Follow the online instructions for creating an IAM role.
  2. In the Select type of trusted entity section, select Another AWS account.
  3. For the authorized AWS account ID, enter the 12-digit account number for the account that you’re working in. If you intend to authorize AWS IAM users that are defined in a different AWS IAM account, then you would include that AWS account’s ID number, instead.
  4. Name the IAM role secure-key-admin and import the customer-managed policy named secure-key-admin that you created in step 1b to the role that you have created.

    Your AWS IAM role will have an ARN (it will look something like arn:aws:iam::111122223333:role/secure-key-admin). Make a note of this ARN. You will use it in step 4 when you create your KMS key.

Step 2c. Create the bucket usage role

This role will grant permissions to EC2 instances. An EC2 instance running with this role will be able to create and read encrypted data in the protected S3 bucket.

  1. Follow the online instructions for creating an IAM role.
  2. In the Select type of trusted entity section, select AWS service.
  3. Choose EC2 as the service that you will authorize. This authorizes all applications running on that EC2 instance to use credentials with permissions attached to the role.
  4. Name the IAM role authorized-users and import the customer-managed secure-bucket-access policy that you created in step 1c to the role that you have created.

This role is not for users trying to access the S3 bucket from any arbitrary application that happens to have the role’s credentials. It will only be used by users operating within applications running in AWS EC2 instances.
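
If you'd rather script the creation of this role, a CLI sketch such as the following should work; the trust policy file and the policy ARN are placeholders that you would build from your own account details. (The console creates the instance profile for you automatically; from the CLI you create it yourself.)

# Create the role with an EC2 trust policy, attach the bucket policy, and
# wrap the role in an instance profile so EC2 can use it
aws iam create-role --role-name authorized-users \
  --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name authorized-users \
  --policy-arn arn:aws:iam::111122223333:policy/secure-bucket-access
aws iam create-instance-profile --instance-profile-name authorized-users
aws iam add-role-to-instance-profile --instance-profile-name authorized-users \
  --role-name authorized-users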

Step 3: Create an S3 bucket for the encrypted data

Log in to the console using your secure-bucket-admin role. (Either log in with your federated identity or with an AWS IAM user, and then switch to the secure-bucket-admin role you created in step 2a.) Follow the instructions to create a bucket that will hold the encrypted data. In my example, I call my bucket secure-demo-bucket. You chose your own unique bucket name back in step 1. Type that bucket name throughout these steps where I use secure-demo-bucket. You will set a bucket policy and properties on that bucket later.

Step 4: Create a KMS key to encrypt and decrypt the data in the S3 bucket

Log out of the console and log back in using your secure-key-admin role. Create a customer-managed customer master key (CMK) to encrypt and decrypt the data in the S3 bucket you just created. If you already have a customer-managed CMK created that you want to use for this purpose, you can do that. To use your own CMK, skip steps 1-5 below about creating a key and, instead, select your existing key in the KMS console and then follow steps 6-7 to change the key policy to allow the authorized-users role permissions to use the key.

  1. In the AWS console, go to Key Management Service.
  2. Select the Create Key button.
  3. On the Step 1 screen, set a display name (called an “Alias”) for the key and a description. I recommend a meaningful description that tells others what the key is for.
  4. On the Step 2 screen, set tags if you need them to track usage of keys for billing purposes. Tags won’t have a functional impact in this exercise so you can skip this step if you want by selecting Next.
  5. On the Step 3 screen, select key administrators. Pick only the secure-key-admin IAM role. You must not pick the secure-bucket-admin role or the authorized-users role as key administrators to ensure separation of duties. For example, if you were to pick the authorized-users IAM role, then any user that assumed that role could escalate their own (or others’) privileges to use this key to decrypt any other data encrypted under this key in your account. If you were to pick the secure-bucket-admin role, then anyone who assumed that role could modify permissions both on the S3 bucket and the KMS key in ways that allowed unauthorized users access to decrypt data.
  6. On the Step 4 screen, select key users. Pick only the authorized-users IAM role you created in step 2c.
  7. On the Step 5 screen, select Finish.

    After you have created the key, make note of the key’s ARN. It will look something like this:

    arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

    You will need it for the next step where you enforce all objects uploaded into the S3 bucket to be encrypted under this key.
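
If you prefer to create the CMK from the AWS CLI instead of the console wizard, the sketch below shows the basic calls. Note that, unlike the console, the CLI does not build the key policy for you, so the policy file referenced here is a placeholder for a key policy document you author that names the secure-key-admin role as administrator and the authorized-users role as user.

# Create the CMK with an explicit key policy (policy file is a placeholder you author)
aws kms create-key \
  --description "Key for secure-demo-bucket objects" \
  --policy file://secure-demo-key-policy.json

# Give the key a friendly alias (use the KeyId returned by create-key)
aws kms create-alias --alias-name alias/secure-demo-key \
  --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab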

Step 5: Modify the bucket policy

Log out of the console and log back with the secure-bucket-admin role. You’re going to attach a bucket policy to the bucket that does two things: it requires objects to be encrypted and it requires them to be encrypted with a specific KMS key. You will accomplish this by explicitly denying any attempt to call PutObject unless the correct conditions are true. This helps you increase your confidence that you will not store unencrypted data in this bucket.

Find the secure-demo-bucket bucket in the S3 web console, and then modify its bucket policy. Use the code from Listing 4 below as the entire bucket policy. Be sure to change secure-demo-bucket to the actual name of the bucket that you’re using in both places where it appears in the policy. You recorded the key’s ARN in step 4; make sure you insert that ARN for your KMS key where I use an example key ARN below.


{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
      {
        "Sid": "DenyUnencryptedObjectUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::secure-demo-bucket/*",
        "Condition": {
          "StringNotEquals": {
            "s3:x-amz-server-side-encryption": "aws:kms"
          }
        }
      },
      {
        "Sid": "DenyWrongKMSKey",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::secure-demo-bucket/*",
        "Condition": {
          "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms::11112222333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
          }
        }
      }
    ]
  }

Listing 4: Bucket policy requiring encryption

Note: This bucket policy is not retroactive: If you apply this policy to a bucket that already exists and already has unencrypted objects, nothing happens to the objects that are already in the bucket. They remain unencrypted. They can be fetched or deleted. Once the policy is applied, however, new objects cannot be put in the bucket unless they are correctly encrypted.
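
If you want to apply this bucket policy from the CLI rather than the S3 console, something like the following should work; save the policy from Listing 4 to a local file first (the file name here is a placeholder).

# Apply the bucket policy in Listing 4 (edit the bucket name and key ARN in the file first)
aws s3api put-bucket-policy --bucket secure-demo-bucket --policy file://bucket-policy.json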

Instead of applying a bucket policy, you could consider turning on S3 default encryption. This feature forces all new objects uploaded to an S3 bucket to be encrypted using the KMS key you created in step 4 unless the user specifies a different key. This feature doesn’t prohibit callers from encrypting objects under other KMS keys, but it ensures that the data is protected even if the user does not specify KMS encryption when putting the object. The bucket policy in Listing 4 is a bit stricter than S3 default encryption because it ensures that no object is ever encrypted by any key other than the CMK created in step 4. That strictness means the attempt to put an object fails, unless the caller explicitly names the KMS keyId in every S3 PUT request. With S3 default encryption, attempts to put an object without specifying encryption will succeed, and the data will be protected by the named KMS CMK.
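
If you choose the S3 default encryption route instead of, or in addition to, the bucket policy, a CLI sketch looks like this; substitute your bucket name and the ARN of the key you created in step 4.

# Turn on default encryption with the specific KMS CMK for all new objects
aws s3api put-bucket-encryption \
  --bucket secure-demo-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
      }
    }]
  }'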

Step 6: Launch an EC2 instance to demonstrate the solution

The final step to showing how this solution works is to launch an EC2 instance and show that applications running in that instance can write and read data in the S3 bucket you created. If you launch an EC2 instance that has your authorized-users role attached and log in on that instance, you will be able to upload and download objects from the bucket, encrypting and decrypting transparently as you do it. No other identity (for example, other IAM users, other IAM roles, other EC2 instances, and Lambda functions) will be able to upload and download data to this S3 bucket because these other identities don’t have the permissions to use the KMS key that protects the data.

Start by logging out of the console and logging back in as your Admin user. Then follow the instructions to launch an EC2 instance (a CLI sketch follows this list):

  1. Choose an Amazon Linux AMI.
  2. Choose an instance type. Any instance type will work. If you launch an Amazon Linux t2.micro instance, it might qualify for free tier pricing.
  3. For IAM Role, select the authorized-users role from the drop-down menu.
  4. Make sure you specify an SSH key that you have access to, and make sure that you have a way to reach the EC2 instance over the network.
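
The equivalent launch from the CLI looks roughly like the following; the AMI ID, key pair name, subnet, and security group are placeholders for values from your own account and Region.

# Launch a t2.micro instance with the authorized-users role attached via its instance profile
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --iam-instance-profile Name=authorized-users \
  --key-name my-ssh-key \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0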

Satisfy yourself that it works as expected

At this point, the solution is complete and is running. I want to demonstrate that the KMS key is providing the independent access controls the way I said it would. I will modify the key policy to remove the instance’s rights to use the KMS key. Then, I will confirm that the commands that had succeeded before now fail after the key policy change. This shows how the KMS key and its policy are completely independent of the S3 bucket policies and the IAM policies.

Test 1: Uploading encrypted objects

Using SSH, log in on the EC2 instance you launched that has the authorized-users role attached.

You will need to download a file onto the EC2 instance that you can then upload, encrypted, to the S3 bucket. If you don’t have a file that you want to use, you can use the AWS Cryptographic Details whitepaper as a reasonable test file.

On the instance, run the following command to download a local copy of the AWS Cryptographic Details whitepaper that you can use as test data:


curl -O 'https://d1.awsstatic.com/whitepapers/KMS-Cryptographic-Details.pdf'

Side note: You should also read this whitepaper. It’s very informative on how AWS KMS is built and operated to secure your encryption keys.

On the EC2 instance, use the AWS command line to upload the file to the S3 bucket. Note all the options that tell S3 to use KMS encryption and to use the correct key ID. Remember to insert the bucket name for the bucket that you’re using and the ARN of your KMS key from step 4 above.


aws s3 cp KMS-Cryptographic-Details.pdf s3://secure-demo-bucket/ \
--sse aws:kms --sse-kms-key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

If all went well, you should see a message like the following, showing that the object was uploaded successfully:


upload: ./KMS-Cryptographic-Details.pdf to s3://secure-demo-bucket/KMS-Cryptographic-Details.pdf

Test 2: Upload an Unencrypted Object

You can now prove that an attempt from this instance to upload an unencrypted object will fail. Run this command to upload a second copy of the PDF file to be called test2.pdf. Be sure to substitute your bucket’s name into the command.


aws s3 cp KMS-Cryptographic-Details.pdf s3://secure-demo-bucket/test2.pdf

You’ll notice this command doesn’t include the options instructing S3 to use KMS to encrypt the file. You should see this error message:


An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

If you see no error, then double-check that your bucket policy in Step 5 above is correct.

Test 3: Downloading Encrypted Objects

You’ve now proven that the EC2 instance can upload encrypted objects and that unencrypted objects are refused. Now, you can prove that the EC2 instance has access to cause S3 to decrypt the encrypted object in the bucket using the KMS keys. Here’s how: While still on your EC2 instance, run this command, substituting your bucket name, to download a copy of the PDF file:


aws s3 cp s3://secure-demo-bucket/KMS-Cryptographic-Details.pdf test3.pdf

If this command succeeds, then you will have a file in your current directory on your EC2 instance named test3.pdf. That shows that you have successfully decrypted and downloaded the PDF file.

Test 4: Demonstrate that the key policy regulates access

Now, I will demonstrate the independence of access control provided by the KMS key policy. Leaving the bucket policy and IAM role/policy as they are, you will disable the EC2 instance’s access to the objects using the KMS key policy. The IAM policy for S3 and the bucket policy on the bucket would still normally permit the EC2 instance to access the data. But, because the KMS key policy will prevent use of the key by the authorized-users IAM role, S3 will fail to encrypt or decrypt the object. This means that any commands that execute on the EC2 instance will no longer be able to upload or download data from the S3 bucket.

First, modify the key policy.

  1. Log out of the console and log back in under the secure-key-admin role. Go to the Key Management Service console.
  2. In the left-hand navigation, select Customer managed keys and look for the key with the alias or Key ID that you’re using. The Key ID is the last 32 characters of the full key ARN.
  3. Select the Key ID for the key that you’re using to get to the screen where you can edit the key policy.
  4. In the list of Key users, you will see your authorized-users role listed. Select that role, and then select the Remove button to remove its access to use the KMS key.

At this point, the EC2 instance no longer has the permissions to use the KMS key because its role no longer grants it permission to use the key.

Repeat the command that you did in Test 1 that uploaded a PDF file to the bucket. In this case, try to make a second copy of the PDF file into an object named test4.pdf. Run this command, substituting your bucket name and your KMS key ID as required:


aws s3 cp KMS-Cryptographic-Details.pdf s3://secure-demo-bucket/test4.pdf --sse aws:kms --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab

You should see an error like this:


An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Now, try to download the copy of the KMS-Cryptographic-Details.pdf file from the bucket, again using the command that worked before, substituting the bucket name as required:


aws s3 cp s3://secure-demo-bucket/KMS-Cryptographic-Details.pdf test4.pdf

You should see an error message like this:


An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

These two commands are denied because when S3 tried to invoke KMS to encrypt or decrypt data, the EC2 instance role did not have permission to use the KMS key and thus the request failed. Note that there is no situation where the API call returns the KMS-encrypted data from S3. Either the API call succeeds, and you receive the decrypted data, or the API call fails, and you receive an error. All AWS services that use KMS to encrypt data behave this way—you either get the decrypted data, or you get an error message.

Restoring access to the key

To restore the EC2 instance’s access to the data, you authorize its role again in the KMS key policy:

  1. Go to Key Management Service in the AWS Console.
  2. Select Customer managed keys.
  3. Find the key that you’re using and select it.
  4. Find your authorized-users role in the list of roles, or type “authorized-users” in the search box to find it.
  5. Select the checkbox next to the authorized-users role, and then select Add to add that role as a key user.

The role will now have permission to use the key as it did before.

Useful variations on this solution

Variation 1: Using KMS keys in different AWS accounts

You can use a KMS key that is in a different AWS account for encrypting and decrypting. This allows administrators in a central AWS account to manage KMS keys, while the data itself resides in other AWS accounts. This can offer further separation of roles from the example above because even a highly privileged user (for example, root) in the account in which the authorized-users role exists won’t be able to modify the key policy. The account ID in which the authorized-users role exists must be listed in the key policy. For more information, follow the instructions on sharing KMS keys across accounts.

Note that the KMS key and the S3 bucket must always be in the same region. The EC2 instance does not need to be in the same region as the S3 bucket. You will experience higher latency when your EC2 instance is not in the same region as the S3 bucket.

Variation 2: Granting KMS key usage permissions to other AWS services

EC2 is not the only service that can be granted a role this way. Lambda functions can be granted AWS IAM roles that allow them to use KMS keys. That would permit the Lambda functions with the correct roles to manipulate the S3 data, while other entities (users, EC2 instances) could not. Likewise, AWS services such as Amazon Athena might require access to a KMS key if you want to use it to search data stored in S3 that has been encrypted using KMS. If Athena is given permission to assume a role with permissions to use the KMS key, then Athena can successfully execute its search queries because S3 will be allowed to decrypt objects on behalf of Athena, which is acting on your behalf when assuming the authorized-users role.

Variation 3: Creating isolated authorization to encrypt vs decrypt

You can use the KMS key policy to isolate authorization to encrypt versus decrypt data between two identities. For example, if a role has the kms:Encrypt or kms:GenerateDataKey permissions for a key, that means that role can write encrypted data directly or ask an AWS service to do it on its behalf (for example, during an upload to an S3 bucket). If the role does not also have kms:Decrypt permission, it can’t read encrypted data. This write-only permission might be appropriate for data acquisition, security log delivery, or other functions that should not be allowed to read the data they have written. Likewise, if a role has the kms:Decrypt permission, then the role has the ability to read data. But if it lacks the kms:Encrypt permission, it cannot write or modify encrypted data. This kind of isolated authorization is suitable for audit functions and log aggregation functions that need to read data but typically are prohibited from modifying the data/logs that they read. The complete set of permissions for KMS key policies can be found in the KMS developers guide.

Cost of this solution

Three services with charges are used in this solution: EC2, S3, and KMS. The EC2 instance hours are charged according to standard EC2 pricing. Likewise, storing data in S3 will incur costs according to standard S3 pricing. There is no difference in S3 pricing for storing encrypted versus unencrypted data. Finally, KMS has a fixed price per month for each customer-managed CMK you create, which is described in the KMS pricing page. Each encryption and decryption of an object is a KMS API call and a certain number of KMS API calls are free each month. The number of free KMS API calls, and the price for API calls beyond the free tier, are described on the KMS pricing page.

Summary

The combination of IAM policies, S3 bucket policies, and KMS key policies gives you a powerful way to apply independent access control mechanisms on data. This mechanism means that one set of users can be granted rights to do maintenance operations on the buckets themselves, while not having rights to access or manipulate the data itself. Even a user or function with full privileges in S3 would be denied access to this encrypted data unless it also had the rights to use the KMS keys. It gives you an approach to access control that allows key policies to serve as an additional control when IAM policies or S3 bucket policies alone are not sufficient.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author bio

Paco Hope

Paco Hope is a Principal Security Consultant with AWS Professional Services working to help enterprise customers secure their workloads in the cloud. He has helped secure migration landing zones, design customer security architectures, and has mentored a number of AWS partners in the UK on AWS Security. He frequently speaks at information security conferences and security meetups.

12 additional AWS services and 2 features authorized at DoD Impact Level 4 and 5 for AWS GovCloud (US) Regions

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/12-additional-aws-services-and-2-features-authorized-at-dod-impact-level-4-and-5-for-aws-govcloud-us-regions/

I’m excited to share that the Defense Information Systems Agency (DISA) has authorized 12 additional AWS services and 2 features in AWS GovCloud (US) Regions. With these additional 12 services and 2 features, AWS now offers a total of 52 services authorized to process DoD mission critical data at Impact Levels (IL) 4 and 5 under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG).

The authorization at DoD IL 4 and IL 5 allows DoD Mission Owners to process controlled unclassified information (CUI) and to include mission critical workloads for National Security Systems in AWS GovCloud (US) Regions. This is in addition to the work AWS does in supporting the full range of U.S. Government data classifications. AWS remains the only Cloud Service Provider accredited to address the full range, including Unclassified, Secret and Top Secret.

AWS successfully completed an independent, third-party evaluation that confirmed AWS effectively implemented over 400 security controls using applicable criteria from NIST SP 800-53 Rev 4, the US General Services Administration’s FedRAMP High baseline, the DoD CC SRG, and the Committee on National Security Systems Instruction No. 1253 at the High Confidentiality, High Integrity, and High Availability impact levels.

The newly authorized AWS services and features provide additional choices for DoD Mission Owners to enhance the security of their workloads with continuous threat monitoring; optimize and modernize their database and data analytics operations; conduct deep learning on images and video streams; build out Internet of Things (IoT) environments; and leverage fully-managed, cloud-based virtual desktops.

Recently authorized AWS services and features at DoD Impact Levels 4 and 5

To learn more about AWS solutions for DoD, please see our AWS solution offerings. Follow the AWS Security Blog for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tyler Harding

Tyler Harding is the DoD Compliance Program Manager within AWS Security Assurance. He has over 20 years of experience providing information security solutions to federal civilian, DoD, and intelligence agencies.

Automated Response and Remediation with AWS Security Hub

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/automated-response-and-remediation-with-aws-security-hub/

AWS Security Hub is a service that gives you aggregated visibility into your security and compliance status across multiple AWS accounts. In addition to consuming findings from Amazon services and integrated partners, Security Hub gives you the option to create custom actions, which allow a customer to manually invoke a specific response or remediation action on a specific finding. You can send custom actions to Amazon CloudWatch Events as a specific event pattern, allowing you to create a CloudWatch Events rule that listens for these actions and sends them to a target service, such as a Lambda function or Amazon SQS queue.

By creating custom actions mapped to specific finding types, and by developing a corresponding Lambda function for each custom action, you can achieve targeted, automated remediation for those findings. This lets you decide whether to invoke a remediation action on a specific finding. You can also use these Lambda functions as the target of fully automated remediation actions that don’t require any human review.

In this blog post, I’ll show you how to build custom actions, CloudWatch Event rules, and Lambda functions for a dozen targeted actions that can help you remediate CIS AWS Foundations Benchmark-related compliance findings. I’ll also cover use cases for sending findings to an issue management system and for automating security patching. To promote rapid deployment and adoption of this solution, you’ll deploy a majority of the necessary components via AWS CloudFormation.

Note: The full repository for current and future response and remediation templates is hosted on GitHub and includes additional technical guidance for expanding the solution provided in this post.

Solution architecture

Figure 1 – Solution Architecture Overview

Figure 1 shows how a finding travels from an integrated service to a custom action:

  1. Integrated services send their findings to Security Hub.
  2. From the Security Hub console, you’ll choose a custom action for a finding. Each custom action is then emitted as a CloudWatch Event.
  3. The CloudWatch Event rule triggers a Lambda function. This function is mapped to a custom action based on the custom action’s ARN.
  4. Depending on the particular rule, the Lambda function that is invoked performs a remediation action on your behalf.

For the purpose of this blog post, I’ll refer to the end-to-end combination of a custom action, a CloudWatch Event rule, a Lambda function, plus any supporting services needed to perform a specific action as a “playbook.” To demonstrate how a remediation solution works end-to-end, I’ll show you how to build your first playbook manually. You’ll deploy the remainder of the playbooks via CloudFormation.

I’ll also show you how to modify four of the playbooks: the three marked with an asterisk in the list below, plus the “Send findings to JIRA” playbook. These playbooks use AWS Lambda environment variables to perform their actions, and we’ll walk through populating those variables later.

Based on feedback from Security Hub customers, the following controls from the CIS AWS Foundations Benchmark will be supported by this blog post:

  • 1.3 – “Ensure credentials unused for 90 days or greater are disabled”
  • 1.4 – “Ensure access keys are rotated every 90 days or less”
  • 1.5 – “Ensure IAM password policy requires at least one uppercase letter”
  • 1.6 – “Ensure IAM password policy requires at least one lowercase letter”
  • 1.7 – “Ensure IAM password policy requires at least one symbol”
  • 1.8 – “Ensure IAM password policy requires at least one number”
  • 1.9 – “Ensure IAM password policy requires a minimum length of 14 or greater”
  • 1.10 – “Ensure IAM password policy prevents password reuse”
  • 1.11 – “Ensure IAM password policy expires passwords within 90 days or less”
  • 2.2 – “Ensure CloudTrail log file validation is enabled”
  • 2.3 – “Ensure the S3 bucket CloudTrail logs to is not publicly accessible”
  • 2.4 – “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs”*
  • 2.6 – “Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket”*
  • 2.7 – “Ensure CloudTrail logs are encrypted at rest using AWS KMS CMKs”
  • 2.8 – “Ensure rotation for customer created CMKs is enabled”
  • 2.9 – “Ensure VPC flow logging is enabled in all VPCs”*
  • 4.1 – “Ensure no security groups allow ingress from 0.0.0.0/0 to port 22”
  • 4.2 – “Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389”
  • 4.3 – “Ensure the default security group of every VPC restricts all traffic”

You’ll also deploy and modify an additional playbook, “Send findings to JIRA.” You can find the high-level description of each playbook in each custom action creation script, as well as in the CloudFormation resource descriptions.

Note: If you want to send Security Hub findings to a security information and event management (SIEM) tool such as Amazon Elasticsearch Service or a third-party solution, you must change the CloudWatch Events event pattern to match all findings and use different targets, such as Amazon Kinesis Data Streams feeding Amazon Kinesis Data Firehose, to load your SIEM. This process is out of scope for this post.

Prerequisites

Ensure you have Security Hub and AWS Config turned on in your Region. Also, note that the solution in this blog post is meant to support a single account and will not support cross-account remediation as deployed. Refer to the Knowledge Center article How can I configure a Lambda function to assume a role from another AWS account? for basic information on cross-account roles for Lambda.

For the playbook “Apply Security Patches,” your EC2 instances must be managed by Systems Manager. For more information on managed instances, see AWS Systems Manager Managed Instances in the AWS Systems Manager User Guide.

Manually create a remediation playbook

To demonstrate the end-to-end process of building a playbook, I’ll first show you how to create one manually, before you deploy the remaining playbooks via CloudFormation. You’ll build a playbook to remediate Control 2.7 of the CIS AWS Foundations Benchmark, “Ensure CloudTrail logs are encrypted at rest using AWS Key Management Service (KMS) customer managed keys (CMKs).” Configuring CloudTrail to use KMS encryption (called SSE-KMS) provides additional confidentiality controls on your log data. To access your CloudTrail logs, users must not only have S3 read permissions for the corresponding log bucket, they must also be granted decrypt permissions by the KMS key policy.

Important note: As this remediation (and all of the other remediation code) is written, you can only target one finding at a time via the Actions menu.

You’ll achieve automated remediation by using a Lambda function to create a new KMS CMK and an alias that identifies the non-compliant CloudTrail trail. You’ll then attach a KMS key policy that only allows the AWS account that owns the trail to decrypt the logs, by using the IAM condition for StringEquals: kms:CallerAccount. You only need to run this playbook once per non-compliant CloudTrail trail.

To get started, follow these steps:

  1. Navigate to the Security Hub console, select Settings from the navigation pane, then select the Custom Actions tab.
  2. Choose Create custom action and enter values for Action name, Description, and Custom action ID, then choose Create custom action again, as shown in Figure 2.

    For the purpose of this blog post, I’ll refer to my action name as “CIS 2.7 RR” where the “RR” stands for “Response and remediation.”
     

    Figure 2 – Create custom action

  3. Copy down the Amazon Resource Name (ARN), as you’ll need it in step 12.
  4. Navigate to the Lambda console and select Create function.
  5. Enter a function name, choose Python 3.7 runtime, and under Permissions select Create a new role with basic Lambda permissions. Then choose Create function.
  6. Scroll down to Execution role and select the hyperlink under Existing role. This will open a new tab in the IAM console.
  7. From the IAM console, select Add inline policy, then select the JSON tab, paste in the below IAM policy JSON, and select Review Policy.
    
    {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "kmssid",
            "Action": [
              "kms:CreateAlias",
              "kms:CreateKey",
              "kms:PutKeyPolicy"
            ],
            "Effect": "Allow",
            "Resource": "*"
          },
          {
            "Sid": "cloudtrailsid",
            "Action": [
              "cloudtrail:UpdateTrail"
            ],
            "Effect": "Allow",
            "Resource": "*"
          }
        ]
      }
    

  8. Give the in-line policy a name and select Create policy.
  9. Back in the Lambda console, increase Timeout to 1 minute and Memory to 256MB. Scroll up to Function code, paste in the below code, and select Save.
    
     # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
     # SPDX-License-Identifier: MIT-0
     #
     # Permission is hereby granted, free of charge, to any person obtaining a copy of this
     # software and associated documentation files (the "Software"), to deal in the Software
     # without restriction, including without limitation the rights to use, copy, modify,
     # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
     # permit persons to whom the Software is furnished to do so.
     #
     # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
     # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
     # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
     # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
     # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
     # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
    
    import boto3
    import json
    import time
    
    def lambda_handler(event, context):
        # parse non-compliant trail from Security Hub finding
        noncompliantTrail = str(event['detail']['findings'][0]['Resources'][0]['Details']['Other']['name'])
        # parse account ID from Security Hub finding, will be needed for Key Policy
        accountID = str(event['detail']['findings'][0]['AwsAccountId'])
        
        # import boto3 clients for KMS and CloudTrail
        kms = boto3.client('kms')
        cloudtrail = boto3.client('cloudtrail')
    
        # create a new KMS CMK to encrypt the non-compliant trail
        try:
            createKey = kms.create_key(
            Description='Generated by Security Hub to remediate CIS 2.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs',
            KeyUsage='ENCRYPT_DECRYPT',
            Origin='AWS_KMS'
            )
            # save key id as a variable
            cloudtrailKey = str(createKey['KeyMetadata']['KeyId'])
            print("Created Key" + " " + cloudtrailKey)
        except Exception as e:
            print(e)
            print("KMS CMK creation failed")
            raise
            
        # wait 2 seconds for key creation to propagate
        time.sleep(2)
    
        # attach an alias for easy identification to the key - must always begin with "alias/"
        try:
            createAlias = kms.create_alias(
            AliasName='alias/' + noncompliantTrail + '-CMK',
            TargetKeyId=cloudtrailKey
            )
            print(createAlias)
        except Exception as e:
            print(e)
            print("Failed to create KMS Alias")
            raise
        
        # wait 1 second
        time.sleep(1)
    
        # policy name for PutKeyPolicy is always "default"
        policyName = 'default'
        # set Key Policy as JSON object
        keyPolicy={
            "Version": "2012-10-17",
            "Id": "Key policy created by CloudTrail",
            "Statement": [
                {
                    "Sid": "Enable IAM User Permissions",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [ "arn:aws:iam::" + accountID + ":root" ]
                    },
                    "Action": "kms:*",
                    "Resource": "*"
                },
                {
                    "Sid": "Allow CloudTrail to encrypt logs",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "cloudtrail.amazonaws.com"
                    },
                    "Action": "kms:GenerateDataKey*",
                    "Resource": "*",
                    "Condition": {
                        "StringLike": {
                            "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:" + accountID + ":trail/" + noncompliantTrail
                        }
                    }
                },
                {
                    "Sid": "Allow CloudTrail to describe key",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "cloudtrail.amazonaws.com"
                    },
                    "Action": "kms:DescribeKey",
                    "Resource": "*"
                },
                {
                    "Sid": "Allow principals in the account to decrypt log files",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "*"
                    },
                    "Action": [
                        "kms:Decrypt",
                        "kms:ReEncryptFrom"
                    ],
                    "Resource": "*",
                    "Condition": {
                        "StringEquals": {
                            "kms:CallerAccount": accountID
                        },
                        "StringLike": {
                            "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:" + accountID + ":trail/" + noncompliantTrail
                        }
                    }
                },
                {
                    "Sid": "Allow alias creation during setup",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "*"
                    },
                    "Action": "kms:CreateAlias",
                    "Resource": "*",
                    "Condition": {
                        "StringEquals": {
                            "kms:CallerAccount": accountID
                        }
                    }
                }
            ]
        }
        # attaches above key policy to key
        try:
            attachKeyPolicy = kms.put_key_policy(
                KeyId=cloudtrailKey,
                Policy=json.dumps(keyPolicy),
                PolicyName=policyName
            )
            print(attachKeyPolicy)
        except Exception as e:
            print(e)
            print("Failed to attach key policy to Key:" + " " + cloudtrailKey)
    
        # update CloudTrail with the new CMK
        try:
            encryptTrail = cloudtrail.update_trail(
            Name=noncompliantTrail,
            KmsKeyId=cloudtrailKey
            )
            print(encryptTrail)
            print("CloudTrail trail" + " " + noncompliantTrail + " " + "has been successfully encrypted!")
        except Exception as e:
            print(e)
            print("Failed to attach KMS CMK to CloudTrail")
    

  10. Navigate to the CloudWatch console, and choose Events, then Create rule.
  11. On the left, next to Event Pattern Preview, select Edit.
  12. Under Event Source, find Build custom event pattern and paste the JSON below into the text box. Replace the ARN under “resources” with the ARN of the custom action you created in step 3.
    
    {
      "source": [
        "aws.securityhub"
      ],
      "detail-type": [
        "Security Hub Findings - Custom Action"
      ],
      "resources": [
        "arn:aws:securityhub:us-west-2:123456789012:action/custom/test-action1"
      ]
    }
    

  13. On the same screen, under Targets on the right, select Add target, choose the Lambda function you created in step 4, then select Configure details.
  14. On the next screen, enter values for Name and Description, then select Create rule.
  15. Back in the Security Hub console, select Compliance standards from the menu on the left, then select View results to see the CIS AWS Benchmarks.
  16. Find rule 2.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs and select the hyperlink to see all findings related to that control.
  17. Select a finding that is FAILED, with a Resource type that reads AwsCloudTrailTrail.
  18. From the top right, select the Actions dropdown menu, then select CIS 2.7 RR (or whatever you named this action in step 2), as shown in Figure 3.

    Selecting this action will execute the rule you created in step 13, which will invoke the Lambda function you created in step 9.
     

    Figure 3 – Security Hub Custom Actions

  19. Navigate to the CloudTrail console to ensure your CloudTrail trail is updated with SSE-KMS.

This creation flow via the console is universal for all playbook development with Security Hub. You’ll use the Actions menu in the same way to trigger the playbooks you deploy via CloudFormation in the next section of this walkthrough.
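
If you prefer to script this setup rather than click through the console, the custom action from step 2 can also be created programmatically. Below is a minimal sketch using boto3; the name, description, and ID values are placeholders you would replace with your own:

    import boto3

    securityhub = boto3.client('securityhub')

    # create a Security Hub custom action equivalent to the one created in step 2
    response = securityhub.create_action_target(
        Name='CIS 2.7 RR',
        Description='Response and remediation for CIS 2.7',
        Id='CIS27RR'
    )

    # the returned ARN is the value you paste into the CloudWatch Events rule's
    # "resources" filter in step 12
    print(response['ActionTargetArn'])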

Note: To monitor the actions that are taken by the playbooks’ Lambda functions, refer to the functions’ logs. Both success and error messages will appear to help you diagnose as needed.
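
You can also exercise the CIS 2.7 function directly from the Lambda console by configuring a test event that contains only the fields the function’s code reads. A minimal example is shown below; the account ID and trail name are placeholders, and because the function makes real KMS and CloudTrail API calls, substitute the name of an actual non-compliant trail in your account before running it:

    {
      "detail": {
        "findings": [
          {
            "AwsAccountId": "123456789012",
            "Resources": [
              {
                "Details": {
                  "Other": {
                    "name": "example-trail"
                  }
                }
              }
            ]
          }
        ]
      }
    }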

Deploy remediation playbooks via CloudFormation

Download the CloudFormation template from GitHub and create a CloudFormation stack. For more information about how to create a CloudFormation stack, see Getting Started with AWS CloudFormation in the AWS CloudFormation User Guide.

After your stack has finished deploying, navigate to the Resources tab and select the hyperlink for each resource to be taken to its respective console. The logical ID for each resource is prefixed with the action the resource corresponds to. For example, in Figure 4, CIS13RRCWEPermissions denotes CloudWatch Event permissions for AWS CIS Benchmark Control 1.3. This logical ID structure is used throughout the template.
 

Figure 4 – CloudFormation Resources

Next, I’ll show you how to modify the Lambda functions associated with the following playbooks: “Send findings to JIRA,” CIS 2.4, CIS 2.6, and CIS 2.9.

Playbook modification: send findings to JIRA

Note: If you don’t currently have a JIRA Software Data Center deployment set up in your account, you can deploy one with a free evaluation period by following this Quick Start. If you are not interested in using JIRA or you use a different issue management tool, you can skip this section.

This playbook works by using the associated Lambda function to execute a Systems Manager automation document called AWS-CreateJiraIssue.
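
As a rough sketch of how that invocation can look, the playbook’s Lambda function can start the automation document through the Systems Manager Automation API with boto3, passing its environment variables as document parameters. The parameter keys below are illustrative assumptions; confirm them against the AWS-CreateJiraIssue document in your Systems Manager console before relying on them:

    import boto3
    import os

    ssm = boto3.client('ssm')

    def create_jira_issue(issue_summary, issue_description):
        # Starts the AWS-CreateJiraIssue automation document. Parameter keys are
        # illustrative -- check the document's parameter list in Systems Manager.
        return ssm.start_automation_execution(
            DocumentName='AWS-CreateJiraIssue',
            Parameters={
                'JiraURL': [os.environ['JIRA_URL']],
                'JiraUsername': [os.environ['JIRA_SECURITY_ISSUE_USER']],
                'SSMParameterName': [os.environ['JIRA_API_PARAMETER']],
                'ProjectKey': [os.environ['JIRA_PROJECT']],
                'IssueSummary': [issue_summary],
                'IssueDescription': [issue_description]
            }
        )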

This Systems Manager document will in turn deploy a CloudFormation stack that creates another custom Lambda function, which uses the environment variables (listed below) passed in by your playbook’s Lambda function to create the issue in JIRA.

Once the issue is created, the CloudFormation stack will delete itself and the automation will be complete. To see this flow, refer to Figure 5, below:
 

Figure 5 – Send to JIRA Architecture Diagram

  1. A finding is selected and the custom action “Create JIRA Issue” is invoked, which triggers a CloudWatch Event.
  2. The CloudWatch Event rule will trigger a Lambda function.
  3. Lambda will invoke the AWS-CreateJiraIssue document via Systems Manager Automation.
  4. Systems Manager Automation will pass your Lambda environmental variables and Security Hub finding as document parameters.
  5. The document will create a CloudFormation stack that contains another custom Lambda function, which invokes the JIRA API to create issues.
  6. The document’s Lambda function uses your parameters to create an issue in JIRA.
  7. JIRA will send back a response to note failure or success, and the CloudFormation stack will self-delete.

To get started, navigate to the Lambda console and find the function named SendToJIRA, then scroll down to Environmental variables. You should see four variables, each with the text “placeholder” as its value. The following steps walk you through how to populate them:

  1. Fill out the JIRA_API_PARAMETER field:
    1. Refer to Atlassian’s instructions to generate a JIRA API token
    2. Follow the Systems Manager Parameter Store walkthrough to create a parameter.
      1. In Step 6 of the linked instructions, choose Secure String and choose the default KMS key.
      2. In Step 7 of the linked instructions, paste in your newly generated JIRA API token.
    3. Copy the name of the parameter and paste it as the value of the JIRA_API_PARAMETER field.
  2. Fill out the JIRA_PROJECT field by pasting in your JIRA Software project’s Project Key.
  3. Fill out the JIRA_SECURITY_ISSUE_USER field:
    1. This value maps to a user in JIRA Software; refer to Atlassian’s instructions for how to create a user. Use the API token you generated in step 1.A as this user’s password.
    2. Paste the user name into the JIRA_SECURITY_ISSUE_USER field.
  4. Fill out the JIRA_URL field:
    1. From JIRA Software navigate to the System sub-menu of JIRA Administration and look for the Base URL field, underneath Settings – General Settings (see figure 6).
    2. Copy the Base URL value and paste it into the JIRA_URL field.
       
      Figure 6 – Base URL in JIRA

  5. When finished, select Save at the top of the Lambda console, then navigate to the Security Hub console and select Findings from the left-hand menu.
  6. Select any finding, and then, from the Actions dropdown in the top right, select Create JIRA Issue.
  7. Navigate to your JIRA instance and wait a few minutes for the new issue to appear, as shown in Figure 7.
     
    Figure 7 – Security Hub Finding in JIRA

    Note: If your issue hasn’t populated in JIRA after a few minutes, refer to the Systems Manager Automation console or CloudFormation console to find the stack that was created by Systems Manager and refer to the failure messages to troubleshoot further.

CIS 2.4 response & remediation playbook modification

CIS Control 2.4 is “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs.” The intent of this recommendation is to ensure that API activity recorded by CloudTrail is available to query in near real-time for the purpose of troubleshooting or security incident investigation with CloudWatch.

To send your CloudTrail logs to CloudWatch, the Lambda function for this playbook will create a new CloudWatch Logs group that includes the name of the non-compliant CloudTrail trail for easy identification, and will then update the non-compliant trail to send its logs to the newly created log group.

To accomplish this, CloudTrail needs an IAM role with permissions to publish logs to CloudWatch. To avoid creating multiple new IAM roles and policies via Lambda, you’ll populate the ARN of this IAM role in the Lambda environmental variables for this playbook.
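
For reference, the core of that remediation can be expressed in a few boto3 calls. The sketch below assumes the non-compliant trail name has already been parsed from the Security Hub finding; the log group naming is illustrative, and the deployed CIS_2-4_RR function may differ in its details:

    import boto3
    import os

    logs = boto3.client('logs')
    cloudtrail = boto3.client('cloudtrail')

    def remediate_cis_2_4(noncompliant_trail):
        # create a log group whose name includes the trail for easy identification
        log_group_name = 'CloudTrail/' + noncompliant_trail
        logs.create_log_group(logGroupName=log_group_name)

        # look up the ARN of the newly created log group
        log_group_arn = logs.describe_log_groups(
            logGroupNamePrefix=log_group_name
        )['logGroups'][0]['arn']

        # point the non-compliant trail at the new log group, using the role ARN
        # supplied via the CLOUDTRAIL_CW_LOGGING_ROLE_ARN environment variable
        cloudtrail.update_trail(
            Name=noncompliant_trail,
            CloudWatchLogsLogGroupArn=log_group_arn,
            CloudWatchLogsRoleArn=os.environ['CLOUDTRAIL_CW_LOGGING_ROLE_ARN']
        )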

Note: If you don’t currently have an IAM role for CloudTrail, follow these instructions from the CloudTrail user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-4_RR, then scroll down to Environmental variables.
  2. Find the variable CLOUDTRAIL_CW_LOGGING_ROLE_ARN with text that reads “placeholder” as the value.
  3. Paste the ARN of the IAM role into the field for this value, then select Save.

CIS 2.6 response & remediation playbook modification

CIS Control 2.6 is “Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket.” An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. AWS recommends that you enable bucket access logging on the CloudTrail S3 bucket. By enabling S3 bucket logging on target S3 buckets, you can capture all the events that might affect objects in the bucket. Configuring logs to be placed in a separate bucket enables centralized collection of access log information, which can be useful in security and incident response workflows.

Note: Security Hub supports CIS AWS Foundations controls only on resources in the same Region and owned by the same account as the one in which Security Hub is enabled and being used. For example, if you’re using Security Hub in the us-east-2 Region, and you’re storing CloudTrail logs in a bucket in the us-west-2 Region, Security Hub cannot find the bucket in the us-west-2 Region. The control returns a warning that the resource cannot be located. Similarly, if you’re aggregating logs from multiple accounts into a single bucket, the CIS control fails for all accounts except the account that owns the bucket.

To ensure the S3 bucket that contains your CloudTrail logs has access logging enabled, the Lambda function for this playbook invokes the Systems Manager document AWS-ConfigureS3BucketLogging. This document will enable access logging for that bucket. To avoid statically populating your S3 access logging bucket in the Lambda function’s code, you’ll pass that value in via an environmental variable.
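
A minimal sketch of that invocation is shown below. The document name and the ACCESS_LOGGING_BUCKET environment variable come from this post; the remaining parameter keys and grantee values are illustrative assumptions, so confirm them against the AWS-ConfigureS3BucketLogging document before use:

    import boto3
    import os

    ssm = boto3.client('ssm')

    def remediate_cis_2_6(cloudtrail_bucket):
        # Starts the AWS-ConfigureS3BucketLogging automation document to enable
        # access logging on the CloudTrail bucket. Parameter keys are illustrative.
        ssm.start_automation_execution(
            DocumentName='AWS-ConfigureS3BucketLogging',
            Parameters={
                'BucketName': [cloudtrail_bucket],
                'TargetBucket': [os.environ['ACCESS_LOGGING_BUCKET']],
                'GrantedPermission': ['READ'],
                'GranteeType': ['Group'],
                'GranteeUri': ['http://acs.amazonaws.com/groups/s3/LogDelivery']
            }
        )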

Note: If you do not currently have an S3 bucket configured to receive access logs, follow the directions from the S3 user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-6_RR, then scroll down to Environmental variables.
  2. Find the variable ACCESS_LOGGING_BUCKET with text that reads ‘placeholder’ as the value.
  3. Paste the name of the S3 bucket that will receive the access logs into the field for this value and select Save.

CIS 2.9 response & remediation playbook modification

CIS Control 2.9 is “Ensure VPC flow logging is enabled in all VPCs.” The Amazon Virtual Private Cloud (VPC) flow logs feature enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you’ve created a flow log, you can view and retrieve its data in CloudWatch Logs. AWS recommends that you enable flow logging for packet rejects for VPCs. Flow logs provide visibility into network traffic that traverses the VPC and can detect anomalous traffic or provide insight into your security workflow.

To enable VPC flow logging for rejected packets, the Lambda function for this playbook will create a new CloudWatch Logs group. For easy identification, the name of the group will include the non-compliant VPC name. The Lambda function will programmatically update your VPC to enable flow logs to be sent to the newly created log group.

Similar to CloudTrail logging, VPC flow logs need an IAM role with permissions to publish logs to CloudWatch. To avoid creating multiple new IAM roles and policies via Lambda, you’ll populate the ARN of this IAM role in the Lambda environmental variables for this playbook.
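
For reference, a simplified version of that remediation could look like the boto3 sketch below, assuming the non-compliant VPC ID has already been parsed from the finding; the log group naming is illustrative, and the deployed CIS_2-9_RR function may differ:

    import boto3
    import os

    ec2 = boto3.client('ec2')
    logs = boto3.client('logs')

    def remediate_cis_2_9(noncompliant_vpc):
        # create a log group whose name includes the VPC ID for easy identification
        log_group_name = 'VPCFlowLogs/' + noncompliant_vpc
        logs.create_log_group(logGroupName=log_group_name)

        # enable flow logs for rejected packets on the non-compliant VPC, delivered
        # to the new log group via the role in the flowLogRoleARN environment variable
        ec2.create_flow_logs(
            ResourceIds=[noncompliant_vpc],
            ResourceType='VPC',
            TrafficType='REJECT',
            LogDestinationType='cloud-watch-logs',
            LogGroupName=log_group_name,
            DeliverLogsPermissionArn=os.environ['flowLogRoleARN']
        )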

Note: If you don’t currently have an IAM role that VPC flow logs can use to deliver logs to CloudWatch, follow the directions from the VPC user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-9_RR, then scroll down to Environmental variables.
  2. Find the variable flowLogRoleARN with text that reads ‘placeholder’ as the value.
  3. Paste the ARN of the VPC flow logs IAM role into the field for this value, and select Save.

Conclusion

In this blog post, I showed you how to create, deploy, and execute response and remediation playbooks for Security Hub. By combining custom actions, CloudWatch Event rules, and Lambda functions, you can quickly remediate non-compliant resources. I also showed you how to take pre-defined actions such as sending findings to JIRA, and how to deploy an additional seven playbooks via CloudFormation.

You can create playbooks that take other actions, such as updating Network Access Control Lists to help block malicious traffic from a TOR exit node via the UnauthorizedAccess:EC2/TorIPCaller finding from GuardDuty. Using playbooks with Security Hub is one more way to build toward the security and compliance of your AWS resources.

To avoid incurring additional charges from AWS resources, delete the CloudFormation stack you deployed as well as the resources you manually created for the CIS 2.7 response and remediation playbook.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds the AWS Certified Security – Specialty certification and is extremely passionate about cybersecurity, data privacy, and emerging technologies such as blockchain. He devotes personal time to research and advocacy on those same topics.

New IRAP report provides Australian public sector the ability to leverage additional services at PROTECTED level

Post Syndicated from John Hildebrandt original https://aws.amazon.com/blogs/security/new-irap-report-australian-public-sector-leverage-services-protected-level/

Following the award of PROTECTED certification to AWS in January 2019, we have now released updated Information Security Registered Assessors Program (IRAP) PROTECTED documentation via AWS Artifact. This information gives customers the ability to plan, architect, and self-assess systems built in AWS under the Digital Transformation Agency’s Secure Cloud Guidelines. The new documentation expands the scope to 64 PROTECTED services, including new category areas such as artificial intelligence (AI), machine learning (ML), and IoT services. For example, Amazon SageMaker is a service that provides every developer and data scientist with tools to build, train, and deploy machine learning models quickly.

This documentation gives public sector customers everything needed to evaluate AWS at the PROTECTED level. AWS is making this resource available to download on-demand through AWS Artifact. The guide provides a mapping of AWS controls for securing PROTECTED data.

The AWS IRAP PROTECTED documentation helps individual agencies simplify the process of adopting AWS services. The information enables individual agencies to complete their own assessments and adopt AWS for a broader range of services. These assessed AWS services are available within the existing AWS Asia-Pacific (Sydney) Region and cover service categories such as compute, storage, network, database, security, analytics, AI/ML, IoT, application integration, management, and governance. This means you can take advantage of all the security benefits without paying a price premium, or needing to modify your existing applications or environments.

The newly added services to the scope of the IRAP document are listed below.

For the full list of services in scope of the IRAP report, see the services in scope page (select the IRAP tab).

If you have questions about our PROTECTED certification or would like to inquire about how to use AWS for your highly sensitive workloads, contact your account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

John Hildebrandt

John is Head of Security Assurance for Australia and New Zealand at AWS in Canberra Australia. He is passionate about removing regulatory barriers of cloud adoption for customers. John has been working with Government customers at AWS for over 7 years, as the first employee for the ANZ Public Sector team.

Internet Security Notification – Department of Homeland Security Alert AA20-006A

Post Syndicated from Nathan Case original https://aws.amazon.com/blogs/security/internet-security-notification-department-of-homeland-security-alert-aa20-006a/

On January 6, 2020, the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) released an alert (AA20-006A) highlighting measures for critical infrastructure to prepare for information security risks; these measures are also relevant to all organizations. The CISA alert focuses on vulnerability mitigation and incident preparation.

At AWS, security is our core function and highest priority and, as always, we are engaged with the U.S. Government and other responsible national authorities regarding the current threat landscape. We are taking all appropriate steps to ensure that our customers and infrastructure remain protected, and we encourage our customers to do the same with their systems and workloads, whether in the cloud or on-premises.

The CISA recommendations reflect general guidance, as well as specific mitigations and monitoring that can help address information security risks. In this post, we provide customers with resources they can use to apply the CISA recommendations to their environment and to implement other best practices to protect their resources. Specifically, we highlight the security principles and mechanisms in the AWS Well-Architected Framework, as well as posts on AWS best practices, that can help you address the issues described in the alert.

The specific techniques described in the CISA alert are almost all related to issues that exist in an on-premises Windows or Linux operating system and network environment, and are not directly related to cloud computing. However, the precautions described may be applicable to the extent customers are using those operating systems in an Amazon Elastic Compute Cloud (Amazon EC2) virtual machine environment. There are also cloud-specific technologies and issues that should be considered and addressed. Customers can use the information provided in the table below to help address the issues.

Technique: Credential Dumping & Spearphishing
Mitigations:
  • Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer
  • Getting Started: Follow Security Best Practices as You Configure Your AWS Resources
  • How can I configure a CloudWatch events rule for GuardDuty to send custom SNS notifications if specific AWS service event types trigger?

Technique: Data Compressed & Obfuscated Files or Information
Mitigations:
  • How can I configure a CloudWatch events rule for GuardDuty to send custom SNS notifications if specific AWS service event types trigger?
  • Monitor, review, and protect Amazon S3 buckets using Access Analyzer for S3
  • Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer

Technique: User Execution
Mitigations:
  • Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer
  • Monitor, review, and protect Amazon S3 buckets using Access Analyzer for S3

Technique: Scripting
Mitigations:
  • Nine AWS Security Hub best practices
  • How to import AWS Config rules evaluations as findings in Security Hub

Technique: Remote File Copy
Mitigations:
  • Continuous Compliance with AWS Security Hub
  • Monitor, review, and protect Amazon S3 buckets using Access Analyzer for S3

We’re also including links to GitHub repositories that can help you automate some of the above practices, as well as the AWS Security Incident Response whitepaper, to assist with planning for and responding to security events. We strongly recommend that you review your runbooks, disaster recovery plans, and backup procedures.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, please contact your AWS Account Manager or contact AWS Support. If you need urgent help or have relevant information about an existing security issue, contact your AWS account representative.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Nathan Case

Nathan is a Senior Security Strategist who joined AWS in 2016. He is always interested to see where our customers plan to go and how we can help them get there. He is also interested in intel, combined data lake sharing opportunities, and open source collaboration. In the end, Nathan loves technology and the fact that we can change the world to make it a better place.

Min Hyun

Min is the Global Lead for Growth Strategies at AWS. Her team’s mission is to set the industry bar in thought leadership for security and data privacy assurance in emerging technology, trends and strategy to advance customers’ journeys to AWS. View her other Security Blog publications here.

Tim Anderson

Tim Anderson is a Senior Security Advisor with AWS Security, where he focuses on addressing the security, compliance, and privacy needs of customers and industry globally. Additionally, Tim designs solutions, capabilities, and practices to teach and democratize security concepts to meet challenges across the global landscape. Prior to AWS, Tim had 16 years of experience designing, delivering, and managing security and compliance programs for U.S. federal customers across DoD and federal civilian agencies.