Tag Archives: Security Blog

How to record a video of Amazon AppStream 2.0 streaming sessions

Post Syndicated from Nicolas Malaval original https://aws.amazon.com/blogs/security/how-to-record-video-of-amazon-appstream-2-0-streaming-sessions/

Amazon AppStream 2.0 is a fully managed service that lets you stream applications and desktops to your users. In this post, I’ll show you how to record a video of AppStream 2.0 streaming sessions by using FFmpeg, a popular media framework.

There are many use cases for session recording, such as auditing administrative access, troubleshooting user issues, or quality assurance. For example, you could publish administrative tools with AppStream 2.0, such as a Remote Desktop Protocol (RDP) client, to protect access to your backend systems (see How to use Amazon AppStream 2.0 to reduce your bastion host attack surface) and you may want to record a video of what your administrators do when accessing and operating backend systems. You may also want to see what a user did to reproduce an issue, or view activities in a call center setting, such as call handling or customer support, for review and training.

This solution is not designed or intended for people surveillance, or for the collection of evidence for legal proceedings. You are responsible for complying with all applicable laws and regulations when using this solution.

Overview and architecture

In this section, you can learn about the steps for recording AppStream 2.0 streaming sessions and see an overview of the solution architecture. Later in this post, you can find instructions about how to implement and test the solution.

AppStream 2.0 enables you to run custom scripts to prepare the streaming instance before the applications launch or after the streaming session has completed. Figure 1 shows a simplified description of what happens before, during and after a streaming session.
 

Figure 1: Solution architecture

  1. Before the streaming session starts, AppStream 2.0 runs script A, which uses PsExec, a utility that enables administrators to run commands on local or remote computers, to launch script B. Script B then runs during the entire streaming session. PsExec can run the script as the LocalSystem account, a service account that has extensive privileges on a local system, while it interacts with the desktop of another session. Using the LocalSystem account, you can use FFmpeg to record the session screen and prevent AppStream 2.0 users from stopping or tampering with the solution, as long as they aren’t granted local administrator rights.
  2. Script B launches FFmpeg and starts recording the desktop (see the command sketch after this list). The solution uses the FFmpeg built-in screen-grabber to capture the desktop across all the available screens.
  3. When FFmpeg starts recording, it captures the area covered by the desktop at that time. If the number of screens or the resolution changes, a portion of the desktop might be outside the recorded area. In that case, script B stops the recording and starts FFmpeg again.
  4. After the streaming session ends, AppStream 2.0 runs script C, which notifies script B that it must end the recording and close. Script B stops FFmpeg.
  5. Before exiting, script B uploads the video files that FFmpeg generated to Amazon Simple Storage Service (Amazon S3). It also stores user and session metadata in Amazon S3, along with the video files, for easy retrieval of session recordings.
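
For illustration only, the core commands behind this flow look roughly like the following. The exact arguments (session ID, frame rate, encoder, and output path) live in the scripts in the GitHub repository, so treat these as assumptions rather than the solution's actual invocations:

# Script A (sketch): use PsExec to start script B as LocalSystem, attached to the user's desktop session
# (the session ID of 1 is a placeholder; the real script determines it at run time)
PsExec64.exe -accepteula -s -i 1 powershell.exe -ExecutionPolicy Bypass -File C:\SessionRecording\Scripts\script_b.ps1

# Script B (sketch): record the desktop with the FFmpeg built-in gdigrab screen-grabber
ffmpeg -f gdigrab -framerate 10 -i desktop -c:v libx264 -pix_fmt yuv420p C:\SessionRecording\Output\session.mp4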

For a more comprehensive understanding of how the session scripts work, you can refer to the GitHub repository that contains the solution artifacts, where I go into the details of each script.

Implementing and testing the solution

Now that you understand the architecture of this solution, you can follow the instructions in this section to implement this blog post’s solution in your AWS account. You will:

  1. Create a virtual private cloud (VPC), an S3 bucket and an AWS Identity and Access Management (IAM) role with AWS CloudFormation.
  2. Create an AppStream 2.0 image builder.
  3. Configure the solution scripts on the image builder.
  4. Specify an application to publish and create an image.
  5. Create an AppStream 2.0 fleet.
  6. Create an AppStream 2.0 stack.
  7. Create a user in the AppStream 2.0 user pool.
  8. Launch a streaming session and test the solution.

Step 1: Create a VPC, an S3 bucket, and an IAM role with AWS CloudFormation

For the first step in the solution, you create a new VPC where AppStream 2.0 will be deployed (or choose an existing VPC), an S3 bucket to store the session recordings, and an IAM role that grants AppStream 2.0 the necessary IAM permissions.

To create the VPC, the S3 bucket, and the IAM role with AWS CloudFormation

  1. Select the following Launch Stack button to open the CloudFormation console and create a CloudFormation stack from the template. You can change the Region where resources are deployed in the navigation bar.
     

    The latest template can also be downloaded on GitHub.

  2. Choose Next. For VPC ID, Subnet 1 ID, and Subnet 2 ID, you can optionally select an existing VPC and two subnets if you want to deploy the solution there, or leave these fields blank to create a new VPC. Then follow the on-screen instructions. AWS CloudFormation creates the following resources:
    • (If you chose to create a new VPC) An Amazon Virtual Private Cloud (Amazon VPC) with an internet gateway attached.
    • (If you chose to create a new VPC) Two public subnets on this Amazon VPC with a new route table to make them publicly accessible.
    • An S3 bucket to store the session recordings.
    • An IAM role to grant AppStream 2.0 permissions to upload video and metadata files to Amazon S3.
  3. After the stack creation has completed, choose the Outputs tab in the CloudFormation console and note the values that the process returned: the name and Region of the S3 bucket, the name of the IAM role, the ID of the VPC, and the two subnets.
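
If you prefer to create the stack with the AWS CLI instead of the console, a minimal sketch follows. It assumes you downloaded the template from GitHub as session-recording.yaml (the file name is an assumption) and want a new VPC, so no parameters are passed:

# Create the stack; the template creates an IAM role, so IAM capabilities are required
aws cloudformation create-stack \
    --stack-name session-recording \
    --template-body file://session-recording.yaml \
    --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM

# Wait for creation to finish, then read the outputs needed in later steps
aws cloudformation wait stack-create-complete --stack-name session-recording
aws cloudformation describe-stacks --stack-name session-recording \
    --query "Stacks[0].Outputs"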

Step 2: Create an AppStream 2.0 image builder

The next step is to create a new AppStream 2.0 image builder. An image builder is a virtual machine that you can use to install and configure applications for streaming, and then create a custom image.

To create the AppStream 2.0 image builder

  1. Open the AppStream 2.0 console and select the Region in the navigation bar. Choose Get Started then Skip if you are new to the console.
  2. Choose Images in the left pane, and then choose Image Builder. Choose Launch Image Builder.
  3. In Step 1: Choose Image:
    1. Select the name of the latest AppStream 2.0 base image for the Windows Server version of your choice. You can find its name in the AppStream 2.0 base image version history. For example, at the time of writing, the name of the latest Windows Server 2019 base image is AppStream-WinServer2019-07-16-2020.
    2. Choose Next.
  4. In Step 2: Configure Image Builder:
    1. For Name, enter session-recording.
    2. For Instance Type, choose stream.standard.medium.
    3. For IAM role, select the IAM role that AWS CloudFormation created.
    4. Choose Next.
  5. In Step 3: Configure Network:
    1. Choose Default Internet Access to provide internet access to your image builder.
    2. For VPC, select the ID of the VPC, and for Subnet 1, select the ID of Subnet 1.
    3. For Security group(s), select the ID of the security group. Refer back to the Outputs tab of the CloudFormation stack if you are unsure which VPC, subnet and security group to select.
    4. Choose Review.
  6. In Step 4: Review, choose Launch.
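
The console steps above can also be scripted. The following AWS CLI sketch creates an equivalent image builder; the role name, subnet ID, and security group ID are placeholders that you should replace with the values from the CloudFormation outputs:

aws appstream create-image-builder \
    --name session-recording \
    --image-name AppStream-WinServer2019-07-16-2020 \
    --instance-type stream.standard.medium \
    --iam-role-arn arn:aws:iam::111122223333:role/ExampleSessionRecordingRole \
    --vpc-config SubnetIds=subnet-1111aaaa,SecurityGroupIds=sg-2222bbbb \
    --enable-default-internet-access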

Step 3: Configure the solution scripts on the image builder

The session scripts to run before streaming sessions start or after sessions end are specified within an AppStream 2.0 image. In this step, you install the solution scripts on your image builder and specify the scripts to run in the session scripts configuration file.

To configure the solution scripts on the image builder

  1. Wait until the image builder is in the Running state, and then choose Connect.
  2. Within the AppStream 2.0 streaming session, on the Local User tab, choose Administrator.
  3. To install the solution scripts:
    1. From the image builder desktop, choose Start in the Windows taskbar.
    2. Open the context (right-click) menu for Windows PowerShell, and then choose Run as Administrator.
    3. Run the following commands in the PowerShell terminal to create the required folders, and to copy the solution scripts and the session scripts configuration file from public objects in GitHub to the local disk. If you aren’t using Google Chrome or the AppStream 2.0 client, you need to choose the Clipboard icon in the AppStream 2.0 navigation bar, and then select Paste to remote session.
      New-Item -Path C:\SessionRecording -ItemType directory
      New-Item -Path C:\SessionRecording\Scripts -ItemType directory
      New-Item -Path C:\SessionRecording\Output -ItemType directory
      New-Item -Path C:\SessionRecording\Bin -ItemType directory
      
      $Acl = Get-Acl C:\SessionRecording
      $Acl.SetAccessRuleProtection($true,$false)
      $AccessRule1 = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule1)
      $AccessRule2 = New-Object System.Security.AccessControl.FileSystemAccessRule("SYSTEM","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule2)
      $AccessRule3 = New-Object System.Security.AccessControl.FileSystemAccessRule("ImageBuilderAdmin","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule3)
      Set-Acl C:\SessionRecording $Acl
      
      [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_a.ps1 -OutFile C:\SessionRecording\Scripts\script_a.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_b.ps1 -OutFile C:\SessionRecording\Scripts\script_b.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_c.ps1 -OutFile C:\SessionRecording\Scripts\script_c.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/variables.ps1 -OutFile C:\SessionRecording\Scripts\variables.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/config.json -OutFile C:\AppStream\SessionScripts\config.json
      

    4. Close the PowerShell terminal.
  4. To edit the variables.ps1 file with your own values:
    1. From the image builder desktop, choose Start in the Windows taskbar.
    2. Open the context (right-click) menu for Windows PowerShell ISE, and then choose Run as Administrator.
    3. Choose File, then Open. Navigate to the folder C:\SessionRecording\Scripts\ and open the file variables.ps1.
    4. Edit the name and the Region of the S3 bucket with the values returned by AWS CloudFormation in the Outputs tab. You can also customize the number of frames per second and the maximum duration in seconds of each video file.
    5. Save and close the file.
  5. To download the latest FFmpeg and PsExec executables to the image builder:
    1. From the image builder desktop, open the Firefox desktop icon.
    2. Navigate to the URL https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-github and choose the link that contains essentials_build.zip to download FFmpeg. Choose Open to download and extract the ZIP archive. Copy the file ffmpeg.exe from the bin folder of the archive to C:\SessionRecording\Bin\.

      Note: FFmpeg provides only source code; compiled packages are available from third-party locations. If the preceding link is invalid, go to the FFmpeg download page and follow the instructions to download the latest release build for Windows.

    3. Navigate to the URL https://download.sysinternals.com/files/PSTools.zip to download PsExec. Choose Open to download and extract the ZIP archive. Copy the file PsExec64.exe to C:\SessionRecording\Bin\. Make sure that you agree to the PsExec license terms, because the solution in this blog post accepts them automatically.
    4. Close Firefox.

Step 4: Specify an application to publish and create an image

In this step, you publish Firefox on your image builder and create an AppStream 2.0 custom image. I chose Firefox because it’s easy to test later in the procedure. You can choose other or additional applications to publish, if needed.

To specify the application to publish and create the image

  1. From the image builder desktop, open the Image Assistant icon available on the desktop. Image Assistant guides you through the image creation process.
  2. In 1. Add Apps:
    1. Choose + Add App.
    2. Enter the location C:\Program Files (x86)\Mozilla Firefox\firefox.exe to add Firefox.
    3. Choose Open. Keep the default settings and choose Save.
    4. Choose Next multiple times until you see 4. Optimize.
  3. In 4. Optimize:
    1. Choose Launch.
    2. Choose Continue until you can see 5. Configure Image.
  4. In 5. Configure Image:
    1. For Name, enter session-recording for your image name.
    2. Choose Next.
  5. In 6. Review:
    1. Choose Disconnect and Create Image.
  6. Back in the AppStream 2.0 console:
    1. Choose Images in the left pane, and then choose the Image Registry tab.
    2. Change All Images to Private and shared with others. You will see your new AppStream 2.0 image.
    3. Wait until the image is in the Available state. This can take more than 30 minutes.

Step 5: Create an AppStream 2.0 fleet

Next, create an AppStream 2.0 fleet that consists of streaming instances that run your custom image.

To create the AppStream 2.0 fleet

  1. In the left pane of the AppStream 2.0 console, choose Fleets, and then choose Create Fleet.
  2. In Step 1: Provide Fleet Details:
    1. For Name, enter session-recording-fleet.
    2. Choose Next.
  3. In Step 2: Choose an Image:
    1. Select the name of the custom image that you created with the image builder.
    2. Choose Next.
  4. In Step 3: Configure Fleet:
    1. For Instance Type, select stream.standard.medium.
    2. For Fleet Type, choose Always-on.
    3. For Stream view, you can choose to stream either the applications or the entire desktop.
    4. For IAM role, select the IAM role.
    5. Keep the defaults for all other parameters, and choose Next.
  5. In Step 4: Configure Network:
    1. Choose Default Internet Access to provide internet access to your fleet instances.
    2. Select the VPC, the two subnets, and the security group.
    3. Choose Next.
  6. In Step 5: Review, choose Create.
  7. Wait until the fleet is in the Running state.
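
As with the image builder, you can create the fleet from the AWS CLI instead of the console. A sketch with placeholder role, subnet, and security group IDs follows; set --stream-view to APP or DESKTOP to match your choice in step 4, and note that a fleet created this way starts in the Stopped state, so you start it explicitly:

aws appstream create-fleet \
    --name session-recording-fleet \
    --image-name session-recording \
    --instance-type stream.standard.medium \
    --fleet-type ALWAYS_ON \
    --stream-view DESKTOP \
    --compute-capacity DesiredInstances=1 \
    --iam-role-arn arn:aws:iam::111122223333:role/ExampleSessionRecordingRole \
    --vpc-config SubnetIds=subnet-1111aaaa,subnet-3333cccc,SecurityGroupIds=sg-2222bbbb \
    --enable-default-internet-access

aws appstream start-fleet --name session-recording-fleet
aws appstream wait fleet-started --names session-recording-fleet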

Step 6: Create an AppStream 2.0 stack

Create an AppStream 2.0 stack and associate it with the fleet that you just created.

To create the AppStream 2.0 stack

  1. In the left pane of the AppStream 2.0 console, choose Stacks, and then choose Create Stack.
  2. In Step 1: Stack Details:
    1. For Name, enter session-recording-stack.
    2. For Fleet, select the fleet that you created.
  3. Then follow the on-screen instructions and keep the defaults for all other parameters until the stack is created.
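
The CLI equivalent of this step is short; it creates the stack and then associates it with the fleet you created in the previous step:

aws appstream create-stack --name session-recording-stack
aws appstream associate-fleet \
    --fleet-name session-recording-fleet \
    --stack-name session-recording-stack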

Step 7: Create a user in the AppStream 2.0 user pool

The AppStream 2.0 user pool provides a simplified way to manage access to applications for your users. In this step, you create a user in the user pool that you will use later in the procedure to test the solution.

To create the user in the AppStream 2.0 user pool

  1. In the left pane of the AppStream 2.0 console, choose User Pool, and then choose Create User.
  2. Enter your email address, first name, and last name. Choose Create User.
  3. Select the user you just created. Choose Actions, and then choose Assign stack.
  4. Select the stack, and then choose Assign stack.
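
If you script user management, the following AWS CLI sketch creates a user-pool user and assigns the stack to that user (the email address and names are placeholders):

aws appstream create-user \
    --user-name user@example.com \
    --first-name Example \
    --last-name User \
    --authentication-type USERPOOL

aws appstream batch-associate-user-stack \
    --user-stack-associations UserName=user@example.com,StackName=session-recording-stack,AuthenticationType=USERPOOL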

Step 8: Test the solution

Now, sign in to AppStream 2.0 with the user that you just created, launch a streaming session, and check that the session recordings are delivered to Amazon S3.

To launch a streaming session and test the solution

  1. AppStream 2.0 sends you a notification email. Connect to the sign-in portal by entering the information included in the notification email, and set a permanent password.
  2. Sign in to AppStream 2.0 by entering your email address and the permanent password.
  3. After you sign in, you can view the application catalog. Choose Firefox to launch a Firefox window and browse any websites you’d like.
  4. Choose the user icon at the top-right corner, and then choose Logout to end the session.

In the Amazon S3 console, navigate to the S3 bucket to browse the session recordings. For the session you just terminated, you can find one text file that contains user and instance metadata, and one or more video files that you can download and play with a media player like VLC.
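
You can also list the recordings from the command line. A sketch, assuming the bucket name returned by CloudFormation is session-recording-bucket (replace with your own bucket name):

aws s3 ls s3://session-recording-bucket/ --recursive --human-readable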

Step 9: Clean up resources

You can now clean up the resources that were created for this solution.

To clean up resources

  1. To delete the image builder:
    1. In the left pane of the AppStream 2.0 console, choose Images, and then choose Image Builder.
    2. Select the image builder. Choose Actions, then choose Delete.
  2. To delete the stack:
    1. In the left pane of the AppStream 2.0 console, choose Stacks.
    2. Select the stack. Choose Actions, then choose Disassociate Fleet. Choose Disassociate to confirm.
    3. Choose Actions, then choose Delete.
  3. To delete the fleet:
    1. In the left pane of the AppStream 2.0 console, choose Fleets.
    2. Select the fleet. Choose Actions, then choose Stop. Choose Stop to confirm.
    3. Wait until the fleet is in the Stopped state.
    4. Choose Actions, then choose Delete.
  4. To disable the user in the user pool:
    1. In the left pane of the AppStream 2.0 console, choose User Pool.
    2. Select the user. Choose Actions, then choose Disable user. Choose Disable User to confirm.
  5. Empty the S3 bucket that CloudFormation created (see How do I empty an S3 bucket?). Repeat the same operation with the buckets that AppStream 2.0 created, whose names start with appstream-settings, appstream-logs and appstream2.
  6. Delete the CloudFormation stack on the AWS CloudFormation console (see Deleting a stack on the AWS CloudFormation console).

Conclusion

In this blog post, I showed you a way to record AppStream 2.0 sessions to video files for administrative access auditing, troubleshooting, or quality assurance. While this blog post focuses on Amazon AppStream 2.0, you could adapt and deploy the solution to Amazon WorkSpaces or to Amazon Elastic Compute Cloud (Amazon EC2) Windows instances.

For a deep-dive explanation of how the solution scripts function, you can refer to the GitHub repository that contains the solution artifacts.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon AppStream 2.0 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nicolas Malaval

Nicolas is a Solution Architect for Amazon Web Services. He lives in Paris and helps our healthcare customers in France adopt cloud technology and innovate with AWS. Before that, he spent three years as a Consultant for AWS Professional Services, working with enterprise customers.

Combining encryption and signing with AWS asymmetric keys

Post Syndicated from J.D. Bean original https://aws.amazon.com/blogs/security/combining-encryption-and-signing-with-aws-asymmetric-keys/

In this post, I discuss how to use AWS Key Management Service (KMS) to combine asymmetric digital signature and asymmetric encryption of the same data.

The addition of support for asymmetric keys in AWS KMS has exciting use cases for customers. The ability to create, manage, and use public and private key pairs with KMS enables you to perform digital signing operations using RSA and Elliptic Curve Cryptography (ECC) keys. AWS KMS asymmetric keys can also be used to perform digital encryption operations using RSA keys. You can use these features together to digitally sign and encrypt the same data.

Another notable property of AWS KMS asymmetric keys is that they enable disconnected use cases. For example AWS KMS asymmetric keys can be used to cryptographically verify a digital signature client-side without the need for a network connection. AWS KMS asymmetric keys also enable scenarios where customers can use KMS to securely manage decryption of data that has been encrypted by a partner’s system that does not integrate with AWS APIs or have access to AWS account credentials. For the sake of simplicity, however, the example that I discuss in this post describes a connected use case where all cryptographic actions are performed server-side in AWS KMS using AWS credentials. The use of AWS KMS asymmetric keys throughout this post allows the overall approach to be adapted to disconnected and/or non-AWS-integrated use cases.

Overview

This post contains three basic steps.

  1. Create and configure AWS asymmetric customer master keys (CMKs), AWS Identity and Access Management (IAM) roles, and key policies.
  2. Use your asymmetric CMKs to encrypt and sign a sample message in the role of a sender.
  3. Use AWS KMS to decrypt and verify the message signature of the sample message archive you generated in the previous procedure using your asymmetric CMKs in the role of a receiver.

Prerequisites

The commands I use in this tutorial were tested using AWS Command Line Interface (AWS CLI) version 2.50 on Amazon Linux 2. To run these commands in your own local environment, ensure that you have first installed and updated the AWS CLI.

I assume you have at least one administrator identity available to you that has broad rights for creating and assuming roles, as well as for creating, managing, and using KMS keys. This can be a federated identity (for example, from your corporate identity provider or from a social identity), or it can be an AWS IAM user. Where no AWS identity is mentioned, I assume that you will be accessing the AWS Management Console or the AWS CLI using this administrator identity.

For simplicity, I create the KMS keys in the same Region as each other. You must specify an AWS Region when using the AWS CLI, either explicitly or by setting a default Region. Before beginning, select an AWS Region to work in, such as US East (N. Virginia). If you have not configured the AWS CLI in your environment, please review the Configuration basics section of the AWS Command Line Interface User Guide for instructions. You may revert this configuration once you have finished, if you do not wish to continue using a default Region with your AWS CLI. Take note of your selected Region. When working in the AWS Console, if you do not see the resources, such as AWS KMS keys, that you expect, confirm that you are viewing resources in your chosen Region. For more information on selecting your Region in the AWS Console, see Choosing a Region in the AWS Management Console Getting Started Guide.
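
For example, a minimal way to set a default Region for the AWS CLI (us-east-1 is shown; substitute your chosen Region) and confirm it:

aws configure set region us-east-1
aws configure get region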

Create and configure resources

In the first phase of this tutorial, you create and configure two asymmetric AWS KMS CMKs and two AWS IAM roles, and you configure the key policies for both of your KMS CMKs to grant permissions to the roles, as shown in the following figure.
 

Figure 1: Create keys, roles, and key policies

Create asymmetric signing and encryption key pairs

In the first step, you create two asymmetric customer master keys (CMKs). One is configured for signing and verifying digital signatures, while the other is configured for encrypting and decrypting data.

Note: The CMKs configured for this post are examples. RSA and elliptic curve CMK key specs differ in a variety of dimensions. The RSA or elliptic curve key spec that you choose might be determined by your security standards or the requirements of your task. Different CMK key specs are priced differently and are subject to different request quotas because they each have different performance profiles. In general, use RSA or ECC keys with the highest security level that is practical and affordable for your task. For more information on CMK configuration options, please review the How to choose your CMK configuration section of the KMS Developer Guide.

To create a CMK for encryption and decryption

  1. Use the KMS CreateKey API. Pass RSA_4096 for the CustomerMasterKeySpec parameter and ENCRYPT_DECRYPT for the KeyUsage parameter in the AWS CLI example command below in order to generate an RSA 4096 key pair for encryption and decryption using AWS KMS.
    aws kms create-key --customer-master-key-spec RSA_4096 \
        --key-usage ENCRYPT_DECRYPT \
        --description "Sample Digital Encryption Key Pair"
    

    Note: If successful, this command returns a KeyMetadata object. Take note of the KeyID value in this object.

  2. As a best practice, assign an alias for your key. Use the following command to assign an alias of sample-encrypt-decrypt-key to your newly created CMK (replace the target-key-id value of 1234abcd-12ab-34cd-56ef-1234567890ab with your KeyID). Mapping a human-readable alias to the KeyID makes the key easier to identify, use, and manage.
    aws kms create-alias \
        --alias-name alias/sample-encrypt-decrypt-key \
        --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    

To create a CMK for signature and verification

  1. Use the KMS CreateKey API. Pass ECC_NIST_P521 for the CustomerMasterKeySpec parameter and SIGN_VERIFY for the KeyUsage parameter in the AWS CLI example command below in order to generate an elliptic curve (ECC) key pair for signature creation and verification using AWS KMS.
    aws kms create-key --customer-master-key-spec \
        ECC_NIST_P521  \
        --key-usage SIGN_VERIFY \
        --description "Sample Digital Signature Key Pair"
    

    Note: If successful, this command returns a KeyMetadata object. Take note of the KeyID value.

  2. Use the following command to assign an alias of sample-sign-verify-key to your newly created CMK (replace the target-key-id value of 1234abcd-12ab-34cd-56ef-1234567890ab with your KeyID).
    aws kms create-alias \
        --alias-name alias/sample-sign-verify-key \
        --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    

Create sender and receiver roles

For the next step of this tutorial, you create two AWS principals. Use the steps that follow to create two roles—a sender principal and a receiver principal. Later, you will grant permissions to perform private key operations (sign and decrypt) and public key operations (verify and encrypt) to these roles.

To create and configure the roles

  1. Navigate to the AWS Identity and Access Management (IAM) Create role console dialog, and choose the option that allows entities in a specified AWS account to assume the role. Enter your account ID and choose Next, as shown in the following figure.

    Note: If you don’t know your AWS account ID, please read Finding your AWS account ID in the AWS IAM User Guide for guidance on how to obtain this information.

    Figure 2: Enter your account ID to begin creating a role in AWS IAM

  2. Select Next through the next two screens.

    Note: By choosing Next through these dialogs, you do not attach an IAM permissions policy or a tag to this new role.

  3. On the final screen, enter a Role name of SenderRole and a Role description of your choice, as shown in the following figure.
     
    Figure 3: Create the sender role

  4. Choose Create role to finish creating the sender role.
  5. To create the receiver role, repeat the preceding role creation process. However, in step 3, substitute the name ReceiverRole for SenderRole.
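
If you would rather create the two roles with the AWS CLI than the console, a minimal sketch follows. The trust policy mirrors the console dialog by allowing principals in your own account to assume the roles (replace 111122223333 with your account ID; the file name trust-policy.json is an assumption):

cat > trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name SenderRole \
    --assume-role-policy-document file://trust-policy.json
aws iam create-role --role-name ReceiverRole \
    --assume-role-policy-document file://trust-policy.json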

Configure key policy permissions

Best practice is to adhere to the principle of least privilege and provide each AWS principal with the minimal permissions necessary to perform its tasks. The sender and receiver roles that you created in the previous step currently have no permissions in your account. For this scenario, the receiver principal must be granted permission to verify digital signatures and decrypt data in AWS KMS using your asymmetric CMKs and the sender principal must be granted permission to create digital signatures and encrypt data in KMS using your asymmetric CMKs.

To provide access control permissions for AWS KMS actions to your AWS principals, attach a key policy to each of your CMKs.

Modify the CMK key policy

For the sample-encrypt-decrypt-key CMK, grant the IAM role for the sender principal (SenderRole) kms:Encrypt permissions and the IAM role for the receiver principal (ReceiverRole) kms:Decrypt permissions in the CMK key policy.

To modify the CMK key policy (console)

  1. Navigate to the AWS KMS page in the AWS Console and select Customer managed keys.
  2. Select your sample-encrypt-decrypt-key CMK.
  3. In the Key policy section, choose Edit.
  4. To allow your receiver principal to use the CMK to decrypt data encrypted under that CMK, append the following statement to the key policy (replace the account ID value of 111122223333 with your own).
    {
        "Sid": "Allow use of the CMK for decryption",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/ReceiverRole"},
        "Action": "kms:Decrypt",
        "Resource": "*"
    }
    

  5. To allow your sender principal to use the CMK to encrypt data, append the following statement to the key policy (replace the account ID value of 111122223333 with your own):
    {
        "Sid": "Allow use of the CMK for encryption",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/SenderRole"},
        "Action": "kms:Encrypt",
        "Resource": "*"
    }
    

  6. Choose Save changes.

Note: The kms:Encrypt permission is sufficient to permit the sender principal to encrypt small amounts of arbitrary data using your CMK directly.

Grant sign and verify permissions to the CMK key policy

For the sample-sign-verify-key CMK, grant the IAM role for the sender principal (SenderRole) kms:Sign permissions in the CMK key policy and the IAM role for the receiver principal (ReceiverRole) kms:Verify permissions in the CMK key policy.

To grant sign and verify permissions

  1. Using the same process as above, navigate to the key policy edit dialog for the sample-sign-verify-key CMK in the AWS console.
  2. To allow your sender principal to use the CMK to create digital signatures, append the following statement to the key policy (replace the account ID value of 111122223333 with your own).
    {
        "Sid": "Allow use of the CMK for digital signing",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/SenderRole"},
        "Action": "kms:Sign",
        "Resource": "*"
    }
    

  3. To allow your receiver principal to use the CMK to verify signatures created by that CMK, append the following statement to the key policy (replace the account ID value of 111122223333 with your own):
    {
        "Sid": "Allow use of the CMK for digital signature verification",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/ReceiverRole"},
        "Action": "kms:Verify",
        "Resource": "*"
    }
    

  4. Choose Save changes.
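
If you manage key policies with the AWS CLI rather than the console, you can download the current policy, append the statements shown above to its Statement array, and upload it again. A sketch for the signing key follows (repeat the same pattern for the encryption key and its statements):

# Download the current key policy for editing
aws kms get-key-policy \
    --key-id alias/sample-sign-verify-key \
    --policy-name default \
    --output text > sign-verify-policy.json

# After appending the sign and verify statements, apply the updated policy
aws kms put-key-policy \
    --key-id alias/sample-sign-verify-key \
    --policy-name default \
    --policy file://sign-verify-policy.json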

Key permissions summary

When these key policy edits are complete, the sender principal:

  • Will have permissions to encrypt data using the sample-encrypt-decrypt-key CMK and generate digital signatures using the sample-sign-verify-key CMK.
  • Will not have permissions to decrypt or to verify signatures using the CMKs.

The receiver principal:

  • Will have permissions to decrypt data which has been encrypted using the sample-encrypt-decrypt-key CMK and to verify signatures created using the sample-sign-verify-key CMK.
  • Will not have permissions to encrypt or to generate signatures using the CMKs.

Figure 4: Summary of key policy permissions

Signing and encrypting a sample message

So far, you’ve created two asymmetric CMKs, created a set of sender and receiver roles, and configured permissions for those roles in each of your CMK key policies. In the second phase of this tutorial, you assume the role of sender and use your asymmetric signature and verification CMK to sign a sample message. You then bundle the sample message and its corresponding digital signature together into an archive and use your encryption and decryption asymmetric CMK to encrypt the archive.
 

Figure 5: Creating a message signature and encrypting the message along with its signature

Note: The order of operations in this process is that the message is first signed, and then the signature and the message are encrypted together. This order is intentional. When a message is signed and then encrypted, neither the contents nor the identity of the sender will be available to unauthorized third parties. If the order of operations were reversed, however, and a message was first encrypted and then signed, it could leak information about the sender’s identity to unauthorized third parties. Moreover, when a message is encrypted and then signed, an unauthorized third party with access to the files could discard the authentic signature created by the sender and replace it with a valid signature created by their own key. This creates the potential for a third party to deceptively create the appearance that they are the legitimate sender of the message and exploit that misperception further.

Assume the sender role

Start by assuming the sender role. In order to successfully assume a role, you must authenticate as an IAM principal that has permission to perform sts:AssumeRole. If the principal you are authenticated as lacks this permission, you will not be able to assume the sender role.

To assume the sender role

  1. Run the following command, but be sure to replace the account ID value of 111122223333 with your account ID:
    aws sts assume-role \
        --role-arn arn:aws:iam::111122223333:role/SenderRole \
        --role-session-name AWSCLI-Session
    

  2. The return value for this command provides an access key ID, secret key, and session token. Substitute them into their respective places in the following commands and execute:
    export AWS_ACCESS_KEY_ID=ExampleAccessKeyID1
    export AWS_SECRET_ACCESS_KEY=ExampleSecretKey1
    export AWS_SESSION_TOKEN=ExampleSessionToken1
    

  3. Confirm that you’ve successfully assumed the sender role by issuing:
    aws sts get-caller-identity
    

    Note: If the output of this command contains the text assumed-role/SenderRole, then you’ve successfully assumed the sender role.

Create a message

Now, create a sample message file called message.json.

To create a message

Run the following command to create a message with the following content:

echo "
{ 
    \"message\": \"The Magic Words are Squeamish Ossifrage\", 
    \"sender\": \"Sender Principal\" 
}
" > ./message.json 

Create a digital signature

Creating and verifying a digital signature for the message provides confidence that the message contents haven’t been altered after being sent. This characteristic is known as integrity. Furthermore, when access to a signing key is scoped to a particular principal, creating and verifying a digital signature for the message provides confidence in the sender’s identity. This characteristic is known as authenticity. Finally, a high degree of confidence in both the integrity and authenticity of a message limits the plausible ability of a sender to fraudulently deny having signed a message. This characteristic is known as non-repudiation.

To create a digital signature

Run the following command to create a digital signature for message.json:

aws kms sign \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --signing-algorithm ECDSA_SHA_512 \
    --message fileb://message.json \
    --output text \
    --query Signature | base64 --decode > message.sig

This generates an independent digital signature file, message.sig, for message.json. Any modification to the contents of message.json, such as changing the sender or message fields, will now cause signature validation of message.sig to fail for message.json.

Encrypt the message and signature

Even with the benefits of a digital signature, the message could still be viewed by any party with access to the file. In order to provide confidence that the message contents aren’t exposed to unauthorized parties, you can encrypt the message. This characteristic is known as confidentiality. In order to retain the benefits of your digital signature you can encrypt the message and corresponding signature together in a single package.

To encrypt the message and signature

  1. Combine your message and signature into an archive. For example, with the GNU Tar utility you can issue the following:
    tar -czvf message.tar.gz message.sig message.json
    

    This will create a new archive file named message.tar.gz containing both your message and message signature.

  2. Encrypt the archive using AWS KMS. To do so, issue the following command:
    aws kms encrypt \
        --key-id alias/sample-encrypt-decrypt-key \
        --encryption-algorithm RSAES_OAEP_SHA_256 \
        --plaintext fileb://message.tar.gz \
        --output text \
        --query CiphertextBlob | base64 --decode > message.enc
    

    This will output a message.enc file containing an encrypted copy of the message.tar.gz archive.

Decrypting and verifying a sample message

Now that you’ve created, signed, and encrypted a message, let’s change gears and see what working with this message.enc file is like from the perspective of a receiving party. In the final phase of this tutorial you assume the role of receiver and use your asymmetric CMKs to decrypt the encrypted message archive and verify the digital signature that you created. Finally, you will view your message. The process is shown in the following figure.
 

Figure 6: Decrypting a message archive and verifying the message signature

Assume the receiver role

Assume the receiver role so that you can simulate receiving a signed and encrypted message. As before, in order to assume the receiver role, you must authenticate as an IAM principal that has permission to perform sts:AssumeRole. If the principal you are authenticated as lacks this permission, you will not be able to assume the receiver role.

To assume the receiver role

  1. Copy the message.enc file to a new directory to create a clean working space and navigate there in a terminal session.
  2. Assume your receiver role. To do so, execute the following command, replacing the account ID value of 111122223333 with your own:
    aws sts assume-role \
        --role-arn arn:aws:iam::111122223333:role/ReceiverRole \
        --role-session-name AWSCLI-Session
    

  3. The return value for this command provides an access key ID, secret key, and session token. Substitute them into their respective places in the following commands and execute:
    export AWS_ACCESS_KEY_ID=ExampleAccessKeyID1
    export AWS_SECRET_ACCESS_KEY=ExampleSecretKey1
    export AWS_SESSION_TOKEN=ExampleSessionToken1
    

  4. Confirm that you have successfully assumed the receiver role by issuing:
    aws sts get-caller-identity
    

If the output of this command contains the text assumed-role/ReceiverRole then you have successfully assumed the receiver role.

Decrypt the encrypted message archive in AWS KMS

Decrypt the encrypted message archive to access the plaintext of the message and message signature files.

To decrypt the encrypted message archive

  1. Issue the following command:
    aws kms decrypt \
        --key-id alias/sample-encrypt-decrypt-key \
        --ciphertext-blob fileb://message.enc \
        --encryption-algorithm RSAES_OAEP_SHA_256 \
        --output text \
        --query Plaintext | base64 --decode > message.tar.gz
    

  2. This will create an unencrypted message.tar.gz file that you can unpack with:
    tar -xzvf message.tar.gz
    

This, in turn, will extract the archive contents, message.sig and message.json, into your working directory.

Verify the message signature

To verify the signature on the message issue the following command:

aws kms verify \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --message fileb://message.json \
    --signing-algorithm ECDSA_SHA_512 \
    --signature fileb://message.sig

In the response, you should see that SignatureValid is marked true, indicating that the signature has been verified using the specified sample-sign-verify-key CMK, which you granted the sender principal permission to generate signatures with.

View the message

Finally, open message.json and view the file’s contents by issuing the following command:

less message.json

You will see that the contents of the file have not been modified and still read:

{ 
    "message": "The Magic Words are Squeamish Ossifrage", 
    "sender": "Sender Principal" 
}

Note: Be careful to avoid making any changes to the contents of this file. Even a minor modification of the message contents will compromise the integrity of the message and cause future attempts at signature validation using your message.sig file to fail.

Summary

In this tutorial, you signed and encrypted data using two AWS KMS asymmetric CMKs and later decrypted and verified your signature using those CMKs.

You first created two asymmetric CMKs in AWS KMS, one for creating and verifying digital signatures and the other for encrypting and decrypting data. You then configured key policy permissions for your sender and receiver principals. Acting as your sender principal, you digitally signed a message in AWS KMS, added the message and signature to an archive and then encrypted that archive in AWS KMS. Next you assumed your receiver role and decrypted the archive in AWS KMS, viewed your message, and verified its signature in AWS KMS.

To learn more about the asymmetric keys feature of AWS KMS, please read the AWS KMS Developer Guide. If you have questions about the asymmetric keys feature, please start a new thread on the AWS KMS forum. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

J.D. Bean

J.D. is a Senior Solutions Architect at AWS working with public sector organizations and financial institutions based out of New York City. His interests include security, privacy, and compliance. He is passionate about his work enabling AWS customers’ successful cloud journeys. J.D. holds a Bachelor of Arts from The George Washington University and a Juris Doctor from New York University School of Law.

Verified, episode 2 – A Conversation with Emma Smith, Director of Global Cyber Security at Vodafone

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/verified-episode-2-conversation-with-emma-smith-director-of-global-cyber-security-at-vodafone/

Over the past 8 months, it’s become more important for us all to stay in contact with peers around the globe. Today, I’m proud to bring you the second episode of our new video series, Verified: Presented by AWS re:Inforce. Even though we couldn’t be together this year at re:Inforce, our annual security conference, we still wanted to share some of the conversations with security leaders that would have taken place at the conference. The series showcases conversations with security leaders around the globe. In episode two, I’m talking to Emma Smith, Vodafone’s Global Cyber Security Director.

Vodafone is a global technology communications company with an optimistic culture. Their focus is connecting people and building the digital future for society. During our conversation, Emma detailed how the core values of the Global Cyber Security team were inspired by the company. “We’ve got a team of people who are ultimately passionate about protecting customers, protecting society, protecting Vodafone, protecting all of our services and our employees.” Emma shared experiences about the evolution of the security organization during her past 5 years with the company.

We were also able to touch on one of Emma’s passions, diversity and inclusion. Emma has worked to implement diversity and drive a policy of inclusion at Vodafone. In June, she was named Diversity Champion in the SC Awards Europe. In her own words: “It makes me realize that my job is to smooth the way for everybody else and to try and remove some of those obstacles or barriers that were put in their way… it means that I’m really passionate about trying to get a very diverse team in security, but also in Vodafone, so that we reflect our customer base, so that we’ve got diversity of thinking, of backgrounds, of experience, and people who genuinely feel comfortable being themselves at work—which is easy to say but really hard to create that culture of safety and belonging.”

Stay tuned for future episodes of Verified: Presented by AWS re:Inforce here on the AWS Security Blog. You can watch episode one, an interview with Jason Chan, Vice President of Information Security at Netflix on YouTube. If you have an idea or a topic you’d like covered in this series, please drop us a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

How to secure your Amazon WorkSpaces for external users

Post Syndicated from Olivia Carline original https://aws.amazon.com/blogs/security/how-to-secure-your-amazon-workspaces-for-external-users/

In response to the current shift towards a remote workforce, companies are providing greater access to corporate applications from a range of different devices. Amazon WorkSpaces is a desktop-as-a-service solution that can be used to quickly deploy cloud-based desktops to your external users, including employees, third-party vendors, and consultants. Amazon WorkSpaces desktops are accessible from anywhere with an internet connection. In this blog post, I review some key security controls that you can use to architect your Amazon WorkSpaces environment to provide external users access to your corporate applications and data in a way that satisfies your unique security and compliance objectives.

Amazon WorkSpaces provides a virtual desktop infrastructure that removes the need for upfront infrastructure expenditure. Instead, you can pay for Windows or Linux desktop environments as you need them. These environments can be provisioned in a few minutes, and they enable you to scale up to thousands of desktops that can be accessed from wherever your users are located.

As part of the shared responsibility model, security is a shared responsibility between Amazon Web Services (AWS) and you. AWS is responsible for protecting the infrastructure that runs the AWS services while you are responsible for securing your data in AWS through appropriate permissions and WorkSpace management as outlined in the Best Practices for Deploying Amazon WorkSpaces whitepaper. Amazon WorkSpaces has been independently assessed to meet the requirements of a wide range of compliance programs, including IRAP, SOC, PCI DSS, FedRAMP, and HIPAA.

Prerequisites

Define user groups

A user group is a collection of people who all have the same security rights and permissions. Leveraging user groups helps you to identify the types of access and your requirements for user authentication. How you define your user groups should reflect how you classify your data and the access controls associated with the classifications. A common approach is to begin by separating your internal (employees) and external (vendors and consultants) users. Classifying your users into different groups helps you to define your security controls. For example, the security and configuration of your external users’ devices will be different from the configuration for your internal users’ devices. The identification process also helps to ensure that you’re following the principle of least privilege by limiting access to certain applications or resources. These user groups are the building blocks for designing the rest of your security controls, including the directories, access controls, and security groups.

In this blog post, I walk you through the security configurations for the following example external user groups. How you configure security for your user groups will depend on your own security requirements.

Example user groups

Internal users: Employees who need access to company resources from any location. In addition to having access to the internet and the internal network from any supported device, internal users have administrator access on their virtual desktops so they can install applications.

External users: Third-party vendors and consultants who need access to specific websites that are inside the corporate network. They have fewer permissions and tighter guardrails on their virtual desktops and can only access resources through trusted devices. External users should have access to only pre-installed applications and not be able to install additional applications onto their WorkSpaces.

At this stage, it’s okay to separate your user groups broadly based on the preceding requirements. Later, you can configure fine-grained access controls for individual users.

Configure your directories

Amazon WorkSpaces uses directories to manage information and configuration of WorkSpaces and users. Each WorkSpace that you provision exists within a directory. There are a couple of different options for configuring the directory. Amazon WorkSpaces can create and manage a directory for you so that users are entered into that directory when you provision a WorkSpace. As an alternative, you can integrate WorkSpaces with an existing, on-premises Microsoft Active Directory (AD) so your users can use the credentials they already know to access applications.

Directories play a large part in how access to WorkSpaces is configured. Based on the preceding two example user groups, let’s split your users’ WorkSpaces across two directories. That will help you to establish different access control settings for the two groups.

To define the two directories, you must set up the directories within AWS Directory Service. As previously mentioned, there are various approaches to handling user management that depend on your existing user directories and requirements. For this example, you can configure two simple Active Directories—one for internal users and one for external users. Handling the external users in a separate directory allows you to ensure your user groups are configured with least privilege. With this approach, external users can still be given access to objects inside the internal directory through a trust if required but can be configured with stricter access controls than users inside the internal directory.

A comprehensive guide to setting up your directories is available in the Amazon WorkSpaces administration guide and outlines the steps to configure a directory using AWS Managed Microsoft AD, Simple AD, or AD Connector.
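
As one possible starting point, the following AWS CLI sketch creates a Simple AD directory for the external users and registers it with WorkSpaces. The directory name, password, VPC, subnet IDs, and directory ID are placeholders; repeat with different values for the internal directory:

# Create a Simple AD directory for external users
aws ds create-directory \
    --name external.example.com \
    --short-name EXTERNAL \
    --password 'Ex@mplePassw0rd!' \
    --size Small \
    --description "Directory for external WorkSpaces users" \
    --vpc-settings VpcId=vpc-1111aaaa,SubnetIds=subnet-2222bbbb,subnet-3333cccc

# Register the directory with Amazon WorkSpaces once it is active
aws workspaces register-workspace-directory \
    --directory-id d-1111111111 \
    --no-enable-work-docs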

Configure security settings

After you define what privileges and access controls you want in place for your external users and configure the directories you need, it’s time to establish the security controls for your WorkSpaces. This blog will focus on the external users’ security configurations from the prerequisites. Use the following steps to implement the security requirements:

  1. Establish security groups
  2. Disable local administrator rights
  3. Configure IP access control groups
  4. Define trusted devices
  5. Configure monitoring of WorkSpaces

Establish security groups

With your two AD directories configured, you can start implementing the security controls for your external users. Your Amazon WorkSpaces are configured within a logically isolated network known as Amazon Virtual Private Cloud (VPC). A key concept within Amazon VPC is security groups, which act as virtual firewalls to control inbound and outbound traffic to the virtual desktops. A properly configured security group can limit access to resources in your network or to the internet at the individual WorkSpace level or at the directory level.

To ensure that your external users can access only the network resources you want them to, you can define security groups with restrictive network access settings. One approach is to configure security groups so that your external users have only HTTP and HTTPS access to specific internal websites at trusted IP addresses. To define more fine-grained access control for individual users, you can define another restrictive security group and attach it to an individual user’s WorkSpace. This way, you can use a single directory to handle many different users with different network security requirements and ensure that third-party users only have access to authorized data and systems. In addition to security groups, you can use your preferred host-based firewall on a given WorkSpace to limit network access to resources within the VPC.

To establish and configure security groups

  1. In the Amazon WorkSpaces menu, select Directories from the left menu. Choose the directory you created for your external users. Select Actions and then Update Details as shown in the following figure.
     
    Figure 1: Updating details of your directory

  2. In the Update Directory Details screen that appears, select the down arrow next to Security Group to expand the section. Select Create New next to the dropdown menu to configure a new security group.
     
    Figure 2: Adding a security group to your directory

  3. In the next window, select Create security group.
  4. Enter a descriptive name for the security group in Security group name and a description in Description. For example, the name could be external-workspaces-users-sg.
  5. Change the VPC using the dropdown menu to the VPC hosting the WorkSpaces.
  6. In the Inbound rules section, leave the rules as default. The default configuration blocks everything except for sessions that have already been established from the WorkSpace.
  7. In the Outbound rules section, configure the following settings:
    1. Select Delete the existing outbound rule.
    2. Select Add rule.
    3. Set Type to HTTP.
    4. Leave Protocol as TCP and Port range as 80.
    5. Change Destination to Custom and enter the appropriate range based on where your internal resources are located.
    6. Select Add rule again.
    7. Set Type to HTTPS.
    8. Leave Protocol as TCP and Port range as 443.
    9. Change Destination to Custom and enter the appropriate range based on where your internal resources are located.
    Figure 3: Configuring your security groups

  8. Select Create security group.
  9. Return to the WorkSpaces directory tab and select Refresh to see the newly created security group.
  10. Select Update and Exit.
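
If you prefer to script the security group rather than use the console, the equivalent Amazon EC2 calls look roughly like the following. The VPC ID, the returned group ID, and the internal CIDR range (10.0.0.0/16) are placeholders:

# Create the security group for external users' WorkSpaces
aws ec2 create-security-group \
    --group-name external-workspaces-users-sg \
    --description "Restrictive egress for external WorkSpaces users" \
    --vpc-id vpc-1111aaaa

# Remove the default allow-all outbound rule
aws ec2 revoke-security-group-egress --group-id sg-2222bbbb \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'

# Allow HTTP and HTTPS outbound only to the internal range
aws ec2 authorize-security-group-egress --group-id sg-2222bbbb \
    --ip-permissions 'IpProtocol=tcp,FromPort=80,ToPort=80,IpRanges=[{CidrIp=10.0.0.0/16}]' \
                     'IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges=[{CidrIp=10.0.0.0/16}]'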

Disable local administrator rights

One of the recommendations for external users is to disable the local administrator setting on their WorkSpaces and provide them with access to only specific, preinstalled applications. This guardrail helps to ensure that external users have limited permissions and reduces the risk that they might access or share sensitive information. If local administrator isn’t disabled, users can install applications and modify settings on their WorkSpaces. You can disable local administrator access from within the external users’ directory. Changes to the directory are applied to all new WorkSpaces that you create and can be applied to existing WorkSpaces by rebuilding them after making the changes.

Note: If your internal users don’t need local administrator access, it’s a best practice to follow the principle of least privilege and disable it for them as well.
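
If you prefer to make this change with the AWS SDK rather than the console, the following is a minimal boto3 sketch; the directory ID is a hypothetical placeholder. Remember that existing WorkSpaces must be rebuilt to pick up the change.

import boto3

workspaces = boto3.client("workspaces")

# Disable local administrator rights for new WorkSpaces in this directory.
workspaces.modify_workspace_creation_properties(
    ResourceId="d-1234567890",  # hypothetical directory ID
    WorkspaceCreationProperties={"UserEnabledAsLocalAdministrator": False},
)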

To disable local administrator rights for external users

  1. In the Amazon WorkSpaces menu, select Directories from the left menu. Choose the directory you configured for your external users.
  2. Select Actions and then Update Details.
  3. In Update Directory Details, select Local Administrator Setting and choose the Disable radio button.
  4. Select Update and Exit as shown in the following figure.
     
    Figure 4: Disabling your local administrator setting

Define IP access control

The security groups you defined previously allow external users to access company resources only from inside the corporate network. You can enhance this configuration by using IP access control groups to limit the locations that traffic to WorkSpaces can come from. An IP access control group acts as a virtual firewall and filters access to WorkSpaces by controlling the source classless inter-domain routing (CIDR) ranges that users can access their WorkSpaces from. Each IP access control group consists of a set of rules that specify a permitted IP address or range of addresses from which Amazon WorkSpaces can be accessed. Using this feature, you can configure rules that permit access to your WorkSpaces only if it comes from your company’s VPN. To achieve this control, you define rules that specify the ranges of IP addresses for your trusted networks within IP access control groups associated with the external users’ directory.

Note: Currently, only IPv4 addresses are permitted.

To define IP access control

  1. Inside the Amazon WorkSpaces page, select IP Access Controls on the left panel. Select Create IP Group and enter a Group Name and Description in the window that appears.
  2. Select Create as shown in the following figure.
     
    Figure 5: Creating an IP group

  3. Select the box next to the IP group you just created to open the new rules form.
  4. Select Add Rule.
  5. In Source, enter the individual IP addresses or CIDR ranges that you want to allow access to WorkSpaces from. If you want to restrict access to your VPN, make sure to add the public IP addresses of the VPN. Enter a description in Description.
  6. Select Save as shown in the following figure.
     
    Figure 6: Adding rules to your IP group
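
The same IP access control configuration can be scripted. The following boto3 sketch uses a hypothetical group name, a documentation-example VPN CIDR range, and a placeholder directory ID; replace them with your own values.

import boto3

workspaces = boto3.client("workspaces")

# Create the IP access control group with the public IP range of your VPN.
group = workspaces.create_ip_group(
    GroupName="external-users-ip-group",
    GroupDesc="Allow access only from the corporate VPN",
    UserRules=[{"ipRule": "203.0.113.0/24", "ruleDesc": "Corporate VPN egress range"}],
)

# Associate the group with the external users' directory.
workspaces.associate_ip_groups(
    DirectoryId="d-1234567890",  # hypothetical directory ID
    GroupIds=[group["GroupId"]],
)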

Configure trusted devices

Regulating the devices that can connect to your WorkSpaces can help reduce the risk of unauthorized access to your network and applications. By default, Amazon WorkSpaces users can access their virtual desktop from any supported device that has internet connectivity. However, it’s a good practice to configure additional guardrails that limit external users to accessing their WorkSpaces only through trusted devices, also known as managed devices (currently, this feature applies only to the Amazon WorkSpaces Windows and macOS clients). With this feature enabled, only devices that have been authenticated through a certificate-based approach have access to WorkSpaces. If the WorkSpaces client application can’t verify that a device is trusted, it blocks attempts to log in or connect from the device.

Note: If you haven’t already configured certificates, you will need to follow the steps in the Amazon WorkSpaces Administration Guide that walk through the certificate requirements and the process to generate one.

To configure trusted devices

  1. In the Amazon WorkSpaces menu, select Directories in the left menu. After selecting the directory that has been configured for your external users, select Actions and then Update Details.
  2. In Update Directory Details, select Access Control Options. Select Allow next to Windows and macOS to allow only trusted Windows and macOS devices to access WorkSpaces.
  3. Select Import to import your root certificate.
  4. Next to Other Platforms, select Block so that only Windows and macOS devices will have access.
  5. Select Update and Exit.
     
    Figure 7: Establishing trusted devices

  6. Test your settings by trying to access one of your WorkSpaces from a trusted device and from a non-trusted device.

Use Amazon CloudWatch to monitor your WorkSpaces

Once the guardrails for your external users have been set up, it’s important to monitor your environment for suspicious behavior and potential threats. Monitoring your infrastructure should be a fundamental part of your security plan. Amazon WorkSpaces is natively integrated with Amazon CloudWatch, which you can use to gather and analyze metrics to gain visibility into individual WorkSpaces and at the directory level. Alongside metrics, you can use Amazon CloudWatch Events to gain visibility into your Amazon WorkSpaces fleet, so that you can view, filter, and respond to logins to your WorkSpaces. This approach lets you build a thorough monitoring pipeline that enhances your security by filtering and automatically responding to suspicious activity in real time. A comprehensive example of this approach is outlined in this blog post, which covers the steps involved in setting up a CloudWatch-based monitoring system for your WorkSpaces.
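
As one illustration of this approach, the following boto3 sketch creates a CloudWatch Events rule that forwards WorkSpaces login events to a notification topic. The rule name and SNS topic ARN are hypothetical, and the "WorkSpaces Access" detail type is the event type documented for WorkSpaces client logins; verify it against the current documentation before relying on it.

import json
import boto3

events = boto3.client("events")

# Match WorkSpaces client login events.
events.put_rule(
    Name="external-workspaces-logins",
    EventPattern=json.dumps({
        "source": ["aws.workspaces"],
        "detail-type": ["WorkSpaces Access"],
    }),
    State="ENABLED",
)

# Send matching events to an SNS topic (the topic's access policy must allow
# CloudWatch Events to publish to it).
events.put_targets(
    Rule="external-workspaces-logins",
    Targets=[{
        "Id": "notify-security-team",
        "Arn": "arn:aws:sns:us-east-1:111122223333:workspaces-logins",  # hypothetical
    }],
)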

Conclusion

While you’ve used Amazon WorkSpaces features to help provide secure access for your external users, it’s also important to implement the principle of least privilege across all WorkSpaces users. You can use the design considerations and procedures in this blog post to help secure your WorkSpaces for all users, internal and external. You can learn more about best practices for securing your Amazon WorkSpaces by reading the Best Practices for Deploying Amazon WorkSpaces whitepaper to understand other features and capabilities that are available.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon WorkSpaces forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Olivia Carline

Olivia is an Associate Solutions Architect working in the public sector team. In her role she enjoys helping customers up-skill and build their cloud knowledge with a particular focus on cloud security. In her free time, you can find her exploring local hiking tracks and trying out new recipes.

Integrating CloudEndure Disaster Recovery into your security incident response plan

Post Syndicated from Gonen Stein original https://aws.amazon.com/blogs/security/integrating-cloudendure-disaster-recovery-into-your-security-incident-response-plan/

An incident response plan (also known as procedure) contains the detailed actions an organization takes to prepare for a security incident in its IT environment. It also includes the mechanisms to detect, analyze, contain, eradicate, and recover from a security incident. Every incident response plan should contain a section on recovery, which outlines scenarios ranging from single component to full environment recovery. This recovery section should include disaster recovery (DR), with procedures to recover your environment from complete failure. Effective recovery from an IT disaster requires tools that can automate preparation, testing, and recovery processes. In this post, I explain how to integrate CloudEndure Disaster Recovery into the recovery section of your incident response plan. CloudEndure Disaster Recovery is an Amazon Web Services (AWS) DR solution that enables fast, reliable recovery of physical, virtual, and cloud-based servers on AWS. This post also discusses how you can use CloudEndure Disaster Recovery to reduce downtime and data loss when responding to a security incident, and best practices for maintaining your incident response plan.

How disaster recovery fits into a security incident response plan

The AWS Well-Architected Framework security pillar provides guidance to help you apply best practices and current recommendations in the design, delivery, and maintenance of secure AWS workloads. It includes a recommendation to integrate tools to secure and protect your data. A secure data replication and recovery tool helps you protect your data if there’s a security incident and return quickly to normal business operations as you resolve the incident. The recovery section of your incident response plan should define recovery point objectives (RPOs) and recovery time objectives (RTOs) for your DR-protected workloads. RPO is the maximum window of time during which data loss can be tolerated due to a disruption. RTO is the maximum amount of time permitted to recover workloads after a disruption.

Your DR response to a security incident can vary based on the type of incident you encounter. For example, your DR plan for responding to a security incident such as ransomware—which involves data corruption—should describe how to recover workloads on your secondary DR site using a recovery point prior to the data corruption. This use case will be discussed further in the next section.

In addition to tools and processes, your security incident response plan should define the roles and responsibilities necessary during an incident. This includes the people and roles in your organization who perform incident mitigation steps, in addition to those who need to be informed and consulted. This can include technology partners, application owners, or subject matter experts (SMEs) outside of your organization who can offer additional expertise. DR-related roles for your incident response plan include:

  • A person who analyzes the situation and provides visibility to decision-makers.
  • A person who decides whether or not to trigger a DR response.
  • A person who actively triggers the DR response.

Be sure to include all of the stakeholders you identify in your documented security incident response procedures and runbooks. Test your plan to verify that the people in these roles have the pre-provisioned access they need to perform their defined role.

How to use CloudEndure Disaster Recovery during a security incident

CloudEndure Disaster Recovery continuously replicates your servers—including OS, system state configuration, databases, applications, and files—to a staging area in your target AWS Region. The staging area contains low-cost resources automatically provisioned and managed by CloudEndure Disaster Recovery. This reduces the cost of provisioning duplicate resources during normal operation. Your fully provisioned recovery environment is launched only during an incident or drill.

If your organization experiences a security incident that can be remediated using DR, you can use CloudEndure Disaster Recovery to perform failover to your target AWS Region from your source environment. When you perform failover, CloudEndure Disaster Recovery orchestrates the recovery of your environment in your target AWS Region. This enables quick recovery, with RPOs of seconds and RTOs of minutes.

To deploy CloudEndure Disaster Recovery, you must first install the CloudEndure agent on the servers in your environment that you want to replicate for DR, and then initiate data replication to your target AWS Region. Once data replication is complete and your data is in sync, you can launch machines in your target AWS Region from the CloudEndure User Console. CloudEndure Disaster Recovery enables you to launch target machines in either Test Mode or Recovery Mode. Your launched machines behave the same way in either mode; the only difference is how the machine lifecycle is displayed in the CloudEndure User Console. Launch machines by opening the Machines page, shown in the following figure, and selecting the machines you want to launch. Then select either Test Mode or Recovery Mode from the Launch Target Machines menu.
 

Figure 1: Machines page on the CloudEndure User Console

You can launch your entire environment, a group of servers comprising one or more applications, or a single server in your target AWS Region. When you launch machines from the CloudEndure User Console, you’re prompted to choose a recovery point from the Choose Recovery Point dialog box (shown in the following figure).

Use point-in-time recovery to respond to security incidents that involve data corruption, such as ransomware. Your incident response plan should include a mechanism to determine when data corruption occurred. Knowing how to determine which recovery point to choose in the CloudEndure User Console helps you minimize response time during a security incident. Each recovery point is a point-in-time snapshot of your servers that you can use to launch recovery machines in your target AWS Region. Select the latest recovery point before the data corruption to restore your workloads on AWS, and then choose Continue With Launch.
 

Figure 2: Selection of an earlier recovery point from the Choose Recovery Point dialog box

Run your recovered workloads in your target AWS Region until you’ve resolved the security incident. When the incident is resolved, you can perform failback to your primary environment using CloudEndure Disaster Recovery. You can learn more about CloudEndure Disaster Recovery setup, operation, and recovery by taking this online CloudEndure Disaster Recovery Technical Training.

Test and maintain the recovery section of your incident response plan

Your entire incident response plan must be kept accurate and up to date in order to effectively remediate security incidents if they occur. A best practice for achieving this is through frequently testing all sections of your plan, including your tools. When you first deploy CloudEndure Disaster Recovery, begin running tests as soon as all of your replicated servers are in sync on your target AWS Region. DR solution implementation is generally considered complete when all initial testing has succeeded.

By correctly configuring the networking and security groups in your target AWS Region, you can use CloudEndure Disaster Recovery to launch a test workload in an isolated environment without impacting your source environment. You can run tests as often as you want. Tests don’t incur additional fees beyond payment for the fully provisioned resources generated during tests.

Testing involves two main components: launching the machines you wish to test on AWS, and performing user acceptance testing (UAT) on the launched machines.

  1. Launch machines to test.
     
    Select the machines to test from the Machines page of the CloudEndure User Console by selecting the check box next to the machine. Then choose Test Mode from the Launch Target Machines menu, as shown in the following figure. You can select the latest recovery point or an earlier recovery point.
     
    Figure 3: Select Test Mode to launch selected machines

     

    The following figure shows the CloudEndure User Console. The Disaster Recovery Lifecycle column shows that the machines have been Tested Recently.

    Figure 4: Machines launched in Test Mode display purple icons in the Status column and Tested Recently in the Disaster Recovery Lifecycle column

  2. Perform UAT testing.
     
    Begin UAT testing when the machine launch job is successfully completed and your target machines have booted.

After you’ve successfully deployed, configured, and tested CloudEndure Disaster Recovery on your source environment, add it to your ongoing change management processes so that your incident response plan remains accurate and up-to-date. This includes deploying and testing CloudEndure Disaster Recovery every time you add new servers to your environment. In addition, monitor for changes to your existing resources and make corresponding changes to your CloudEndure Disaster Recovery configuration if necessary.

How CloudEndure Disaster Recovery keeps your data secure

CloudEndure Disaster Recovery has multiple mechanisms to keep your data secure and to avoid introducing new security risks. Data replication is performed using AES 256-bit encryption in transit. Data at rest can be encrypted by using Amazon Elastic Block Store (Amazon EBS) encryption with an AWS managed key or a customer managed key. Amazon EBS encryption is supported by all volume types, and includes built-in key management infrastructure that has no performance impact. Replication traffic is transmitted directly from your source servers to your target AWS Region, and can be restricted to private connectivity such as AWS Direct Connect or a VPN. CloudEndure Disaster Recovery is ISO 27001 and GDPR compliant and HIPAA eligible.
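
Independently of CloudEndure Disaster Recovery, you can also opt in to Amazon EBS encryption by default in your target AWS Region so that volumes created during recovery are encrypted at rest. The following is a minimal boto3 sketch; the Region and KMS key ARN are hypothetical placeholders.

import boto3

# Run this in your target (recovery) AWS Region.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Encrypt all new EBS volumes in this Region by default.
ec2.enable_ebs_encryption_by_default()

# Optionally use a customer managed KMS key instead of the AWS managed key.
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical
)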

Summary

Each organization tailors its incident response plan to meet its unique security requirements. As described in this post, you can use CloudEndure Disaster Recovery to improve your organization’s incident response plan. I also explained how to recover from an earlier point in time when you respond to security incidents involving data corruption, and how to test your servers as part of maintaining the DR section of your incident response plan. By following the guidance in this post, you can improve your IT resilience and recover more quickly from security incidents. You can also reduce your DR operational costs by avoiding duplicate provisioning of your DR infrastructure.

Visit the CloudEndure Disaster Recovery product page if you would like to learn more. You can also view the AWS Raise the Bar on Data Protection and Security webinar series for additional information on how to protect your data and improve IT resilience on AWS.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Gonen Stein

Gonen is the Head of Product Strategy for CloudEndure, an AWS company. He combines his expertise in business, cloud infrastructure, storage, and information security to assist enterprise organizations with developing and deploying IT resilience and business continuity strategies in the cloud.

AWS Security Profiles: Cassia Martin, Senior Security Solutions Architect

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profiles-cassia-martin-senior-security-solutions-architect/

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting, and get a sneak peek at their work.


How long have you been at AWS and what do you do in your current role?

I’ve been at Amazon for nearly 4 years, and at AWS for 2 years. I’m a solutions architect with a specialty in security. I work primarily with financial services customers, helping them solve security problems and build out secure foundations for their AWS workloads.

What’s your favorite part of your job?

Working in AWS feels like working in the future. My first job as a software engineer was fixing bugs in 20-year-old legacy C code and writing network support for SNMPv1. Now, I’m on the cutting edge of network design. When I work with my customers, I genuinely feel like I’m helping “Invent and Simplify” the future.

How did you get started in Security?

I’ve been interested in security since college. I took all the crypto and protocol courses in my computer science program from amazing professors like Radia Perlman and Michael Rabin. After college, I worked in software engineering. My real break into the security field came when I got to use my software engineering background to fix security vulnerabilities for Bank of America. After consulting across dozens of companies, I gained depth in application security, pen testing, code review, and architectural analysis. Over 10 years later, I’m using and extending those architectural analysis and AppSec skills to build and improve cloud architecture and design.

How do you explain what you do to non-technical friends or family?

“I work in computer security, helping your bank keep your online data safe and secure.” It’s true! If they are willing to hear more details, then I try to explain what the cloud is, and that you can design a network in good and bad ways to stop people from getting in.

One sad thing about not working for the Amazon.com side of the house is that I can no longer tell people that “I’m a security guard at a bookstore.” That also used to be true for me!

You’re presenting at re:Invent this year – can you give readers a sneak peek of what you’re covering?

Yes! I’ve put together a “Top 10” list to check the health of your AWS Identity foundation. I want every one of our customers to be thoughtful about how they authenticate their users and how they authorize access to their AWS resources. I’m going to talk about how to use account boundaries and AWS Organizations to build strong isolation controls, how to use roles and federation to secure login, and how to build and validate granular permissions that enable least privilege access across your network.

What are you hoping your audience will do differently after your session?

I’m giving you a list of what to do. I literally want you to take that list, one at a time, and ask yourself, “Am I doing this? If not, what would it take to do this?” I know that security can sometimes feel daunting, and in AWS, we all have access to dozens (or hundreds) of different tools you can use to build and layer your secure environment. So here is a short list to get started. I hope this will make it easier to build a strong foundation and use the tools that AWS is giving you.

From your perspective, what’s the biggest thing happening in Identity right now?

I am really excited about how tagging and Attribute Based Access Control (ABAC) can help with scaling. At a base level, Identity and Permissions are really easy. You just say “Becky should have access to the Unicorn database,” and AWS gives you powerful tools for writing a rule like that with our IAM service. But once you have not just Becky, but also Syed and Sean—and then 300 more people, 200 databases, and 1,000 S3 buckets—the sheer number of rules you have to write and keep track of gets hard. And it gets even harder for someone else to come and look at your rules afterwards and figure out if you’re doing it right.

With ABAC, you can now write a rule that says any person from team “red” can access any database that is tagged with “red.” That takes potentially hundreds of rules and collapses them into one easy-to-understand statement.

What is your favorite Leadership Principle at Amazon and why?

All the Amazon Leadership Principles highlight important facets of how to build successful organizations, but “Have Backbone: Disagree and Commit” is my favorite. It’s more than an LP; it’s a mechanism. It’s a way to build a system of people working toward a common goal, while still keeping our independent ideas and values. It gives us permission to disagree, while at the same time giving us a way out of stalemates and unfruitful perfectionism.

What’s the best career advice you’ve ever received?

My dad is a lifelong academic (who is secretly a little embarrassed that I never got a PhD). Growing up, I watched him in action: creating novel research, taking care of his grad students, and even running academic departments with all their bitter politics and conflicting goals.

Two things that he says about his highly successful career:

  1. The older I get and the more I learn, the less I am confident about anything.
  2. I have never accomplished anything by myself.

This perspective is antithetical, I think, to the standard American career ladder, and it’s been invaluable to me. In my career in tech, I’ve met a lot of brilliant people who know all the answers and tout all their personal accomplishments from any available rooftop. And that is absolutely one way to succeed. But I know intimately that there is another way that can also work, a way that is built on collaboration and scholarship, and constantly learning and questioning your knowledge.

If you could go back, what would you tell yourself at the beginning of your career?

I guess “don’t worry so much” is the least helpful advice ever… I’m sure I wouldn’t have been able to hear it at 22! But here is something I would have understood:

Little Cassia, you’re going to succeed at many things and fail at some things. But no matter what, every single job you tackle is going to teach you something important. You’re going to learn technical skills that will be useful when you least expect them, and you’re also going to learn more about yourself—what you want to do, who you want to surround yourself with, and what you need to thrive. Just keep trying, and I promise life will only keep getting better!

What are you most proud of in your career?

The last time I went to the DEF CON Security Conference, I attended not one, not two, but THREE different talks delivered by former mentees of mine. Getting to help these extraordinary people get started in application security, and then getting to watch them become ever more talented and exceed everything I knew, and then to watch them shine on stage—it was a privilege, and made so much pain worthwhile. Hey, I may not know anything about NFC penetration testing, but Katherine sure does, and she’s teaching the whole damned world.

Among your many degrees from Harvard University, you also have a BA in Ancient Greek. Tell us about that. What started your interest in it?

My love for Ancient Greek and Latin was fostered by some really amazing high school teachers. I went to the kind of boarding school where professors took care of you like family, and the mysterious Dr. Reyes and the two sophisticated Professors Myers took extraordinary care of my fumbling teenage heart and my raging intellectual curiosity. I had a little bit of an advantage in that I had already learned Modern Greek in grade school, since my hometown had a thriving Hellenic community. I have since completely forgotten both, but as my dear professors had me recite: “the shadow of lost knowledge at least protects you from many illusions.”

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Cassia Martin

Cassia is a Senior Security Solutions Architect based in New York City. She works with large financial institutions to solve security architecture problems and educates them on cloud tools and patterns.

Aligning IAM policies to user personas for AWS Security Hub

Post Syndicated from Vaibhawa Kumar original https://aws.amazon.com/blogs/security/aligning-iam-policies-to-user-personas-for-aws-security-hub/

AWS Security Hub provides you with a comprehensive view of your security posture across your accounts in Amazon Web Services (AWS) and gives you the ability to take action on your high-priority security alerts. There are several different user personas that use Security Hub, and they typically require different AWS Identity and Access Management (IAM) permissions. Those personas include: a security administrator; a security analyst or engineer on a central security team or in a Cloud Center of Excellence; and a Developer Operations (DevOps) engineer or application builder who is the primary owner of an AWS account. In this post, we show how to deploy sample IAM policies for these three personas.

The first persona, a security administrator or cloud system administrator (sysadmin), is responsible for setting up and configuring Security Hub, and they typically need access to all of the Security Hub APIs to do this work. As part of this work, the sysadmin enables Security Hub on various accounts and Regions, decides which standards and controls should be enabled on which accounts, enables product integrations, creates insights, sets up custom actions and automated remediations, and configures IAM policies for other users.

The second persona, a security analyst or engineer using Security Hub, is part of a central security team, often part of a Cloud Center of Excellence. We often see that cloud security efforts are centralized in Cloud Centers of Excellence within a company. These security analysts or engineers typically have access to the master account in Security Hub and can view and take action on findings from any of the connected member accounts. They typically are not configuring Security Hub, so they don’t need permissions to do so.

The third persona is a DevOps engineer or application builder. This user needs the ability to view findings and take action only on the findings associated with their account. For cloud workloads, security is often decentralized down to these users. Security Hub enables them to take more proactive responsibility for the security of their own account by directly viewing and taking action on findings in their account. They typically don’t need permissions to set up and configure Security Hub, because that is done by a central sysadmin.

Overview

The following reference architecture presents an overview of a Security Hub master-member account structure and three personas: a security administrator, a security analyst/engineer, and a DevOps engineer.
 

Figure 1: Reference architecture

In this blog post, we show how you can create and use the following AWS managed and customer managed IAM policies to support these three personas:

  • The sysadmin persona needs permissions to configure and manage Security Hub, account memberships, insights, and integrations, and to create remediations, take actions, and perform record and workflow updates. The AWS managed IAM policy called AWSSecurityHubFullAccess provides the permissions for this persona. An IAM user or role with these permissions can deploy and configure Security Hub in master and member accounts. They can also update findings. The sysadmin also requires permissions to configure AWS Config and Amazon CloudWatch event rules to set up automated responses and remediations.
  • The security analyst persona needs permissions to read, list, and describe findings, standards, controls, and products; to update findings; and to create and update insights for Security Hub resources in the master account. The AWS managed IAM policy called AWSSecurityHubReadOnlyAccess provides the permissions needed for read, list, and describe actions, and a customer managed policy will be attached to give permissions to create and update insights and to update findings.
  • The DevOps engineer persona needs the same permissions as the security analyst persona, but they will only have the ability to access their own Security Hub member AWS account(s) and won’t have access to the master account.

Depending on your specific use case, you might want to provide additional permissions to the security analyst and DevOps engineer personas. For example, you might want to also grant them permissions to create custom actions by using the UpdateActionTarget API. In that case, you should also ensure that they have appropriate permissions to create CloudWatch event rules. You can also restrict these personas to only be able to update certain fields in findings (for example, only update workflow status but not severity) by using IAM context keys.

Prerequisites

You must have already enabled Security Hub with one account as master and other associated accounts as members. You will use the following AWS services:

Implementation

To create the required customer managed policies and associate them to users and roles, you will perform these tasks, described in more detail later in this section:

  1. Create a customer managed policy and associate it with the user and role for the security analyst persona in the Security Hub master account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.
  2. Create a customer managed policy and associate it with the user and role for the DevOps persona in the Security Hub member account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.
  3. Create a sysadmin user and role and associate it with the AWS managed policy for AWSSecurityHubFullAccess, along with AWS Config and CloudWatch event rule permissions in the Security Hub master account.

The following JSON documents define those two customer managed policies.

Security Hub – Security Analyst policy (master account customer managed policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SecurityAnalystMasterCMP",
            "Effect": "Allow",
            "Action": [
                "securityhub:UpdateInsight",
                "securityhub:CreateInsight",
	  			"securityhub:BatchUpdateFindings"
            ],
            "Resource": "*"
        }
    ]
}

Security Hub – DevOps Engineer policy (member account customer managed policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DevOpsMemberCMP",
            "Effect": "Allow",
            "Action": [
                "securityhub:UpdateInsight",
                "securityhub:CreateInsight",
                "securityhub:BatchUpdateFindings"
            ],
            "Resource": "*"
        }
    ]
}

Step 1: Create the user and role for the security analyst persona

First, create a customer managed policy and associate it with the user and role for the security analyst persona in the Security Hub master account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.

To create the IAM policy, user, and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub master account and open the IAM console.
  2. In the IAM console navigation pane, choose Policies, and then choose Create policy.
  3. Choose the JSON tab.
  4. Copy the Security Hub – Security Analyst policy JSON shown earlier in this section, and paste it into the JSON editor, replacing the default content. When you are finished, choose Review policy.
  5. On the Review policy page, enter a name and a description (optional) for the policy that you’re creating. Review the policy summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.
  6. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role, and attach the policy you just created.
  7. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  8. Choose the permissions category Attach existing policies directly to filter and select the managed policy AWSSecurityHubReadOnlyAccess. Review the permissions and then choose Add permissions.
  9. Save all the changes and try using the user and role after a few minutes. If you prefer to script this setup, a sketch follows these steps.
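
The following boto3 sketch performs the same work as the console steps above: it creates the customer managed policy and attaches it, along with the AWSSecurityHubReadOnlyAccess managed policy, to a role. The role name is a hypothetical placeholder.

import json
import boto3

iam = boto3.client("iam")

analyst_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "SecurityAnalystMasterCMP",
        "Effect": "Allow",
        "Action": [
            "securityhub:UpdateInsight",
            "securityhub:CreateInsight",
            "securityhub:BatchUpdateFindings",
        ],
        "Resource": "*",
    }],
}

# Create the customer managed policy in the Security Hub master account.
created = iam.create_policy(
    PolicyName="SecurityAnalystMasterCMP",
    PolicyDocument=json.dumps(analyst_policy),
)

# Attach the customer managed and AWS managed policies to the analyst role.
role_name = "security-analyst-role"  # hypothetical role name
iam.attach_role_policy(RoleName=role_name, PolicyArn=created["Policy"]["Arn"])
iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn="arn:aws:iam::aws:policy/AWSSecurityHubReadOnlyAccess",
)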

Step 2: Create the user and role for the DevOps persona

Next, create a customer managed policy and associate it with the user and role for the DevOps persona in the Security Hub member account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.

To create the IAM policy, user, and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub member account and open the IAM console.
  2. In the IAM console navigation pane, choose Policies, and then choose Create policy.
  3. Choose the JSON tab.
  4. Copy the Security Hub – DevOps Engineer policy JSON shown earlier in this section, and paste it into the JSON editor, replacing the default content. When you are finished, choose Review policy.
  5. On the Review policy page, enter a name and a description (optional) for the policy that you are creating. Review the policy summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.
  6. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role, and attach the policy you just created.
  7. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  8. Choose the permissions category Attach existing policies directly to filter and select the managed policy AWSSecurityHubReadOnlyAccess. Review the permissions and then choose Add permissions.
  9. Save all the changes and try using the user and role after a few minutes.

Step 3: Create the user and role for the sysadmin persona

Next, create a user and role for the sysadmin persona and associate it with the AWS managed policies AWSSecurityHubFullAccess, CloudWatchEventsFullAccess, and full access to AWS Config in the Security Hub master account.

To create the IAM user and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub master account and open the IAM console.
  2. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role.
  3. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  4. Choose the permissions category Attach existing policies directly to filter and select managed policies, and attach the managed policies AWSSecurityHubFullAccess and CloudWatchEventsFullAccess.
  5. Create another policy to grant full access to AWS Config as described in the AWS Config Developer Guide, and attach the policy to the user or role. Review the permissions, and choose Add permissions.
  6. Save all the changes and try using the user and role after a few minutes.

Test the users and roles in the Security Hub master and member accounts

Finally, test the three users and roles that you created for the respective personas in the preceding steps.

To test the users and roles

  1. Security analyst user and role:
    1. Sign in to the AWS Management Console in the master account and open Security Hub.
    2. Make sure that Security Hub UI features (such as the Summary, Security Standard, Insights, Findings, and Integrations) are rendered so that the Security Analyst can view them.
    3. Navigate to a finding and change the workflow status as described in the topic Setting the workflow status for findings.
  2. DevOps engineer user and role:
    1. Sign in to the AWS Management Console in the member account and open Security Hub.
    2. Make sure that Security Hub UI features (such as the Summary, Security Standard, Insights, Findings, and Integrations) are rendered so that the DevOps Engineer can view them.
    3. Create a custom insight for the member account and other attribute groupings.
  3. Sysadmin user and role:
    1. Sign in to the AWS Management Console in the master or member account, and open Security Hub.
    2. Try admin-related operations such as create a custom action, invite another account, and so on.

Adding policy conditions

You might want to further restrict the permissions for the security analyst and DevOps personas. IAM policies for Security Hub’s BatchUpdateFindings API enable you to specify conditions to prevent a user from making any update to a specific finding field. The following example disallows setting the Workflow Status field to Suppressed.

{
	"Sid": "CMPCondition",
	"Effect": "Deny",
	"Action": "securityhub:BatchUpdateFindings",
	"Resource": "*",
	"Condition": {
		"StringEquals": {
			"securityhub:ASFFSyntaxPath/Workflow.Status": "SUPPRESSED"
		}
	}
}
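
For example, a security analyst with the permissions described earlier could set a finding's workflow status with a call like the following boto3 sketch; the finding ID and product ARN are placeholders. With the deny condition above attached, the same call with a status of SUPPRESSED would be rejected.

import boto3

securityhub = boto3.client("securityhub")

# Mark a finding as NOTIFIED; the identifiers below are placeholders.
securityhub.batch_update_findings(
    FindingIdentifiers=[{
        "Id": "arn:aws:securityhub:us-east-1:111122223333:subscription/example/finding/EXAMPLE",
        "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/securityhub",
    }],
    Workflow={"Status": "NOTIFIED"},
)

# With the deny condition attached, the equivalent call that sets
# Workflow={"Status": "SUPPRESSED"} would fail with an AccessDenied error.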

Summary

In this post, we showed you how to align AWS managed and customer managed IAM policies to different user personas, so that you can allow different users to access Security Hub with least privilege permissions. Security Hub enables both central security teams and individual DevOps engineers to understand and improve the security posture of the AWS accounts in their organization.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vaibhawa Kumar

Vaibhawa is a Cloud Infra Architect with AWS Professional Services. He helps customers on their cloud journey of critical workloads to the AWS cloud with infrastructure security and automations. In his free time, you can find him spending time with family, sports, and cooking.

Author

Sanjay Patel

Sanjay is a Senior Cloud Application Architect with AWS Professional Services. He has a diverse background in software design, enterprise architecture, and API integrations. He has helped AWS customers automate infrastructure security. He enjoys working with AWS customers to identify and implement the best fit solution.

Author

Ely Kahn

Ely is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

How to implement password-less authentication with Amazon Cognito and WebAuthn

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-implement-password-less-authentication-with-amazon-cognito-and-webauthn/

In this blog post, I show you how to offer a password-less authentication experience to your customers. To do this, you’ll allow physical security keys or platform authenticators (like finger-print scanners) to be used as the authentication factor to your web or mobile applications that use Amazon Cognito user pools for authentication.

An Amazon Cognito user pool is a user directory that Amazon Web Services (AWS) customers use to manage their customer identities. Web Authentication (WebAuthn) is a W3C standard that lets users authenticate to web applications using public-key cryptography. Using public-key cryptography enables you to implement a stronger authentication mechanism that’s less dependent on passwords.

Mobile and web applications can use WebAuthn together with browser and device support for the Client-To-Authenticator-Protocol (CTAP) to implement Fast ID Online (FIDO) authentication. To learn more about the flow of WebAuthn and CTAP, visit the FIDO Alliance.

How it works

Amazon Cognito user pools allow you to build a custom authentication flow that uses AWS Lambda functions to authenticate users based on one or more challenge/response cycles. You can use this flow to implement password-less authentication that is based on custom challenges. To use this flow to implement FIDO authentication, you need to create credentials during the registration phase and reference these credentials in the user’s profile. You can then validate these credentials during the authentication phase in a custom challenge.

During registration, a new set of credentials that is bound to your application (the relying party) is created through a FIDO authenticator, for example a platform authenticator with a biometric sensor or a roaming authenticator like a physical security key. The private key of this credential set remains on the authenticator; the public key, together with a credential identifier, is saved in a custom attribute that’s part of the user profile in Amazon Cognito.

During authentication, a Cognito custom authentication flow is used to implement authentication through a custom challenge. The application prompts the user to sign in using the authenticator that they used during registration. The response from the authenticator is then passed as a challenge response to Amazon Cognito and verified using the stored public key.

In this password-less flow, the private key never leaves the physical device, and the authenticator also validates that the relying party in the authentication request matches the relying party that was used to create the credentials. This combination provides a more secure authentication flow that uses stronger credentials, protects users from phishing, and provides a better user experience.

About the demo project

This blog post and the diagrams below explain a scenario that uses FIDO as the only authentication factor, to implement password-less authentication. To help with implementation details, I created a project to demonstrate WebAuthn integration with Amazon Cognito that provides sample code for three scenarios:

  • A scenario that uses FIDO as the only factor (password-less)
  • A scenario that uses FIDO as a second factor (with password)
  • A scenario that lets users sign-in with only a password

This project is only a demonstration and shouldn’t be used as-is in production environments. When using FIDO as authentication factor, it is a best practice to allow users to register multiple authenticators and you need to implement an account recovery workflow in case of a lost authenticator.

Implementing FIDO Authentication

Let’s take a deeper dive into the design and components involved in implementing this solution. To deploy this project in your development environment, follow the instructions in the WebAuthn integration with Amazon Cognito project.

Creating and configuring user pool

The first step is to create a Cognito user pool and triggers that orchestrate a custom authentication flow. You do that by deploying the CloudFormation stack that will create all resources as explained in the demo project.

A few implementation details to note about the user pool:

  • The template creates a user pool, app client, triggers, and Lambda functions to use for custom authentication.
  • The template creates a custom attribute called publicKeyCred. This is the custom attribute that holds a base64 encoded representation of the credential identifier and public key for the user’s authenticator.
  • The app client defines what authentication flows are allowed. You can limit allowed flows according to your use-case. To support FIDO authentication, you must allow CUSTOM_AUTH flow.
  • The app client has “write” permissions to the custom attribute publicKeyCred but not “read” permissions. This allows your application to write the attribute during registration or profile updates but excludes this attribute from the user’s id_token. Since this attribute is considered back-end data that is only used during authentication, it doesn’t need to be part of the user profile in the id_token. (A boto3 sketch of this configuration follows this list.)
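
The demo project provisions these resources through CloudFormation. As an illustration of the same configuration, the following boto3 sketch creates a user pool with the publicKeyCred custom attribute and an app client that allows CUSTOM_AUTH and can write, but not read, that attribute. The pool and client names are hypothetical, and the Lambda trigger configuration is omitted for brevity.

import boto3

cognito = boto3.client("cognito-idp")

# Create a user pool with the custom attribute that stores the credential
# identifier and public key produced by the authenticator.
pool = cognito.create_user_pool(
    PoolName="webauthn-demo-pool",  # hypothetical name
    Schema=[{
        "Name": "publicKeyCred",
        "AttributeDataType": "String",
        "Mutable": True,
        "StringAttributeConstraints": {"MaxLength": "2048"},
    }],
)

# Create an app client that allows the custom authentication flow and can
# write, but not read, the custom attribute.
cognito.create_user_pool_client(
    UserPoolId=pool["UserPool"]["Id"],
    ClientName="webauthn-demo-client",  # hypothetical name
    ExplicitAuthFlows=["ALLOW_CUSTOM_AUTH", "ALLOW_REFRESH_TOKEN_AUTH"],
    WriteAttributes=["custom:publicKeyCred"],
    ReadAttributes=["email"],
)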

User registration flow

The registration flow needs to create credentials using the authenticator and store the public-key in user’s profile. Let’s take a closer look at the sequence of calls and involved components to implement this flow.
 

Figure 1: WebAuthn user registration process

  1. The user navigates to your application, www.example.com (the relying party), and creates an account. A request is sent to the relying party to build a credentials options object and send it back to the browser. (In the demo project, this starts in the createCredentials function in webauthn-client.js, which creates the credentials options object by making a call to createCredRequest in authn.js.)
  2. The browser uses built-in WebAuthn APIs to create the new credentials with an available authenticator using the credentials options object that was created in first step. This is done by making a call to navigator.credentials.create API (this API is available in browsers and platforms that support FIDO and WebAuthn).
  3. The user experience in this step depends on the OS, browser, and the authenticator. For example, the browser could prompt the user to attach a security key or, on devices that support it, to use a biometric scanner.
     
    Figure 2: User registration and browser alert to use an authenticator in Firefox

  4. The user interacts with an authenticator (by touching a security key or scanning finger on a touch-id device), which generates new credentials bound to the relying party and returns a response object to the browser.
  5. The browser sends a credential response object to the relying party to parse and validate the response on the server side. The credential identifier and public key are extracted from the credential response. At this step, your application can also check additional authenticator data and use it to make authentication decisions. For example, your application can check whether the authenticator verified the user’s identity through a PIN or biometrics (UV flag) or only verified user presence (UP flag). In the demo project, this is still part of the createCredentials function, and server-side parsing and validation is done in parseCredResponse, which is implemented in authn.js.
  6. To complete the user registration, the browser passes the profile attributes that have been collected during registration through Amazon Cognito APIs as custom attributes. This step is performed in the signUp function in webauthn-client.js.

At the conclusion of this process, a new user is created in Amazon Cognito, and the custom attribute “publicKeyCred” is populated with a base64 encoded string that includes a credential identifier and the public key generated by the authenticator. This attribute isn’t considered secret or sensitive data; rather, it holds the public key that is used to verify the authenticator response during subsequent authentications.

User authentication flow

The following diagram describes the custom authentication flow to implement password-less authentication.
 

Figure 3: WebAuthn user authentication process

  1. The user provides their user name and selects the sign-in button. A script (running in the browser) starts the sign-in process by using the Amazon Cognito InitiateAuth API, passing the user name and indicating that the authentication flow is CUSTOM_AUTH. In the demo project, this part is performed in the signIn function in webauthn-client.js.
  2. The Amazon Cognito service passes control to the Define Auth Challenge Lambda trigger. The trigger determines that this is the first step in the authentication and returns CUSTOM_CHALLENGE as the next challenge to the user (a simplified sketch of this trigger follows this list).
  3. Control then moves to the Create Auth Challenge Lambda trigger to create the custom challenge. This trigger creates a random challenge (a 64-byte random string), extracts the credential identifier from the user profile (the value passed initially during the sign-up process), combines them, and returns them as a custom challenge to the client. This is performed in the CreateAuthChallenge Lambda function.
  4. The browser then prompts the user to activate an authenticator. At this stage, the authenticator verifies that credentials exist for the identifier and that the relying party matches the one that is bound to the credentials. This is implemented by making a call to the navigator.credentials.get API, which is available in browsers and devices that support FIDO2 and WebAuthn.
     
    Figure 4: Authentication and browser prompt to use a registered authenticator

  5. If credentials exist and the relying party is verified, the authenticator requests user attention or verification. Depending on the type of authenticator, user verification through biometrics or a PIN code is performed, and the credentials response is passed back to the browser.
     
    Figure 5: Authentication examples from different browsers and platforms

  6. The signIn function continues the sign-in process by calling the RespondToAuthChallenge API and sending the credentials response to Amazon Cognito.
  7. Amazon Cognito sends the response to the Verify Auth Challenge Lambda trigger. This trigger extracts the public key from the user profile, parses and validates the credentials response, and if the signature is valid, it responds with success. This is performed in VerifyAuthChallenge Lambda trigger.
  8. Lastly, Amazon Cognito sends the control again to Define Auth Challenge to determine the next step. If the results from Verify Auth Challenge indicate a successful response, authentication succeeds and Amazon Cognito responds with ID, access, and refresh tokens.
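
The demo project implements its triggers in Node.js. As an illustration of the orchestration logic only, here is a simplified Define Auth Challenge trigger written in Python; it is a sketch of the decision flow described in steps 2 and 8, not the demo project's code.

# A simplified Define Auth Challenge trigger (Python, for illustration).
def lambda_handler(event, context):
    session = event["request"]["session"]
    response = event["response"]

    if not session:
        # First invocation: ask the client to answer a custom challenge.
        response["issueTokens"] = False
        response["failAuthentication"] = False
        response["challengeName"] = "CUSTOM_CHALLENGE"
    elif (session[-1]["challengeName"] == "CUSTOM_CHALLENGE"
          and session[-1]["challengeResult"]):
        # The Verify Auth Challenge trigger validated the signature: issue tokens.
        response["issueTokens"] = True
        response["failAuthentication"] = False
    else:
        # Signature validation failed: fail the authentication attempt.
        response["issueTokens"] = False
        response["failAuthentication"] = True

    return event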

Conclusion

When building customer-facing applications, you as the application owner and developer need to balance security with usability. Reducing the risk of account takeover and phishing depends on using strong credentials and strong second factors, and on minimizing the role of passwords. The flexibility of the Amazon Cognito custom authentication flow, integrated with WebAuthn, offers a technical path to make this possible, in addition to offering a better user experience to your customers.

Check out the WebAuthn with Amazon Cognito project for code samples and deployment steps. Deploy it in your development environment to see this integration in action, and go build an awesome password-less experience in your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

AWS extends its MTCS Level 3 certification scope to cover United States Regions

Post Syndicated from Clara Lim original https://aws.amazon.com/blogs/security/aws-extends-its-mtcs-level-3-certification-scope-to-cover-united-states-regions/

We’re excited to announce the completion of the Multi-Tier Cloud Security (MTCS) Level 3 triennial certification in September 2020. The scope was expanded to cover the United States Amazon Web Services (AWS) Regions, excluding AWS GovCloud (US) Regions, in addition to Singapore and Seoul. AWS was the first cloud service provider (CSP) to attain the MTCS Level 3 certification in Singapore since 2014, and the services in scope have increased to 130—an approximately 27% increase since the last recertification audit in September 2019, and three times the number of services in scope since the last triennial audit in 2017. This provides customers with more services to choose from in these Regions.

MTCS was the world’s first cloud security standard to specify a management system for cloud security that covers multiple tiers, and it can be applied by CSPs to meet differing cloud user needs for data sensitivity and business criticality. The certified CSPs will be able to better specify the levels of security that they can offer to their users. CSPs can achieve this through third-party certification and a self-disclosure requirement for CSPs that covers service-oriented information normally captured in service level agreements. The different levels of security help local businesses to pick the right CSP, and use of MTCS is mandated by the Singapore government as a requirement for public sector agencies and regulated organizations.

MTCS has three levels of security, Level 1 being the base and Level 3 the most stringent:

  • Level 1 was designed for non–business critical data and systems with basic security controls, to counter certain risks and threats targeting low-impact information systems (for example, a website that hosts public information).
  • Level 2 addresses the needs of organizations that run their business-critical data and systems in public or third-party cloud systems (for example, confidential business data and email).
  • Level 3 was designed for regulated organizations with specific and more stringent security requirements. Industry-specific regulations can be applied in addition to the baseline controls, in order to supplement and address security risks and threats in high-impact information systems (for example, highly confidential business data, financial records, and medical records).

Benefits of MTCS certification

Singapore customers in regulated industries with the strictest security requirements can securely host applications and systems with highly sensitive information, ranging from confidential business data to financial and medical records with level 3 compliance.

Financial Services Industry (FSI) customers in Korea are able to accelerate cloud adoption without the need to validate 109 out of 141 controls as required in the relevant regulations (the Financial Security Institute’s Guideline on Use of Cloud Computing Services in the Financial Industry, and the Regulation on Supervision on Electronic Financial Transactions (RSEFT)).

With increasing cloud adoption across different industries, MTCS certification has the potential to provide assurance to customers globally now that the scope is extended beyond Singapore and Korea to the United States AWS Regions. This extension also provides an alternative for Singapore government agencies to leverage the AWS services that haven’t yet launched locally, and provides resiliency and recovery use cases as well.

You can now download the latest MTCS certificates and the MTCS Self-Disclosure Form in AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clara Lim

Clara is the Audit Program Manager for the Asia Pacific Region, leading multiple security certification programs. Clara is passionate about leveraging her decade-long experience to deliver compliance programs that provide assurance and build trust with customers.

New! Streamline existing IAM Access Analyzer findings using archive rules

Post Syndicated from Andrea Nedic original https://aws.amazon.com/blogs/security/new-streamline-existing-iam-access-analyzer-findings-using-archive-rules/

AWS Identity and Access Management (IAM) Access Analyzer generates comprehensive findings to help you identify resources that grant public and cross-account access. Now, you can also apply archive rules to existing findings, so you can better manage findings and focus on the findings that need your attention most.

You can think of archive rules as similar to email rules. You define email rules to automatically organize emails. With IAM Access Analyzer, you can define archive rules to automatically mark findings as intended access. Now, those rules can apply to existing as well as new IAM Access Analyzer findings. This helps you focus on findings for potential unintended access to your resources. You can then easily track and resolve these findings by reducing access, helping you to work towards least privilege.

In this post, first I give a brief overview of IAM Access Analyzer. Then I show you an example of how to create an archive rule to automatically archive findings for intended access. Finally, I show you how to update an archive rule to mark existing active findings as intended.

IAM Access Analyzer overview

IAM Access Analyzer helps you determine which resources can be accessed publicly or from other accounts or organizations. IAM Access Analyzer determines this by mathematically analyzing access control policies attached to resources. This form of analysis—called automated reasoning—applies logic and mathematical inference to determine all possible access paths allowed by a resource policy. This is how IAM Access Analyzer uses provable security to deliver comprehensive findings for potential unintended bucket access. You can enable IAM Access Analyzer in the IAM console by creating an analyzer for an account or an organization. Once you’ve created your analyzer, you can review findings for resources that can be accessed publicly or from other AWS accounts or organizations.

Create an archive rule to automatically archive findings for intended access

When you review findings and discover common patterns for intended access, you can create archive rules to automatically archive those findings. This helps you focus on findings for unintended access to your resources, just like email rules help streamline your inbox.

To create an archive rule

In the IAM console, choose Archive rules under Access Analyzer. Then, choose Create archive rule to display the Create archive rule page shown in Figure 1. There, you find the option to name the rule or use the name generated by default. In the Rule section, you define criteria to match properties of findings you want to archive. Just like email rules, you can add multiple criteria to the archive rule. You can define each criterion by selecting a finding property, an operator, and a value. To help ensure a rule doesn’t archive findings for public access, the criterion Public access is false is suggested by default.
 

Figure 1: IAM Access Analyzer create archive rule page where you add criteria to create a new archive rule

For example, I have a security audit role external to my account that I expect to have access to resources in my account. To mark that access as intended, I create a rule to archive all findings for Amazon S3 buckets in my account that can be accessed by the security audit role outside of the account. To do this, I include two criteria: Resource type matches S3 bucket, and the AWS Account value matches the security audit role ARN. Once I add these criteria, the Results section displays the list of existing active findings the archive rule matches, as shown in Figure 2.
 

Figure 2: A rule to archive all findings for S3 buckets in an account that can be accessed by the audit role outside of the account, with matching findings displayed

When you’re done adding criteria for your archive rule, select Create and archive active findings to archive new and existing findings based on the rule criteria. Alternatively, you can choose Create rule to create the rule for new findings only. In the preceding example, I chose Create and archive active findings to archive all findings—existing and new—that match the criteria.
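If you manage findings programmatically, you can create an equivalent rule with the CreateArchiveRule API operation. The following is a minimal sketch using the AWS SDK for JavaScript; the analyzer name, rule name, role ARN, and filter values are illustrative assumptions modeled on the console example above.

<JavaScript>

    // Sketch: create an archive rule equivalent to the console example above.
    // The analyzer name, rule name, role ARN, and filter values are placeholders.
    const AWS = require('aws-sdk');
    const accessAnalyzer = new AWS.AccessAnalyzer({ region: 'us-east-1' });

    async function createAuditRoleArchiveRule() {
        await accessAnalyzer.createArchiveRule({
            analyzerName: 'MyAccountAnalyzer',
            ruleName: 'ArchiveAuditRoleS3Findings',
            filter: {
                // Findings for S3 buckets...
                'resourceType': { eq: ['AWS::S3::Bucket'] },
                // ...shared with the external security audit role...
                'principal.AWS': { eq: ['arn:aws:iam::111122223333:role/SecurityAuditRole'] },
                // ...and not publicly accessible.
                'isPublic': { eq: ['false'] }
            }
        }).promise();
    }

    createAuditRoleArchiveRule().catch(console.error);
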

Update an archive rule to mark existing findings as intended

You can also update an archive rule to archive existing findings retroactively and streamline your findings. To edit an archive rule, choose Archive rules under Access Analyzer, then select an existing rule and choose Edit. In the Edit archive rule page, update the archive rule criteria and review the list of existing active findings the archive rule applies to. When you save the archive rule, you can apply it retroactively to existing findings by choosing Save and archive active findings as shown in Figure 3. Otherwise, you can choose Save rule to update the rule and apply it to new findings only.

Note: You can also use the new IAM Access Analyzer API operation ApplyArchiveRule to retroactively apply an archive rule to existing findings that meet the archive rule criteria.
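The following is a minimal sketch of calling that operation with the AWS SDK for JavaScript; the analyzer ARN and rule name are placeholders.

<JavaScript>

    // Sketch: retroactively apply an existing archive rule to current findings.
    // The analyzer ARN and rule name are placeholders.
    const AWS = require('aws-sdk');
    const accessAnalyzer = new AWS.AccessAnalyzer({ region: 'us-east-1' });

    accessAnalyzer.applyArchiveRule({
        analyzerArn: 'arn:aws:access-analyzer:us-east-1:111122223333:analyzer/MyAccountAnalyzer',
        ruleName: 'ArchiveAuditRoleS3Findings'
    }).promise().catch(console.error);
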

 

Figure 3: IAM Access Analyzer edit archive rule page where you can apply the rule retroactively to existing findings by choosing Save and archive active findings

Get started

To turn on IAM Access Analyzer, open the IAM console. IAM Access Analyzer is available at no additional cost in the IAM console and through APIs in all commercial AWS Regions, AWS China Regions, and AWS GovCloud (US). To learn more about IAM Access Analyzer and which resources it supports, visit the feature page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Andrea Nedic

Andrea is a Sr. Tech Product Manager for AWS Identity and Access Management. She enjoys hearing from customers about how they build on AWS. Outside of work, Andrea likes to ski, dance, and be outdoors. She holds a PhD from Princeton University.

How to configure Duo multi-factor authentication with Amazon Cognito

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-configure-duo-multi-factor-authentication-with-amazon-cognito/

Adding multi-factor authentication (MFA) reduces the risk of user account take-over, phishing attacks, and password theft. Adding MFA while providing a frictionless sign-in experience requires you to offer a variety of MFA options that support a wide range of users and devices. Let’s see how you can achieve that with Amazon Cognito and Duo Multi-Factor Authentication (MFA).

Amazon Cognito user pools are user directories that are used by Amazon Web Services (AWS) customers to manage the identities of their customers and to add sign-in, sign-up and user management features to their customer-facing web and mobile applications. Duo Security is an APN Partner that provides unified access security and multi-factor authentication solutions.

In this blog post, I show you how to use Amazon Cognito custom authentication flow to integrate Duo Multi-Factor Authentication (MFA) into your sign-in flow and offer a wide range of MFA options to your customers. Some second factors available through Duo MFA are mobile phone SMS passcodes, approval of login via phone call, push-notification-based approval on smartphones, biometrics on devices that support it, and security keys that can be attached via USB.

How it works

Amazon Cognito user pools enable you to build a custom authentication flow that authenticates users based on one or more challenge/response cycles. You can use this flow to integrate Duo MFA into your authentication as a custom challenge.

Duo Web offers a software development kit to make it easier for you to integrate your web applications with Duo MFA. You need an account with Duo and an application to protect (which can be created from the Duo admin dashboard). When you create your application in the Duo admin dashboard, note the integration key (ikey), secret key (skey), and API hostname. These details, together with a random string (akey) that you generate, are the primary factors used to integrate your Amazon Cognito user pool with Duo MFA.

Note: ikey, skey, and akey are referred to as Duo keys.

Duo MFA will be integrated into the sign-in flow as a custom challenge. To do that, you need to generate a signed challenge request using Duo APIs and use it to load Duo MFA in an iframe and request the user’s second factor. When the challenge is answered by the user, a signed response is returned to your application and sent to Amazon Cognito for verification. If the response is valid then the MFA challenge is successful.

Let’s take a closer look at the sequence of calls and components involved in this flow.

Implementation details

In this section, I walk you through the end-to-end flow of integrating Duo MFA with Amazon Cognito using a custom authentication flow. To help you with this integration, I built a demo project that provides deployment steps and sample code to create a working demo in your environment.

Create and configure a user pool

The first step is to create the AWS resources needed for the demo. You can do that by deploying the AWS CloudFormation stack as described in the demo project.

A few implementation details to be aware of:

  • The template creates an Amazon Cognito user pool, application client, and AWS Lambda triggers that are used for the custom authentication.
  • The template also accepts ikey, skey, and akey as inputs. For security, the parameters are masked in the AWS CloudFormation console. These parameters are stored in a secret in AWS Secrets Manager with a resource policy that allows relevant Lambda functions read access to that secret.
  • Duo keys are loaded from Secrets Manager when the create auth challenge and verify auth challenge Lambda triggers initialize, and are used to create the signed request and to verify the signed response.

Authentication flow

Figure 1: User authentication process for the custom authentication flow

The preceding sequence diagram (Figure 1) illustrates the sequence of calls to sign in a user, which are as follows:

  1. In your application, the user is presented with a sign-in UI that captures their user name and password and starts the sign-in flow. A script running in the browser starts the sign-in process using the Amazon Cognito authenticateUser API with CUSTOM_AUTH set as the authentication flow. This validates the user's credentials using the Secure Remote Password (SRP) protocol and moves on to the second challenge if the credentials are valid. (A client-side sketch that ties steps 1, 4, and 6 together is shown after this list.)

    Note: The authenticateUser API automatically starts the authentication process with SRP. The first challenge that’s sent to Amazon Cognito is SRP_A. This is followed by PASSWORD_VERIFIER to verify the user’s credentials.

  2. After the SRP challenge step, the define auth challenge Lambda trigger will return CUSTOM_CHALLENGE and this will move control to the create auth challenge trigger.
  3. The create auth challenge Lambda trigger creates a Duo signed request using the Duo keys plus the username and returns the signed request as a challenge to the client. Here is sample code showing what the create auth challenge trigger might look like; declarations that the original snippet leaves implicit are added at the top so that it is self-contained:
    <JavaScript>
    
    // Declarations left implicit in the original snippet are added here so that it is
    // self-contained; the secret name is assumed to be supplied through an environment variable.
    const AWS = require('aws-sdk');
    const duo_web = require('duo_web');
    const secretsManagerClient = new AWS.SecretsManager();
    const secretName = process.env.DUO_SECRET_NAME;
    
    // Duo keys are cached in global variables and reused across warm invocations
    let ikey, skey, akey;
    
    exports.handler = async (event) => {
    
        // Load the Duo keys from Secrets Manager on the first (cold) invocation
        if (ikey == null || skey == null || akey == null) {
            const data = await secretsManagerClient.getSecretValue({ SecretId: secretName }).promise();
            if ('SecretString' in data) {
                const secret = JSON.parse(data.SecretString);
                ikey = secret['duo-ikey'];
                skey = secret['duo-skey'];
                akey = secret['duo-akey'];
            }
        }
    
        // Create a Duo signed request for this user and return it as a public challenge parameter
        const username = event.userName;
        const sig_request = duo_web.sign_request(ikey, skey, akey, username);
    
        event.response.publicChallengeParameters = {
            sig_request: sig_request
        };
    
        return event;
    };
    

  4. The client initializes the Duo Web library with the signed request and displays Duo MFA in an iframe to request a second factor from the user. To initialize the Duo library, you need the api_hostname that is generated for your application in the Duo dashboard, the sign-request that was received as a challenge, and a callback function to invoke after the MFA step is completed by the user. This is done on the client side as follows:
    <JavaScript>
          // Render the Duo MFA iframe placeholder (jQuery is assumed to be loaded on the page)
          $("#duo-mfa").html('<iframe id="duo_iframe" title="Two-Factor Authentication"></iframe>');
            
          Duo.init({
            'host': api_hostname,
            'sig_request': challengeParameters.sig_request,
            'submit_callback': mfa_callback
          });
    

  5. Through the Duo iframe, the user can set up their MFA preferences and respond to an MFA challenge. After successful MFA setup, a signed response from the Duo Web library is returned to the client and passed to the callback function that was provided in the Duo.init call.
     
    Figure 2: The first time a user signs in, Duo MFA displays a Start setup screen

  6. The client sends the Duo signed response to the Amazon Cognito service as a challenge response.
  7. Amazon Cognito sends the response to the verify auth challenge Lambda trigger, which uses Duo keys and username to verify the response.
    <JavaScript>
    const duo_web = require('duo_web');
    exports.handler = async (event) => {
    
        //load Duo keys from Secrets Manager into global variables (same pattern as in the create auth challenge trigger)
        
        var username = event.userName;
        
        //-------get challenge response
        const sig_response = event.request.challengeAnswer;
        const verificationResult = duo_web.verify_response(ikey, skey, akey, sig_response);
        
        if (verificationResult === username) {
            event.response.answerCorrect = true;
        } else {
            event.response.answerCorrect = false;
        }
        return event;
    };
    

  8. Validation results and current state are passed once again to the define auth challenge Lambda trigger. If the user response is valid, then the Duo MFA challenge is successful. You can then decide to introduce additional challenges to the user or issue tokens and complete the authentication process.
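To see how the browser side ties steps 1, 4, and 6 together, here is a minimal sketch using the amazon-cognito-identity-js library and the Duo Web library (both assumed to be loaded). The user pool ID, app client ID, and the way the signed response is read from the Duo form are illustrative assumptions and may differ slightly from the demo project.

<JavaScript>

    // Browser-side sketch of steps 1, 4, and 6. Pool ID, client ID, and the sig_response
    // extraction from the Duo form are placeholder assumptions.
    import { CognitoUserPool, CognitoUser, AuthenticationDetails } from 'amazon-cognito-identity-js';

    const userPool = new CognitoUserPool({
        UserPoolId: 'us-east-1_EXAMPLE',        // placeholder user pool ID
        ClientId: 'exampleappclientid123'       // placeholder app client ID
    });

    function signIn(username, password, api_hostname) {
        const cognitoUser = new CognitoUser({ Username: username, Pool: userPool });
        cognitoUser.setAuthenticationFlowType('CUSTOM_AUTH');

        const callbacks = {
            // Step 4: Amazon Cognito returned the Duo signed request as a custom challenge
            customChallenge: (challengeParameters) => {
                Duo.init({
                    host: api_hostname,
                    sig_request: challengeParameters.sig_request,
                    submit_callback: (duoForm) => {
                        // Step 6: send the Duo signed response back as the challenge answer
                        const sig_response = duoForm.querySelector('input[name="sig_response"]').value;
                        cognitoUser.sendCustomChallengeAnswer(sig_response, callbacks);
                    }
                });
            },
            onSuccess: (session) => console.log('Signed in; tokens issued', session),
            onFailure: (err) => console.error('Sign-in failed', err)
        };

        // Step 1: start the custom authentication flow (SRP password verification happens first)
        cognitoUser.authenticateUser(
            new AuthenticationDetails({ Username: username, Password: password }),
            callbacks
        );
    }
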

Conclusion

As you build your mobile or web application, keep in mind that using multi-factor authentication is an effective and recommended approach to protect your customers from account take-over, phishing, and the risks of weak or compromised passwords. Making multi-factor authentication easy for your customers enables you to offer an authentication experience that protects their accounts but doesn't slow them down.

Visit the security pillar of AWS Well-Architected Framework to learn more about AWS security best practices and recommendations.

In this blog post, I showed you how to integrate Duo MFA with an Amazon Cognito user pool. Visit the demo application and review the code samples in it to learn how to integrate this with your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

AWS achieves FedRAMP P-ATO for 5 services in AWS US East/West and GovCloud (US) Regions

Post Syndicated from Amendaze Thomas original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-5-services-in-aws-us-east-west-and-govcloud-us-regions/

We're pleased to announce that five additional AWS services have achieved a Provisional Authority to Operate (P-ATO) from the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB). These services provide the following capabilities for the federal government and customers with regulated workloads:

  • Enable your organization’s developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs with AWS Batch.
  • Aggregate, organize, and prioritize your security alerts or findings from multiple AWS services using AWS Security Hub.
  • Provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates using AWS Certificate Manager.
  • Enable customers to set up and govern a new, secure, multi-account AWS environment using AWS Control Tower.
  • Provide a fully managed Kubernetes service with Amazon Elastic Kubernetes Service.

These services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page, under the following authorizations:

AWS US East/West Regions (FedRAMP Moderate Authorization)

AWS GovCloud (US) Regions (FedRAMP High Authorization)

AWS is continually expanding the scope of our compliance programs to help enable your organization to use our services for sensitive and regulated workloads. Today, AWS offers 90 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 76 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

author photo

Amendaze Thomas

Amendaze is the manager of the AWS Government Assessments and Authorization Program (GAAP). He has 15 years of experience providing advisory services to clients in the federal government, and over 13 years of experience supporting CISO teams with risk management framework (RMF) activities.

How to enhance Amazon CloudFront origin security with AWS WAF and AWS Secrets Manager

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-enhance-amazon-cloudfront-origin-security-with-aws-waf-and-aws-secrets-manager/

Whether your web applications provide static or dynamic content, you can improve their performance, availability, and security by using Amazon CloudFront as your content delivery network (CDN). CloudFront is a web service that speeds up distribution of your web content through a worldwide network of data centers called edge locations. CloudFront ensures that end-user requests are served by the closest edge location. As a result, viewer requests travel a short distance, improving performance for your viewers. When you deliver web content through a CDN such as CloudFront, a best practice is to prevent viewer requests from bypassing the CDN and accessing your origin content directly. In this blog post, you’ll see how to use CloudFront custom headers, AWS WAF, and AWS Secrets Manager to restrict viewer requests from accessing your CloudFront origin resources directly.

You can configure CloudFront to add custom HTTP headers to the requests that it sends to your origin. HTTP header fields are components of the header section of request and response messages in the Hypertext Transfer Protocol (HTTP). These custom headers enable you to send and gather information from your origin that isn’t included in typical viewer requests. You can use custom headers to control access to content. By configuring your origin to respond to requests only when they include a custom header that was added by CloudFront, you prevent users from bypassing CloudFront and accessing your origin content directly. In addition to offloading traffic from your origin servers, this also helps enforce web traffic being processed at CloudFront edge locations according to your AWS WAF rules prior to being forwarded to your origin.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. It supports managed rules as well as a powerful rule language for custom rules. AWS WAF is tightly integrated with CloudFront and the Application Load Balancer (ALB). AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. This service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.

Solution overview

This blog post includes a sample solution you can deploy to see how its components integrate to implement the origin access restriction. The sample solution includes a web server deployed on Amazon Elastic Compute Cloud (Amazon EC2) Linux instances running in an AWS Auto Scaling group. Elastic Load Balancing distributes the incoming application traffic across the EC2 instances by using an ALB. The ALB is associated with an AWS WAF web access control list (web ACL), which is used to validate the incoming origin requests. Finally, a CloudFront distribution is deployed with an AWS WAF web ACL and configured to point to the origin ALB.

Although the sample solution is designed for deployment with CloudFront with an AWS WAF–associated ALB as its origin, the same approach could be used for origins that use Amazon API Gateway. A custom origin is any origin that is not an Amazon Simple Storage Service (Amazon S3) bucket, with one exception. An S3 bucket that is configured with static website hosting is a custom origin. You can refer to the CloudFront Developer Guide for more information on securing content that CloudFront delivers from S3 origins.

This solution is intended to enhance security for CloudFront custom origins that support AWS WAF, such as ALB, and is not a substitute for authentication and authorization mechanisms within your web applications. In this solution, Secrets Manager is used to control, audit, monitor, and rotate a random string used within your CloudFront and AWS WAF configurations. Although most of these lifecycle attributes could be set manually, Secrets Manager makes it easier.

Figure 1 shows how the provided AWS CloudFormation template creates the sample solution.
 

Figure 1: How the CloudFormation template works

Here’s how the solution works, as shown in the diagram:

  1. A viewer accesses your website or application and requests one or more files, such as an image file and an HTML file.
  2. DNS routes the request to the CloudFront edge location that can best serve the request—typically the nearest CloudFront edge location in terms of latency.
  3. At the edge location, AWS WAF inspects the incoming request according to configured web ACL rules.
  4. At the edge location, CloudFront checks its cache for the requested content. If the content is in the cache, CloudFront returns it to the user. If the content isn’t in the cache, CloudFront adds the custom header, X-Origin-Verify, with the value of the secret from Secrets Manager, and forwards the request to the origin.
  5. At the origin Application Load Balancer (ALB), AWS WAF inspects the incoming request header, X-Origin-Verify, and allows the request if the string value is valid. If the header isn’t valid, AWS WAF blocks the request.
  6. At the configured interval, Secrets Manager automatically rotates the custom header value and updates the origin AWS WAF and CloudFront configurations.

Solution deployment

This sample solution includes seven main steps:

  1. Deploy the CloudFormation template.
  2. Confirm successful viewer access to the CloudFront URL.
  3. Confirm that direct viewer access to the origin URL is blocked by AWS WAF.
  4. Review the CloudFront origin custom header configuration.
  5. Review the AWS WAF web ACL header validation rule.
  6. Review the Secrets Manager configuration.
  7. Review the Secrets Manager AWS Lambda rotation function.

Step 1: Deploy the CloudFormation template

The stack will launch in the N. Virginia (us-east-1) Region. It takes approximately 10 minutes for the CloudFormation stack to complete.

Note: The sample solution requires deployment in the N. Virginia (us-east-1) Region. Although out of scope for this blog post, an additional sample template is available in this solution’s GitHub repository for testing this solution with an existing CloudFront distribution and regional AWS WAF web ACL. Refer to the AWS regional service support information for more details on regional service availability.

To launch the CloudFormation stack

  1. Choose the following Launch Stack icon to launch a CloudFormation stack in your account in the N. Virginia Region.
     
    Select the Launch Stack button to launch the template
  2. In the CloudFormation console, leave the configured values, and then choose Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    The input parameters are as follows:

      • EC2InstanceSize: The instance size for EC2 web servers.
      • HeaderName: The HTTP header name for the secret string.
      • WAFRulePriority: The rule number to use for the regional AWS WAF web ACL. 0 is recommended, because rules are evaluated in order based on the value of priority.
      • RotateInterval: The rotation interval, in days, for the origin secret value. Full rotation requires two intervals.
      • ArtifactsBucket: The S3 bucket with artifact files (Lambda functions, templates, HTML files, and so on). Keep the default value.
      • ArtifactsPrefix: The path for the S3 bucket that contains artifact files. Keep the default value.

    Figure 2 shows an example of values entered under Parameters.
     

    Figure 2: Input parameters for the CloudFormation stack

  4. Enter values for all of the input parameters, and then choose Next.
  5. On the Options page, keep the defaults, and then choose Next.
  6. On the Review page, confirm the details, acknowledge the statements under Capabilities and transforms as shown in Figure 3, and then choose Create stack.
     
    Figure 3: CloudFormation Capabilities and Transforms acknowledgments

Step 2: Confirm access to the website through CloudFront

Next, confirm that website access through CloudFront is functioning as intended. After the CloudFormation stack completes deployment, you can access the test website using the domain name that was automatically assigned to the distribution.

To confirm viewer access to the website through CloudFront

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the cfEndpoint entry, similar to that shown in Figure 4.
     
    Figure 4: CloudFormation cfEndpoint stack output

  2. The cfEndpoint is the URL for the site, and it is automatically assigned by CloudFront. Choose the cfEndpoint link to open the test page, as shown in Figure 5.
     
    Figure 5: CloudFormation cfEndpoint test page

In this step, you’ve confirmed that website accessibility through CloudFront is functioning as intended.

Step 3: Confirm that direct viewer access to the origin URL is blocked by AWS WAF

In this step, you confirm that direct access to the test website is blocked by the regional AWS WAF web ACL.

To test direct access to the origin URL

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the albEndpoint entry.
  2. Choose the albEndpoint link to go to the test site URL that was automatically assigned to the ALB. Choosing this link will result in a 403 Forbidden response. When AWS WAF blocks a web request based on the conditions that you specify, it returns HTTP status code 403 (Forbidden).

In this step, you’ve confirmed that website accessibility directly to the origin ALB is blocked by the regional AWS WAF web ACL.

Step 4: Review the CloudFront origin custom header configuration

Now that you’ve confirmed that the test website can only be accessed through CloudFront, you can review the detailed CloudFront, WAF, and Secrets Manager configurations that enable this restriction.

To review the custom header configuration

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the cfDistro entry.
  2. Choose the cfDistro link to go to this distribution’s configuration in the CloudFront console. On the Origin Groups tab, under Origins, select the origin as shown in Figure 6.
     
    Figure 6: CloudFront Origins and Origin Groups settings

  3. Choose Edit to go to the Origin Settings section, scroll to the bottom and review the Origin Custom Headers as shown in Figure 7.
     
    Figure 7: CloudFront Origin Custom Headers settings

    You can see that the custom header, X-Origin-Verify, has been configured using Secrets Manager with a random 32-character alpha-numeric value. This custom header will be added to web requests that are forwarded from CloudFront to your origin. As you learned in steps 2 and 3, requests without this header are blocked by AWS WAF at the origin ALB. In the next two steps, you will dive deeper into how this works.

Step 5: Review the AWS WAF web ACL header validation rule

In this step, you review the AWS WAF rule configuration that validates the CloudFront custom header X-Origin-Verify.

To review the header validation rule

  1. In the CloudFormation console, select Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the wafWebACLR entry.
  2. Choose the wafWebACLR link to go to the origin ALB web ACL configuration in the WAF and Shield console. On the Overview tab, you can view the Requests per 5 minute period chart and the Sampled requests list, which shows requests from the last three hours that the ALB has forwarded to AWS WAF for inspection. The sample of requests includes detailed data about each request, such as the originating IP address and Uniform Resource Identifier (URI). You also can view which rule the request matched, and whether the rule Action is configured to ALLOW, BLOCK, or COUNT requests. You can enable AWS WAF logging to get detailed information about traffic that’s analyzed by your web ACL. You send logs from your web ACL to an Amazon Kinesis Data Firehose with a configured storage destination such as Amazon S3. Information that’s contained in the logs includes the time that AWS WAF received the request from your AWS resource, detailed information about the request, and the action for the rule that each request matched.
  3. Choose the Rules tab to review the rules for this web ACL, as shown in Figure 8.
     
    Figure 8: AWS WAF web ACL rules

    On the Rules tab, you can see that the CFOriginVerifyXOriginVerify rule has been configured with the Allow action, while the Default web ACL action is Block. This means that any incoming requests that don’t match the conditions in this rule will be blocked.

    In every AWS WAF rule group and every web ACL, rules define how to inspect web requests and what to do when a web request matches the inspection criteria. Each rule requires one top-level statement, which might contain nested statements at any depth, depending on the rule and statement type. You can learn more about AWS WAF rule statements in the AWS WAF Developer Guide, AWS Online Tech Talks, and samples on GitHub.

  4. Choose the CFOriginVerifyXOriginVerify rule, and then choose Edit to bring up the Rule Builder tool. In the Rule Builder, you can see that a rule has been created with two Rule Statements similar to those in Figure 9.
     
    Figure 9: AWS WAF web ACL rule statement

    In the Rule Builder configuration for Statement 1, you can see that the request Header is being inspected for the x-origin-verify Header field name (HTTP header field names are case insensitive), and the String to match value is set to the value you reviewed in step 4. In the Rule Builder, you can also see a logical OR with an additional rule statement, Statement 2. You will notice that the configuration for Statement 2 is the same as Statement 1, except that the String to match value is different. You will learn about this in detail in step 7, but Statement 2 helps to ensure that valid web requests are processed by your origin servers when Secrets Manager automatically rotates the value of the X-Origin-Verify header. The effect of this rule configuration is that inspected web requests will be allowed if they match either of the two statements.

    In addition to the visual web ACL representation you just reviewed in the WAF Rule visual editor, every web ACL also has a JSON format representation you can edit by using the WAF Rule JSON editor. You can retrieve the complete configuration for a web ACL in JSON format, modify it as you need, and then provide it to AWS WAF through the console, API, or command line interface (CLI). An approximate sketch of the rule described above follows this list.

    This step demonstrated how your request was allowed to access the test website in step 2 and why your request was blocked in step 3.
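For reference, here is an approximate sketch of what such a rule looks like in its JSON representation, shown as a JavaScript object literal with placeholder secret values. The exact structure that the solution generates may differ; the essential parts are the Allow action and the two OR'd header-match statements.

<JavaScript>

    // Approximate shape of the header-validation rule, with placeholder secret values.
    const xOriginVerifyRule = {
        Name: 'CFOriginVerifyXOriginVerify',
        Priority: 0,
        Action: { Allow: {} },
        VisibilityConfig: {
            SampledRequestsEnabled: true,
            CloudWatchMetricsEnabled: true,
            MetricName: 'CFOriginVerifyXOriginVerify'
        },
        Statement: {
            OrStatement: {
                Statements: [
                    {
                        // Statement 1: match the current secret value
                        ByteMatchStatement: {
                            FieldToMatch: { SingleHeader: { Name: 'x-origin-verify' } },
                            PositionalConstraint: 'EXACTLY',
                            SearchString: 'EXAMPLE-CURRENT-SECRET-VALUE',
                            TextTransformations: [{ Priority: 0, Type: 'NONE' }]
                        }
                    },
                    {
                        // Statement 2: match the previous secret value, so requests keep
                        // flowing while a rotation is still propagating through CloudFront
                        ByteMatchStatement: {
                            FieldToMatch: { SingleHeader: { Name: 'x-origin-verify' } },
                            PositionalConstraint: 'EXACTLY',
                            SearchString: 'EXAMPLE-PREVIOUS-SECRET-VALUE',
                            TextTransformations: [{ Priority: 0, Type: 'NONE' }]
                        }
                    }
                ]
            }
        }
    };
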

Step 6: Review Secrets Manager configuration

Now that you’re familiar with the CloudFront and AWS WAF configurations, you will learn how Secrets Manager creates and rotates the secret used for the X-Origin-Verify header field value. Secrets Manager uses an AWS Lambda function to perform the actual rotation of the secret used for the value and update the associated AWS WAF web ACL and CloudFront distribution.

To review the Secrets Manager configuration

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the OriginVerifySecret entry.
  2. Choose the OriginVerifySecret link to go to the configuration for the secret in the Secrets Manager console. Scroll down to the section titled Secret value, and then choose Retrieve secret value to display the Secret key/value as shown in Figure 10.
     
    Figure 10: Secrets Manager retrieve value

    When you retrieve the secret, Secrets Manager programmatically decrypts the secret and displays it in the console. You can see that the secret is stored as a key-value pair, where the secret key is HEADERVALUE, and the secret value is the string used in the CloudFront and WAF configurations you reviewed in steps 3 and 4.

  3. While you’re in the Secrets Manager console, review the Rotation configuration section, as shown in Figure 11.
     
    Figure 11: Secrets Manager rotation configuration

    You can see that rotation was enabled for this secret at an interval of one day. This configuration also includes a Lambda rotation function. Secrets Manager uses a Lambda function to perform the actual rotation of a secret. If you use your secret for one of the supported Amazon Relational Database Service (Amazon RDS) databases, then Secrets Manager provides the Lambda function for you. If you use your secret for another service, then you must provide the code for the Lambda function, as we’ve done in this solution.

Step 7: Review the Secrets Manager Lambda rotation function

In this step, you review the Secrets Manager Lambda rotation function.

To review the Secrets Manager Lambda rotation function

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. In the stack Outputs tab, look for the OriginSecretRotateFunction entry.
  2. Choose the OriginSecretRotateFunction link to go to the Lambda function that is configured for this secret. The code used for this secrets rotation function is based on the AWS Secrets Manager Rotation Template. Choose the Monitoring tab and review the Invocations graph as shown in Figure 12.
     
    Figure 12: Monitoring tab for the Lambda rotation function

    Shortly after the CloudFormation stack creation completes, you should see several invocations in the Invocations graph. When a configured rotation schedule or a manual process triggers rotation, Secrets Manager calls the Lambda function several times, each time with different parameters. The Lambda function performs several tasks throughout the process of rotating a secret, in the following steps: createSecret, setSecret, testSecret, and finishSecret. Secrets Manager uses staging labels (simple text strings) to enable you to identify different versions of a secret during rotation. The staging labels used here are AWSPENDING, AWSCURRENT, and AWSPREVIOUS, which are covered in the following step.

  3. To learn more about the rotation steps configured for this solution, choose View logs in CloudWatch on the Monitoring tab.
    1. On the Log streams tab, select the top entry in the list.
    2. Enter Event in the Filter events field, and then choose the arrows to expand the details for each event as shown in Figure 13.
       
      Figure 13: CloudWatch event logs for the Lambda rotation function

The four rotation steps annotated in Figure 13 work as follows:

Note: This section provides an overview of the rotation process for this solution. For more detailed information about the Lambda rotation function, see the Secrets Manager User Guide.

  1. The createSecret step: In this step, the Lambda function generates a new version of the secret. The rotation Lambda function calls the GetRandomPassword method to generate a new random string, and then labels the new version of the secret with the staging label AWSPENDING to mark it as the in-process version of the secret.
  2. The setSecret step: In this step, the rotation function retrieves the version of the secret labeled AWSPENDING from Secrets Manager and updates the web ACL rule for the AWS WAF web ACL associated with the origin ALB. The two rule statements you reviewed in step 5 of this blog post are updated with the AWSPENDING and AWSCURRENT values. The rotation function also updates the value for the Origin Custom Header X-Origin-Verify. When the rotation function updates your distribution configuration, CloudFront starts to propagate the changes to all edge locations. Maintaining both the AWSPENDING and AWSCURRENT secret values helps to ensure that web requests forwarded to your origin by CloudFront are not blocked. Therefore, once a secret value is created, two rotation intervals are required for it to be removed from the configuration.
  3. The testSecret step: This step of the Lambda function verifies the AWSPENDING version of the secret by using it to access the origin ALB endpoint with the X-Origin-Verify header. Both AWSPENDING and AWSCURRENT X-Origin-Verify header values are tested to confirm a “200 OK” response from the origin ALB endpoint.
  4. The finishSecret step: In the last step, the Lambda function moves the label AWSCURRENT from the current version to this new version of the secret. The version that was previously current receives the AWSPREVIOUS staging label and remains available for recovery as the last known good version of the secret, if needed. Any older version that is left with no staging labels attached is considered deprecated by Secrets Manager and is subject to deletion.

When the finishSecret step has successfully completed, Secrets Manager schedules the next rotation by adding the rotation interval (number of days) to the completion date. This automated process causes the values used for the validation headers to be updated at the configured interval. Although out of scope for this blog post, you should also monitor the usage of your secrets and log any changes to them. This helps you to make sure that any unexpected usage or change can be investigated, and unwanted changes can be rolled back.
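For illustration, here is a minimal sketch of what the createSecret step of such a rotation function might look like in Node.js. It follows the general shape of the AWS Secrets Manager rotation template; the HEADERVALUE key matches the secret structure shown in Figure 10, but the setSecret, testSecret, and finishSecret steps that update AWS WAF and CloudFront are not shown.

<JavaScript>

    // Sketch of the createSecret rotation step only; the solution's real rotation function
    // also updates the AWS WAF web ACL and the CloudFront distribution in later steps.
    const AWS = require('aws-sdk');
    const secretsManager = new AWS.SecretsManager();

    async function createSecret(secretId, clientRequestToken) {
        // If an AWSPENDING version already exists for this rotation, there is nothing to do
        try {
            await secretsManager.getSecretValue({
                SecretId: secretId,
                VersionId: clientRequestToken,
                VersionStage: 'AWSPENDING'
            }).promise();
            return;
        } catch (err) {
            if (err.code !== 'ResourceNotFoundException') { throw err; }
        }

        // Generate a new random header value and stage it as AWSPENDING
        const random = await secretsManager.getRandomPassword({
            PasswordLength: 32,
            ExcludePunctuation: true
        }).promise();

        await secretsManager.putSecretValue({
            SecretId: secretId,
            ClientRequestToken: clientRequestToken,
            SecretString: JSON.stringify({ HEADERVALUE: random.RandomPassword }),
            VersionStages: ['AWSPENDING']
        }).promise();
    }
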

Summary

You’ve learned how to use Amazon CloudFront, AWS WAF and AWS Secrets Manager to prevent web requests from directly accessing your CloudFront origin resources. You can use this solution to improve security for CloudFront custom origins that support AWS WAF, such as ALB, Amazon API Gateway, and AWS AppSync.

When using this solution, you will incur AWS WAF usage charges for both the ALB and CloudFront associated AWS WAF web ACLs. You might wish to consider subscribing to AWS Shield Advanced, which provides higher levels of protection against distributed denial of service (DDoS) attacks and includes AWS WAF and AWS Firewall Manager at no additional cost for usage on resources protected by AWS Shield Advanced. You can also learn more about pricing for CloudFront, AWS WAF, Secrets Manager, and AWS Shield Advanced.

You can review more options for restricting access to content with CloudFront, additional AWS WAF security automations, or managed rules for AWS WAF. You can explore solutions for using AWS IP address ranges to enhance CloudFront origin security. You might also wish to learn more about Secrets Manager best practices. The code for this solution is available on GitHub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about using this solution, you can start a thread in the CloudFront, WAF, or Secrets Manager forums, review or open an issue in this solution’s GitHub repository, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Cameron Worrell

Cameron Worrell

Cameron is a Solutions Architect with a passion for security and enterprise transformation. He joined AWS in 2015.

How to automate incident response in the AWS Cloud for EC2 instances

Post Syndicated from Ben Eichorst original https://aws.amazon.com/blogs/security/how-to-automate-incident-response-in-aws-cloud-for-ec2-instances/

One of the security epics core to the AWS Cloud Adoption Framework (AWS CAF) is a focus on incident response and preparedness to address unauthorized activity. Multiple methods exist in Amazon Web Services (AWS) for automating classic incident response techniques, and the AWS Security Incident Response Guide outlines many of these methods. This post demonstrates one specific method for instantaneous response and acquisition of infrastructure data from Amazon Elastic Compute Cloud (Amazon EC2) instances.

Incident response starts with detection, progresses to investigation, and then follows with remediation. This process is no different in AWS. AWS services such as Amazon GuardDuty, Amazon Macie, and Amazon Inspector provide detection capabilities. Amazon Detective assists with investigation, including tracking and gathering information. Then, after your security organization decides to take action, pre-planned and pre-provisioned runbooks enable faster action towards a resolution. One principle outlined in the incident response whitepaper and the AWS Well-Architected Framework is the notion of pre-provisioning systems and policies to allow you to react quickly to an incident response event. The solution I present here provides a pre-provisioned architecture for an incident response system that you can use to respond to a suspect EC2 instance.

Infrastructure overview

The architecture that I outline in this blog post automates these standard actions on a suspect compute instance:

  1. Capture all the persistent disks.
  2. Capture the instance state at the time the incident response mechanism is started.
  3. Isolate the instance and protect against accidental instance termination.
  4. Perform operating system–level information gathering, such as memory captures and other parameters.
  5. Notify the administrator of these actions.

The solution in this blog post accomplishes these tasks through the following logical flow of AWS services, illustrated in Figure 1.
 

Figure 1: Infrastructure deployed by the accompanying AWS CloudFormation template and associated task flow when invoking the main API

  1. A user or application calls an API with an EC2 instance ID to start data collection.
  2. Amazon API Gateway initiates the core logic of the process by instantiating an AWS Lambda function.
  3. The Lambda function performs the following data gathering steps before making any changes to the infrastructure (a minimal SDK sketch of these calls is shown after this list):
    1. Save instance metadata to the SecResponse Amazon Simple Storage Service (Amazon S3) bucket.
    2. Save a snapshot of the instance console to the SecResponse S3 bucket.
    3. Initiate an Amazon Elastic Block Store (Amazon EBS) snapshot of all persistent block storage volumes.
  4. The Lambda function then modifies the infrastructure to continue gathering information, by doing the following steps:
    1. Set the Amazon EC2 termination protection flag on the instance.
    2. Remove any existing EC2 instance profile from the instance.
    3. If the instance is managed by AWS Systems Manager:
      1. Attach an EC2 instance profile with minimal privileges for operating system–level information gathering.
      2. Perform operating system–level information gathering actions through Systems Manager on the EC2 instance.
      3. Remove the instance profile after Systems Manager has completed its actions.
    4. Create a quarantine security group that lacks both ingress and egress rules.
    5. Move the instance into the created quarantine security group for isolation.
  5. Send an administrative notification through the configured Amazon Simple Notification Service (Amazon SNS) topic.
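As a rough sketch of steps 3 and 4 above, here are the kinds of AWS SDK for JavaScript calls involved. The bucket and key names, security group naming, and error handling are simplifications and assumptions; the Lambda function deployed by the CloudFormation template is more complete.

<JavaScript>

    // Sketch of the core data-gathering and isolation calls (steps 3 and 4 above).
    // Bucket name, key names, and group naming are assumptions.
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2();
    const s3 = new AWS.S3();

    async function collectAndIsolate(instanceId, artifactBucket) {
        // Step 3a: save instance metadata to the artifacts bucket
        const described = await ec2.describeInstances({ InstanceIds: [instanceId] }).promise();
        const instance = described.Reservations[0].Instances[0];
        await s3.putObject({
            Bucket: artifactBucket,
            Key: `${instanceId}/metadata.json`,
            Body: JSON.stringify(instance, null, 2)
        }).promise();

        // Step 3b: save a console screenshot (returned as base64-encoded image data)
        const screenshot = await ec2.getConsoleScreenshot({ InstanceId: instanceId }).promise();
        await s3.putObject({
            Bucket: artifactBucket,
            Key: `${instanceId}/console.jpg`,
            Body: Buffer.from(screenshot.ImageData, 'base64')
        }).promise();

        // Step 3c: snapshot every attached EBS volume
        for (const mapping of instance.BlockDeviceMappings) {
            if (mapping.Ebs) {
                await ec2.createSnapshot({
                    VolumeId: mapping.Ebs.VolumeId,
                    Description: `Incident response snapshot for ${instanceId}`
                }).promise();
            }
        }

        // Step 4a: protect the instance against accidental termination
        await ec2.modifyInstanceAttribute({
            InstanceId: instanceId,
            DisableApiTermination: { Value: true }
        }).promise();

        // Steps 4d and 4e: create an empty quarantine security group and move the instance into it
        const sg = await ec2.createSecurityGroup({
            GroupName: `quarantine-${instanceId}`,
            Description: 'Quarantine group with no ingress or egress rules',
            VpcId: instance.VpcId
        }).promise();
        await ec2.revokeSecurityGroupEgress({
            GroupId: sg.GroupId,
            IpPermissions: [{ IpProtocol: '-1', IpRanges: [{ CidrIp: '0.0.0.0/0' }] }]
        }).promise();
        await ec2.modifyInstanceAttribute({
            InstanceId: instanceId,
            Groups: [sg.GroupId]
        }).promise();
    }
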

Solution features

By using the mechanisms outlined in this post to codify your incident response runbooks, you can see the following benefits to your incident response plan.

Preparation for incident response before an incident occurs

Both the AWS CAF and Well-Architected Framework recommend that customers formulate known procedures for incident response, and test those runbooks before an incident. Testing these processes before an event occurs decreases the time it takes you to respond in a production environment. The sample infrastructure shown in this post demonstrates how you can standardize those procedures.

Consistent incident response artifact gathering

Codifying your processes into set code and infrastructure not only prepares you to collect data, but also standardizes the collection process into a repeatable and auditable sequence that records what information was collected, when, and how. This reduces the likelihood of missing data for future investigations.

Walkthrough: Deploying infrastructure and starting the process

To implement the solution outlined in this post, you first need to deploy the infrastructure, and then start the data collection process by issuing an API call.

The code example in this blog post requires that you provision an AWS CloudFormation stack, which creates an S3 bucket for storing your event artifacts and a serverless API that uses API Gateway and Lambda. You then execute a query against this API to take action on a target EC2 instance.

The infrastructure deployed by the AWS CloudFormation stack is a set of AWS components as depicted previously in Figure 1. The stack includes all the services and configurations to deploy the demo. It doesn’t include a target EC2 instance that you can use to test the mechanism used in this post.

Cost

The cost for this demo is minimal because the base infrastructure is completely serverless. With AWS, you only pay for the infrastructure that you use, so the single API call issued in this demo costs fractions of a cent. Stored artifacts incur standard Amazon S3 charges, and the EBS snapshots are billed at the standard Amazon EBS snapshot rates.

Deploy the AWS CloudFormation stack

In future posts and updates, we will show how to set up this security response mechanism inside a separate account designated for security, but for the purposes of this post, your demo stack must reside in the same AWS account as the target instance that you set up in the next section.

First, start by deploying the AWS CloudFormation template to provision the infrastructure.

To deploy this template in the us-east-1 region

  1. Choose the Launch Stack button to open the AWS CloudFormation console pre-loaded with the template:
     
    Select the Launch Stack button to launch the template
  2. (Optional) In the AWS CloudFormation console, on the Specify Details page, customize the stack name.
  3. For the LambdaS3BucketLocation and LambdaZipFileName fields, leave the default values for the purposes of this blog. Customizing this field allows you to customize this code example for your own purposes and store it in an S3 bucket of your choosing.
  4. Customize the S3BucketName field. This needs to be a globally unique S3 bucket name. This bucket is where gathered artifacts are stored for the demo in this blog. You must customize it beyond the default value for the template to instantiate properly.
  5. (Optional) Customize the SNSTopicName field. This name provides a meaningful label for the SNS topic that notifies the administrator of the actions that were performed.
  6. Choose Next to configure the stack options and leave all default settings in place.
  7. Choose Next to review and scroll to the bottom of the page. Select all three check boxes under the Capabilities and Transforms section, next to each of the three acknowledgements:
    • I acknowledge that AWS CloudFormation might create IAM resources.
    • I acknowledge that AWS CloudFormation might create IAM resources with custom names.
    • I acknowledge that AWS CloudFormation might require the following capability: CAPABILITY_AUTO_EXPAND.
  8. Choose Create Stack.

Set up a target EC2 instance

In order to demonstrate the functionality of this mechanism, you need a target host. Provision any EC2 instance in your account to act as a target for the security response mechanism to collect information from and quarantine. To keep costs low and demonstrate full functionality, I recommend choosing a small instance size (for example, t2.nano) and, optionally, enrolling the instance in Systems Manager so that Run Command API queries can be executed against it later. For more details on configuring Systems Manager, refer to the AWS Systems Manager User Guide.

Retrieve required information for system initiation

The entire security response mechanism triggers through an API call. To successfully initiate this call, you first need to gather the API URI and key information.

To find the API URI and key information

  1. Navigate to the AWS CloudFormation console and choose the stack that you’ve instantiated.
  2. Choose the Outputs tab and save the value for the key APIBaseURI. This is the base URI for the API Gateway. It will resemble https://abcdefgh12.execute-api.us-east-1.amazonaws.com.
  3. Next, navigate to the API Gateway console and choose the API with the name SecurityResponse.
  4. Choose API Keys, and then choose the only key present.
  5. Next to the API key field, choose Show to reveal the key, and then save this value to a notepad for later use.

(Optional) Configure administrative notification through the created SNS topic

One aspect of this mechanism is that it sends notifications through SNS topics. You can optionally subscribe your email or another notification pipeline mechanism to the created SNS topic in order to receive notifications on actions taken by the system.

Initiate the security response mechanism

Note that, in this demo code, you’re using a simple API key for limiting access to API Gateway. In production applications, you would use an authentication mechanism such as Amazon Cognito to control access to your API.

To kick off the security response mechanism, initiate a REST API query against the API that was created in the AWS CloudFormation template. You first create this API call by using a curl command to be run from a Linux system.

To create the API initiation curl command

  1. Copy the following example curl command.
    curl -v -X POST -i -H "x-api-key: 012345ABCDefGHIjkLMS20tGRJ7othuyag" https://abcdefghi.execute-api.us-east-1.amazonaws.com/DEMO/secresponse -d '{
      "instance_id":"i-123457890"
    }'
    

  2. Replace the placeholder API key specified in the x-api-key HTTP header with your API key.
  3. Replace the example URI path with your API’s specific URI. To create the full URI, concatenate the base URI listed in the AWS CloudFormation output you gathered previously with the API call path, which is /DEMO/secresponse. This full URI for your specific API call should closely resemble this sample URI path: https://abcdefghi.execute-api.us-east-1.amazonaws.com/DEMO/secresponse
  4. Replace the value associated with the key instance_id with the instance ID of the target EC2 instance you created.

Because this mechanism initiates through a simple API call, you can easily integrate it with existing workflow management systems. This allows for complex data collection and forensic procedures to be integrated with existing incident response workflows.
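If you're integrating from code rather than from a shell, the same call can be made from Node.js (version 18 or later provides a global fetch). The URI, API key, and instance ID below are placeholders, just as in the curl example.

<JavaScript>

    // Sketch: the same API call issued from Node.js 18+, which provides a global fetch.
    // The URI, API key, and instance ID are placeholders, as in the curl example above.
    async function startSecurityResponse() {
        const response = await fetch(
            'https://abcdefghi.execute-api.us-east-1.amazonaws.com/DEMO/secresponse',
            {
                method: 'POST',
                headers: {
                    'x-api-key': '012345ABCDefGHIjkLMS20tGRJ7othuyag',
                    'Content-Type': 'application/json'
                },
                body: JSON.stringify({ instance_id: 'i-123457890' })
            }
        );
        console.log(response.status, await response.text());
    }

    startSecurityResponse().catch(console.error);
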

Review the gathered data

Note that the following items were uploaded as objects in the security response S3 bucket:

  1. A console screenshot, as shown in Figure 2.
  2. (If Systems Manager is configured) stdout information from the commands that were run on the host operating system.
  3. Instance metadata in JSON form.

 

Figure 2: Example outputs from a successful completion of this blog post’s mechanism

Additionally, if you load the Amazon EC2 console and scroll down to Elastic Block Store, you can see that EBS snapshots are present for all persistent disks as shown in Figure 3.
 

Figure 3: Evidence of an EBS snapshot from a successful run

You can also verify that the previously outlined security controls are in place by viewing the instance in the Amazon EC2 console. You should see that the AWS Identity and Access Management (IAM) role has been removed from the target EC2 instance and that the instance has been placed into network isolation through a newly created quarantine security group.

Note that for the purposes of this demo, all information that you gathered is stored in the same AWS account as the workload. As a best practice, many AWS customers choose instead to store this information in an AWS account that’s specifically designated for incident response and analysis. A dedicated account provides clear isolation of function and restriction of access. Using AWS Organizations service control policies (SCPs) and IAM permissions, your security team can limit access to adhere to security policy, legal guidance, and compliance regulations.

Clean up and delete artifacts

To clean up the artifacts from the solution in this post, first delete all information in your security response S3 bucket. Then delete the CloudFormation stack that was provisioned at the start of this process in order to clean up all remaining infrastructure.
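
The following boto3 sketch shows one way to script that clean-up; the bucket and stack names are placeholders that you would replace with the names from your deployment.

import boto3

bucket_name = "your-security-response-bucket"  # placeholder: the bucket created by the stack
stack_name = "your-security-response-stack"    # placeholder: the stack you deployed earlier

# Empty the bucket first; CloudFormation can't delete a bucket that still contains objects
s3 = boto3.resource("s3")
s3.Bucket(bucket_name).objects.all().delete()

# Then delete the CloudFormation stack and wait for the deletion to finish
cfn = boto3.client("cloudformation")
cfn.delete_stack(StackName=stack_name)
cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)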

Conclusion

Placing workloads in the AWS Cloud allows for pre-provisioned and explicitly defined incident response runbooks to be codified and quickly executed on suspect EC2 instances. This enables you to gather data in minutes that previously took hours or even days using manual processes.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Eichorst

Ben is a Senior Solutions Architect, Security, Cryptography, and Identity Specialist for AWS. He works with AWS customers to efficiently implement globally scalable security programs while empowering development teams and reducing risk. He holds a BA from Northwestern University and an MBA from University of Colorado.

Introducing the first video in our new series, Verified, featuring Netflix’s Jason Chan

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/introducing-first-video-new-series-verified-featuring-netflix-jason-chan/

The year has been a profoundly different one for us all, and like many of you, I’ve been adjusting, both professionally and personally, to this “new normal.” Here at AWS we’ve seen an increase in customers looking for secure solutions to maintain productivity in an increased work-from-home world. We’ve also seen an uptick in requests for training; it’s clear, a sense of community and learning are critically important as workforces physically distance.

For these reasons, I’m happy to announce the launch of Verified: Presented by AWS re:Inforce. I’m hosting this series, but I’ll be joined by leaders in cloud security across a variety of industries. The goal is to have an open conversation about the common issues we face in securing our systems and tools. Topics will include how the pandemic is impacting cloud security, tips for creating an effective security program from the ground up, how to create a culture of security, emerging security trends, and more. Learn more by following me on Twitter (@StephenSchmidt), and get regular updates from @AWSSecurityInfo. Verified is just one of the many ways we will continue sharing best practices with our customers during this time. You can find more by reading the AWS Security Blog, reviewing our documentation, visiting the AWS Security and Compliance webpages, watching re:Invent and re:Inforce playlists, and/or reviewing the Security Pillar of Well Architected.

Our first conversation, above, is with Jason Chan, Vice President of Information Security at Netflix. Jason spoke to us about the security program at Netflix, his approach to hiring security talent, and how Zero Trust enables a remote workforce. Jason also has solid insights to share about how he started and grew the security program at Netflix.

“In the early days, what we were really trying to figure out is how do we build a large-scale consumer video-streaming service in the public cloud, and how do you do that in a secure way? There wasn’t a ton of expertise in that, so when I was building the security team at Netflix, I thought, ‘how do we bring in folks from a variety of backgrounds, generalists … to tackle this problem?’”

He also gave his view on how a growing security team can measure ROI. “I think it’s difficult to have a pure equation around that. So what we try to spend our time doing is really making sure that we, as a team, are aligned on what is the most important—what are the most important assets to protect, what are the most critical risks that we’re trying to prevent—and then make sure that leadership is aligned with that, because, as we all know, there’s not unlimited resources, right? You can’t hire an unlimited number of folks or spend an unlimited amount of money, so you’re always trying to figure out how do you prioritize, and how do you find where is going to be the biggest impact for your value?”

Check out Jason’s full interview above, and stay tuned for further videos in this series. If you have an idea or a topic you’d like covered in this series, please drop us a comment below. Thanks!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Automate AWS Firewall Manager onboarding using AWS Centralized WAF and VPC Security Group Management solution

Post Syndicated from Satheesh Kumar original https://aws.amazon.com/blogs/security/automate-aws-firewall-manager-onboarding-using-aws-centralized-waf-and-vpc-security-group-management-solution/

Many customers—especially large enterprises—run workloads across multiple AWS accounts and in multiple AWS Regions. The AWS Firewall Manager service, launched in April 2018, enables customers to centrally configure and manage AWS WAF rules, audit Amazon VPC security group rules across accounts and applications in AWS Organizations, and protect resources against distributed denial of service (DDoS) attacks.

In this blog post, we show you how to onboard your accounts and your AWS Organizations into the AWS Firewall Manager service and start centrally managing the security policies of your AWS Organizations member accounts. We also show you examples of how to perform operations on the Firewall Manager policies after you’ve deployed the solution, so you can adjust your security posture over time.

As more and more customers began using Firewall Manager for centralized management, they gave us feedback on how it could be improved. We heard that the process of defining policies and configuring rule sets can be challenging and time consuming, especially in a multi-account, multi-region scenario. We built the AWS Centralized WAF and VPC Security Group Management solution to make it easier and faster.

Solution overview

The AWS Centralized WAF and VPC Security Group Management solution fully automates the deployment of Firewall Manager with a set of opinionated defaults for the policies. We call it “opinionated,” because there’s no set of security rules that is right for absolutely every customer. We’re providing an example opinion, but you might have a different opinion, given your unique circumstance. Our examples block traffic that you might have a reason to allow. We install the following policies when you deploy this solution:

  • AWS WAF global policy for Amazon CloudFront distributions and regional policy for Application Load Balancer and Amazon API Gateway: Both the AWS WAF policies include the following AWS Managed Rules for AWS WAF. You can further customize these rules to suit your WAF requirements.
    • AWSManagedRulesCommonRuleSet
    • AWSManagedRulesAdminProtectionRuleSet
    • AWSManagedRulesKnownBadInputsRuleSet
    • AWSManagedRulesSQLiRuleSet
  • Amazon VPC security group usage audit and content audit policy: These policies flag security groups that are unused or redundant. Automatic remediation is turned off by default, but you can turn it on by customizing the solution.
  • AWS Shield Advanced policy: If your account has enabled Shield Advanced, then Shield Advanced protection is enabled for CloudFront, Application Load Balancer, and Elastic IPs.

The core of the solution is the AWS CloudFormation template aws-centralized-waf-and-security-group-management. This template deploys the components shown in Figure 1:

Figure 1: Main solutions template – aws-centralized-waf-and-vpc-security-group-management.template

  1. After deployment, you can update the three AWS Systems Manager parameters—/FMS/OUs, /FMS/Regions, and /FMS/Tags—with appropriate values to control the scope and applicability of the Firewall Manager security policies.
  2. On update of the values in the Systems Manager parameters, the Amazon EventBridge rule captures the parameter update event (a sketch of such a rule follows this list).
  3. Amazon EventBridge triggers an AWS Lambda function to deploy the Firewall Manager policies.
  4. The AWS Lambda function deploys the Firewall Manager security policies across the OUs and Regions specified in step 1.
  5. The Lambda function updates the Amazon DynamoDB table with the Firewall Manager policy metadata.
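
As a rough illustration of step 2, the following boto3 sketch shows an EventBridge rule that matches Parameter Store change events for the three /FMS/ parameters and targets the policy manager Lambda function. The rule name, function ARN, and exact event pattern are assumptions for illustration; the pattern the solution actually deploys may differ.

import json
import boto3

events = boto3.client("events")

# Match Systems Manager Parameter Store change events for the three /FMS/ parameters
event_pattern = {
    "source": ["aws.ssm"],
    "detail-type": ["Parameter Store Change"],
    "detail": {"name": ["/FMS/OUs", "/FMS/Regions", "/FMS/Tags"]},
}

events.put_rule(
    Name="fms-parameter-change-rule",  # hypothetical rule name
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

events.put_targets(
    Rule="fms-parameter-change-rule",
    Targets=[{
        "Id": "fms-policy-manager",
        # hypothetical Lambda function ARN
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:PolicyManager",
    }],
)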

Configuring prerequisites automatically

There are a few important prerequisites that must be configured in your account before you deploy this solution. We have built a template called aws-fms-prereq that will launch the solution prerequisites.

When you execute the aws-fms-prereq template, it checks for and configures the Firewall Manager prerequisites for you, as described below.

Note: If the Firewall Manager prerequisites described above are already met, you can skip this step and go directly to the next step: Deploy the solution template.

If you run this prerequisite template in the Organization primary account that is also your Firewall Manager admin account, the solution template will be deployed automatically. If you do this, you can skip the step of deploying the solution template and jump straight to Manage your Firewall Manager security policies.

Figure 2 shows how the prerequisite template creates a Lambda function. That function validates and installs the prerequisites and AWS CloudFormation stack sets to enable AWS Config across all member accounts in the organization.

Figure 2: Solution prerequisites

Installing prerequisites

If the Firewall Manager prerequisites are already met, skip this step and go directly to the next step, below: Deploy the solution template.

  1. Install the AWS CLI

    Note: If you already have the AWS Command Line Interface v2 installed on your workstation, you can skip this step.

    The template deployment can be done using the AWS Management Console or the AWS CLI. This procedure uses the AWS CLI to do the deployment. To get started, install the AWS CLI and configure it with credentials that have the required IAM permissions to create resources.

  2. Deploy the prerequisite template

    If this is the first time you’re using Firewall Manager, download the aws-fms-prereq.template, and run the following AWS CLI command in the primary account of your AWS Organization to check for and complete the prerequisites.

    Replace the variable <your_FW_account_ID> with the account ID of your AWS Firewall Admin. This deployment typically takes 2–3 minutes, but sometimes a little longer if there are a large number of member accounts in your organization. If you want AWS Config to be enabled in your member accounts, then set the EnableConfig parameter to Yes; however, if AWS Config is already enabled, then set it to No.
    aws cloudformation create-stack \
    --stack-name fms-prereq-stack \
    --template-body file://aws-fms-prereq.template \
    --parameters ParameterKey=FMSAdmin,ParameterValue=<your_FW_account_ID> ParameterKey=EnableConfig,ParameterValue=Yes
    

Deploy the solution template

Deploy the solution using aws-centralized-waf-and-vpc-security-group-management.template—an AWS CloudFormation template—in your Firewall Manager admin account that was created by the prerequisite template.

This template deploys the AWS Centralized WAF and VPC Security Group Management solution shown in Figure 1, with all the resources and integrations.

The following AWS CLI command will deploy the solution template:

aws cloudformation create-stack \
--stack-name fms-central-policy-mgmt-stack \
--template-body file://aws-centralized-waf-and-vpc-security-group-management.template \
--capabilities CAPABILITY_IAM

The command prints very basic output, simply identifying the stack ID (ARN), as shown below.

{
 "StackId": "arn:aws:cloudformation:us-east-1:<your_FW_admin_account_ID>:stack/fms-central-policy-mgmt-stack/<stack ARN>"
}

You can run the following CLI command to check the status of the stack deployment. Once you see that “StackStatus” is set to “CREATE_COMPLETE” in the output, you can proceed to the next step.

aws cloudformation describe-stacks \
--stack-name fms-central-policy-mgmt-stack
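
If you’re scripting the deployment, the same check can be done with a boto3 waiter, which blocks until the stack reaches CREATE_COMPLETE (and raises an error if creation fails):

import boto3

cfn = boto3.client("cloudformation")

# Blocks until the stack reaches CREATE_COMPLETE; raises WaiterError on failure or rollback
cfn.get_waiter("stack_create_complete").wait(StackName="fms-central-policy-mgmt-stack")
print("Solution stack deployed")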

Manage your Firewall Manager security policies

Once the solution is deployed, you manage the Firewall Manager policies for the organization member accounts by updating values in the Parameter Store. These changes to the Parameter Store are picked up by the EventBridge rule, which triggers the Lambda function to automatically create, delete, or modify the policies as required.

Note: All of the following commands must be executed in the Firewall Manager admin account, which might not be the root account of your AWS Organization.

  1. Add OUs to the scope of Firewall Manager policies

    To begin, define the OUs that the Firewall Manager policies should apply to. Store the list of OUs in a parameter named /FMS/OUs. The following AWS CLI command stores a comma-separated list of OU IDs in the right parameter.
    aws ssm put-parameter \
     --name "/FMS/OUs" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_OU_IDs>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
      "Version": 2,
      "Tier": "Standard"
    }
    

  2. Add AWS Regions to the scope of Firewall Manager policies

    Next, you need to add the AWS Regions where you want these policies to be applied. In the AWS CLI command that follows, you can customize the AWS Region list depending on where you run your workloads. If, for example, you want to use us-west-2, us-east-1, and eu-west-1, then you need to provide us-west-2,us-east-1,eu-west-1 as your value in the command below.
    aws ssm put-parameter \
     --name "/FMS/Regions" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_regions>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
      "Version": 2,
      "Tier": "Standard"
    }
    

  3. (Optional) Add resource tags

    Please note that this is an optional step. Resource tags are a way to apply these Firewall Manager policies to some resources, but not others. Imagine that we only want to apply these policies to resources that have the tag Environment set to the value Prod. The following command will do that:
    aws ssm put-parameter \
     --name "/FMS/Tags" \
     --type "String" \
     --value "{\"ResourceTags\":[{\"Key\":\"Environment\",\"Value\":\"Prod\"}],\"ExcludeResourceTags\":false}" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
     "Version": 2,
     "Tier": "Standard"
    }
    

Test the solution

Test the solution by creating a global CloudFront distribution and an Amazon VPC security group in one of your member accounts, and configure these resources to be noncompliant with the policies enforced by Firewall Manager.

Deploy test resources in one of your member accounts

Run the following AWS CLI command to deploy the demo template in one of the member accounts to create a sample CloudFront distribution and an Amazon VPC security group that aren’t compliant with the default Firewall Manager policies.

aws cloudformation create-stack \
  --stack-name demo-stack \
  --template-body file://aws-fms-demo.template \
  --capabilities CAPABILITY_IAM

Check for web ACL and security group audit results

To check the web ACL resource association in the member account, follow these steps:

  1. Log in to the management console of the member account where you deployed the demo template, and go to the WAF & Shield service page.
  2. You will see the web ACL with the prefix “FMManagedWebACLV2FMS-WAF-01”. Select that web ACL.
  3. Go to the tab Associated AWS resources.
  4. You should see the newly created CloudFront distribution listed here.

To check the compliance status of the newly created security group, follow these steps:

  1. Log in to the management console of the Firewall Manager admin account and go to the AWS Firewall Manager service page.
  2. In the Security policies section, you will find the FMS-SecGroup-02 policy. Select the policy FMS-SecGroup-02.
  3. You will see that the member account (where the demo template was deployed) is marked as non-compliant. Select the member account number to see the newly created security group along with the reason for the noncompliant finding.

Clean-up test resources in member account

Follow these steps to clean up the test resources that the demo template created in your member account:

  1. Log in to the member account management console, and go to the CloudFront service page.
  2. Select the CloudFront distribution. (The origin will have the stack name as its prefix.)
  3. Go to the Behaviors tab, select the behavior, and then choose Edit.
  4. Remove the Lambda function association and save your changes.
  5. Run this CLI command to delete the stack that you had created earlier:
    aws cloudformation delete-stack \
      --stack-name demo-stack
    

Customizing the solution’s source code

This solution deploys Firewall Manager policies with some opinionated default rules defined in the manifest.json file. They probably aren’t all that you will need. You could customize these policies to fit your own needs, but it takes a bit of development skill. Also, the steps are too long to include here. If, for example, you want to add another AWS WAF rule group to the default AWS WAF security policy, you’ll need to edit the manifest and redeploy the solution. See how to do this customization and several others.

(Optional) Clean-up resources

Once you’ve tried the solution, if you want to clean up the stack you can do so in two steps:

  1. Go to the Firewall Manager admin account management console, navigate to the Parameter Store, and update the /FMS/OUs parameter with the value delete. This ensures that the Firewall Manager policies and their related resources are deleted.
  2. Run the following AWS CLI command in the Firewall Manager admin account to delete the solution stack you deployed earlier:
    aws cloudformation delete-stack --stack-name fms-central-policy-mgmt-stack
    

Conclusion

The AWS Centralized WAF and VPC Security Group Management solution addresses the feedback we heard from you, our customer. You told us the process of defining policies and configuring rule sets is challenging and time consuming, so we wrote this to make it easier and faster. In this post, we showed you how to deploy the solution following a two-step process using AWS CloudFormation. We also showed you ways that you can use the solution to easily manage your Firewall Manager policies by updating values in Systems Manager Parameter Store. Updating the values automates the deployment based on your choice of OUs, Regions, and resources. Finally, we showed you how to further customize the solution to suit your needs.

This post is just an introduction to what is possible and the overall objective. Before you start using this solution for anything important, we recommend that you review the solution implementation guide. It contains step-by-step directions and more example use cases. The guide also includes security recommendations and some cost estimates for the various supported scenarios.

To learn more about other AWS solutions, visit the AWS Solutions Library.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Satheesh Kumar

Satheesh is a Senior Solutions Architect based out of Bangalore, India. He helps large enterprise customers build solutions using AWS services.

Author

Ramanan Kannan

Ramanan is a Senior Solutions Architect and works with large enterprise customers in the financial domain. He is based out of Chennai, India.

Use AWS Firewall Manager to deploy protection at scale in AWS Organizations

Post Syndicated from Chamandeep Singh original https://aws.amazon.com/blogs/security/use-aws-firewall-manager-to-deploy-protection-at-scale-in-aws-organizations/

Security teams that are responsible for securing workloads in hundreds of Amazon Web Services (AWS) accounts in different organizational units aim for a consistent approach across AWS Organizations. Key goals include enforcing preventative measures to mitigate known security issues, having a central approach for notifying the SecOps team about potential distributed denial of service (DDoS) attacks, and continuing to maintain compliance obligations. AWS Firewall Manager works at the organizational level to help you achieve your intended security posture while it provides reporting for non-compliant resources in all your AWS accounts. This post provides step-by-step instructions to deploy and manage security policies across your AWS Organizations implementation by using Firewall Manager.

You can use Firewall Manager to centrally manage AWS WAF, AWS Shield Advanced, and Amazon Virtual Private Cloud (Amazon VPC) security groups across all your AWS accounts. Firewall Manager helps to protect resources across different accounts, and it can protect resources with specific tags or resources in a group of AWS accounts that are in specific organizational units (OUs). With AWS Organizations, you can centrally manage policies across multiple AWS accounts without having to use custom scripts and manual processes.

Architecture diagram

Figure 1 shows an example organizational structure in AWS Organizations, with several OUs that we’ll use in the example policy sets in this blog post.

Figure 1: AWS Organizations and OU structure

Figure 1: AWS Organizations and OU structure

Firewall Manager can be associated with either the AWS master payer account or one of the member AWS accounts that has appropriate permissions as a delegated administrator. Following the best practices for organizational units, in this post we use a dedicated Security Tooling AWS account (named Security in the diagram) to operate the Firewall Manager administrator deployment under the Security OU. The Security OU is used for hosting security-related access and services. The Security OU, its child OUs, and the associated AWS accounts should be owned and managed by your security organization.

Firewall Manager prerequisites

Firewall Manager has the following prerequisites that you must complete before you create and apply a Firewall Manager policy:

  1. AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. For more information, see Creating an organization and Enabling all features in your organization.
  2. A Firewall Manager administrator account: You must designate one of the AWS accounts in your organization as the Firewall Manager administrator for Firewall Manager. This gives the account permission to deploy security policies across the organization.
  3. AWS Config: You must enable AWS Config for all of the accounts in your organization so that Firewall Manager can detect newly created resources. To enable AWS Config for all of the accounts in your organization, use the Enable AWS Config template from the StackSets sample templates.

Deployment of security policies

In the following sections, we explain how to create AWS WAF rules, Shield Advanced protections, and Amazon VPC security groups by using Firewall Manager. We further explain how you can deploy these different policy types to protect resources across your accounts in AWS Organizations. Each Firewall Manager policy is specific to an individual resource type. If you want to enforce multiple policy types across accounts, you should create multiple policies. You can create more than one policy for each type. If you add a new account to an organization that you created with AWS Organizations, Firewall Manager automatically applies the policy to the resources in that account that are within scope of the policy. This is a scalable approach to assist you in deploying the necessary configuration when developers create resources. For instance, you can create an AWS WAF policy that will result in a known set of AWS WAF rules being deployed whenever someone creates an Amazon CloudFront distribution.

Policy 1: Create and manage security groups

You can use Firewall Manager to centrally configure and manage Amazon VPC security groups across all your AWS accounts in AWS Organizations. A previous AWS Security blog post walks you through how to apply common security group rules, audit your security groups, and detect unused and redundant rules in your security groups across your AWS environment.

Firewall Manager automatically audits new resources and rules as customers add resources or security group rules to their accounts. You can audit overly permissive security group rules, such as rules with a wide range of ports or Classless Inter-Domain Routing (CIDR) ranges, or rules that have enabled all protocols to access resources. To audit security group policies, you can use application and protocol lists to specify what’s allowed and what’s denied by the policy.

In this blog post, we use a security policy to audit the security groups for overly permissive rules and for high-risk applications that should only be open to local CIDR ranges (for example, 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). We created a custom application list named Bastion Host for port 22 and a custom protocol list named Allowed Protocol that allows the child account to create rules only for the TCP protocol. Refer to the Firewall Manager documentation for how to create a custom managed application list and protocol list, or see the sketch that follows this paragraph.
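
If you prefer to create those custom lists programmatically instead of through the console, the following boto3 sketch shows roughly what that could look like when run from the Firewall Manager delegated administrator account. The list names mirror the ones used in this walkthrough, and the individual entries are illustrative.

import boto3

fms = boto3.client("fms", region_name="ap-southeast-2")

# Custom application list: flag SSH (port 22) bastion-style access
fms.put_apps_list(
    AppsList={
        "ListName": "Bastion Host",
        "AppsList": [
            {"AppName": "ssh", "Protocol": "tcp", "Port": 22},
        ],
    }
)

# Custom protocol list: treat only TCP as an allowed protocol
fms.put_protocols_list(
    ProtocolsList={
        "ListName": "Allowed Protocol",
        "ProtocolsList": ["TCP"],
    }
)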

To create audit security group policies

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, select Security policies.
  2. For Region, select the AWS Region where you would like to protect the resources. You can select the Region from the drop-down list on the Firewall Manager service page. In this example, we selected the Sydney (ap-southeast-2) Region because all of our resources are in the Sydney Region.
  3. Create the policy, and in Policy details, choose Security group. For Region, select a Region (we selected Sydney (ap-southeast-2)), and then choose Next.
  4. For Security group policy type, choose Auditing and enforcement of security group rules, and then choose Next.
  5. Enter a policy name. We named our policy AWS_FMS_Audit_SecurityGroup.
  6. For Policy rule options, for this example, we chose Configure managed audit policy rules.
  7. Under Policy rules, choose the following:
    1. For Security group rules to audit, choose Inbound Rules.
    2. For Rules, select the following:
      1. Select Audit over permissive security group rules.
        • For Allowed security group rules, choose Add Protocol list and select the custom protocol list Allowed Protocols that we created earlier.
        • For Denied security group rules, select Deny rules with the allow ‘ALL’ protocol.
      2. Select Audit high risk applications.
        • Choose Applications that can only access local CIDR ranges. Then choose Add application list and select the custom application list Bastion host that we created earlier.
  8. For Policy action, for the example in this post, we chose Auto remediate any noncompliant resources. Choose Next.

    Figure 2: Policy rules for the security group audit policy

  9. For Policy scope, choose the following options for this example:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational unit. For Included Organizational units, select OU (example – Non-Prod Accounts).
    2. For Resource type, select EC2 Instance, Security Group, and Elastic Network Interface.
    3. For Resources, choose Include all resources that match the selected resource type.
  10. You can create tags for the security policy. In the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to Audit_Security_group.

Important: Migrating AWS accounts from one organizational unit to another won’t remove or detach the existing security group policy applied by Firewall Manager. For example, in the reference architecture in Figure 1 we have the AWS account Tenant-5 under the Staging OU. We’ve created a different Firewall Manager security group policy for the Pre-Prod OU and Prod OU. If you move the Tenant-5 account to Prod OU from Staging OU, the resources associated with Tenant-5 will continue to have the security group policies that are defined for both Prod and Staging OU unless you select otherwise before relocating the AWS account. Firewall Manager supports the detach option in case of policy deletion, because moving accounts across the OU may have unintended impacts such as loss of connectivity or protection, and therefore Firewall Manager won’t remove the security group.

Policy 2: Managing AWS WAF v2 policy

A Firewall Manager AWS WAF policy contains the rule groups that you want to apply to your resources. When you apply the policy, Firewall Manager creates a Firewall Manager web access control list (web ACL) in each account that’s within the policy scope.

Note: Creating an Amazon Kinesis Data Firehose delivery stream in us-east-1 (for example, aws-waf-logs-lab-waf-logs) is a prerequisite for configuring web ACL logging in step 8 below.
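
A minimal boto3 sketch for creating that delivery stream follows; the IAM role ARN and destination bucket ARN are placeholders you would replace, and the stream name must begin with aws-waf-logs- for AWS WAF logging to accept it.

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="aws-waf-logs-lab-waf-logs",  # WAF logging requires the aws-waf-logs- prefix
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        # Placeholder role and bucket ARNs -- replace with resources from your logging account
        "RoleARN": "arn:aws:iam::111122223333:role/waf-logs-firehose-role",
        "BucketARN": "arn:aws:s3:::your-waf-logs-bucket",
    },
)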

To create a Firewall Manager – AWS WAF v2 policy

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, choose Security policies.
  2. For Region, select a Region from the drop-down list on the Firewall Manager service page. For this example, we selected Global, because the policy protects CloudFront resources.
  3. Create the policy. Under Policy details, choose AWS WAF and for Region, choose Global. Then choose Next.
  4. Enter a policy name. We named our policy AWS_FMS_WAF_Rule.
  5. On the Policy rule page, under Web ACL configuration, add rule groups. AWS WAF supports custom rule groups (the customer creates the rules), AWS Managed Rules rule groups (AWS manages the rules), and AWS Marketplace managed rule groups. For this example, we chose AWS Managed Rules rule groups.
  6. For this example, for First rule groups, we chose the AWS Managed Rules rule group, AWS Core rule set. For Last rule groups, we chose the AWS Managed Rules rule group, Amazon IP reputation list.
  7. For Default web ACL action for requests that don’t match any rules in the web ACL, choose a default action. We chose Allow.
  8. Firewall Manager enables logging for a specific web ACL. This logging is applied to all the in-scope accounts and delivers the logs to a centralized single account. To enable centralized logging of AWS WAF logs:
    1. For Logging configuration status, choose Enabled.
    2. For IAM role, Firewall Manager creates an AWS WAF service-role for logging. Your security account should have the necessary IAM permissions. Learn more about access requirements for logging.
    3. Select the Kinesis Data Firehose delivery stream that you created earlier, named aws-waf-logs-lab-waf-logs, in us-east-1, because we’re using CloudFront as a resource in the policy.
    4. For Redacted fields, for this example select HTTP method, Query String, URI, and Header. You can also add a new header. For more information, see Configure logging for an AWS Firewall Manager AWS WAF policy.
  9. For Policy action, for this example, we chose Auto remediate any noncompliant resources. To replace the existing web ACL that is currently associated with the resource, select Replace web ACLs that are currently associated with in-scope resources with the web ACLs created by this policy. Choose Next.

    Note: If a resource is already associated with another web ACL that is managed by a different active Firewall Manager policy, this policy doesn’t affect that resource.

    Figure 3: Policy rules for the AWS WAF security policy

  10. For Policy scope, choose the following options for this example:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational unit. For Included organizational units, select OU (example – Pre-Prod Accounts).
    2. For Resource type, choose CloudFront distribution.
    3. For Resources, choose Include all resources that match the selected resource type.
  11. You can create tags for the security policy. For the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to WAF_Policy.
  12. Review the security policy, and then choose Create Policy.

    Note: For the AWS WAF v2 policy, the web ACL pushed by Firewall Manager can’t be modified in the individual account. The account owner can only add a new rule group.

  13. In the policy’s first and last rule group sets, you can add additional rule groups at the linked AWS account level to provide additional security based on application requirements. You can use managed rule groups, which AWS Managed Rules and AWS Marketplace sellers create and maintain for you. For example, you can use the WordPress application rule group, which contains rules that block request patterns associated with the exploitation of vulnerabilities specific to a WordPress site. You can also manage and use your own rule groups. For more information about all of these options, see Rule groups. Another example could be using a rate-based rule that tracks the rate of requests for each originating IP address and triggers the rule action on IPs with rates that go over a limit. Learn more about rate-based rules.

Policy 3: Managing AWS Shield Advanced policy

AWS Shield Advanced is a paid service that provides additional protections for internet-facing applications. If you have Business or Enterprise support, you can engage the 24/7 AWS DDoS Response Team (DRT), who can write rules on your behalf to mitigate Layer 7 DDoS attacks. Refer to Shield Advanced pricing for more information before proceeding with the Shield Advanced Firewall Manager policy.

After you complete the prerequisites that were outlined in the prerequisites section, you’ll create a Shield Advanced policy that contains the accounts and resources that you want to protect with Shield Advanced. The purpose of this policy is to activate Shield Advanced in the accounts within the OU’s scope and to add the selected resources to the Shield Advanced protection list.

To create a Firewall Manager – Shield Advanced policy

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, choose Security policies.
  2. For Region, select the AWS Region where you would like to protect the resources. You can select the Region from the drop-down list on the Firewall Manager service page. In this post, we’ve selected the Sydney (ap-southeast-2) Region because all of our resources are in the Sydney Region.

    Note: To protect CloudFront resources, select the Global option.

  3. Create the policy, and in Policy details, choose AWS Shield Advanced. For Region, select a Region (example – ap-southeast-2), and then choose Next.
  4. Enter a policy name. We named our policy AWS_FMS_ShieldAdvanced Rule.
  5. For Policy action, for the example in this post, we chose Auto remediate any non-compliant resources. Alternatively, if you choose Create but do not apply this policy to existing or new resources, Firewall Manager doesn’t apply Shield Advanced protection to any resources. You must apply the policy to resources later. Choose Next.
  6. For Policy scope, this example uses the OU structure as the container of multiple accounts with similar requirements:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational units. For Included organizational units, select OU (example – Staging Accounts OU).
    2. For Resource type, select Application Load Balancer and Elastic IP.
    3. For Resources, choose Include all resources that match the selected resource type.
      Figure 4: Policy scope page for creating the Shield Advanced security policy

      Note: If you want to protect only the resources with specific tags, or alternatively exclude resources with specific tags, choose Use tags to include/exclude resources, enter the tags, and then choose either Include or Exclude. Tags enable you to categorize AWS resources in different ways, for example by indicating an environment, owner, or team to include or exclude in Firewall Manager policy. Firewall Manager combines the tags with “AND” so that, if you add more than one tag to a policy scope, a resource must have all the specified tags to be included or excluded.

      Important: Shield Advanced supports protection for Amazon Route 53 and AWS Global Accelerator. However, protection for these resources cannot be deployed with the help of Firewall Manager security policy at this time. If you need to protect these resources with Shield Advanced, you should use individual AWS account access through the API or console to activate Shield Advanced protection for the intended resources.

  7. You can create tags for the security policy. In the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to Shield_Advanced_Policy. You can use the tags in the Resource element of IAM permission policy statements to either allow or deny users to make changes to security policy.
  8. Review the security policy, and then choose Create Policy.

Now you’ve successfully created a Firewall Manager security policy. Using the organizational units in AWS Organizations as a method to deploy the Firewall Manager security policy, when you add an account to the OU or to any of its child OUs, Firewall Manager automatically applies the policy to the new account.

Important: You don’t need to manually subscribe Shield Advanced on the member accounts. Firewall Manager subscribes Shield Advanced on the member accounts as part of creating the policy.

Operational visibility and compliance report

Firewall Manager offers a centralized incident notification for DDoS incidents that are reported by Shield Advanced. You can create an Amazon SNS topic to monitor the protected resources for potential DDoS activities and send notifications accordingly. Learn how to create an SNS topic. If you have resources in different Regions, the SNS topic needs to be created in the intended Region. You must perform this step from the Firewall Manager delegated AWS account (for example, Security Tooling) to receive alerts across your AWS accounts in that organization.

As a best practice, you should set up notifications for all the Regions where you have a production workload under Shield Advanced protection.
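
The console steps below create a single topic; as a sketch, you could also create a topic and an email subscription in each relevant Region with boto3, as shown here. The Region list, topic name, and email address are examples only.

import boto3

# Add every Region where you have production workloads under Shield Advanced protection
regions = ["ap-southeast-2", "us-east-1"]

for region in regions:
    sns = boto3.client("sns", region_name=region)
    # create_topic returns the existing topic ARN if the name is already in use
    topic_arn = sns.create_topic(Name="fms-ddos-alerts")["TopicArn"]  # example topic name
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="secops@example.com")
    print(f"Topic ready in {region}: {topic_arn}")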

To create an SNS topic in the Firewall Manager administrative console

  1. In the AWS Management Console, sign in to the Security Tooling account or the AWS Firewall Manager delegated administrator account. In the left navigation pane, under AWS Firewall Manager, choose Settings.
  2. Select the SNS topic that you created earlier to be used for the Firewall Manager central notification mechanism. For this example, we created a new SNS topic in the Sydney Region (ap-southeast-2) named SNS_Topic_Syd.
  3. For Recipient email address, enter the email address that the SNS topic will be sent to. Choose Configure SNS configuration.

After you create the SNS configuration, you can see the SNS topic in the appropriate Region, as in the following example.

Figure 5: An SNS topic for centralized incident notification

AWS Shield Advanced records metrics in Amazon CloudWatch to monitor the protected resources and can also create Amazon CloudWatch alarms. For simplicity, we used email notification for this example. In a security operations environment, you should integrate the SNS notification with your existing ticketing system or paging service for real-time response.

Important: You can also use the CloudWatch dashboard to monitor potential DDoS activity. It collects and processes raw data from Shield Advanced into readable, near real-time metrics.

You can automatically enforce policies on AWS resources that currently exist or are created in the future, in order to promote compliance with firewall rules across the organization. For all policies, you can view the compliance status for in-scope accounts and resources by using the API or AWS Command Line Interface (AWS CLI) method. For content audit security group policies, you can also view detailed violation information for in-scope resources. This information can help you to better understand and manage your security risk.

View all the policies in the Firewall Manager administrative account

For our example, we created three security policies in the Firewall Manager delegated administrator account. We can check policy compliance status for all three policies by using the AWS Management Console, AWS CLI, or API methods. The AWS CLI example that follows can be further extended to build an automation for notifying the non-compliant resource owners.

To list all the policies in FMS

 aws fms list-policies --region ap-southeast-2
{
    "PolicyList": [
        {
            "PolicyName": "WAFV2-Test2", 
            "RemediationEnabled": false, 
            "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
            "PolicyArn": "arn:aws:fms:ap-southeast-2:222222222222:policy/78edcc79-c0b1-46ed-b7b9-d166b9fd3b58", 
            "SecurityServiceType": "WAFV2", 
            "PolicyId": "78edcc79-c0b1-46ed-b7b9-d166b9fd3b58"
        },
        {
            "PolicyName": "AWS_FMS_Audit_SecurityGroup", 
            "RemediationEnabled": true, 
            "ResourceType": "ResourceTypeList", 
            "PolicyArn": "arn:aws:fms:ap-southeast-2:<Account-Id>:policy/d44f3f38-ed6f-4af3-b5b3-78e9583051cf", 
            "SecurityServiceType": "SECURITY_GROUPS_CONTENT_AUDIT", 
            "PolicyId": "d44f3f38-ed6f-4af3-b5b3-78e9583051cf"
        }
    ]
}

Now that we have the policy ID, we can check the compliance status:

aws fms list-compliance-status --policy-id 78edcc79-c0b1-46ed-b7b9-d166b9fd3b58
{
    "PolicyComplianceStatusList": [
        {
            "PolicyName": "WAFV2-Test2", 
            "PolicyOwner": "222222222222", 
            "LastUpdated": 1601360994.0, 
            "MemberAccount": "444444444444", 
            "PolicyId": "78edcc79-c0b1-46ed-b7b9-d166b9fd3b58", 
            "IssueInfoMap": {}, 
            "EvaluationResults": [
                {
                    "ViolatorCount": 0, 
                    "EvaluationLimitExceeded": false, 
                    "ComplianceStatus": "COMPLIANT"
                }
            ]
        }
    ]
}

For the preceding policy, the associated member account 444444444444 is compliant. The following example shows the status for the second policy.

aws fms list-compliance-status --policy-id 44c0b677-e7d4-4d8a-801f-60be2630a48d
{
    "PolicyComplianceStatusList": [
        {
            "PolicyName": "AWS_FMS_WAF_Rule", 
            "PolicyOwner": "222222222222", 
            "LastUpdated": 1601361231.0, 
            "MemberAccount": "555555555555", 
            "PolicyId": "44c0b677-e7d4-4d8a-801f-60be2630a48d", 
            "IssueInfoMap": {}, 
            "EvaluationResults": [
                {
                    "ViolatorCount": 3, 
                    "EvaluationLimitExceeded": false, 
                    "ComplianceStatus": "NON_COMPLIANT"
                }
            ]
        }
    ]
}

For the preceding policy, the associated member account 555555555555 is non-compliant.

To provide detailed compliance information about the specified member account, the output includes resources that are in and out of compliance with the specified policy, as shown in the following example.

aws fms get-compliance-detail --policy-id 44c0b677-e7d4-4d8a-801f-60be2630a48d --member-account 555555555555
{
    "PolicyComplianceDetail": {
        "Violators": [
            {
                "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
                "ResourceId": "arn:aws:elasticloadbalancing:ap-southeast-2: 555555555555:loadbalancer/app/FMSTest2/c2da4e99d4d13cf4", 
                "ViolationReason": "RESOURCE_MISSING_WEB_ACL"
            }, 
            {
                "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
                "ResourceId": "arn:aws:elasticloadbalancing:ap-southeast-2:555555555555:loadbalancer/app/fmstest/1e70668ce77eb61b", 
                "ViolationReason": "RESOURCE_MISSING_WEB_ACL"
            }
        ], 
        "EvaluationLimitExceeded": false, 
        "PolicyOwner": "222222222222", 
        "ExpiredAt": 1601362402.0, 
        "MemberAccount": "555555555555", 
        "PolicyId": "44c0b677-e7d4-4d8a-801f-60be2630a48d", 
        "IssueInfoMap": {}
    }
}

In the preceding example, two Application Load Balancers (ALBs) are not associated with a web ACL. You can further introduce automation by using AWS Lambda functions to isolate the non-compliant resources or trigger an alert for the account owner to launch manual remediation.
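
The following is a rough sketch of that kind of automation: a scheduled Lambda function in the Firewall Manager administrator account that looks up violators for a policy and publishes an alert. The policy ID, Region, and SNS topic ARN are placeholders.

import boto3

fms = boto3.client("fms", region_name="ap-southeast-2")
sns = boto3.client("sns", region_name="ap-southeast-2")

POLICY_ID = "44c0b677-e7d4-4d8a-801f-60be2630a48d"                    # placeholder policy ID
SNS_TOPIC_ARN = "arn:aws:sns:ap-southeast-2:111122223333:fms-alerts"  # placeholder topic ARN

def lambda_handler(event, context):
    # Find every member account that is out of compliance with the policy
    statuses = fms.list_compliance_status(PolicyId=POLICY_ID)["PolicyComplianceStatusList"]
    for status in statuses:
        account = status["MemberAccount"]
        detail = fms.get_compliance_detail(PolicyId=POLICY_ID, MemberAccount=account)
        violators = detail["PolicyComplianceDetail"]["Violators"]
        if violators:
            resources = ", ".join(v["ResourceId"] for v in violators)
            sns.publish(
                TopicArn=SNS_TOPIC_ARN,
                Subject=f"Firewall Manager non-compliance in account {account}",
                Message=f"Non-compliant resources: {resources}",
            )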

Resource clean-up

You can delete a Firewall Manager policy by performing the following steps.

To delete a policy (console)

  1. In the navigation pane, choose Security policies.
  2. Choose the option next to the policy that you want to delete. We created three policies, which need to be removed one by one.
  3. Choose Delete.

Important: When you delete a Firewall Manager Shield Advanced policy, the policy is deleted, but your accounts remain subscribed to Shield Advanced.

Conclusion

In this post, you learned how you can use Firewall Manager to enforce required preventative policies from a central delegated AWS account managed by your security team. You can extend this strategy to all AWS OUs to meet your future needs as new AWS accounts or resources get added to AWS Organizations. A central notification delivery to your Security Operations team is crucial from a visibility perspective, and with the help of Firewall Manager you can build a scalable approach to stay protected, informed, and compliant. Firewall Manager simplifies your AWS WAF, AWS Shield Advanced, and Amazon VPC security group administration and maintenance tasks across multiple accounts and resources.

For further reading and updates, see the Firewall Manager Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chamandeep Singh

Chamandeep is a Senior Technical Account Manager and member of the Global Security field team at AWS. He works with financial sector enterprise customers to support operations and security, and also designs scalable cloud solutions. He currently lives in Australia and enjoys travelling around the world.

Author

Prabhakaran Thirumeni

Prabhakaran is a Cloud Architect with AWS, specializing in network security and cloud infrastructure. His focus is helping customers design and build solutions for their enterprises. Outside of work he stays active with badminton, running, and exploring the world.

How to automatically archive expected IAM Access Analyzer findings

Post Syndicated from Josh Joy original https://aws.amazon.com/blogs/security/how-to-automatically-archive-expected-iam-access-analyzer-findings/

AWS Identity and Access Management (IAM) Access Analyzer continuously monitors your Amazon Web Services (AWS) resource-based policies for changes in order to identify resources that grant public or cross-account access from outside your AWS account or organization. Access Analyzer findings include detailed information that you can use to make an informed decision about whether access to the shared resource was intended or not. The findings information includes the affected AWS resource, the external principal that has access, the condition from the policy statement that grants the access, and the access level, such as read, write, or the ability to modify permissions.

In this blog post, we show you how to automatically archive Access Analyzer findings for expected events, such as authorized resource access. The benefit of automatically archiving expected findings is to help you reduce distraction from findings that don’t require action, enabling you to concentrate on remediating any unexpected access to your shared resources.

Access Analyzer provides you with the ability to archive findings that show intended cross-account sharing of your AWS resources. The AWS service-provided archive mechanism provides you with built-in archive rules that can automatically archive new findings that meet the criteria you define (such as directive controls). For example, your organizational access controls might allow your auditor to have read-only IAM role cross-account access from your security account into all of your accounts. In this security auditor scenario, you can define a built-in archive rule to automatically archive the findings related to the auditor cross-account IAM role that has authorized read-only access.

A limitation of the built-in archive rules is that they are static and based only on simple pattern matching. To build your own custom archiving logic, you can create an AWS Lambda function that listens to Amazon CloudWatch Events. Access Analyzer forwards all findings to CloudWatch Events, and you can easily configure a CloudWatch Events rule to trigger a Lambda function for each Access Analyzer finding. For example, if you want to look up the tags on a resource, you can make an AWS API call based on the Amazon Resource Name (ARN) for the resource in your Lambda function. As another example, you might want to compute an overall risk score based on the various parts of a finding and archive everything below a certain threshold score that you define.
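
As a sketch of that wiring, the following boto3 snippet creates a CloudWatch Events (EventBridge) rule that routes Access Analyzer findings to a Lambda function. The rule name and function ARN are placeholders, and in practice you would also grant EventBridge permission to invoke the function.

import json
import boto3

events = boto3.client("events")

# Match findings emitted by IAM Access Analyzer
event_pattern = {
    "source": ["aws.access-analyzer"],
    "detail-type": ["Access Analyzer Finding"],
}

events.put_rule(
    Name="access-analyzer-findings-rule",  # hypothetical rule name
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

events.put_targets(
    Rule="access-analyzer-findings-rule",
    Targets=[{
        "Id": "finding-handler",
        # hypothetical Lambda function ARN
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:FindingHandler",
    }],
)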

In this blog post, we show you how to configure a built-in archive rule, how to add context enrichment for more complex rules, and how to trigger an alert for unintended findings. We first cover the scenario of the auditor role using a built-in archive rule. Then, we show how to perform automated archive remediation by using CloudWatch Events with AWS Step Functions to add context enrichment and automatically remediate the authorized sharing of a cross-account AWS Key Management Service (AWS KMS) key. Finally, we show how to trigger alerts for the unintended sharing of a public Amazon Simple Storage Service (Amazon S3) bucket.

Prerequisites

The solution we give here assumes that you have Access Analyzer enabled in your AWS account. You can find more details about enabling Access Analyzer in the Getting Started guide for that feature. Access Analyzer is available at no additional cost in the IAM console and through APIs in all commercial AWS Regions. Access Analyzer is also available through APIs in the AWS GovCloud (US) Regions.

How to use the built-in archive rules

In our first example, there is a security auditor cross-account IAM role that can be assumed by security automation tools from the central security AWS account. We use the built-in archive rules to automatically archive cross-account findings related to the cross-account security auditor IAM role.

To create a built-in archive rule

  1. In the AWS Management Console, choose Identity and Access Management (IAM). On the dashboard, choose Access Analyzer, and then choose Archive rules.
  2. Choose the Create archive rule button.
     
    Figure 1: Create archive rule

  3. You can select archive rule criteria based on your use case. For this example, in the search box, choose AWS Account as the criteria, since we want to automatically archive the security auditor account.
     
    Figure 2: Select archive rule criteria

  4. You can now enter the value for the selected criteria. In this case, for Criteria, choose AWS Account, choose the equals operator, and then enter the AWS account ID of your central security account.
  5. After you’ve entered your criteria, choose the Create archive rule button.
     
    Figure 3: Finish creating the archive rule

    You should see a message confirming that you’ve successfully created a new archive rule.
     

    Figure 4: Successful creation of a new archive rule
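
If you prefer to create the same archive rule programmatically rather than through the console, here is a minimal boto3 sketch. The analyzer name, rule name, and account ID are placeholders, and the filter key principal.AWS is the one Access Analyzer uses for AWS account principals; confirm it against the finding filter key documentation before relying on it.

import boto3

access_analyzer = boto3.client("accessanalyzer")

# Archive any new finding whose external principal is the central security account
access_analyzer.create_archive_rule(
    analyzerName="my-account-analyzer",        # placeholder analyzer name
    ruleName="ArchiveSecurityAuditorAccess",   # placeholder rule name
    filter={
        "principal.AWS": {"eq": ["111122223333"]},  # placeholder security account ID
    },
)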

How to automatically archive expected findings

We now show you how to automatically archive expected findings by using a serverless workflow that you define by using AWS Step Functions. We show you how to leverage Step Functions to enrich an Access Analyzer finding, evaluate the finding against your customized rule engine logic, and finally either archive the finding or send a notification. A CloudWatch Event Rule will trigger the Step Functions workflow when Access Analyzer generates a new finding.

Solution architecture – serverless workflow

The CloudWatch event bus delivers the Access Analyzer findings to the Step Functions workflow. The Step Functions workflow responds to each Access Analyzer finding and either archives the finding for authorized access or sends an Amazon Simple Notification Service (Amazon SNS) email notification for an unauthorized access finding, as shown in Figure 5.
 

Figure 5: Solution architecture for automatic archiving

The Step Functions workflow enriches the finding and provides contextual information to the rules engine for evaluation, as shown in Figure 6. The Access Analyzer finding is either archived or generates an alert, based on the result of the rules engine evaluation and the associated risk level. If you’re interested in remediating the finding, you can learn more by watching the talk AWS re:Invent 2019: [NEW LAUNCH!] Dive Deep into IAM Access Analyzer (SEC309).
 

Figure 6: Finding analysis and archival

This example uses four Lambda functions. One function is for context enrichment, a second function is for rule evaluation logic, a third function is to archive expected findings, and finally a fourth function is to send a notification for findings that require investigation by your security operations team.

First, the enrichment Lambda function retrieves the tags associated with the AWS resource. The following code example retrieves the S3 bucket tags.

import boto3

def lookup_s3_tags(resource_arn):
  # The S3 API expects the bucket name, not the ARN returned in the finding
  bucket_name = resource_arn.split(":::")[-1]

  s3_client = boto3.client("s3")
  bucket_tags = s3_client.get_bucket_tagging(Bucket=bucket_name)["TagSet"]

  return bucket_tags

The Lambda function can perform additional enrichment beyond looking up tags, such as looking up the AWS KMS key alias, as shown in the next code example.

def additional_enrichment(resource_type, resource_arn):
  additional_context = {}

  if resource_type == "AWS::KMS::Key":
    kms_client = boto3.client("kms")
    aliases = kms_client.list_aliases(KeyId=resource_arn)["Aliases"]
    additional_context["key_aliases"] = [alias["AliasName"] for alias in aliases]

  return additional_context

Next, the evaluation rule Lambda function determines whether the finding is authorized and can be archived, or whether it's unauthorized and a notification needs to be generated. In this example, we first check whether the resource is shared publicly and immediately alert on any unexpected public sharing. In particular, we don't allow public sharing of resources that are tagged Confidential: the example is_allowed_public method checks whether the Data Classification tag is set to Confidential and, if so, returns False so that a notification is triggered.

We also allow cross-account sharing of a KMS key in the development environment when the key has the tag key IsAllowedToShare with the value true, the tag key Environment with the value development, and a key alias of DevelopmentKey.

# Evaluate Risk Level
# Return True to raise an alert if the risk level exceeds the threshold
# Return False to archive the finding
def should_raise_alert(finding_details, tags, additional_context):
  if (
      finding_details["isPublic"]
      and not is_allowed_public(finding_details, tags, additional_context)
     ):
    return True
  elif (
        tags.get("IsAllowedToShare") == "true"
        and tags.get("Environment") == "development"
        and "DevelopmentKey" in additional_context.get("key_aliases", [])
    ):
    return False

  return True

def is_allowed_public(finding_details, tags, additional_context):
  # customize your logic here
  # for example, if Data Classification is Confidential, return False for no public access
  if tags.get("Data Classification") == "Confidential":
    return False

  return True

# Illustrative wrapper: the evaluation Lambda returns this status, which the
# Step Functions Choice state uses to pick the archive or notification step
def evaluate_finding(finding_details, tags, additional_context):
  if should_raise_alert(finding_details, tags, additional_context):
    return {"status": "NOTIFY"}
  else:
    return {"status": "ARCHIVE"}

We then use the Choice condition to trigger either the archive or notification step.

 next(sfn.Choice(self, "Archive?"). \
  when(sfn.Condition.string_equals("$.guid.status", "ARCHIVE"), archive_task). \
  when(sfn.Condition.string_equals("$.guid.status", "NOTIFY"), notification_task) \
 )

The archive Lambda step archives the Access Analyzer finding when the rules engine evaluates it as expected access.

def archive_finding(finding_id, analyzer_arn):
  access_analyzer_client = boto3.client("accessanalyzer")
  access_analyzer_client.update_findings(
    analyzerArn=analyzer_arn,
    ids=[finding_id],
    status="ARCHIVED"
  )

Otherwise, we raise an SNS notification because there is unauthorized resource sharing.

  resource_type = event["detail"]["resourceType"]
  resource_arn = event["detail"]["resource"]

  sns_client = boto3.client("sns")
  sns_client.publish(
    TopicArn=sns_topic_arn,
    Message=f"Alert {resource_type} {resource_arn} exceeds risk level.",
    Subject="Alert Access Analyzer Finding"
  )

Solution deployment

You can deploy the solution through either the AWS Management Console or the AWS Cloud Development Kit (AWS CDK).

Prerequisites

Make sure that Access Analyzer is enabled in your AWS account. You can find an AWS CloudFormation template for enabling it in the GitHub repository. You can also enable Access Analyzer across your organization by using the AWS CloudFormation StackSets scripts in the same repository. See more details in the blog post Enabling AWS IAM Access Analyzer on AWS Control Tower accounts.
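If you prefer to enable Access Analyzer programmatically rather than through CloudFormation, the following is a minimal boto3 sketch; the analyzer name is an arbitrary placeholder.

import boto3

access_analyzer = boto3.client("accessanalyzer")

# Create an account-level analyzer; use type="ORGANIZATION" from a
# delegated administrator account to cover the whole organization
access_analyzer.create_analyzer(
    analyzerName="account-analyzer",
    type="ACCOUNT",
)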

To deploy the solution by using the AWS Management Console

  1. In your security account, launch the template by choosing the following Launch Stack button.
     
    Select the Launch Stack button to launch the template
  2. Provide the following parameter for the security account:
    EmailSubscriptionParameter: The email address to receive subscription notifications for any findings that exceed your defined risk level.

To deploy the solution by using the AWS CDK

Additionally, you can find the latest code on GitHub, where you can also contribute to the sample code. The following commands show how to deploy the solution by using the AWS Cloud Development Kit (AWS CDK). First, bootstrap your environment, which provisions the Amazon S3 bucket used to stage the Lambda assets. Then, deploy the solution to your account.

cdk bootstrap

cdk deploy --parameters EmailSubscriptionParameter=YOUR_EMAIL_ADDRESS_HERE

To test the solution

  1. Create a KMS key whose key policy grants access to another AWS account (a cross-account key). You should receive an email notification after several minutes.
  2. Create another cross-account KMS key, this time with the tags IsAllowedToShare=true and Environment=development. Also, create a KMS key alias named alias/DevelopmentKey for this key (see the boto3 sketch after this list for a scripted version of this step). After a few seconds, you should see that the finding was automatically archived.
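For reference, here is a minimal boto3 sketch of test step 2. The account IDs and key policy are placeholders that you must replace with your own values; the cross-account principal is what causes Access Analyzer to generate a finding.

import json
import boto3

kms = boto3.client("kms")

# Placeholder account IDs: 111122223333 is the key's own account,
# 444455556666 is the external account that triggers the finding
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnableRootAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowExternalAccount",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": ["kms:Encrypt", "kms:Decrypt"],
            "Resource": "*",
        },
    ],
}

# The tags and alias match the rule evaluation logic shown earlier
key = kms.create_key(
    Policy=json.dumps(key_policy),
    Tags=[
        {"TagKey": "IsAllowedToShare", "TagValue": "true"},
        {"TagKey": "Environment", "TagValue": "development"},
    ],
)
kms.create_alias(
    AliasName="alias/DevelopmentKey",
    TargetKeyId=key["KeyMetadata"]["KeyId"],
)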

Summary

In this blog post, we showed you how IAM Access Analyzer can help you identify resources in your organization and accounts that are shared with an external identity. We explained how to automatically archive expected findings by using the built-in archive rules. We then walked you through a serverless workflow, built with AWS Step Functions, that performs context enrichment and automatically archives your findings for expected shared resources.

After you follow the steps in this blog post for automatic archiving, you will only receive Access Analyzer findings for unexpected AWS resource sharing. A good way to manage these unexpected Access Analyzer findings is with AWS Security Hub, alongside your other findings. Visit Getting started with AWS Security Hub to learn more. You can also see the blog post Automated Response and Remediation with AWS Security Hub for event patterns and remediation code examples.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Josh Joy

Josh is a Security Consultant with the AWS Global Security Practice, a part of our Worldwide Professional Services Organization. Josh helps customers improve their security posture as they migrate their most sensitive workloads to AWS. Josh enjoys diving deep and working backwards in order to help customers achieve positive outcomes.

Author

Andrew Gacek

Andrew is a Principal Applied Scientist in the Automated Reasoning Group at Amazon. He designs analyses to ensure the safety and security of AWS customer configurations. Prior to joining Amazon, Andrew worked at Rockwell Collins where he used automated reasoning to verify aerospace applications. He holds a PhD in Computer Science from the University of Minnesota.

How to add authentication to a single-page web application with Amazon Cognito OAuth2 implementation

Post Syndicated from George Conti original https://aws.amazon.com/blogs/security/how-to-add-authentication-single-page-web-application-with-amazon-cognito-oauth2-implementation/

In this post, I’ll be showing you how to configure Amazon Cognito as an OpenID provider (OP) with a single-page web application.

This use case describes using Amazon Cognito to integrate with an existing authorization system following the OpenID Connect (OIDC) specification. OIDC is an identity layer on top of the OAuth 2.0 protocol to enable clients to verify the identity of users. Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Some key reasons customers select Amazon Cognito include:

  • Simplicity of implementation: The console is very intuitive; it takes a short time to understand how to configure and use Amazon Cognito. Amazon Cognito also has key out-of-the-box functionality, including social sign-in, multi-factor authentication (MFA), forgotten password support, and infrastructure as code (AWS CloudFormation) support.
  • Ability to customize workflows: Amazon Cognito offers the option of a hosted UI where users can sign-in directly to Amazon Cognito or sign-in via social identity providers such as Amazon, Google, Apple, and Facebook. The Amazon Cognito hosted UI and workflows help save your team significant time and effort.
  • OIDC support: Amazon Cognito can securely pass user profile information to an existing authorization system following the OIDC authorization code flow. The authorization system uses the user profile information to secure access to the app.

Amazon Cognito overview

Amazon Cognito follows the OIDC specification to authenticate users of web and mobile apps. Users can sign in directly through the Amazon Cognito hosted UI or through a federated identity provider, such as Amazon, Facebook, Apple, or Google. The hosted UI workflows include sign-in and sign-up, password reset, and MFA. Since not all customer workflows are the same, you can customize Amazon Cognito workflows at key points with AWS Lambda functions, allowing you to run code without provisioning or managing servers. After a user authenticates, Amazon Cognito returns standard OIDC tokens. You can use the user profile information in the ID token to grant your users access to your own resources or you can use the tokens to grant access to APIs hosted by Amazon API Gateway. You can also exchange the tokens for temporary AWS credentials to access other AWS services.

Figure 1: Amazon Cognito sign-in flow

Figure 1: Amazon Cognito sign-in flow

OAuth 2.0 and OIDC

OAuth 2.0 is an open standard that allows a user to delegate access to their information to other websites or applications without handing over credentials. OIDC is an identity layer on top of OAuth 2.0 that uses OAuth 2.0 flows. OAuth 2.0 defines a number of flows to manage the interaction between the application, user, and authorization server. The right flow to use depends on the type of application.

The client credentials flow is used in machine-to-machine communications. You can use the client credentials flow to request an access token to access your own resources, which means you can use this flow when your app is requesting the token on its own behalf, not on behalf of a user. The authorization code grant flow is used to return an authorization code that is then exchanged for user pool tokens. Because the tokens are never exposed directly to the user, they are less likely to be shared broadly or accessed by an unauthorized party. However, a custom application is required on the back end to exchange the authorization code for user pool tokens. For security reasons, we recommend the authorization code flow with Proof Key for Code Exchange (PKCE) for public clients, such as single-page apps or native mobile apps.

The following table shows recommended flows per application type.

Application | Flow | Description
Machine | Client credentials | Use this flow when your application is requesting the token on its own behalf, not on behalf of the user
Web app on a server | Authorization code grant | A regular web app on a web server
Single-page app | Authorization code grant with PKCE | An app running in the browser, such as JavaScript
Mobile app | Authorization code grant with PKCE | iOS or Android app

Securing the authorization code flow

Amazon Cognito can help you achieve compliance with regulatory frameworks and certifications, but it’s your responsibility to use the service in a way that remains compliant and secure. You need to determine the sensitivity of the user profile data in Amazon Cognito; adhere to your company’s security requirements, applicable laws and regulations; and configure your application and corresponding Amazon Cognito settings appropriately for your use case.

Note: You can learn more about regulatory frameworks and certifications at AWS Services in Scope by Compliance Program. You can download compliance reports from AWS Artifact.

We recommend that you use the authorization code flow with PKCE for single-page apps. Applications that use PKCE generate a random code verifier that’s created for every authorization request. Proof Key for Code Exchange by OAuth Public Clients has more information on use of a code verifier. In the following sections, I will show you how to set up the Amazon Cognito authorization endpoint for your app to support a code verifier.

The authorization code flow

In OpenID terms, the app is the relying party (RP) and Amazon Cognito is the OP. The flow for the authorization code flow with PKCE is as follows:

  1. The user enters the app home page URL in the browser and the browser fetches the app.
  2. The app generates the PKCE code challenge and redirects the request to the Amazon Cognito OAuth2 authorization endpoint (/oauth2/authorize).
  3. Amazon Cognito responds back to the user’s browser with the Amazon Cognito hosted sign-in page.
  4. The user signs in with their user name and password, signs up as a new user, or signs in with a federated sign-in. After a successful sign-in, Amazon Cognito returns the authorization code to the browser, which redirects the authorization code back to the app.
  5. The app sends a request to the Amazon Cognito OAuth2 token endpoint (/oauth2/token) with the authorization code, its client credentials, and the PKCE verifier.
  6. Amazon Cognito authenticates the app with the supplied credentials, validates the authorization code, validates the request with the code verifier, and returns the OpenID tokens, access token, ID token, and refresh token.
  7. The app validates the OpenID ID token and then uses the user profile information (claims) in the ID token to provide access to resources. (Optional) The app can use the access token to retrieve the user profile information from the Amazon Cognito user information endpoint (/userInfo).
  8. Amazon Cognito returns the user profile information (claims) about the authenticated user to the app. The app then uses the claims to provide access to resources.

The following diagram shows the authorization code flow with PKCE.

Figure 2: Authorization code flow

Figure 2: Authorization code flow

Implementing an app with Amazon Cognito authentication

Now that you’ve learned about Amazon Cognito OAuth implementation, let’s create a working example app that uses Amazon Cognito OAuth implementation. You’ll create an Amazon Cognito user pool along with an app client, the app, an Amazon Simple Storage Service (Amazon S3) bucket, and an Amazon CloudFront distribution for the app, and you’ll configure the app client.

Step 1. Create a user pool

Start by creating your user pool with the default configuration.

Create a user pool:

  1. Go to the Amazon Cognito console and select Manage User Pools. This takes you to the User Pools Directory.
  2. Select Create a user pool in the upper corner.
  3. Enter a Pool name, select Review defaults, and select Create pool.
  4. Copy the Pool ID, which will be used later to create your single-page app. It will be something like region_xxxxx. You will use it to replace the variable YOUR_USERPOOL_ID in a later step. (Optional) You can add additional features to the user pool, but this demonstration uses the default configuration. For more information, see the Amazon Cognito documentation.

The following figure shows you entering the user pool name.

Figure 3: Enter a name for the user pool

Figure 3: Enter a name for the user pool

The following figure shows the resulting user pool configuration.

Figure 4: Completed user pool configuration

Figure 4: Completed user pool configuration

Step 2. Create a domain name

The Amazon Cognito hosted UI lets you use your own domain name or you can add a prefix to the Amazon Cognito domain. This example uses an Amazon Cognito domain with a prefix.

Create a domain name:

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and select your user pool.
  2. Under App integration, select Domain name.
  3. In the Amazon Cognito domain section, add your Domain prefix (for example, myblog).
  4. Select Check availability. If your domain isn’t available, change the domain prefix and try again.
  5. When your domain is confirmed as available, copy the Domain prefix to use when you create your single-page app. You will use it to replace the variable YOUR_COGNITO_DOMAIN_PREFIX in a later step.
  6. Choose Save changes.

The following figure shows creating an Amazon Cognito hosted domain.

Figure 5: Creating an Amazon Cognito hosted UI domain

Figure 5: Creating an Amazon Cognito hosted UI domain

Step 3. Create an app client

Now create an app client for the user pool. An app client is where you register your app with the user pool. Generally, you create an app client for each app platform. For example, you might create an app client for a single-page app and another app client for a mobile app. Each app client has its own ID, authentication flows, and permissions to access user attributes.

Create an app client:

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and select your user pool.
  2. Under General settings, select App clients.
  3. Choose Add an app client.
  4. Enter a name for the app client in the App client name field.
  5. Uncheck Generate client secret and accept the remaining default configurations.

    Note: The client secret is used to authenticate the app client to the user pool. Generate client secret is unchecked because you don’t want to send the client secret on the URL using client-side JavaScript. The client secret is used by applications that have a server-side component that can secure the client secret.

  6. Choose Create app client as shown in the following figure.

    Figure 6: Create and configure an app client

    Figure 6: Create and configure an app client

  7. Copy the App client ID. You will use it to replace the variable YOUR_APPCLIENT_ID in a later step.

The following figure shows the App client ID which is automatically generated when the app client is created.

Figure 7: App client configuration

Figure 7: App client configuration

Step 4. Create an Amazon S3 website bucket

Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. We use Amazon S3 here to host a static website.

Create an Amazon S3 bucket with the following settings (a scripted alternative appears after these steps):

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. Choose Create bucket to start the Create bucket wizard.
  3. In Bucket name, enter a DNS-compliant name for your bucket. You will use this in a later step to replace the YOURS3BUCKETNAME variable.
  4. In Region, choose the AWS Region where you want the bucket to reside.

    Note: It’s recommended to create the Amazon S3 bucket in the same AWS Region as Amazon Cognito.

  5. Look up the region code from the region table (for example, US-East [N. Virginia] has a region code of us-east-1). You will use the region code to replace the variable YOUR_REGION in a later step.
  6. Choose Next.
  7. Select the Versioning checkbox.
  8. Choose Next.
  9. Choose Next.
  10. Choose Create bucket.
  11. Select the bucket you just created from the Amazon S3 bucket list.
  12. Select the Properties tab.
  13. Choose Static website hosting.
  14. Choose Use this bucket to host a website.
  15. For the index document, enter index.html and then choose Save.
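If you'd rather script Step 4 than use the console, the following boto3 sketch creates a versioned bucket configured for static website hosting; the bucket name and Region values are placeholders.

import boto3

bucket = "YOURS3BUCKETNAME"   # replace with your DNS-compliant bucket name
region = "YOUR_REGION"        # for example, us-east-2

s3 = boto3.client("s3", region_name=region)

# us-east-1 is the one Region that must not set a LocationConstraint
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket)
else:
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": region},
    )

# Enable versioning and static website hosting with index.html
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)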

Step 5. Create a CloudFront distribution

Amazon CloudFront is a fast content delivery network service that helps securely deliver data, videos, applications, and APIs to customers globally with low latency and high transfer speeds—all within a developer-friendly environment. In this step, we use CloudFront to set up an HTTPS-enabled domain for the static website hosted on Amazon S3.

Create a CloudFront distribution (web distribution) with the following modified default settings:

  1. Sign into the AWS Management Console and open the CloudFront console.
  2. Choose Create Distribution.
  3. On the first page of the Create Distribution Wizard, in the Web section, choose Get Started.
  4. Choose the Origin Domain Name from the dropdown list. It will be YOURS3BUCKETNAME.s3.amazonaws.com.
  5. For Restrict Bucket Access, select Yes.
  6. For Origin Access Identity, select Create a New Identity.
  7. For Grant Read Permission on Bucket, select Yes, Update Bucket Policy.
  8. For the Viewer Protocol Policy, select Redirect HTTP to HTTPS.
  9. For Cache Policy, select Managed-Caching Disabled.
  10. Set the Default Root Object to index.html. (Optional) Add a comment. Comments are a good place to describe the purpose of your distribution, for example, “Amazon Cognito SPA.”
  11. Select Create Distribution. The distribution will take a few minutes to create and update.
  12. Copy the Domain Name. This is the CloudFront distribution domain name, which you will use in a later step as the DOMAINNAME value in the YOUR_REDIRECT_URI variable.

Step 6. Create the app

Now that you’ve created the Amazon S3 bucket for static website hosting and the CloudFront distribution for the site, you’re ready to use the code that follows to create a sample app.

Use the following information from the previous steps:

  1. YOUR_COGNITO_DOMAIN_PREFIX is from Step 2.
  2. YOUR_REGION is the AWS region you used in Step 4 when you created your Amazon S3 bucket.
  3. YOUR_APPCLIENT_ID is the App client ID from Step 3.
  4. YOUR_USERPOOL_ID is the Pool ID from Step 1.
  5. YOUR_REDIRECT_URI, which is https://DOMAINNAME/index.html, where DOMAINNAME is your domain name from Step 5.

Create userprofile.js

Use the following text to create the userprofile.js file. Substitute the values you copied in the previous steps for the variables in the text.

var myHeaders = new Headers();
myHeaders.set('Cache-Control', 'no-store');
var urlParams = new URLSearchParams(window.location.search);
var tokens;
var domain = "YOUR_COGNITO_DOMAIN_PREFIX";
var region = "YOUR_REGION";
var appClientId = "YOUR_APPCLIENT_ID";
var userPoolId = "YOUR_USERPOOL_ID";
var redirectURI = "YOUR_REDIRECT_URI";

//Convert Payload from Base64-URL to JSON
const decodePayload = payload => {
  const cleanedPayload = payload.replace(/-/g, '+').replace(/_/g, '/');
  const decodedPayload = atob(cleanedPayload)
  const uriEncodedPayload = Array.from(decodedPayload).reduce((acc, char) => {
    const uriEncodedChar = ('00' + char.charCodeAt(0).toString(16)).slice(-2)
    return `${acc}%${uriEncodedChar}`
  }, '')
  const jsonPayload = decodeURIComponent(uriEncodedPayload);

  return JSON.parse(jsonPayload)
}

//Parse JWT Payload
const parseJWTPayload = token => {
    const [header, payload, signature] = token.split('.');
    const jsonPayload = decodePayload(payload)

    return jsonPayload
};

//Parse JWT Header
const parseJWTHeader = token => {
    const [header, payload, signature] = token.split('.');
    const jsonHeader = decodePayload(header)

    return jsonHeader
};

//Generate a Random String
const getRandomString = () => {
    const randomItems = new Uint32Array(28);
    crypto.getRandomValues(randomItems);
    // Use Array.from so the hex strings aren't coerced back to numbers by Uint32Array.map
    const binaryStringItems = Array.from(randomItems, dec => `0${dec.toString(16).substr(-2)}`);
    return binaryStringItems.reduce((acc, item) => `${acc}${item}`, '');
}

//Encrypt a String with SHA256
const encryptStringWithSHA256 = async str => {
    const PROTOCOL = 'SHA-256'
    const textEncoder = new TextEncoder();
    const encodedData = textEncoder.encode(str);
    return crypto.subtle.digest(PROTOCOL, encodedData);
}

//Convert Hash to Base64-URL
const hashToBase64url = arrayBuffer => {
    const items = new Uint8Array(arrayBuffer)
    const stringifiedArrayHash = items.reduce((acc, i) => `${acc}${String.fromCharCode(i)}`, '')
    const decodedHash = btoa(stringifiedArrayHash)

    const base64URL = decodedHash.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
    return base64URL
}

// Main Function
async function main() {
  var code = urlParams.get('code');

  //If code not present then request code else request tokens
  if (code == null){

    // Create random "state"
    var state = getRandomString();
    sessionStorage.setItem("pkce_state", state);

    // Create PKCE code verifier
    var code_verifier = getRandomString();
    sessionStorage.setItem("code_verifier", code_verifier);

    // Create code challenge
    var arrayHash = await encryptStringWithSHA256(code_verifier);
    var code_challenge = hashToBase64url(arrayHash);
    sessionStorage.setItem("code_challenge", code_challenge)

    // Redirect the user agent to the /authorize endpoint
    location.href = "https://"+domain+".auth."+region+".amazoncognito.com/oauth2/authorize?response_type=code&state="+state+"&client_id="+appClientId+"&redirect_uri="+redirectURI+"&scope=openid&code_challenge_method=S256&code_challenge="+code_challenge;
  } else {

    // Verify state matches
    state = urlParams.get('state');
    if(sessionStorage.getItem("pkce_state") != state) {
        alert("Invalid state");
    } else {

    // Fetch OAuth2 tokens from Cognito
    code_verifier = sessionStorage.getItem('code_verifier');
  await fetch("https://"+domain+".auth."+region+".amazoncognito.com/oauth2/token?grant_type=authorization_code&client_id="+appClientId+"&code_verifier="+code_verifier+"&redirect_uri="+redirectURI+"&code="+ code,{
  method: 'post',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded'
  }})
  .then((response) => {
    return response.json();
  })
  .then((data) => {

    // Verify id_token
    tokens=data;
    var idVerified = verifyToken (tokens.id_token);
    Promise.resolve(idVerified).then(function(value) {
      if (value.localeCompare("verified")){
        alert("Invalid ID Token - "+ value);
        return;
      }
      });
    // Display tokens
    document.getElementById("id_token").innerHTML = JSON.stringify(parseJWTPayload(tokens.id_token),null,'\t');
    document.getElementById("access_token").innerHTML = JSON.stringify(parseJWTPayload(tokens.access_token),null,'\t');
  });

    // Fetch from /user_info
    await fetch("https://"+domain+".auth."+region+".amazoncognito.com/oauth2/userInfo",{
      method: 'post',
      headers: {
        'authorization': 'Bearer ' + tokens.access_token
    }})
    .then((response) => {
      return response.json();
    })
    .then((data) => {
      // Display user information
      document.getElementById("userInfo").innerHTML = JSON.stringify(data, null,'\t');
    });
  }}}
  main();

Create the verifier.js file

Use the following text to create the verifier.js file.

var key_id;
var keys;
var key_index;

//verify token
async function verifyToken (token) {
    //get the Cognito public keys (jwks)
    var keys_url = 'https://cognito-idp.'+ region +'.amazonaws.com/' + userPoolId + '/.well-known/jwks.json';
    await fetch(keys_url)
    .then((response) => {
        return response.json();
    })
    .then((data) => {
        keys = data['keys'];
    });

    //Get the kid (key id) from the token header
    var tokenHeader = parseJWTHeader(token);
    key_id = tokenHeader.kid;

    //search for the kid key id in the Cognito keys
    const key = keys.find(key => key.kid === key_id)
    if (key === undefined){
        return "Public key not found in Cognito jwks.json";
    }

    //verify the JWT signature
    var keyObj = KEYUTIL.getKey(key);
    var isValid = KJUR.jws.JWS.verifyJWT(token, keyObj, {alg: ["RS256"]});
    if (!isValid){
        return("Signature verification failed");
    }

    //verify the token has not expired
    var tokenPayload = parseJWTPayload(token);
    if (Date.now() >= tokenPayload.exp * 1000) {
        return("Token expired");
    }

    //verify the app client id (audience)
    var n = tokenPayload.aud.localeCompare(appClientId)
    if (n != 0){
        return("Token was not issued for this audience");
    }
    return("verified");
};

Create an index.html file

Use the following text to create the index.html file.

<!doctype html>

<html lang="en">
<head>
<meta charset="utf-8">

<title>MyApp</title>
<meta name="description" content="My Application">
<meta name="author" content="Your Name">
</head>

<body>
<h2>Cognito User</h2>

<p style="white-space:pre-line;" id="token_status"></p>

<p>Id Token</p>
<p style="white-space:pre-line;" id="id_token"></p>

<p>Access Token</p>
<p style="white-space:pre-line;" id="access_token"></p>

<p>User Profile</p>
<p style="white-space:pre-line;" id="userInfo"></p>
<script language="JavaScript" type="text/javascript"
src="https://kjur.github.io/jsrsasign/jsrsasign-latest-all-min.js">
</script>
<script src="js/verifier.js"></script>
<script src="js/userprofile.js"></script>
</body>
</html>

Upload the files into the Amazon S3 Bucket you created earlier

Upload the files you just created to the Amazon S3 bucket that you created in Step 4. If you're using Chrome or Firefox browsers, you can choose the folders and files to upload and then drag and drop them into the destination bucket. Dragging and dropping is the only way that you can upload folders in the console. If you prefer to script the upload, see the boto3 sketch after these steps.

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. In the Bucket name list, choose the name of the bucket that you created earlier in Step 4.
  3. In a window other than the console window, select the index.html file to upload. Then drag and drop the file into the console window that lists the destination bucket.
  4. In the Upload dialog box, choose Upload.
  5. Choose Create Folder.
  6. Enter the name js and choose Save.
  7. Choose the js folder.
  8. In a window other than the console window, select the userprofile.js and verifier.js files to upload. Then drag and drop the files into the console window js folder.

    Note: The Amazon S3 bucket root will contain the index.html file and a js folder. The js folder will contain the userprofile.js and verifier.js files.
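As a scripted alternative to the drag-and-drop steps above, the following boto3 sketch uploads the three files with appropriate content types; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")
bucket = "YOURS3BUCKETNAME"   # the bucket created in Step 4

# index.html goes in the bucket root; the two scripts go in the js folder
uploads = {
    "index.html": ("index.html", "text/html"),
    "userprofile.js": ("js/userprofile.js", "application/javascript"),
    "verifier.js": ("js/verifier.js", "application/javascript"),
}
for local_path, (key, content_type) in uploads.items():
    s3.upload_file(local_path, bucket, key, ExtraArgs={"ContentType": content_type})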

Step 7. Configure the app client settings

Use the Amazon Cognito console to configure the app client settings, including identity providers, OAuth flows, and OAuth scopes.

Configure the app client settings (a scripted alternative appears after these steps):

  1. Go to the Amazon Cognito console.
  2. Choose Manage your User Pools.
  3. Select your user pool.
  4. Select App integration, and then select App client settings.
  5. Under Enabled Identity Providers, select Cognito User Pool. (Optional) You can add federated identity providers. Adding User Pool Sign-in Through a Third-Party has more information about how to add federation providers.
  6. Enter the Callback URL(s) where the user is to be redirected after successfully signing in. The callback URL is the URL of your web app that will receive the authorization code. In our example, this will be the Domain Name for the CloudFront distribution you created earlier. It will look something like https://DOMAINNAME/index.html where DOMAINNAME is xxxxxxx.cloudfront.net.

    Note: HTTPS is required for the Callback URLs. For this example, I used CloudFront as a HTTPS endpoint for the app in Amazon S3.

  7. Next, select Authorization code grant from the Allowed OAuth Flows and OpenID from Allowed OAuth Scopes. The OpenID scope will return the ID token and grant access to all user attributes that are readable by the client.
  8. Choose Save changes.
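As a scripted alternative to these console steps, the following boto3 sketch applies the same app client settings; the pool ID, client ID, and callback URL are the placeholder values from the earlier steps. Note that update_user_pool_client replaces the client's mutable settings, so include any other settings you want to keep.

import boto3

cognito_idp = boto3.client("cognito-idp")

cognito_idp.update_user_pool_client(
    UserPoolId="YOUR_USERPOOL_ID",
    ClientId="YOUR_APPCLIENT_ID",
    SupportedIdentityProviders=["COGNITO"],          # Cognito User Pool as the identity provider
    CallbackURLs=["https://DOMAINNAME/index.html"],  # your CloudFront distribution URL
    AllowedOAuthFlows=["code"],                      # authorization code grant
    AllowedOAuthScopes=["openid"],
    AllowedOAuthFlowsUserPoolClient=True,
)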

Step 8. Show the app home page

Now that the Amazon Cognito user pool is configured and the sample app is built, you can test using Amazon Cognito as an OP from the sample JavaScript app you created in Step 6.

View the app’s home page:

  1. Open a web browser and enter the app’s home page URL using the CloudFront distribution to serve your index.html page created in Step 6 (https://DOMAINNAME/index.html) and the app will redirect the browser to the Amazon Cognito /authorize endpoint.
  2. The /authorize endpoint redirects the browser to the Amazon Cognito hosted UI, where the user can sign in or sign up. The following figure shows the user sign-in page.

    Figure 8: User sign-in page

    Figure 8: User sign-in page

Step 9. Create a user

You can use the Amazon Cognito user pool to manage your users or you can use a federated identity provider. Users can sign in or sign up from the Amazon Cognito hosted UI or from a federated identity provider. If you configured a federated identity provider, users will see a list of federated providers that they can choose from. When a user chooses a federated identity provider, they are redirected to the federated identity provider sign-in page. After signing in, the browser is directed back to Amazon Cognito. For this post, Amazon Cognito is the only identity provider, so you will use the Amazon Cognito hosted UI to create an Amazon Cognito user.

Create a new user using Amazon Cognito hosted UI:

  1. Create a new user by selecting Sign up and entering a username, password, and email address. Then select the Sign up button. The following figure shows the sign up screen.

    Figure 9: Sign up with a new account

    Figure 9: Sign up with a new account

  2. The Amazon Cognito sign up workflow will verify the email address by sending a verification code to that address. The following figure shows the prompt to enter the verification code.

    Figure 10: Enter the verification code

    Figure 10: Enter the verification code

  3. Enter the code from the verification email in the Verification Code text box.
  4. Select Confirm Account.

Step 10. Viewing the Amazon Cognito tokens and profile information

After authentication, the app displays the tokens and user information. The following figure shows the OAuth2 access token and OIDC ID token that are returned from the /token endpoint and the user profile returned from the /userInfo endpoint. Now that the user has been authenticated, the application can use the user’s email address to look up the user’s account information in an application data store. Based on the user’s account information, the application can grant/restrict access to paid content or show account information like order history.

Figure 11: Token and user profile information

Figure 11: Token and user profile information

Note: Many browsers will cache redirects. If your browser is repeatedly redirecting to the index.html page, clear the browser cache.

Summary

In this post, we’ve shown you how easy it is to add user authentication to your web and mobile apps with Amazon Cognito.

We created a Cognito User Pool as our user directory, assigned a domain name to the Amazon Cognito hosted UI, and created an application client for our application. Then we created an Amazon S3 bucket to host our website. Next, we created a CloudFront distribution for our Amazon S3 bucket. Then we created our application and uploaded it to our Amazon S3 website bucket. From there, we configured the client app settings with our identity provider, OAuth flows, and scopes. Then we accessed our application and used the Amazon Cognito sign-in flow to create a username and password. Finally, we logged into our application to see the OAuth and OIDC tokens.

Amazon Cognito saves you time and effort when implementing authentication with an intuitive UI, OAuth2 and OIDC support, and customizable workflows. You can now focus on building features that are important to your core business.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

George Conti

George is a Solution Architect for the AWS Financial Services team. He is passionate about technology and helping financial services companies build solutions with AWS services.

Architecting for database encryption on AWS

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/architecting-for-database-encryption-on-aws/

In this post, I review the options you have to protect your customer data when migrating or building new databases in Amazon Web Services (AWS). I focus on how you can support sensitive workloads in ways that help you maintain compliance and regulatory obligations, and meet security objectives.

Understanding transparent data encryption

I commonly see enterprise customers migrating existing databases straight from on-premises to AWS without reviewing their design. This might seem simpler and faster, but they miss the opportunity to review the scalability, cost savings, and feature capability of native cloud services. A straight lift and shift migration can also create unnecessary operational overhead, carry over unneeded complexity, and result in more time spent troubleshooting and responding to events.

One example is when enterprise customers who are using Transparent Data Encryption (TDE) or Extensible Key Management (EKM) technologies want to reuse the same technologies in their migration to AWS. TDE and EKM are database technologies that encrypt and decrypt database records as the records are written and read to the underlying storage medium. Customers use TDE features in Microsoft SQL Server, Oracle 10g and 11g, and Oracle Enterprise Edition to meet requirements for data-at-rest encryption. However, this doesn't mean that TDE itself is the requirement. It's infrequent that an organizational policy or compliance framework specifies a technology such as TDE in the actual requirement. For example, the Payment Card Industry Data Security Standard (PCI-DSS) requires that sensitive data must be protected using “Strong cryptography with associated key-management processes and procedures.” Nowhere does PCI-DSS endorse or require the use of a specific technology.

Understanding risks

It’s important that you understand the risks that encryption-at-rest mitigates before selecting a technology to use. Encryption-at-rest, in the context of databases, generally manages the risk that one of the disks used to store database data is physically stolen and thus compromised. In on-premises scenarios, TDE is an effective technology used to manage this risk. All data from the database—up to and including the disk—is encrypted. The database manages all key management and cryptographic operations. You can also use TDE with a hardware security module (HSM) so that the keys and cryptography for the database are managed outside of the database itself. In TDE implementations, the HSM is used only to manage the key encryption keys (KEK), and not the data encryption keys (DEK) themselves. The DEKs are in volatile memory in the database at runtime, and so the cryptographic operations occur on the database itself.

You can also use native operating system encryption technologies such as dm-crypt or LUKS (Linux Unified Key Setup). Dm-crypt is a full disk encryption (FDE) subsystem in Linux kernel version 2.6 and beyond. Dm-crypt can be used on its own or with LUKS as an extension to add more features. When using dm-crypt, the operating system kernel is responsible for encrypting and decrypting data as it’s written and read from the attached volumes. This would achieve the same outcome as TDE—data written and read to the disk volume is encrypted, and the risk related to physical disk compromise is managed. DEKs are in runtime memory of the machine running the database.

With some TDE implementations, you can encrypt tables, rows, columns, and cells with different DEKs to achieve granular separation of duties between operators. Customers can then configure TDE to authorize access to each DEK based on database login credentials and job function, helping to manage risks associated with unauthorized access. However, the most common configuration I’ve seen is to rely on whole database encryption when using TDE. This configuration gives similar protection against the identified risks as dm-crypt with LUKS used without an HSM, since the DEKs and KEKs are stored within the instance in both cases and the result is that the database data on disk is encrypted.

Using encryption to manage data at rest risks in AWS

When you move to AWS, you gain additional security capabilities that can simplify your security implementations. Since the announcement of the AWS Key Management Service (AWS KMS) in 2014, it has been tightly integrated with Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), and dozens of other services on AWS. This means that data is encrypted on disk by checking a single check box. Furthermore, you get the benefits of AWS KMS for key management and cryptographic operations, while being transparent to the Amazon Elastic Compute Cloud (Amazon EC2) instance where the data is being encrypted and decrypted. For simplicity, the authorization for access to the data is managed entirely by AWS Identity and Access Management (IAM) and AWS KMS key resource policies.

If you need more granular access control to the data, you can use the AWS Encryption SDK to encrypt data at the application layer. That provides the same effect as TDE cell-level protection, with a FIPS140-2 Level 2 validated HSM, as might be required by a recognized standard.
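As an illustration, here is a minimal sketch of application-layer encryption with the AWS Encryption SDK for Python (assuming version 2.x of the aws-encryption-sdk library); the KMS key ARN is a placeholder.

import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

# AWS KMS protects the data keys; the key ARN below is a placeholder
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-1234-ABCD"]
)

# Encrypt a sensitive field before writing it to the database
ciphertext, _encrypt_header = client.encrypt(
    source=b"sensitive column value",
    key_provider=key_provider,
)

# Decrypt the field after reading it back
plaintext, _decrypt_header = client.decrypt(
    source=ciphertext,
    key_provider=key_provider,
)
assert plaintext == b"sensitive column value"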

If you must use a FIPS140-2 Level 3 validated HSM to meet more stringent compliance standards or regulations, then you can use the Custom Key Store capability of AWS KMS to achieve that—again in a transparent way. This option has a trade-off, as there is additional operational overhead in terms of managing an AWS CloudHSM cluster.

Many customers choose to migrate their database into the managed Amazon Relational Database Service (Amazon RDS), rather than managing the database instance themselves. Like the Amazon EC2 service, RDS uses Amazon EBS volumes for its data storage, and so can seamlessly use AWS KMS for encryption at rest functionality. When you do so, your management overhead for the protection of data-at-rest reduces to almost zero. This lets you focus on business value while AWS is responsible for the management of your database and the protection of the underlying data. The next section reviews this option and others in more detail.

You can review the available Amazon RDS database engines and versions via the Amazon RDS User Guide documentation, or by running the following AWS Command Line Interface (AWS CLI) command:

aws rds describe-db-engine-versions --query "DBEngineVersions[].DBEngineVersionDescription" --region <regionIdentifier>

Recommended solutions

If you’re moving an existing database to AWS, you have the following solutions for data at rest encryption. I go into more detail for each option below.

Table 1 – Encryption options

Option | Database management | Host | Encryption | Key management
1 | Amazon managed | Amazon RDS | Amazon EBS | AWS KMS
2 | Amazon managed | Amazon RDS | Amazon EBS | AWS KMS custom key store
3 | Customer managed | Amazon EC2 | Amazon EBS | AWS KMS
4 | Customer managed | Amazon EC2 | Amazon EBS | AWS KMS custom key store
5 | Customer managed | Amazon EC2 | Amazon EBS | LUKS
6 | Customer managed | Amazon EC2 | Database | Database TDE
7 | Customer managed | Amazon EC2 | Database | CloudHSM

Option 1 – Using Amazon RDS with Amazon EBS encryption and key management provided by AWS KMS

This approach uses the Amazon RDS service where AWS manages the operating system and database engine. You can configure this service to be a highly scalable resource spanning multiple Availability Zones within an AWS Region to provide resiliency. AWS KMS manages the keys that are used to encrypt the attached Amazon EBS volumes at rest.

Note: This configuration is recommended as your default database encryption approach.
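A minimal boto3 sketch of Option 1 follows; the instance identifier, engine, credentials, and key identifier are placeholders, and in practice you would also configure networking, backups, and parameter groups to suit your environment.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="encrypted-example-db",
    DBInstanceClass="db.t3.medium",
    Engine="postgres",
    MasterUsername="dbadmin",
    MasterUserPassword="REPLACE_WITH_A_STRONG_PASSWORD",
    AllocatedStorage=20,
    MultiAZ=True,                 # span multiple Availability Zones for resiliency
    StorageEncrypted=True,        # encrypt the underlying Amazon EBS storage at rest
    KmsKeyId="alias/aws/rds",     # or the ARN of a customer managed KMS key
)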

Benefits

  • No key management requirement on host; key management is automated and performed by AWS KMS
  • Meets FIPS140-2 Level 2 validation requirements
  • Simple vertical and horizontal scalability
  • Snapshots for recovery are encrypted automatically
  • AWS manages the patching, maintenance, and configuration of the operating system and database engine
  • Well-recognized configuration, with support offered through AWS Support
  • AWS KMS costs are comparatively low

Challenges

  • Dependent on Amazon RDS supported engines and versions
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 2 – Using Amazon RDS with Amazon EBS encryption and key management provided by AWS KMS custom key store

This approach uses the Amazon RDS service where AWS manages the operating system and database engine. You can configure this service to be a highly scalable resource spanning multiple Availability Zones within a Region to provide resiliency. CloudHSM keys are used via AWS KMS service integration to encrypt the Amazon EBS volumes at rest.

Note: This configuration is recommended where FIPS140-2 Level 3 validation is a specified compliance requirement.

Benefits

  • No key management requirement on host; key management is performed by AWS KMS
  • Meets FIPS140-2 Level 3 validation requirements
  • Simple vertical and horizontal scalability
  • Snapshots for recovery are encrypted automatically
  • AWS manages the patching, maintenance, and configuration of the database engine
  • Well-recognized configuration with support offered through AWS Support

Challenges

  • Dependent on Amazon RDS supported engines and versions
  • You are responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • Might require additional controls to manage unauthorized access at table, row, column or cell level

Option 3 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by KMS

In this approach, the key difference is that you’re responsible for managing the EC2 instances, operating systems, and database engines. You can still configure your databases to be highly scalable resources spanning multiple Availability Zones within a Region to provide resiliency, but it takes more effort. AWS KMS manages the keys that are used to encrypt the attached Amazon EBS volumes at rest.

Note: This configuration is recommended when Amazon RDS doesn’t support the desired database engine type or version.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Key rotation and management is handled transparently by AWS
  • Data encryption keys are managed by the hypervisor, not by your EC2 instance
  • AWS KMS costs are comparatively low

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 4 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by KMS custom key store

In this approach, you are again responsible for managing the EC2 instances, operating systems, and database engines. You can still configure your databases to be highly scalable resources spanning multiple Availability Zones within a Region to provide resiliency, but it takes more effort. And similar to Option 2, CloudHSM keys are used via AWS KMS service integration to encrypt the Amazon EBS volumes at rest.

Note: This configuration is recommended when Amazon RDS doesn’t support the desired database engine type or version and when FIPS140-2 Level 3 compliance is required.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Data encryption keys managed by the hypervisor, not by your EC2 instance
  • Keys managed by FIPS140-2 Level 3 validated HSM

Challenges

  • You’re responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • You’re responsible for patching and updates of the database engine and OS
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 5 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by LUKS

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. You also need to install LUKS onto the Linux instance to manage the encryption of data on Amazon EBS.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Transparent encryption is managed by OS with LUKS

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Data encryption keys are managed directly on the EC2 instance, and not a dedicated key management system
  • Scaling must be vertical, which is slow and costly
  • LUKS is supported through open-source licensing
  • Support for backup and recovery is LUKS specific, and requires additional consideration
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Note: This approach limits you to only Linux instances and requires the most technical knowledge and effort on your part. Options, such as BitLocker and SQL Server Always Encrypted, exist for Windows hosts, and the complexity and challenges are similar to those of LUKS.

Option 6 – Customer-managed database platform hosted on Amazon EC2 with database encryption and key management provided by TDE

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. However, instead of encrypting the Amazon EBS volume where the database is stored, you use TDE wallet keys managed by the database engine to encrypt and decrypt records as they are stored and retrieved.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Table, row, column, and cell level encryption are managed by TDE, reducing end point risks relating to unauthorized access

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Costly license for TDE feature
  • Data encryption keys are managed directly on the EC2 instance
  • Scaling is dependent on TDE functionality and Amazon EC2 scaling
  • Support is split between AWS and a third-party database vendor
  • Cannot share snapshots

Note: This approach is not available with Amazon RDS.

Option 7 – Customer-managed database platform hosted on Amazon EC2 with database encryption performed by TDE and key management provided by CloudHSM

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. However, instead of encrypting the Amazon EBS volume where the database is stored, you use TDE wallet keys managed by a CloudHSM cluster to encrypt and decrypt records as they are stored and retrieved.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Wallet keys (KEK) are managed by a FIPS140-2 Level 3 validated HSM
  • Table, row, column, and cell level encryption are managed by TDE, reducing end point risks relating to unauthorized access

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Costly license for TDE feature
  • You are responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • Integration and support of CloudHSM with TDE might vary
  • Scaling is dependent on TDE functionality, Amazon EC2 scaling, and CloudHSM cluster.
  • Data encryption keys are managed on EC2 instance
  • Support is split between AWS and a third-party database vendor
  • Cannot share snapshots

Note: This approach is not available with Amazon RDS.

Summary

While you can operate in AWS similar to how you operate in your on-premises environment, the preceding configurations and recommendations show how you can significantly reduce your challenges and increase your benefits by using cloud-native security services like AWS KMS, Amazon RDS, and CloudHSM. Specifically, using Amazon RDS with Amazon EBS volumes encrypted by AWS KMS provides a highly scalable, resilient, and secure way to manage your keys in AWS.

While there might be some architectural redesign and configuration work needed to move an on-premises database into Amazon RDS, you can leverage AWS services to help you meet your compliance requirements with less effort. By offloading the OS and database maintenance responsibility to AWS, you simultaneously reduce operational friction and increase security. By migrating this way, you can benefit from the scalability and resilience of the AWS global infrastructure and expertise. Lastly, to get started with migrating your database to AWS, I encourage you to use the AWS Database Migration Service.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Author

Scott Conklin

Scott is a Senior Security Consultant with AWS Professional Services (Global Specialty Practice). Based out of Chicago with 4 years tenure, he is an avid distance runner, crypto nerd, lover of unicorns, and enjoys camping, nature, playing Minecraft with his 3 kids, and binge watching Amazon Prime with his wife.