Tag Archives: Advanced (300)

How to record a video of Amazon AppStream 2.0 streaming sessions

Post Syndicated from Nicolas Malaval original https://aws.amazon.com/blogs/security/how-to-record-video-of-amazon-appstream-2-0-streaming-sessions/

Amazon AppStream 2.0 is a fully managed service that lets you stream applications and desktops to your users. In this post, I’ll show you how to record a video of AppStream 2.0 streaming sessions by using FFmpeg, a popular media framework.

There are many use cases for session recording, such as auditing administrative access, troubleshooting user issues, or quality assurance. For example, you could publish administrative tools with AppStream 2.0, such as a Remote Desktop Protocol (RDP) client, to protect access to your backend systems (see How to use Amazon AppStream 2.0 to reduce your bastion host attack surface) and you may want to record a video of what your administrators do when accessing and operating backend systems. You may also want to see what a user did to reproduce an issue, or view activities in a call center setting, such as call handling or customer support, for review and training.

This solution is not designed or intended for people surveillance, or for the collection of evidence for legal proceedings. You are responsible for complying with all applicable laws and regulations when using this solution.

Overview and architecture

In this section, you can learn about the steps for recording AppStream 2.0 streaming sessions and see an overview of the solution architecture. Later in this post, you can find instructions about how to implement and test the solution.

AppStream 2.0 enables you to run custom scripts to prepare the streaming instance before the applications launch or after the streaming session has completed. Figure 1 shows a simplified description of what happens before, during and after a streaming session.
 

Figure 1: Solution architecture

  1. Before the streaming session starts, AppStream 2.0 runs script A, which uses PsExec, a utility that enables administrators to run commands on local or remote computers, to launch script B. Script B then runs during the entire streaming session. PsExec can run the script as the LocalSystem account, a service account that has extensive privileges on a local system, while it interacts with the desktop of another session. Using the LocalSystem account, you can use FFmpeg to record the session screen and prevent AppStream 2.0 users from stopping or tampering with the solution, as long as they aren’t granted local administrator rights.
  2. Script B launches FFmpeg and starts recording the desktop. The solution uses FFmpeg’s built-in screen grabber to capture the desktop across all available screens (see the example commands after this list).
  3. When FFmpeg starts recording, it captures the area covered by the desktop at that time. If the number of screens or the resolution changes, a portion of the desktop might be outside the recorded area. In that case, script B stops the recording and starts FFmpeg again.
  4. After the streaming session ends, AppStream 2.0 runs script C, which notifies script B that it must end the recording and close. Script B stops FFmpeg.
  5. Before exiting, script B uploads the video files that FFmpeg generated to Amazon Simple Storage Service (Amazon S3). It also stores user and session metadata in Amazon S3, along with the video files, for easy retrieval of session recordings.
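
To make these steps more concrete, the following commands are a minimal sketch of the kind of PsExec and FFmpeg invocations that scripts A and B rely on. The session ID, file paths, frame rate, and encoder settings shown here are illustrative assumptions; the authoritative commands are in the scripts in the GitHub repository.

# Script A (simplified): start script B as LocalSystem in the user's session with PsExec.
# -accepteula accepts the PsExec license, -s runs as LocalSystem, -i 1 attaches to session ID 1 (assumed).
C:\SessionRecording\Bin\PsExec64.exe -accepteula -s -i 1 powershell.exe -File C:\SessionRecording\Scripts\script_b.ps1

# Script B (simplified): capture the desktop with FFmpeg's built-in gdigrab screen grabber.
ffmpeg -f gdigrab -framerate 10 -i desktop -c:v libx264 -pix_fmt yuv420p C:\SessionRecording\Output\recording.mp4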

For a more comprehensive understanding of how the session scripts work, you can refer to the GitHub repository that contains the solution artifacts, where I go into the details of each script.

Implementing and testing the solution

Now that you understand the architecture of this solution, you can follow the instructions in this section to implement this blog post’s solution in your AWS account. You will:

  1. Create a virtual private cloud (VPC), an S3 bucket and an AWS Identity and Access Management (IAM) role with AWS CloudFormation.
  2. Create an AppStream 2.0 image builder.
  3. Configure the solution scripts on the image builder.
  4. Specify an application to publish and create an image.
  5. Create an AppStream 2.0 fleet.
  6. Create an AppStream 2.0 stack.
  7. Create a user in the AppStream 2.0 user pool.
  8. Launch a streaming session and test the solution.

Step 1: Create a VPC, an S3 bucket, and an IAM role with AWS CloudFormation

For the first step in the solution, you use AWS CloudFormation to create a new VPC where AppStream 2.0 will be deployed (or you can choose an existing VPC), a new S3 bucket to store the session recordings, and a new IAM role that grants AppStream 2.0 the necessary permissions.

To create the VPC, the S3 bucket, and the IAM role with AWS CloudFormation

  1. Select the following Launch Stack button to open the CloudFormation console and create a CloudFormation stack from the template. You can change the Region where resources are deployed in the navigation bar.
     
    Select the Launch Stack button to launch the template

    The latest template can also be downloaded on GitHub.

  2. Choose Next. For VPC ID, Subnet 1 ID and Subnet 2 ID, you can optionally select a VPC and two subnets, if you want to deploy the solution in an existing VPC, or leave these fields blank to create a new VPC. Then follow the on-screen instructions. AWS CloudFormation creates the following resources:
    • (If you chose to create a new VPC) An Amazon Virtual Private Cloud (Amazon VPC) with an internet gateway attached.
    • (If you chose to create a new VPC) Two public subnets on this Amazon VPC with a new route table to make them publicly accessible.
    • An S3 bucket to store the session recordings.
    • An IAM role to grant AppStream 2.0 permissions to upload video and metadata files to Amazon S3.
  3. After the stack creation has completed, choose the Outputs tab in the CloudFormation console and note the values that the process returned: the name and Region of the S3 bucket, the name of the IAM role, the ID of the VPC, and the two subnets.
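
If you prefer the AWS CLI, you can retrieve the same outputs with the following command. The stack name session-recording is an assumption; use the name that you chose when you created the stack.

aws cloudformation describe-stacks \
    --stack-name session-recording \
    --query 'Stacks[0].Outputs' \
    --output table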

Step 2: Create an AppStream 2.0 image builder

The next step is to create a new AppStream 2.0 image builder. An image builder is a virtual machine that you can use to install and configure applications for streaming, and then create a custom image.

To create the AppStream 2.0 image builder

  1. Open the AppStream 2.0 console and select the Region in the navigation bar. Choose Get Started then Skip if you are new to the console.
  2. Choose Images in the left pane, and then choose Image Builder. Choose Launch Image Builder.
  3. In Step 1: Choose Image:
    1. Select the name of the latest AppStream 2.0 base image for the Windows Server version of your choice. You can find its name in the AppStream 2.0 base image version history. For example, at the time of writing, the name of the latest Windows Server 2019 base image is AppStream-WinServer2019-07-16-2020.
    2. Choose Next.
  4. In Step 2: Configure Image Builder:
    1. For Name, enter session-recording.
    2. For Instance Type, choose stream.standard.medium.
    3. For IAM role, select the IAM role that AWS CloudFormation created.
    4. Choose Next.
  5. In Step 3: Configure Network:
    1. Choose Default Internet Access to provide internet access to your image builder.
    2. For VPC, select the ID of the VPC, and for Subnet 1, select the ID of Subnet 1.
    3. For Security group(s), select the ID of the security group. Refer back to the Outputs tab of the CloudFormation stack if you are unsure which VPC, subnet and security group to select.
    4. Choose Review.
  6. In Step 4: Review, choose Launch.

Step 3: Configure the solution scripts on the image builder

The session scripts to run before streaming sessions start or after sessions end are specified within an AppStream 2.0 image. In this step, you install the solution scripts on your image builder and specify the scripts to run in the session scripts configuration file.

To configure the solution scripts on the image builder

  1. Wait until the image builder is in the Running state, and then choose Connect.
  2. Within the AppStream 2.0 streaming session, on the Local User tab, choose Administrator.
  3. To install the solution scripts:
    1. From the image builder desktop, choose Start in the Windows taskbar.
    2. Open the context (right-click) menu for Windows PowerShell, and then choose Run as Administrator.
    3. Run the following commands in the PowerShell terminal to create the required folders, and to copy the solution scripts and the session scripts configuration file from public objects in GitHub to the local disk. If you aren’t using Google Chrome or the AppStream 2.0 client, you need to choose the Clipboard icon in the AppStream 2.0 navigation bar, and then select Paste to remote session.
      New-Item -Path C:\SessionRecording -ItemType directory
      New-Item -Path C:\SessionRecording\Scripts -ItemType directory
      New-Item -Path C:\SessionRecording\Output -ItemType directory
      New-Item -Path C:\SessionRecording\Bin -ItemType directory
      
      $Acl = Get-Acl C:\SessionRecording
      $Acl.SetAccessRuleProtection($true,$false)
      $AccessRule1 = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule1)
      $AccessRule2 = New-Object System.Security.AccessControl.FileSystemAccessRule("SYSTEM","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule2)
      $AccessRule3 = New-Object System.Security.AccessControl.FileSystemAccessRule("ImageBuilderAdmin","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule3)
      Set-Acl C:\SessionRecording $Acl
      
      [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_a.ps1 -OutFile C:\SessionRecording\Scripts\script_a.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_b.ps1 -OutFile C:\SessionRecording\Scripts\script_b.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_c.ps1 -OutFile C:\SessionRecording\Scripts\script_c.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/variables.ps1 -OutFile C:\SessionRecording\Scripts\variables.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/config.json -OutFile C:\AppStream\SessionScripts\config.json
      

    4. Close the PowerShell terminal.
  4. To edit the variables.ps1 file with your own values:
    1. From the image builder desktop, choose Start in the Windows taskbar.
    2. Open the context (right-click) menu for Windows PowerShell ISE, and then choose Run as Administrator.
    3. Choose File, then Open. Navigate to the folder C:\SessionRecording\Scripts\ and open the file variables.ps1.
    4. Edit the name and the Region of the S3 bucket with the values returned by AWS CloudFormation in the Outputs tab. You can also customize the number of frames per second, and the maximum duration in seconds of each video file.
    5. Save and close the file.
  5. To download the latest FFmpeg and PsExec executables to the image builder:
    1. From the image builder desktop, open the Firefox desktop icon.
    2. Navigate to the URL https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-github and choose the link that contains essentials_build.zip to download FFmpeg. Choose Open to download and extract the ZIP archive. Copy the file ffmpeg.exe in the bin folder of the ZIP archive to C:\SessionRecording\Bin\.

      Note: FFmpeg only provides source code; compiled packages are available from third-party locations. If the link above is invalid, go to the FFmpeg download page and follow the instructions to download the latest release build for Windows.

    3. Navigate to the URL https://download.sysinternals.com/files/PSTools.zip to download PsExec. Choose Open to download and extract the ZIP archive. Copy the file PsExec64.exe to C:\SessionRecording\Bin\. Make sure that you agree to the PsExec license terms, because the solution in this blog post accepts them automatically on your behalf.
    4. Close Firefox.

Step 4: Specify an application to publish and create an image

In this step, you publish Firefox on your image builder and create an AppStream 2.0 custom image. I chose Firefox because it’s easy to test later in the procedure. You can choose other or additional applications to publish, if needed.

To specify the application to publish and create the image

  1. From the image builder desktop, open the Image Assistant icon available on the desktop. Image Assistant guides you through the image creation process.
  2. In 1. Add Apps:
    1. Choose + Add App.
    2. Enter the location C:\Program Files (x86)\Mozilla Firefox\firefox.exe to add Firefox.
    3. Choose Open. Keep the default settings and choose Save.
    4. Choose Next multiple times until you see 4. Optimize.
  3. In 4. Optimize:
    1. Choose Launch.
    2. Choose Continue until you can see 5. Configure Image.
  4. In 5. Configure Image:
    1. For Name, enter session-recording for your image name.
    2. Choose Next.
  5. In 6. Review:
    1. Choose Disconnect and Create Image.
  6. Back in the AppStream 2.0 console:
    1. Choose Images in the left pane, and then choose the Image Registry tab.
    2. Change All Images to Private and shared with others. You will see your new AppStream 2.0 image.
    3. Wait until the image is in the Available state. This can take more than 30 minutes.
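
You can also monitor the image state from the AWS CLI while you wait. This assumes that you named the image session-recording, as described earlier.

aws appstream describe-images \
    --names session-recording \
    --query 'Images[0].State'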

Step 5: Create an AppStream 2.0 fleet

Next, create an AppStream 2.0 fleet that consists of streaming instances that run your custom image.

To create the AppStream 2.0 fleet

  1. In the left pane of the AppStream 2.0 console, choose Fleets, and then choose Create Fleet.
  2. In Step 1: Provide Fleet Details:
    1. For Name, enter session-recording-fleet.
    2. Choose Next.
  3. In Step 2: Choose an Image:
    1. Select the name of the custom image that you created with the image builder.
    2. Choose Next.
  4. In Step 3: Configure Fleet:
    1. For Instance Type, select stream.standard.medium.
    2. For Fleet Type, choose Always-on.
    3. For Stream view, you can choose to stream either the applications or the entire desktop.
    4. For IAM role, select the IAM role.
    5. Keep the defaults for all other parameters, and choose Next.
  5. In Step 4: Configure Network:
    1. Choose Default Internet Access to provide internet access to your fleet instances.
    2. Select the VPC, the two subnets, and the security group.
    3. Choose Next.
  6. In Step 5: Review, choose Create.
  7. Wait until the fleet is in the Running state.
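
If you prefer the AWS CLI, a fleet with the same configuration can be created with a command like the following. The subnet IDs, security group ID, and IAM role ARN are placeholders; substitute the values from your CloudFormation outputs.

aws appstream create-fleet \
    --name session-recording-fleet \
    --image-name session-recording \
    --instance-type stream.standard.medium \
    --fleet-type ALWAYS_ON \
    --compute-capacity DesiredInstances=1 \
    --vpc-config SubnetIds=subnet-1234abcd,subnet-5678efgh,SecurityGroupIds=sg-1234abcd \
    --iam-role-arn arn:aws:iam::111122223333:role/SessionRecordingRole \
    --enable-default-internet-access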

Step 6: Create an AppStream 2.0 stack

Create an AppStream 2.0 stack and associate it with the fleet that you just created.

To create the AppStream 2.0 stack

  1. In the left pane of the AppStream 2.0 console, choose Stacks, and then choose Create Stack.
  2. In Step 1: Stack Details:
    1. For Name, enter session-recording-stack.
    2. For Fleet, select the fleet that you created.
  3. Then follow the on-screen instructions and keep the defaults for all other parameters until the stack is created.
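
The equivalent AWS CLI commands, shown here as a sketch, create the stack and associate it with the fleet:

aws appstream create-stack --name session-recording-stack

aws appstream associate-fleet \
    --fleet-name session-recording-fleet \
    --stack-name session-recording-stack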

Step 7: Create a user in the AppStream 2.0 user pool

The AppStream 2.0 user pool provides a simplified way to manage access to applications for your users. In this step, you create a user in the user pool that you will use later in the procedure to test the solution.

To create the user in the AppStream 2.0 user pool

  1. In the left pane of the AppStream 2.0 console, choose User Pool, and then choose Create User.
  2. Enter your email address, first name, and last name. Choose Create User.
  3. Select the user you just created. Choose Actions, and then choose Assign stack.
  4. Select the stack, and then choose Assign stack.
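
If you prefer the AWS CLI, the following sketch creates a user and assigns the stack to it. The email address and names are placeholders.

aws appstream create-user \
    --user-name user@example.com \
    --first-name Jane \
    --last-name Doe \
    --authentication-type USERPOOL

aws appstream batch-associate-user-stack \
    --user-stack-associations StackName=session-recording-stack,UserName=user@example.com,AuthenticationType=USERPOOL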

Step 8: Test the solution

Now, sign in to AppStream 2.0 with the user that you just created, launch a streaming session, and check that the session recordings are delivered to Amazon S3.

To launch a streaming session and test the solution

  1. AppStream 2.0 sends you a notification email. Connect to the sign-in portal by entering the information included in the notification email, and set a permanent password.
  2. Sign in to AppStream 2.0 by entering your email address and the permanent password.
  3. After you sign in, you can view the application catalog. Choose Firefox to launch a Firefox window and browse any websites you’d like.
  4. Choose the user icon at the top-right corner, and then choose Logout to end the session.

In the Amazon S3 console, navigate to the S3 bucket to browse the session recordings. For the session you just terminated, you can find one text file that contains user and instance metadata, and one or more video files that you can download and play with a media player like VLC.
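
You can also list and download the recordings from the AWS CLI. The bucket name below is a placeholder; use the bucket name returned in the CloudFormation outputs.

aws s3 ls s3://session-recording-bucket-example/ --recursive

aws s3 cp s3://session-recording-bucket-example/path/to/recording.mp4 .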

Step 9: Clean up resources

You can now clean up the resources that you created for this walkthrough, including the AppStream 2.0 resources, the S3 buckets, and the CloudFormation stack.

To clean up resources

  1. To delete the image builder:
    1. In the left pane of the AppStream 2.0 console, choose Images, and then choose Image Builder.
    2. Select the image builder. Choose Actions, then choose Delete.
  2. To delete the stack:
    1. In the left pane of the AppStream 2.0 console, choose Stacks.
    2. Select the stack. Choose Actions, then choose Disassociate Fleet. Choose Disassociate to confirm.
    3. Choose Actions, then choose Delete.
  3. To delete the fleet:
    1. In the left pane of the AppStream 2.0 console, choose Fleets.
    2. Select the fleet. Choose Actions, then choose Stop. Choose Stop to confirm.
    3. Wait until the fleet is in the Stopped state.
    4. Choose Actions, then choose Delete.
  4. To disable the user in the user pool:
    1. In the left pane of the AppStream 2.0 console, choose User Pool.
    2. Select the user. Choose Actions, then choose Disable user. Choose Disable User to confirm.
  5. Empty the S3 bucket that CloudFormation created (see How do I empty an S3 bucket?). Repeat the same operation with the buckets that AppStream 2.0 created, whose names start with appstream-settings, appstream-logs and appstream2.
  6. Delete the CloudFormation stack on the AWS CloudFormation console (see Deleting a stack on the AWS CloudFormation console).

Conclusion

In this blog post, I showed you a way to record AppStream 2.0 sessions to video files for administrative access auditing, troubleshooting, or quality assurance. While this blog post focuses on Amazon AppStream 2.0, you could adapt and deploy the solution on Amazon WorkSpaces or on Amazon Elastic Compute Cloud (Amazon EC2) Windows instances.

For a deep-dive explanation of how the solution scripts function, you can refer to the GitHub repository that contains the solution artifacts.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon AppStream 2.0 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nicolas Malaval

Nicolas is a Solution Architect for Amazon Web Services. He lives in Paris and helps our healthcare customers in France adopt cloud technology and innovate with AWS. Before that, he spent three years as a Consultant for AWS Professional Services, working with enterprise customers.

Combining encryption and signing with AWS asymmetric keys

Post Syndicated from J.D. Bean original https://aws.amazon.com/blogs/security/combining-encryption-and-signing-with-aws-asymmetric-keys/

In this post, I discuss how to use AWS Key Management Service (KMS) to combine asymmetric digital signature and asymmetric encryption of the same data.

The addition of support for asymmetric keys in AWS KMS has exciting use cases for customers. The ability to create, manage, and use public and private key pairs with KMS enables you to perform digital signing operations using RSA and Elliptic Curve Cryptography (ECC) keys. AWS KMS asymmetric keys can also be used to perform encryption and decryption operations using RSA keys. You can use these features together to digitally sign and encrypt the same data.

Another notable property of AWS KMS asymmetric keys is that they enable disconnected use cases. For example, AWS KMS asymmetric keys can be used to cryptographically verify a digital signature client-side without the need for a network connection. AWS KMS asymmetric keys also enable scenarios where customers can use KMS to securely manage decryption of data that has been encrypted by a partner’s system that does not integrate with AWS APIs or have access to AWS account credentials. For the sake of simplicity, however, the example that I discuss in this post describes a connected use case where all cryptographic actions are performed server-side in AWS KMS using AWS credentials. The use of AWS KMS asymmetric keys throughout this post allows the overall approach to be adapted to disconnected and/or non-AWS-integrated use cases.

Overview

This post contains three basic steps.

  1. Create and configure AWS asymmetric customer master keys (CMKs), AWS Identity and Access Management (IAM) roles, and key policies.
  2. In the role of a sender, use your asymmetric CMKs to sign and then encrypt a sample message.
  3. In the role of a receiver, use AWS KMS and your asymmetric CMKs to decrypt the sample message archive that you generated in the previous procedure and verify its signature.

Prerequisites

The commands I use in this tutorial were tested using AWS Command Line Interface (AWS CLI) version 2.50 on Amazon Linux 2. To run these commands in your own local environment, ensure that you have first installed and updated the AWS CLI.

I assume you have at least one administrator identity available to you that has broad rights for creating roles, assuming roles, as well as creating, managing and using KMS keys. This can be a federated identity (for example, from your corporate identity provider or from a social identity), or it can be an AWS IAM user. Where no AWS identity is mentioned, I assume that you will be accessing the AWS Management Console or the AWS CLI using this administrator identity.

For simplicity, I create the KMS keys in the same Region as each other. You must specify an AWS Region when using the AWS CLI, either explicitly or by setting a default Region. Before beginning, you should select an AWS Region to work in, such as US East (N. Virginia). If you have not configured the AWS CLI in your environment, please review the Configuration basics section of the AWS Command Line Interface User Guide for instructions. You may revert this configuration once you have finished, if you do not wish to continue using a default Region with your AWS CLI. Take note of your selected Region. When working in the AWS Console, if you do not see resources, such as AWS KMS keys, that you expect, you may want to confirm that you are viewing resources in your chosen Region. For more information on selecting your Region in the AWS Console, see Choosing a Region in the AWS Management Console Getting Started Guide.
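
For example, you can set and confirm a default Region for the AWS CLI with the following commands (us-east-1 is just an example value):

aws configure set region us-east-1
aws configure get region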

Create and configure resources

In the first phase of this tutorial, you create and configure two asymmetric AWS KMS CMKs and two AWS IAM roles, and you configure the key policies for both of your KMS CMKs to grant permissions to the roles, as shown in the following figure.
 

Figure 1: Create keys, roles, and key policies

Create asymmetric signing and encryption key pairs

In the first step, you create two asymmetric customer master keys (CMKs). One is configured for signing and verifying digital signatures, while the other is configured for encrypting and decrypting data.

Note: The CMKs configured for this post are examples. RSA and elliptic curve CMK key specs differ along a variety of dimensions. The RSA or elliptic curve key spec that you choose might be determined by your security standards or the requirements of your task. Different CMK key specs are priced differently and are subject to different request quotas, because they each have different performance profiles. In general, use RSA or ECC keys with the highest security level that is practical and affordable for your task. For more information on CMK configuration options, please review the How to choose your CMK configuration section of the KMS Developer Guide.

To create a CMK for encryption and decryption

  1. Use the KMS CreateKey API. Pass RSA_4096 for the CustomerMasterKeySpec parameter and ENCRYPT_DECRYPT for the KeyUsage parameter in the AWS CLI example command below in order to generate an RSA 4096 key pair for encryption and decryption using AWS KMS.
    aws kms create-key --customer-master-key-spec RSA_4096 \
        --key-usage ENCRYPT_DECRYPT \
        --description "Sample Digital Encryption Key Pair"
    

    Note: If successful, this command returns a KeyMetadata object. Take note of the KeyID value in this object.

  2. As a best practice, assign an alias for your key. Use the following command to assign an alias of sample-encrypt-decrypt-key to your newly created CMK (replace the target-key-id value of 1234abcd-12ab-34cd-56ef-1234567890ab with your KeyID). Mapping a human-readable alias to the KeyID will make it easier to identify, use, and manage.
    aws kms create-alias \
        --alias-name alias/sample-encrypt-decrypt-key \
        --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    

To create a CMK for signature and verification

  1. Use the KMS CreateKey API. Pass ECC_NIST_P521 for the CustomerMasterKeySpec parameter and SIGN_VERIFY for the KeyUsage parameter in the AWS CLI example command below in order to generate an elliptic curve (ECC) key pair for signature creation and verification using AWS KMS.
    aws kms create-key --customer-master-key-spec ECC_NIST_P521 \
        --key-usage SIGN_VERIFY \
        --description "Sample Digital Signature Key Pair"
    

    Note: If successful, this command returns a KeyMetadata object. Take note of the KeyID value.

  2. Use the following command to assign an alias of sample-sign-verify-key to your newly created CMK (replace the target-key-id value of 1234abcd-12ab-34cd-56ef-1234567890ab with your KeyID).
    aws kms create-alias \
        --alias-name alias/sample-sign-verify-key \
        --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    

Create sender and receiver roles

For the next step of this tutorial, you create two AWS principals. Use the steps that follow to create two roles—a sender principal and a receiver principal. Later, you will grant permissions to perform private key operations (sign and decrypt) and public key operations (verify and encrypt) to these roles.

To create and configure the roles

  1. Navigate to the AWS Identity and Access Management (IAM) Create role console dialogue that allows entities in a specified account to assume the role. Enter your Account ID and choose Next, as shown in the following figure.

    Note: If you don’t know your AWS account ID, please read Finding your AWS account ID in the AWS IAM User Guide for guidance on how to obtain this information.

    Figure 2: Enter your account ID to begin creating a role in AWS IAM

  2. Select Next through the next two screens.

    Note: By clicking next through these dialogues you do not attach an IAM permissions policy or a tag to this new role.

  3. On the final screen, enter a Role name of SenderRole and a Role description of your choice, as shown in the following figure.
     
    Figure 3: Create the sender role

  4. Choose Create role to finish creating the sender role.
  5. To create the receiver role, repeat the preceding role creation process. However, in step 3, substitute the name ReceiverRole for SenderRole.
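
If you prefer to create the roles with the AWS CLI, the following is a minimal sketch of the equivalent command (replace 111122223333 with your account ID, and repeat with the name ReceiverRole for the second role):

aws iam create-role \
    --role-name SenderRole \
    --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole"
        }]
    }'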

Configure key policy permissions

Best practice is to adhere to the principle of least privilege and provide each AWS principal with the minimal permissions necessary to perform its tasks. The sender and receiver roles that you created in the previous step currently have no permissions in your account. For this scenario, the receiver principal must be granted permission to verify digital signatures and decrypt data in AWS KMS using your asymmetric CMKs and the sender principal must be granted permission to create digital signatures and encrypt data in KMS using your asymmetric CMKs.

To provide access control permissions for AWS KMS actions to your AWS principals, attach a key policy to each of your CMKs.

Modify the CMK key policy

For the sample-encrypt-decrypt-key CMK, grant the IAM role for the sender principal (SenderRole) kms:Encrypt permissions and the IAM role for the receiver principal (ReceiverRole) kms:Decrypt permissions in the CMK key policy.

To modify the CMK key policy (console)

  1. Navigate to the AWS KMS page in the AWS Console and select customer-managed keys.
  2. Select your sample-encrypt-decrypt-key CMK.
  3. In the key policy section, choose edit.
  4. To allow your receiver principal to use the CMK to decrypt data encrypted under that CMK, append the following statement to the key policy (replace the account ID value of 111122223333 with your own).
    {
        "Sid": "Allow use of the CMK for decryption",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/ReceiverRole"},
        "Action": "kms:Decrypt",
        "Resource": "*"
    }
    

  5. To allow your sender principal to use the CMK to encrypt data, append the following statement to the key policy (replace the account ID value of 111122223333 with your own):
    {
        "Sid": "Allow use of the CMK for encryption",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/SenderRole"},
        "Action": "kms:Encrypt",
        "Resource": "*"
    }
    

  6. Choose Save changes.

Note: The kms:Encrypt permission is sufficient to permit the sender principal to encrypt small amounts of arbitrary data using your CMK directly.

Grant sign and verify permissions to the CMK key policy

For the sample-sign-verify-key CMK, grant the IAM role for the sender principal (SenderRole) kms:Sign permissions in the CMK key policy and the IAM role for the receiver principal (ReceiverRole) kms:Verify permissions in the CMK key policy.

To grant sign and verify permissions

  1. Using the same process as above, navigate to the key policy edit dialog for the sample-sign-verify-key CMK in the AWS console.
  2. To allow your sender principal to use the CMK to create digital signatures, append the following statement to the key policy (replace the account ID value of 111122223333 with your own).
    {
        "Sid": "Allow use of the CMK for digital signing",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/SenderRole"},
        "Action": "kms:Sign",
        "Resource": "*"
    }
    

  3. To allow your receiver principal to use the CMK to verify signatures created by that CMK, append the following statement to the key policy (replace the account ID value of 111122223333 with your own):
    {
        "Sid": "Allow use of the CMK for digital signature verification",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/ReceiverRole"},
        "Action": "kms:Verify",
        "Resource": "*"
    }
    

  4. Choose Save changes.

Key permissions summary

When these key policy edits have been completed, the sender principal:

  • Will have permissions to encrypt data using the sample-encrypt-decrypt-key CMK and generate digital signatures using the sample-sign-verify-key CMK.
  • Will not have permissions to decrypt or to verify signatures using the CMKs.

The receiver principal:

  • Will have permissions to decrypt data which has been encrypted using the sample-encrypt-decrypt-key CMK and to verify signatures created using the sample-sign-verify-key CMK.
  • Will not have permissions to encrypt or to generate signatures using the CMKs.

Figure 4: Summary of key policy permissions
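
You can confirm the resulting permissions by retrieving each key policy with the AWS CLI. For example, the following command (with the key ID replaced by the KeyID of your sample-encrypt-decrypt-key CMK) returns its key policy document:

aws kms get-key-policy \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --policy-name default \
    --output text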

Signing and encrypting a sample message

So far, you’ve created two asymmetric CMKs, created a set of sender and receiver roles, and configured permissions for those roles in each of your CMK key policies. In the second phase of this tutorial, you assume the role of sender and use your asymmetric signature and verification CMK to sign a sample message. You then bundle the sample message and its corresponding digital signature together into an archive and use your encryption and decryption asymmetric CMK to encrypt the archive.
 

Figure 5: Creating a message signature and encrypting the message along with its signature

Figure 5: Creating a message signature and encrypting the message along with its signature

Note: The order of operations in this process is that the message is first signed and then the signature and the message are encrypted together. This order is intentional. When a message is signed and then encrypted, neither the contents nor the identity of the sender will be available to unauthorized third parties. If the order of operations were reversed, however, and a message was first encrypted and then signed, it could leak information about the sender’s identity to unauthorized third parties. Moreover, when a message is encrypted and then signed, an unauthorized third party with access to the files could discard the authentic signature created by the sender and replace it with a valid signature created by their own key. This creates the potential for a third party to deceptively create the appearance that they are the legitimate sender of the message and exploit that misperception further.

Assume the sender role

Start by assuming the sender role. In order to successfully assume a role, you must authenticate as an IAM principal that has permission to perform sts:AssumeRole. If the principal you are authenticated as lacks this permission, you will not be able to assume the sender role.

To assume the sender role

  1. Run the following command, but be sure to replace the account ID value of 111122223333 with your account ID:
    aws sts assume-role \
        --role-arn arn:aws:iam::111122223333:role/SenderRole \
        --role-session-name AWSCLI-Session
    

  2. The return value for this command provides an access key ID, secret key, and session token. Substitute them into their respective places in the following commands and execute:
    export AWS_ACCESS_KEY_ID=ExampleAccessKeyID1
    export AWS_SECRET_ACCESS_KEY=ExampleSecretKey1
    export AWS_SESSION_TOKEN=ExampleSessionToken1
    

  3. Confirm that you’ve successfully assumed the sender role by issuing:
    aws sts get-caller-identity
    

    Note: If the output of this command contains the text assumed-role/SenderRole, then you’ve successfully assumed the sender role.
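
As an alternative to copying the credentials by hand, the three steps above can be scripted. This is a sketch, not part of the original walkthrough (replace the account ID with your own):

CREDS=$(aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/SenderRole \
    --role-session-name AWSCLI-Session \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$CREDS"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws sts get-caller-identity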

Create a message

Now, create a sample message file called message.json.

To create a message

Run the following command to create a message with the following content:

echo '
{
    "message": "The Magic Words are Squeamish Ossifrage",
    "sender": "Sender Principal"
}
' > ./message.json

Create a digital signature

Creating and verifying a digital signature for the message provides confidence that the message contents haven’t been altered after being sent. This characteristic is known as integrity. Furthermore, when access to a signing key is scoped to a particular principal, creating and verifying a digital signature for the message provides confidence in the sender’s identity. This characteristic is known as authenticity. Finally, a high degree of confidence in both the integrity and authenticity of a message limits the plausible ability of a sender to fraudulently deny having signed a message. This characteristic is known as non-repudiation.

To create a digital signature

Run the following command to create a digital signature for message.json:

aws kms sign \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --signing-algorithm ECDSA_SHA_512 \
    --message fileb://message.json \
    --output text \
    --query Signature | base64 --decode > message.sig

This generates an independent digital signature file, message.sig, for message.json. Any modification to the contents of message.json, such as changing the sender or message fields, will now cause signature validation of message.sig to fail for message.json.

Encrypt the message and signature

Even with the benefits of a digital signature, the message could still be viewed by any party with access to the file. In order to provide confidence that the message contents aren’t exposed to unauthorized parties, you can encrypt the message. This characteristic is known as confidentiality. In order to retain the benefits of your digital signature you can encrypt the message and corresponding signature together in a single package.

To encrypt the message and signature

  1. Combine your message and signature into an archive. For example, with the GNU Tar utility you can issue the following:
    tar -czvf message.tar.gz message.sig message.json
    

    This will create a new archive file named message.tar.gz containing both your message and message signature.

  2. Encrypt the archive using AWS KMS. To do so, issue the following command:
    aws kms encrypt \
        --key-id alias/sample-encrypt-decrypt-key \
        --encryption-algorithm RSAES_OAEP_SHA_256 \
        --plaintext fileb://message.tar.gz \
        --output text \
        --query CiphertextBlob | base64 --decode > message.enc
    

    This will output a message.enc file containing an encrypted copy of the message.tar.gz archive.

Decrypting and verifying a sample message

Now that you’ve created, signed, and encrypted a message, let’s change gears and see what working with this message.enc file is like from the perspective of a receiving party. In the final phase of this tutorial you assume the role of receiver and use your asymmetric CMKs to decrypt the encrypted message archive and verify the digital signature that you created. Finally, you will view your message. The process is shown in the following figure.
 

Figure 6: Decrypting a message archive and verifying the message signature

Assume the receiver role

Assume the receiver role so that you can simulate receiving a signed and encrypted message. As before, in order to assume the receiver role, you must authenticate as an IAM principal that has permission to perform sts:AssumeRole. If the principal you are authenticated as lacks this permission, you will not be able to assume the receiver role.

To assume the receiver role

  1. Copy the message.enc file to a new directory to create a clean working space and navigate there in a terminal session.
  2. Assume your receiver role. To do so, execute the following command, replacing the account ID value of 111122223333 with your own:
    aws sts assume-role \
        --role-arn arn:aws:iam::111122223333:role/ReceiverRole \
        --role-session-name AWSCLI-Session
    

  3. The return value for this command provides an access key ID, secret key, and session token. Substitute them into their respective places in the following commands and execute:
    export AWS_ACCESS_KEY_ID=ExampleAccessKeyID1
    export AWS_SECRET_ACCESS_KEY=ExampleSecretKey1
    export AWS_SESSION_TOKEN=ExampleSessionToken1
    

  4. Confirm that you have successfully assumed the receiver role by issuing:
    aws sts get-caller-identity
    

If the output of this command contains the text assumed-role/ReceiverRole then you have successfully assumed the receiver role.

Decrypt the encrypted message archive in AWS KMS

Decrypt the encrypted message archive to access the plaintext of the message and message signature files.

To decrypt the encrypted message archive

  1. Issue the following command:
    aws kms decrypt \
        --key-id alias/sample-encrypt-decrypt-key \
        --ciphertext-blob fileb://message.enc \
        --encryption-algorithm RSAES_OAEP_SHA_256 \
        --output text \
        --query Plaintext | base64 --decode > message.tar.gz
    

  2. This will create an unencrypted message.tar.gz file that you can unpack with:
    tar -xzvf message.tar.gz
    

This, in turn, will extract the archive contents, message.sig and message.json, into your working directory.

Verify the message signature

To verify the signature on the message issue the following command:

aws kms verify \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --message fileb://message.json \
    --signing-algorithm ECDSA_SHA_512 \
    --signature fileb://message.sig

In the response, you should see that SignatureValid is marked true, indicating that the signature has been verified using the specified sample-sign-verify-key CMK that you granted the sender principal permission to generate signatures with.

View the message

Finally, open message.json and view the file’s contents by issuing the following command:

less message.json

You will see that the contents of the file have not been modified and still read:

{ 
    "message": "The Magic Words are Squeamish Ossifrage", 
    "sender": "Sender Principal" 
}

Note: Be careful to avoid making any changes to the contents of this file. Even a minor modification of the message contents will compromise the integrity of the message and cause future attempts at signature validation using your message.sig file to fail.

Summary

In this tutorial, you signed and encrypted data using two AWS KMS asymmetric CMKs and later decrypted and verified your signature using those CMKs.

You first created two asymmetric CMKs in AWS KMS, one for creating and verifying digital signatures and the other for encrypting and decrypting data. You then configured key policy permissions for your sender and receiver principals. Acting as your sender principal, you digitally signed a message in AWS KMS, added the message and signature to an archive and then encrypted that archive in AWS KMS. Next you assumed your receiver role and decrypted the archive in AWS KMS, viewed your message, and verified its signature in AWS KMS.

To learn more about the asymmetric keys feature of AWS KMS, please read the AWS KMS Developer Guide. If you have questions about the asymmetric keys feature, please start a new thread on the AWS KMS forum. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

J.D. Bean

J.D. is a Senior Solutions Architect at AWS working with public sector organizations and financial institutions based out of New York City. His interests include security, privacy, and compliance. He is passionate about his work enabling AWS customers’ successful cloud journeys. J.D. holds a Bachelor of Arts from The George Washington University and a Juris Doctor from New York University School of Law.

Aligning IAM policies to user personas for AWS Security Hub

Post Syndicated from Vaibhawa Kumar original https://aws.amazon.com/blogs/security/aligning-iam-policies-to-user-personas-for-aws-security-hub/

AWS Security Hub provides you with a comprehensive view of your security posture across your accounts in Amazon Web Services (AWS) and gives you the ability to take action on your high-priority security alerts. There are several different user personas that use Security Hub, and they typically require different AWS Identity and Access Management (IAM) permissions. Those personas include: a security administrator; a security analyst or engineer on a central security team or in a Cloud Center of Excellence; and a Developer Operations (DevOps) engineer or application builder who is the primary owner of an AWS account. In this post, we show how to deploy sample IAM policies for these three personas.

The first persona, a security administrator or cloud system administrator (sysadmin), is responsible for setting up and configuring Security Hub, and they typically need access to all of the Security Hub APIs to do this work. As part of this work, the sysadmin enables Security Hub on various accounts and Regions, decides which standards and controls should be enabled on which accounts, enables product integrations, creates insights, sets up custom actions and automated remediations, and configures IAM policies for other users.

The second persona, a security analyst or engineer using Security Hub, is part of a central security team, often part of a Cloud Center of Excellence. We often see that cloud security efforts are centralized in Cloud Centers of Excellence within a company. These security analysts or engineers typically have access to the master account in Security Hub and can view and take action on findings from any of the connected member accounts. They typically are not configuring Security Hub, so they don’t need permissions to do so.

The third persona is a DevOps engineer or application builder. This user needs the ability to view findings and take action only on the findings associated with their account. For cloud workloads, security is often decentralized down to these users. Security Hub enables them to take more proactive responsibility for the security of their own account by directly viewing and taking action on findings in their account. They typically don’t need permissions to set up and configure Security Hub, because that is done by a central sysadmin.

Overview

The following reference architecture presents an overview of a Security Hub master-member account structure and three personas: a security administrator, a security analyst/engineer, and a DevOps engineer.
 

Figure 1: Reference architecture

In this blog post, we show how you can create and use the following AWS managed and customer managed IAM policies to support these three personas:

  • The sysadmin persona needs permissions to configure and manage Security Hub, account memberships, insights, and integrations; to create remediations; and to take actions and perform record and workflow updates. The AWS managed IAM policy called AWSSecurityHubFullAccess provides the permissions for this persona. An IAM user or role with these permissions can deploy and configure Security Hub in master and member accounts. They can also update findings. The sysadmin also requires permissions to configure AWS Config and Amazon CloudWatch event rules to set up automated responses and remediations.
  • The security analyst persona needs permissions to read, list, and describe findings, standards, controls, and products; to update findings; and to create and update insights for Security Hub resources in the master account. The AWS managed IAM policy called AWSSecurityHubReadOnlyAccess provides the permissions needed for read, list, and describe actions, and a customer managed policy will be attached to give permissions to create and update insights and to update findings.
  • The DevOps engineer persona needs the same permissions as the security analyst persona, but they will only have the ability to access their own Security Hub member AWS account(s) and won’t have access to the master account.

Depending on your specific use case, you might want to provide additional permissions to the security analyst and DevOps engineer personas. For example, you might want to also grant them permissions to create custom actions by using the UpdateActionTarget API. In that case, you should also ensure that they have appropriate permissions to create CloudWatch event rules. You can also restrict these personas to only be able to update certain fields in findings (for example, only update workflow status but not severity) by using IAM context keys.

Prerequisites

You must have already enabled Security Hub with one account as master and other associated accounts as members. You will use the following AWS services: AWS Security Hub, AWS Identity and Access Management (IAM), AWS Config, and Amazon CloudWatch Events.

Implementation

To create the required customer managed policies and associate them to users and roles, you will perform these tasks, described in more detail later in this section:

  1. Create a customer managed policy and associate it with the user and role for the security analyst persona in the Security Hub master account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.
  2. Create a customer managed policy and associate it with the user and role for the DevOps persona in the Security Hub member account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.
  3. Create a sysadmin user and role and associate it with the AWS managed policy for AWSSecurityHubFullAccess, along with AWS Config and CloudWatch event rule permissions in the Security Hub master account.

The following JSON policy documents define the two customer managed policies.

Security Hub – Security Analyst policy (master account customer managed policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SecurityAnalystMasterCMP",
            "Effect": "Allow",
            "Action": [
                "securityhub:UpdateInsight",
                "securityhub:CreateInsight",
                "securityhub:BatchUpdateFindings"
            ],
            "Resource": "*"
        }
    ]
}

Security Hub – DevOps Engineer policy (member account customer managed policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DevOpsMemberCMP",
            "Effect": "Allow",
            "Action": [
                "securityhub:UpdateInsight",
                "securityhub:CreateInsight",
                "securityhub:BatchUpdateFindings"
            ],
            "Resource": "*"
        }
    ]
}

Step 1: Create the user and role for the security analyst persona

First, create a customer managed policy and associate it with the user and role for the security analyst persona in the Security Hub master account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.

To create the IAM policy, user, and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub master account and open the IAM console.
  2. In the IAM console navigation pane, choose Policies, and then choose Create policy.
  3. Choose the JSON tab.
  4. Copy the Security Hub – Security Analyst policy JSON shown earlier in this section, and paste it into the JSON editor. When you are finished, choose Review policy.
  5. On the Review policy page, enter a name and a description (optional) for the policy that you’re creating. Review the policy summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.
  6. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role, and attach the policy you just created.
  7. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  8. Choose the permissions category Attach existing policies directly to filter and select the managed policy AWSSecurityHubReadOnlyAccess. Review the permissions and then choose Add permissions.
  9. Save all the changes and try using the user and role after a few minutes.
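
If you prefer the AWS CLI, the same policy creation and attachments look like the following sketch. The policy name, the role name SecurityAnalystRole, and the file name analyst-policy.json are assumptions; the same pattern applies to the DevOps engineer and sysadmin personas in the later steps.

aws iam create-policy \
    --policy-name SecurityAnalystMasterCMP \
    --policy-document file://analyst-policy.json

aws iam attach-role-policy \
    --role-name SecurityAnalystRole \
    --policy-arn arn:aws:iam::111122223333:policy/SecurityAnalystMasterCMP

aws iam attach-role-policy \
    --role-name SecurityAnalystRole \
    --policy-arn arn:aws:iam::aws:policy/AWSSecurityHubReadOnlyAccess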

Step 2: Create the user and role for the DevOps persona

Next, create a customer managed policy and associate it with the user and role for the DevOps persona in the Security Hub member account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.

To create the IAM policy, user, and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub member account and open the IAM console.
  2. In the IAM console navigation pane, choose Policies, and then choose Create policy.
  3. Choose the JSON tab.
  4. Copy the Security Hub – DevOps Engineer policy JSON shown earlier in this section, and paste it into the JSON editor. When you are finished, choose Review policy.
  5. On the Review policy page, enter a name and a description (optional) for the policy that you are creating. Review the policy summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.
  6. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role, and attach the policy you just created.
  7. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  8. Choose the permissions category Attach existing policies directly to filter and select the managed policy AWSSecurityHubReadOnlyAccess. Review the permissions and then choose Add permissions.
  9. Save all the changes and try using the user and role after a few minutes.

Step 3: Create the user and role for the sysadmin persona

Next, create a user and role for the sysadmin persona, attach the AWS managed policies AWSSecurityHubFullAccess and CloudWatchEventsFullAccess, and grant full access to AWS Config, in the Security Hub master account.

To create the IAM user and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub master account and open the IAM console.
  2. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role.
  3. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  4. Choose the permissions category Attach existing policies directly to filter and select managed policies, and attach the managed policies AWSSecurityHubFullAccess and CloudWatchEventsFullAccess.
  5. Create another policy to grant full access to AWS Config as described in the AWS Config Developer Guide, and attach the policy to the user or role. Review the permissions, and choose Add permissions.
  6. Save all the changes and try using the user and role after a few minutes.

Test the users and roles in the Security Hub master and member accounts

Finally, test the three users and roles that you created for the respective personas in the preceding steps.

To test the users and roles

  1. Security analyst user and role:
    1. Sign in to the AWS Management Console in the master account and open Security Hub.
    2. Make sure that Security Hub UI features (such as the Summary, Security Standard, Insights, Findings, and Integrations) are rendered so that the Security Analyst can view them.
    3. Navigate to a finding and change the workflow status as described in the topic Setting the workflow status for findings.
  2. DevOps engineer user and role:
    1. Sign in to the AWS Management Console in the member account and open Security Hub.
    2. Make sure that Security Hub UI features (such as the Summary, Security Standard, Insights, Findings, and Integrations) are rendered so that the DevOps Engineer can view them.
    3. Create a custom insight for the member account and other attribute groupings.
  3. Sysadmin user and role:
    1. Sign in to the AWS Management Console in the master or member account, and open Security Hub.
    2. Try admin-related operations such as create a custom action, invite another account, and so on.

Adding policy conditions

You might want to further restrict the permissions for the security analyst and DevOps personas. IAM policies for Security Hub’s BatchUpdateFindings API enable you to specify conditions to prevent a user from making any update to a specific finding field. The following example disallows setting the Workflow Status field to Suppressed.

{
    "Sid": "CMPCondition",
    "Effect": "Deny",
    "Action": "securityhub:BatchUpdateFindings",
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "securityhub:ASFFSyntaxPath/Workflow.Status": "SUPPRESSED"
        }
    }
}

Summary

In this post, we showed you how to align AWS managed and customer managed IAM policies to different user personas, so that you can allow different users to access Security Hub with least privilege permissions. Security Hub enables both central security teams and individual DevOps engineers to understand and improve the security posture of the AWS accounts in their organization.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vaibhawa Kumar

Vaibhawa is a Cloud Infra Architect with AWS Professional Services. He helps customers on their cloud journey of critical workloads to the AWS cloud with infrastructure security and automations. In his free time, you can find him spending time with family, sports, and cooking.

Author

Sanjay Patel

Sanjay is a Senior Cloud Application Architect with AWS Professional Services. He has a diverse background in software design, enterprise architecture, and API integrations. He has helped AWS customers automate infrastructure security. He enjoys working with AWS customers to identify and implement the best fit solution.

Author

Ely Kahn

Ely is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

Rapid and flexible Infrastructure as Code using the AWS CDK with AWS Solutions Constructs

Post Syndicated from Biff Gaut original https://aws.amazon.com/blogs/devops/rapid-flexible-infrastructure-with-solutions-constructs-cdk/

Introduction

As workloads move to the cloud and all infrastructure becomes virtual, infrastructure as code (IaC) becomes essential to leverage the agility of this new world. JSON and YAML are the powerful, declarative modeling languages of AWS CloudFormation, allowing you to define complex architectures using IaC. Just as higher-level languages like BASIC and C abstracted away the details of assembly language and made developers more productive, the AWS Cloud Development Kit (AWS CDK) provides a programming model above the native template languages, a model that makes developers more productive when creating IaC. When you instantiate CDK objects in your TypeScript (or Python, Java, etc.) application, those objects “compile” into a YAML template that the CDK deploys as an AWS CloudFormation stack.

AWS Solutions Constructs take this simplification a step further by providing a library of common service patterns built on top of the CDK. These multi-service patterns allow you to deploy multiple resources with a single object, resources that follow best practices by default – both independently and throughout their interaction.

Figure: Application Development Stack vs. IaC Development Stack – an application stack of assembly language, 4th generation languages, and object libraries such as Hibernate compared with an IaC stack of CloudFormation, the AWS CDK, and AWS Solutions Constructs

Solution overview

To demonstrate how using Solutions Constructs can accelerate the development of IaC, in this post you will create an architecture that ingests and stores sensor readings using Amazon Kinesis Data Streams, AWS Lambda, and Amazon DynamoDB.

An architecture diagram showing sensor readings being sent to a Kinesis data stream. A Lambda function will receive the Kinesis records and store them in a DynamoDB table.

Prerequisite – Setting up the CDK environment

Tip – If you want to try this example but are concerned about the impact of changing the tools or versions on your workstation, try running it on AWS Cloud9. An AWS Cloud9 environment is launched with an AWS Identity and Access Management (AWS IAM) role and doesn’t require configuring with an access key. It uses the current region as the default for all CDK infrastructure.

To prepare your workstation for CDK development, confirm the following:

  • Node.js 10.3.0 or later is installed on your workstation (regardless of the language used to write CDK apps).
  • You have configured credentials for your environment. If you’re running locally you can do this by configuring the AWS Command Line Interface (AWS CLI).
  • TypeScript 2.7 or later is installed globally (npm -g install typescript)

Before creating your CDK project, install the CDK toolkit using the following command:

npm install -g aws-cdk

Create the CDK project

  1. First create a project folder called stream-ingestion with these two commands:

mkdir stream-ingestion
cd stream-ingestion

  2. Now create your CDK application using this command:

npx aws-cdk@1.68.0 init app --language=typescript

Tip – This example will be written in TypeScript – you can also specify other languages for your projects.

At this time, you must use the same version of the CDK and Solutions Constructs. We’re using version 1.68.0 of both based upon what’s available at publication time, but you can update this with a later version for your projects in the future.

Let’s explore the files in the application this command created:

  • bin/stream-ingestion.ts – This is the module that launches the application. The key line of code is:

new StreamIngestionStack(app, 'StreamIngestionStack');

This creates the actual stack, and it’s in StreamIngestionStack that you will write the CDK code that defines the resources in your architecture. (A sketch of the full module appears after this list.)

  • lib/stream-ingestion-stack.ts – This is the important class. In the constructor of StreamIngestionStack you will add the constructs that will create your architecture.
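
For orientation, the generated bin/stream-ingestion.ts module looks roughly like the following; the exact boilerplate varies slightly between CDK versions.

#!/usr/bin/env node
import * as cdk from "@aws-cdk/core";
import { StreamIngestionStack } from "../lib/stream-ingestion-stack";

// Create the CDK app and instantiate the stack that will hold the architecture
const app = new cdk.App();
new StreamIngestionStack(app, "StreamIngestionStack");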

During the deployment process, the CDK uploads your Lambda function to an Amazon S3 bucket so it can be incorporated into your stack.

  3. To create that S3 bucket and any other infrastructure the CDK requires, run this command:

cdk bootstrap

The CDK uses the same supporting infrastructure for all projects within a region, so you only need to run the bootstrap command once in any region in which you create CDK stacks.
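
If you need to bootstrap a specific account and Region explicitly (for example, before your first deployment to another environment), you can pass the target environment to the command; the account number below is a placeholder:

cdk bootstrap aws://111122223333/us-east-1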

  4. To install the required Solutions Constructs packages for our architecture, run these two commands from the command line:

npm install @aws-solutions-constructs/aws-kinesisstreams-lambda@1.68.0
npm install @aws-solutions-constructs/aws-lambda-dynamodb@1.68.0

Write the code

First you will write the Lambda function that processes the Kinesis data stream messages.

  1. Create a folder named lambda under stream-ingestion
  2. Within the lambda folder save a file called lambdaFunction.js with the following contents:
var AWS = require("aws-sdk");

// Create the DynamoDB service object
var ddb = new AWS.DynamoDB({ apiVersion: "2012-08-10" });

AWS.config.update({ region: process.env.AWS_REGION });

// We will configure our construct to 
// look for the .handler function
exports.handler = async function (event) {
  try {
    // Kinesis will deliver records 
    // in batches, so we need to iterate through
    // each record in the batch
    for (let record of event.Records) {
      const reading = parsePayload(record.kinesis.data);
      await writeRecord(record.kinesis.partitionKey, reading);
    };
  } catch (err) {
    console.log(`Write failed, err:\n${JSON.stringify(err, null, 2)}`);
    throw err;
  }
  return;
};

// Write the provided sensor reading data to the DynamoDB table
async function writeRecord(partitionKey, reading) {

  var params = {
    // Notice that Constructs automatically sets up 
    // an environment variable with the table name.
    TableName: process.env.DDB_TABLE_NAME,
    Item: {
      partitionKey: { S: partitionKey },  // sensor Id
      timestamp: { S: reading.timestamp },
      value: { N: reading.value}
    },
  };

  // Call DynamoDB to add the item to the table
  await ddb.putItem(params).promise();
}

// Decode the payload and extract the sensor data from it
function parsePayload(payload) {

  const decodedPayload = Buffer.from(payload, "base64").toString(
    "ascii"
  );

  // Our CLI command will send the records to Kinesis
  // with the values delimited by '|'
  const payloadValues = decodedPayload.split("|", 2)
  return {
    value: payloadValues[0],
    timestamp: payloadValues[1]
  }
}

We won’t spend a lot of time explaining this function – it’s pretty straightforward and heavily commented. It receives an event with one or more sensor readings, and for each reading it extracts the pertinent data and saves it to the DynamoDB table.
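
For example, using the sample values from the put-records command later in this post, a record with partition key 1301 and payload 15.4|2020-08-22T01:16:36+00:00 would be written to the table as an item shaped like this:

// Item written by writeRecord for partition key "1301" and
// payload "15.4|2020-08-22T01:16:36+00:00" (values from the put-records command later in this post)
const exampleItem = {
  partitionKey: { S: "1301" },                   // sensor Id
  timestamp: { S: "2020-08-22T01:16:36+00:00" }, // reading timestamp (sort key)
  value: { N: "15.4" },                          // DynamoDB numbers are passed as strings
};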

You will use two Solutions Constructs to create your infrastructure:

  • aws-kinesisstreams-lambda deploys an Amazon Kinesis data stream and a Lambda function that subscribes to that stream. To support this, it also creates other resources, such as IAM roles and encryption keys.
  • aws-lambda-dynamodb deploys a Lambda function and an Amazon DynamoDB table, and grants the function permission to access the table.
  3. To deploy the first of these two constructs, replace the code in lib/stream-ingestion-stack.ts with the following code:
import * as cdk from "@aws-cdk/core";
import * as lambda from "@aws-cdk/aws-lambda";
import { KinesisStreamsToLambda } from "@aws-solutions-constructs/aws-kinesisstreams-lambda";

import * as ddb from "@aws-cdk/aws-dynamodb";
import { LambdaToDynamoDB } from "@aws-solutions-constructs/aws-lambda-dynamodb";

export class StreamIngestionStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const kinesisLambda = new KinesisStreamsToLambda(
      this,
      "KinesisLambdaConstruct",
      {
        lambdaFunctionProps: {
          // Where the CDK can find the lambda function code
          runtime: lambda.Runtime.NODEJS_10_X,
          handler: "lambdaFunction.handler",
          code: lambda.Code.fromAsset("lambda"),
        },
      }
    );

    // Next Solutions Construct goes here
  }
}

Let’s explore this code:

  • It instantiates a new KinesisStreamsToLambda object. This Solutions Construct will launch a new Kinesis data stream and a new Lambda function, setting up the Lambda function to receive all the messages in the Kinesis data stream. It will also deploy all the additional resources and policies required for the architecture to follow best practices.
  • The third argument to the constructor is the properties object, where you specify overrides of default values or any other information the construct needs. In this case you provide properties for the encapsulated Lambda function that informs the CDK where to find the code for the Lambda function that you stored as lambda/lambdaFunction.js earlier.
  4. Now you’ll add the second construct that connects the Lambda function to a new DynamoDB table. In the same lib/stream-ingestion-stack.ts file, replace the line // Next Solutions Construct goes here with the following code:
    // Define the primary key for the new DynamoDB table
    const primaryKeyAttribute: ddb.Attribute = {
      name: "partitionKey",
      type: ddb.AttributeType.STRING,
    };

    // Define the sort key for the new DynamoDB table
    const sortKeyAttribute: ddb.Attribute = {
      name: "timestamp",
      type: ddb.AttributeType.STRING,
    };

    const lambdaDynamoDB = new LambdaToDynamoDB(
      this,
      "LambdaDynamodbConstruct",
      {
        // Tell construct to use the Lambda function in
        // the first construct rather than deploy a new one
        existingLambdaObj: kinesisLambda.lambdaFunction,
        tablePermissions: "Write",
        dynamoTableProps: {
          partitionKey: primaryKeyAttribute,
          sortKey: sortKeyAttribute,
          billingMode: ddb.BillingMode.PROVISIONED,
          removalPolicy: cdk.RemovalPolicy.DESTROY
        },
      }
    );

    // Add autoscaling
    const readScaling = lambdaDynamoDB.dynamoTable.autoScaleReadCapacity({
      minCapacity: 1,
      maxCapacity: 50,
    });

    readScaling.scaleOnUtilization({
      targetUtilizationPercent: 50,
    });

Let’s explore this code:

  • The first two const objects define the names and types for the partition key and sort key of the DynamoDB table.
  • The LambdaToDynamoDB construct instantiated here creates a new DynamoDB table and grants your Lambda function access to it. The key to this call is the properties object you pass in the third argument.
    • The first property sent to LambdaToDynamoDB is existingLambdaObj – by setting this value to the Lambda function created by KinesisStreamsToLambda, you’re telling the construct to not create a new Lambda function, but to grant the Lambda function in the other Solutions Construct access to the DynamoDB table. This illustrates how you can chain many Solutions Constructs together to create complex architectures.
    • The second property sent to LambdaToDynamoDB tells the construct to limit the Lambda function’s access to the table to write only.
    • The third property sent to LambdaToDynamoDB is actually a full properties object defining the DynamoDB table. It provides the two attribute definitions you created earlier as well as the billing mode. It also sets the RemovalPolicy to DESTROY. This policy setting ensures that the table is deleted when you delete this stack – in most cases you should accept the default setting to protect your data.
  • The last two lines of code show how you can use statements to modify a construct outside the constructor. In this case we set up auto scaling on the new DynamoDB table, which we can access with the dynamoTable property on the construct we just instantiated.

That’s all it takes to create all of the resources needed to deploy your architecture.

  5. Save all the files, then compile the TypeScript using this command:

npm run build

  6. Finally, launch the stack using this command:

cdk deploy

(Enter “y” in response to Do you wish to deploy all these changes (y/n)?)

You will see some warnings where you override CDK default values. Because you are doing this intentionally you may disregard these, but it’s always a good idea to review these warnings when they occur.

Tip – Many mysterious CDK project errors stem from mismatched versions. If you get stuck on an inexplicable error, check package.json and confirm that all CDK and Solutions Constructs libraries have the same version number (with no leading caret ^). If necessary, correct the version numbers, delete the package-lock.json file and node_modules tree and run npm install. Think of this as the “turn it off and on again” first response to CDK errors.
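
For example, with the 1.68.0 versions used in this post, the relevant part of package.json might look like the following; note the exact versions with no leading caret:

"dependencies": {
  "@aws-cdk/aws-dynamodb": "1.68.0",
  "@aws-cdk/aws-lambda": "1.68.0",
  "@aws-cdk/core": "1.68.0",
  "@aws-solutions-constructs/aws-kinesisstreams-lambda": "1.68.0",
  "@aws-solutions-constructs/aws-lambda-dynamodb": "1.68.0"
}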

You have now deployed the entire architecture for the demo – open the CloudFormation stack in the AWS Management Console and take a few minutes to explore all 12 resources that the program deployed (and the 380-line template generated to create them).

Feed the Stream

Now use the CLI to send some data through the stack.

Go to the Kinesis Data Streams console and copy the name of the data stream. Replace the stream name in the following command and run it from the command line.

aws kinesis put-records \
--stream-name StreamIngestionStack-KinesisLambdaConstructKinesisStreamXXXXXXXX-XXXXXXXXXXXX \
--records \
PartitionKey=1301,'Data=15.4|2020-08-22T01:16:36+00:00' \
PartitionKey=1503,'Data=39.1|2020-08-22T01:08:15+00:00'

Tip – If you are using the AWS CLI v2, the previous command will result in an “Invalid base64…” error because v2 expects the inputs to be Base64 encoded by default. Adding the argument --cli-binary-format raw-in-base64-out will fix the issue.

To confirm that the messages made it through the service, open the DynamoDB console – you should see the two records in the table.

Now that you’ve got it working, pause to think about what you just did. You deployed a system that can ingest and store sensor readings and scale to handle heavy loads. You did that by instantiating two objects – well under 60 lines of code. Experiment with changing some property values and deploying the changes by running npm run build and cdk deploy again.
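
For example, Solutions Constructs typically accept a props override for each service they deploy. Assuming a kinesisStreamProps override on KinesisStreamsToLambda (check the construct’s documentation for the exact property name), a sketch of provisioning two shards on the data stream could look like this:

    const kinesisLambda = new KinesisStreamsToLambda(
      this,
      "KinesisLambdaConstruct",
      {
        // Assumed override property; see the Solutions Constructs documentation for exact names
        kinesisStreamProps: {
          shardCount: 2,
        },
        lambdaFunctionProps: {
          runtime: lambda.Runtime.NODEJS_10_X,
          handler: "lambdaFunction.handler",
          code: lambda.Code.fromAsset("lambda"),
        },
      }
    );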

Cleanup

To clean up the resources in the stack, run this command:

cdk destroy

Conclusion

Just as languages like BASIC and C allowed developers to write programs at a higher level of abstraction than assembly language, the AWS CDK and AWS Solutions Constructs allow us to create CloudFormation stacks in TypeScript, Java, or Python instead of JSON or YAML. Just as there will always be a place for assembly language, there will always be situations where we want to write CloudFormation templates manually – but for most situations, we can now use the AWS CDK and AWS Solutions Constructs to create complex and complete architectures in a fraction of the time with very little code.

AWS Solutions Constructs can currently be used in CDK applications written in TypeScript, JavaScript, Java, and Python, and will be available in C# applications soon.

About the Author

Biff Gaut has been shipping software since 1983, from small startups to large IT shops. Along the way he has contributed to 2 books, spoken at several conferences and written many blog posts. He is now a Principal Solutions Architect at AWS working on the AWS Solutions Constructs team, helping customers deploy better architectures more quickly.

How to implement password-less authentication with Amazon Cognito and WebAuthn

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-implement-password-less-authentication-with-amazon-cognito-and-webauthn/

In this blog post, I show you how to offer a password-less authentication experience to your customers. To do this, you’ll allow physical security keys or platform authenticators (like finger-print scanners) to be used as the authentication factor to your web or mobile applications that use Amazon Cognito user pools for authentication.

An Amazon Cognito user pool is a user directory that Amazon Web Services (AWS) customers use to manage their customer identities. Web Authentication (WebAuthn) is a W3C standard that lets users authenticate to web applications using public-key cryptography. Using public-key cryptography enables you to implement a stronger authentication mechanism that’s less dependent on passwords.

Mobile and web applications can use WebAuthn together with browser and device support for the Client-To-Authenticator-Protocol (CTAP) to implement Fast ID Online (FIDO) authentication. To learn more about the flow of WebAuthn and CTAP, visit the FIDO Alliance.

How it works

Amazon Cognito user pools allow you to build a custom authentication flow that uses AWS Lambda functions to authenticate users based on one or more challenge/response cycles. You can use this flow to implement password-less authentication that is based on custom challenges. To use this flow to implement FIDO authentication, you need to create credentials during the registration phase and reference these credentials in the user’s profile. You can then validate these credentials during the authentication phase in a custom challenge.

During registration, a new set of credentials bound to your application (the relying party) is created through a FIDO authenticator, such as a platform authenticator with a biometric sensor or a roaming authenticator like a physical security key. The private key of this credential set remains on the authenticator; the public key, together with a credential identifier, is saved in a custom attribute that’s part of the user profile in Amazon Cognito.

During authentication, an Amazon Cognito custom authentication flow presents a custom challenge. The application prompts the user to sign in using the authenticator that they used during registration, and the authenticator’s response is passed to Amazon Cognito as the challenge response and verified using the stored public key.

In this password-less flow, the private key never leaves the physical device, and the authenticator also validates that the relying party in the authentication request matches the relying party that was used to create the credentials. This combination provides a more secure authentication flow that uses stronger credentials, protects users from phishing, and provides a better user experience.

About the demo project

This blog post and the diagrams below explain a scenario that uses FIDO as the only authentication factor, to implement password-less authentication. To help with implementation details, I created a project to demonstrate WebAuthn integration with Amazon Cognito that provides sample code for three scenarios:

  • A scenario that uses FIDO as the only factor (password-less)
  • A scenario that uses FIDO as a second factor (with password)
  • A scenario that lets users sign-in with only a password

This project is only a demonstration and shouldn’t be used as-is in production environments. When using FIDO as an authentication factor, it is a best practice to allow users to register multiple authenticators, and you need to implement an account recovery workflow in case of a lost authenticator.

Implementing FIDO Authentication

Let’s take a deeper dive into the design and components involved in implementing this solution. To deploy this project in your development environment, follow the instructions in the WebAuthn integration with Amazon Cognito project.

Creating and configuring user pool

The first step is to create a Cognito user pool and triggers that orchestrate a custom authentication flow. You do that by deploying the CloudFormation stack that will create all resources as explained in the demo project.

A few implementation details to note about the user pool:

  • The template creates a user pool, app client, triggers, and Lambda functions to use for custom authentication.
  • The template creates a custom attribute called publicKeyCred. This is the custom attribute that holds a base64 encoded representation of the credential identifier and public key for the user’s authenticator.
  • The app client defines what authentication flows are allowed. You can limit allowed flows according to your use-case. To support FIDO authentication, you must allow CUSTOM_AUTH flow.
  • The app client has “write” permissions to the custom attribute publicKeyCred but not “read” permissions. This allows your application to write the attribute during registration or profile updates but excludes this attribute from the user’s id_token. Since this attribute is considered back-end data that is only used during authentication, it doesn’t need to be part of user profile in the id_token.

User registration flow

The registration flow needs to create credentials using the authenticator and store the public-key in user’s profile. Let’s take a closer look at the sequence of calls and involved components to implement this flow.
 

Figure 1: WebAuthn user registration process

  1. The user navigates to your application, www.example.com (the relying party), and creates an account. A request is sent to the relying party to build a credentials options object and send it back to the browser. (In the demo project, this starts in the createCredentials function in webauthn-client.js and creates the credentials options object by making a call to createCredRequest in authn.js.)
  2. The browser uses the built-in WebAuthn APIs to create the new credentials with an available authenticator, using the credentials options object that was created in the first step. This is done by making a call to the navigator.credentials.create API, which is available in browsers and platforms that support FIDO and WebAuthn (a browser-side sketch appears at the end of this section).
  3. The user experience in this step depends on the OS, browser, and the authenticator. For example, the browser could prompt the user to attach a security key or, on devices that support it, to use a biometric scanner.
     
    Figure 2: User registration and browser alert to use an authenticator in Firefox

  4. The user interacts with an authenticator (by touching a security key or scanning a finger on a Touch ID device), which generates new credentials bound to the relying party and returns a response object to the browser.
  5. The browser sends the credential response object to the relying party to parse and validate on the server side. The credential identifier and public key are extracted from the credential response. At this step, your application can also check additional authenticator data and use it to make authentication decisions. For example, your application can check whether the authenticator verified the user’s identity through a PIN or biometrics (UV flag) or only verified user presence (UP flag). In the demo project, this is still part of the createCredentials function, and server-side parsing and validation are done in parseCredResponse, which is implemented in authn.js.
  6. To complete the user registration, the browser passes the profile attributes that were collected during registration to Amazon Cognito APIs as custom attributes. This step is performed in the signUp function in webauthn-client.js.

At the conclusion of this process, a new user will be created in Amazon Cognito, and the custom attribute “publicKeyCred” will be populated with a base64-encoded string that includes a credential identifier and the public key generated by the authenticator. This attribute is not considered secret or sensitive data; rather, it contains the public key that will be used to verify the authenticator response during subsequent authentications.
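
To make step 2 concrete, here is a minimal browser-side sketch of the create call. The options object is built by the relying party (createCredRequest in the demo); the field values below are placeholders for illustration, not the demo’s actual configuration.

// Minimal sketch of navigator.credentials.create (step 2); all values are placeholders
async function registerAuthenticator() {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: Uint8Array.from("server-generated-challenge", (c) => c.charCodeAt(0)),
    rp: { name: "Example App", id: "www.example.com" },   // relying party
    user: {
      id: Uint8Array.from("cognito-username", (c) => c.charCodeAt(0)),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // ES256
  };

  // The browser/authenticator returns an attestation response; the relying party extracts
  // the credential identifier and public key from it (parseCredResponse in the demo)
  return await navigator.credentials.create({ publicKey });
}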

User authentication flow

The following diagram describes the custom authentication flow to implement password-less authentication.
 

Figure 3: WebAuthn user authentication process

  1. The user provides their user name and selects the sign-in button. A script running in the browser starts the sign-in process using the Amazon Cognito InitiateAuth API, passing the user name and indicating that the authentication flow is CUSTOM_AUTH. In the demo project, this part is performed in the signIn function in webauthn-client.js.
  2. The Amazon Cognito service passes control to the Define Auth Challenge Lambda trigger. The trigger then determines that this is the first step in the authentication and returns CUSTOM_CHALLENGE as the next challenge to the user.
  3. Control then moves to the Create Auth Challenge Lambda trigger to create the custom challenge. This trigger creates a random challenge (a 64-byte random string), extracts the credential identifier from the user profile (the value passed initially during the sign-up process), combines them, and returns them as a custom challenge to the client. This is performed in the CreateAuthChallenge Lambda function.
  4. The browser then prompts the user to activate an authenticator. At this stage, the authenticator verifies that credentials exist for the identifier and that the relying party matches the one that is bound to the credentials. This is implemented by making a call to the navigator.credentials.get API, which is available in browsers and devices that support FIDO2 and WebAuthn (see the sketch after this list).
     
    Figure 4: Authentication and browser prompt to use a registered authenticator

  5. If credentials exist and the relying party is verified, the authenticator requests user presence or verification. Depending on the type of authenticator, user verification through biometrics or a PIN code is performed, and the credentials response is passed back to the browser.
     
    Figure 5: Authentication examples from different browsers and platforms

  6. The signIn function continues the sign-in process by calling the RespondToAuthChallenge API and sending the credentials response to Amazon Cognito.
  7. Amazon Cognito sends the response to the Verify Auth Challenge Lambda trigger. This trigger extracts the public key from the user profile, parses and validates the credentials response, and if the signature is valid, responds with success. This is performed in the VerifyAuthChallenge Lambda trigger.
  8. Lastly, Amazon Cognito returns control to Define Auth Challenge to determine the next step. If the results from Verify Auth Challenge indicate a successful response, authentication succeeds and Amazon Cognito responds with ID, access, and refresh tokens.
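
As an illustration of steps 4 through 6, a browser-side sketch of requesting the assertion might look like the following. The challenge and credential identifier come from the Create Auth Challenge trigger; the parameter names (challenge, credId) and the decoding helper are placeholders for whatever your implementation uses.

// Decode a base64 string into the byte array that the WebAuthn API expects
function base64ToBytes(value: string): Uint8Array {
  return Uint8Array.from(atob(value), (c) => c.charCodeAt(0));
}

// Minimal sketch of navigator.credentials.get (step 4); parameter names are placeholders
async function getAssertion(challengeParameters: { challenge: string; credId: string }) {
  const publicKey: PublicKeyCredentialRequestOptions = {
    challenge: base64ToBytes(challengeParameters.challenge),                  // random challenge from the trigger
    allowCredentials: [
      { type: "public-key", id: base64ToBytes(challengeParameters.credId) }, // registered credential
    ],
  };

  // The assertion response is then sent back to Amazon Cognito with RespondToAuthChallenge (step 6)
  return await navigator.credentials.get({ publicKey });
}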

Conclusion

When building customer-facing applications, you as the application owner and developer need to balance security with usability. Reducing the risk of account take-over and phishing relies on using strong credentials, strong second factors, and minimizing the role of passwords. The flexibility of the Amazon Cognito custom authentication flow, integrated with WebAuthn, offers a technical path to make this possible, in addition to offering a better user experience to your customers.

Check out the WebAuthn with Amazon Cognito project for code samples and deployment steps, deploy this in your development environment to see this integration in action and go build an awesome password-less experience in your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Building, bundling, and deploying applications with the AWS CDK

Post Syndicated from Cory Hall original https://aws.amazon.com/blogs/devops/building-apps-with-aws-cdk/

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to model and provision your cloud application resources using familiar programming languages.

The post CDK Pipelines: Continuous delivery for AWS CDK applications showed how you can use CDK Pipelines to deploy a TypeScript-based AWS Lambda function. In that post, you learned how to add additional build commands to the pipeline to compile the TypeScript code to JavaScript, which is needed to create the Lambda deployment package.

In this post, we dive deeper into how you can perform these build commands as part of your AWS CDK build process by using the native AWS CDK bundling functionality.

If you’re working with Python, TypeScript, or JavaScript-based Lambda functions, you may already be familiar with the PythonFunction and NodejsFunction constructs, which use the bundling functionality. This post describes how to write your own bundling logic for instances where a higher-level construct either doesn’t already exist or doesn’t meet your needs. To illustrate this, I walk through two different examples: a Lambda function written in Golang and a static site created with Nuxt.js.

Concepts

A typical CI/CD pipeline contains steps to build and compile your source code, bundle it into a deployable artifact, push it to artifact stores, and deploy to an environment. In this post, we focus on the building, compiling, and bundling stages of the pipeline.

The AWS CDK has the concept of bundling source code into a deployable artifact. As of this writing, this works for two main types of assets: Docker images published to Amazon Elastic Container Registry (Amazon ECR) and files published to Amazon Simple Storage Service (Amazon S3). For files published to Amazon S3, this can be as simple as pointing to a local file or directory, which the AWS CDK uploads to Amazon S3 for you.

When you build an AWS CDK application (by running cdk synth), a cloud assembly is produced. The cloud assembly consists of a set of files and directories that define your deployable AWS CDK application. In the context of the AWS CDK, it might include the following:

  • AWS CloudFormation templates and instructions on where to deploy them
  • Dockerfiles, corresponding application source code, and information about where to build and push the images to
  • File assets and information about which S3 buckets to upload the files to

Use case

For this use case, our application consists of front-end and backend components. The example code is available in the GitHub repo. In the repository, I have split the example into two separate AWS CDK applications. The repo also contains the Golang Lambda example app and the Nuxt.js static site.

Golang Lambda function

To create a Golang-based Lambda function, you must first create a Lambda function deployment package. For Go, this consists of a .zip file containing a Go executable. Because we don’t commit the Go executable to our source repository, our CI/CD pipeline must perform the necessary steps to create it.

In the context of the AWS CDK, when we create a Lambda function, we have to tell the AWS CDK where to find the deployment package. See the following code:

new lambda.Function(this, 'MyGoFunction', {
  runtime: lambda.Runtime.GO_1_X,
  handler: 'main',
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-go-executable')),
});

In the preceding code, the lambda.Code.fromAsset() method tells the AWS CDK where to find the Golang executable. When we run cdk synth, it stages this Go executable in the cloud assembly, which it zips and publishes to Amazon S3 as part of the PublishAssets stage.

If we’re running the AWS CDK as part of a CI/CD pipeline, this executable doesn’t exist yet, so how do we create it? One method is CDK bundling. The lambda.Code.fromAsset() method takes a second optional argument, AssetOptions, which contains the bundling parameter. With this bundling parameter, we can tell the AWS CDK to perform steps prior to staging the files in the cloud assembly.

Breaking down the BundlingOptions parameter further, we can perform the build inside a Docker container or locally.

Building inside a Docker container

For this to work, we need to make sure that we have Docker running on our build machine. In AWS CodeBuild, this means setting privileged: true. See the following code:

new lambda.Function(this, 'MyGoFunction', {
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-source-code'), {
    bundling: {
      image: lambda.Runtime.GO_1_X.bundlingDockerImage,
      command: [
        'bash', '-c', [
          'go test -v',
          'GOOS=linux go build -o /asset-output/main',
        ].join(' && '),
      ],
    },
  }),
  ...
});

We specify two parameters:

  • image (required) – The Docker image to perform the build commands in
  • command (optional) – The command to run within the container

The AWS CDK mounts the folder specified as the first argument to fromAsset at /asset-input inside the container, and mounts the asset output directory (where the cloud assembly is staged) at /asset-output inside the container.

After we perform the build commands, we need to make sure we copy the Golang executable to the /asset-output location (or specify it as the build output location like in the preceding example).

This is the equivalent of running something like the following code:

docker run \
  --rm \
  -v folder-containing-source-code:/asset-input \
  -v cdk.out/asset.1234a4b5/:/asset-output \
  lambci/lambda:build-go1.x \
  bash -c 'GOOS=linux go build -o /asset-output/main'

Building locally

To build locally (not in a Docker container), we have to provide the local parameter. See the following code:

new lambda.Function(this, 'MyGoFunction', {
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-source-code'), {
    bundling: {
      image: lambda.Runtime.GO_1_X.bundlingDockerImage,
      command: [],
      local: {
        tryBundle(outputDir: string) {
          // spawnSync doesn't throw when a command is missing, so check its result instead
          if (spawnSync('go', ['version']).status !== 0) {
            return false
          }

          // Set GOOS through the environment; inline VAR=value prefixes would need a shell
          spawnSync('go', ['build', '-o', path.join(outputDir, 'main')], {
            env: { ...process.env, GOOS: 'linux' },
          });
          return true
        },
      },
    },
  }),
  ...
});

The local parameter must implement the ILocalBundling interface. The tryBundle method is passed the asset output directory, and expects you to return a boolean (true or false). If you return true, the AWS CDK doesn’t try to perform Docker bundling. If you return false, it falls back to Docker bundling. Just like with Docker bundling, you must make sure that you place the Go executable in the outputDir.

Typically, you should perform some validation steps to ensure that you have the required dependencies installed locally to perform the build. This could be checking to see if you have go installed, or checking a specific version of go. This can be useful if you don’t have control over what type of build environment this might run in (for example, if you’re building a construct to be consumed by others).
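
For example, a slightly stricter local pre-check could verify both that go is available and that it meets a minimum version before attempting the build. This is only a sketch; the minimum version and the decision to fall back to Docker bundling are up to you. Note that spawnSync doesn’t throw when a command is missing, so the sketch inspects the result instead of relying on try/catch.

import { spawnSync } from 'child_process';

const MIN_GO_MAJOR = 1;
const MIN_GO_MINOR = 15; // arbitrary example minimum

function goIsAvailable(): boolean {
  // `go version` prints something like: go version go1.15.2 linux/amd64
  const result = spawnSync('go', ['version']);
  if (result.error || result.status !== 0) {
    return false; // go isn't installed (or isn't on the PATH)
  }
  const match = result.stdout.toString().match(/go(\d+)\.(\d+)/);
  if (!match) {
    return false;
  }
  const major = Number(match[1]);
  const minor = Number(match[2]);
  return major > MIN_GO_MAJOR || (major === MIN_GO_MAJOR && minor >= MIN_GO_MINOR);
}

Returning false from tryBundle when a check like this fails lets the AWS CDK fall back to Docker bundling instead of producing a broken asset.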

If we run cdk synth on this, we see a new message telling us that the AWS CDK is bundling the asset. If we include additional commands like go test, we also see the output of those commands. This is especially useful if you wanted to fail a build if tests failed. See the following code:

$ cdk synth
Bundling asset GolangLambdaStack/MyGoFunction/Code/Stage...
✓  . (9ms)
✓  clients (5ms)

DONE 8 tests in 11.476s
✓  clients (5ms) (coverage: 84.6% of statements)
✓  . (6ms) (coverage: 78.4% of statements)

DONE 8 tests in 2.464s

Cloud Assembly

If we look at the cloud assembly that was generated (located in the cdk.out directory), we see that it contains our GolangLambdaStack CloudFormation template that defines our Lambda function, as well as our Golang executable, bundled at asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952/main.

Let’s look at how the AWS CDK uses this information. The GolangLambdaStack.assets.json file contains all the information necessary for the AWS CDK to know where and how to publish our assets (in this use case, our Golang Lambda executable). See the following code:

{
  "version": "5.0.0",
  "files": {
    "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952": {
      "source": {
        "path": "asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952",
        "packaging": "zip"
      },
      "destinations": {
        "current_account-current_region": {
          "bucketName": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}",
          "objectKey": "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952.zip",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/cdk-hnb659fds-file-publishing-role-${AWS::AccountId}-${AWS::Region}"
        }
      }
    }
  }
}

The file contains information about where to find the source files (source.path) and what type of packaging (source.packaging). It also tells the AWS CDK where to publish this .zip file (bucketName and objectKey) and what AWS Identity and Access Management (IAM) role to use (assumeRoleArn). In this use case, we only deploy to a single account and Region, but if you have multiple accounts or Regions, you see multiple destinations in this file.

The GolangLambdaStack.template.json file that defines our Lambda resource looks something like the following code:

{
  "Resources": {
    "MyGoFunction0AB33E85": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": {
            "Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}"
          },
          "S3Key": "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952.zip"
        },
        "Handler": "main",
        ...
      }
    },
    ...
  }
}

The S3Bucket and S3Key match the bucketName and objectKey from the assets.json file. By default, the S3Key is generated by calculating a hash of the contents of the folder that you pass to lambda.Code.fromAsset() (for this post, folder-containing-source-code). This means that any time we update our source code, this calculated hash changes and a new Lambda function deployment is triggered.

Nuxt.js static site

In this section, I walk through building a static site using the Nuxt.js framework. You can apply the same logic to any static site framework that requires you to run a build step prior to deploying.

To deploy this static site, we use the BucketDeployment construct. This is a construct that allows you to populate an S3 bucket with the contents of .zip files from other S3 buckets or from a local disk.

Typically, we simply tell the BucketDeployment construct where to find the files that it needs to deploy to the S3 bucket. See the following code:

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(path.join(__dirname, 'path-to-directory')),
  ],
  destinationBucket: myBucket
});

To deploy a static site built with a framework like Nuxt.js, we need to first run a build step to compile the site into something that can be deployed. For Nuxt.js, we run the following two commands:

  • yarn install – Installs all our dependencies
  • yarn generate – Builds the application and generates every route as an HTML file (used for static hosting)

This creates a dist directory, which you can deploy to Amazon S3.

Just like with the Golang Lambda example, we can perform these steps as part of the AWS CDK through either local or Docker bundling.

Building inside a Docker container

To build inside a Docker container, use the following code:

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(path.join(__dirname, 'path-to-nuxtjs-project'), {
      bundling: {
        image: cdk.BundlingDockerImage.fromRegistry('node:lts'),
        command: [
          'bash', '-c', [
            'yarn install',
            'yarn generate',
            'cp -r /asset-input/dist/* /asset-output/',
          ].join(' && '),
        ],
      },
    }),
  ],
  ...
});

For this post, we build inside the publicly available node:lts image hosted on DockerHub. Inside the container, we run our build commands yarn install && yarn generate, and copy the generated dist directory to our output directory (the cloud assembly).

The parameters are the same as described in the Golang example we walked through earlier.

Building locally

To build locally, use the following code:

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(path.join(__dirname, 'path-to-nuxtjs-project'), {
      bundling: {
        local: {
          tryBundle(outputDir: string) {
            // spawnSync doesn't throw when a command is missing, so check its result instead
            if (spawnSync('yarn', ['--version']).status !== 0) {
              return false
            }

            // Chaining commands with && needs a shell; run them in the Nuxt.js project folder
            spawnSync('yarn install && yarn generate', {
              shell: true,
              cwd: path.join(__dirname, 'path-to-nuxtjs-project'),
            });

            // copySync is provided by the fs-extra package
            fs.copySync(path.join(__dirname, 'path-to-nuxtjs-project', 'dist'), outputDir);
            return true
          },
        },
        image: cdk.BundlingDockerImage.fromRegistry('node:lts'),
        command: [],
      },
    }),
  ],
  ...
});

Building locally works the same as the Golang example we walked through earlier, with one exception. We have one additional command to run that copies the generated dist folder to our output directory (cloud assembly).

Conclusion

This post showed how you can easily compile your backend and front-end applications using the AWS CDK. You can find the example code for this post in this GitHub repo. If you have any questions or comments, please comment on the GitHub repo. If you have any additional examples you want to add, we encourage you to create a Pull Request with your example!

Our code also contains examples of deploying the applications using CDK Pipelines, so if you’re interested in deploying the example yourself, check out the example repo.

 

About the author

Cory Hall

Cory is a Solutions Architect at Amazon Web Services with a passion for DevOps and is based in Charlotte, NC. Cory works with enterprise AWS customers to help them design, deploy, and scale applications to achieve their business goals.

How to configure Duo multi-factor authentication with Amazon Cognito

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-configure-duo-multi-factor-authentication-with-amazon-cognito/

Adding multi-factor authentication (MFA) reduces the risk of user account take-over, phishing attacks, and password theft. Adding MFA while providing a frictionless sign-in experience requires you to offer a variety of MFA options that support a wide range of users and devices. Let’s see how you can achieve that with Amazon Cognito and Duo Multi-Factor Authentication (MFA).

Amazon Cognito user pools are user directories that are used by Amazon Web Services (AWS) customers to manage the identities of their customers and to add sign-in, sign-up and user management features to their customer-facing web and mobile applications. Duo Security is an APN Partner that provides unified access security and multi-factor authentication solutions.

In this blog post, I show you how to use Amazon Cognito custom authentication flow to integrate Duo Multi-Factor Authentication (MFA) into your sign-in flow and offer a wide range of MFA options to your customers. Some second factors available through Duo MFA are mobile phone SMS passcodes, approval of login via phone call, push-notification-based approval on smartphones, biometrics on devices that support it, and security keys that can be attached via USB.

How it works

Amazon Cognito user pools enable you to build a custom authentication flow that authenticates users based on one or more challenge/response cycles. You can use this flow to integrate Duo MFA into your authentication as a custom challenge.

Duo Web offers a software development kit to make it easier for you to integrate your web applications with Duo MFA. You need an account with Duo and an application to protect (which can be created from the Duo admin dashboard). When you create your application in the Duo admin dashboard, note the integration key (ikey), secret key (skey), and API hostname. These details, together with a random string (akey) that you generate, are the primary factors used to integrate your Amazon Cognito user pool with Duo MFA.

Note: ikey, skey, and akey are referred to as Duo keys.

Duo MFA will be integrated into the sign-in flow as a custom challenge. To do that, you need to generate a signed challenge request using Duo APIs and use it to load Duo MFA in an iframe and request the user’s second factor. When the challenge is answered by the user, a signed response is returned to your application and sent to Amazon Cognito for verification. If the response is valid then the MFA challenge is successful.

Let’s take a closer look at the sequence of calls and components involved in this flow.

Implementation details

In this section, I walk you through the end-to-end flow of integrating Duo MFA with Amazon Cognito using a custom authentication flow. To help you with this integration, I built a demo project that provides deployment steps and sample code to create a working demo in your environment.

Create and configure a user pool

The first step is to create the AWS resources needed for the demo. You can do that by deploying the AWS CloudFormation stack as described in the demo project.

A few implementation details to be aware of:

  • The template creates an Amazon Cognito user pool, application client, and AWS Lambda triggers that are used for the custom authentication.
  • The template also accepts ikey, skey, and akey as inputs. For security, the parameters are masked in the AWS CloudFormation console. These parameters are stored in a secret in AWS Secrets Manager with a resource policy that allows relevant Lambda functions read access to that secret.
  • The Duo keys are loaded from Secrets Manager when the create auth challenge and verify auth challenge Lambda triggers initialize, and are used to create the sign-request and verify the sign-response.

Authentication flow

Figure 1: User authentication process for the custom authentication flow

The preceding sequence diagram (Figure 1) illustrates the sequence of calls to sign in a user, which are as follows:

  1. In your application, the user is presented with a sign-in UI that captures their user name and password and starts the sign-in flow. A script running in the browser starts the sign-in process using the Amazon Cognito authenticateUser API with CUSTOM_AUTH set as the authentication flow. This validates the user’s credentials using the Secure Remote Password (SRP) protocol and moves on to the second challenge if the credentials are valid. (A client-side sketch of this flow appears after these steps.)

    Note: The authenticateUser API automatically starts the authentication process with SRP. The first challenge that’s sent to Amazon Cognito is SRP_A. This is followed by PASSWORD_VERIFIER to verify the user’s credentials.

  2. After the SRP challenge step, the define auth challenge Lambda trigger returns CUSTOM_CHALLENGE, which moves control to the create auth challenge trigger.
  3. The create auth challenge Lambda trigger creates a Duo signed request using the Duo keys plus the username, and returns the signed request as a challenge to the client. Here is sample code showing what the create auth challenge trigger should look like (the require statements and the secret name at the top are added here for completeness and are illustrative):
    const AWS = require('aws-sdk');
    const duo_web = require('duo_web');

    const secretsManagerClient = new AWS.SecretsManager();
    // Secret that holds the Duo keys; the environment variable name here is only an example
    const secretName = process.env.DUO_SECRET_NAME;
    let ikey, skey, akey, secret;

    exports.handler = async (event) => {

        // Load the Duo keys from Secrets Manager once and cache them in global variables
        if (ikey == null || skey == null || akey == null) {
            const promise = new Promise(function(resolve, reject) {
                secretsManagerClient.getSecretValue({SecretId: secretName}, function(err, data) {
                    if (err) { reject(err); }
                    else {
                        if ('SecretString' in data) {
                            secret = JSON.parse(data.SecretString);
                            ikey = secret['duo-ikey'];
                            skey = secret['duo-skey'];
                            akey = secret['duo-akey'];
                        }
                        resolve();
                    }
                });
            });

            await promise;
        }

        // Create the Duo signed request for this user and return it as the custom challenge
        var username = event.userName;
        var sig_request = duo_web.sign_request(ikey, skey, akey, username);

        event.response.publicChallengeParameters = {
            sig_request: sig_request
        };

        return event;
    };
    

  4. The client initializes the Duo Web library with the signed request and displays Duo MFA in an iframe to request a second factor from the user. To initialize the Duo library, you need the api_hostname that is generated for your application in the Duo dashboard, the sign-request that was received as a challenge, and a callback function to invoke after the MFA step is completed by the user. This is done on the client side as follows:
          // Render the Duo MFA iframe
          $("#duo-mfa").html('<iframe id="duo_iframe" title="Two-Factor Authentication"></iframe>');

          Duo.init({
            'host': api_hostname,
            'sig_request': challengeParameters.sig_request,
            'submit_callback': mfa_callback
          });
    

  5. Through the Duo iframe, the user can set up their MFA preferences and respond to an MFA challenge. After successful MFA setup, a signed response from the Duo Web library will be returned to the client and passed to the callback function that was provided in Duo.init call.
     
    Figure 2: The first time a user signs in, Duo MFA displays a Start setup screen

  6. The client sends the Duo signed response to the Amazon Cognito service as a challenge response.
  7. Amazon Cognito sends the response to the verify auth challenge Lambda trigger, which uses Duo keys and username to verify the response.
    const duo_web = require('duo_web');
    exports.handler = async (event) => {
    
        //load duo keys from secrets manager and store them in global variables
        
        var username = event.userName;
        
        //-------get challenge response
        const sig_response = event.request.challengeAnswer;
        const verificationResult = duo_web.verify_response(ikey, skey, akey, sig_response);
        
        if (verificationResult === username) {
            event.response.answerCorrect = true;
        } else {
            event.response.answerCorrect = false;
        }
        return event;
    };
    

  8. Validation results and current state are passed once again to the define auth challenge Lambda trigger. If the user response is valid, then the Duo MFA challenge is successful. You can then decide to introduce additional challenges to the user or issue tokens and complete the authentication process.
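
To tie the client side of this flow together, the following sketch shows steps 1, 4, and 6, assuming the amazon-cognito-identity-js library and the Duo Web JavaScript library. The pool, client, and Duo host values are placeholders, and the signed response is read from the submitted form as in Duo’s Web SDK v2 examples.

import { AuthenticationDetails, CognitoUser, CognitoUserPool } from "amazon-cognito-identity-js";

declare const Duo: any; // provided by the Duo Web JavaScript library

// Placeholder configuration values
const pool = new CognitoUserPool({ UserPoolId: "us-east-1_EXAMPLE", ClientId: "EXAMPLECLIENTID" });
const duoApiHostname = "api-XXXXXXXX.duosecurity.com";

function signIn(username: string, password: string) {
  const user = new CognitoUser({ Username: username, Pool: pool });
  user.setAuthenticationFlowType("CUSTOM_AUTH"); // step 1: use the custom authentication flow

  const callbacks = {
    onSuccess: () => console.log("Signed in; tokens issued"),  // step 8 succeeded
    onFailure: (err: Error) => console.error(err),
    customChallenge: (challengeParameters: any) => {
      // Step 4: render the Duo iframe using the signed request from the create auth challenge trigger
      Duo.init({
        host: duoApiHostname,
        sig_request: challengeParameters.sig_request,
        submit_callback: (form: any) => {
          // Step 6: send the Duo signed response back to Amazon Cognito as the challenge answer
          user.sendCustomChallengeAnswer(form.sig_response.value, callbacks);
        },
      });
    },
  };

  user.authenticateUser(new AuthenticationDetails({ Username: username, Password: password }), callbacks);
}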

Conclusion

As you build your mobile or web application, keep in mind that using multi-factor authentication is an effective and recommended approach to protect your customers from account take-over, phishing, and the risks of weak or compromised passwords. Making multi-factor authentication easy for your customers enables you to offer an authentication experience that protects their accounts but doesn’t slow them down.

Visit the security pillar of AWS Well-Architected Framework to learn more about AWS security best practices and recommendations.

In this blog post, I showed you how to integrate Duo MFA with an Amazon Cognito user pool. Visit the demo application and review the code samples in it to learn how to integrate this with your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Improving customer experience and reducing cost with CodeGuru Profiler

Post Syndicated from Rajesh original https://aws.amazon.com/blogs/devops/improving-customer-experience-and-reducing-cost-with-codeguru-profiler/

Amazon CodeGuru is a set of developer tools powered by machine learning that provides intelligent recommendations for improving code quality and identifying an application’s most expensive lines of code. Amazon CodeGuru Profiler allows you to profile your applications in a low-impact, always-on manner. It helps you improve your application’s performance, reduce cost, and diagnose application issues through rich data visualization and proactive recommendations. CodeGuru Profiler was a very successful and widely used service within Amazon before it was offered as a public service. This post discusses a few ways in which internal Amazon teams have used and benefited from continuous profiling of their production applications. These use cases can provide you with better insights on how to reap similar benefits for your applications using CodeGuru Profiler.

Inside Amazon, over 100,000 applications currently use CodeGuru Profiler across various environments globally. Over the last few years, CodeGuru Profiler has served as an indispensable tool for resolving issues in the following three categories:

  1. Performance bottlenecks, high latency and CPU utilization
  2. Cost and Infrastructure utilization
  3. Diagnosis of an application impacting event

API latency improvement for CodeGuru Profiler

What could be a better example than CodeGuru Profiler using itself to improve its own performance?
CodeGuru Profiler offers an API called BatchGetFrameMetricData, which allows you to fetch time series data for a set of frames or methods. We noticed that the 99th percentile latency metric (that is, the slowest 1 percent of requests over a 5-minute period) for this API was approximately 5 seconds, higher than what we wanted for our customers.

Solution

CodeGuru Profiler is built on a micro service architecture, with the BatchGetFrameMetricData API implemented as set of AWS Lambda functions. It also leverages other AWS services such as Amazon DynamoDB to store data and Amazon CloudWatch to record performance metrics.

When investigating the latency issue, the team found that the 5-second latency spikes were happening during certain time intervals rather than continuously, which made it difficult to reproduce the issue and determine its root cause in a pre-production environment. The new Lambda profiling feature in CodeGuru came in handy, so the team decided to enable profiling for all of its Lambda functions. The low-impact, continuous profiling capability of CodeGuru Profiler allowed the team to capture comprehensive profiles over a period of time, including when the latency spikes occurred, enabling the team to better understand the issue.
After capturing the profiles, the team went through the flame graphs of one of the Lambda functions (TimeSeriesMetricsGeneratorLambda) and learned that all of its CPU time was spent by the thread responsible for publishing metrics to CloudWatch. The following screenshot shows a flame graph during one of these spikes.

TimeSeriesMetricsGeneratorLambda taking 100% CPU

As seen, there is a single call stack visible in the above flame graph, indicating that all the CPU time was taken by the thread invoking the highlighted code. This helped the team immediately understand what was happening. The code was related to the thread responsible for publishing the CloudWatch metrics. This thread published the metrics inside a synchronized block, and because it took most of the CPU, it caused all other threads to wait and the latency to spike. To fix the issue, the team changed the TimeSeriesMetricsGeneratorLambda code to publish CloudWatch metrics at the end of the function, which eliminated contention between this thread and all other threads.

Improvement

After the fix was deployed, the 5 second latency spikes were gone, as seen in the following graph.

Latency reduction for BatchGetFrameMetricData API

Cost, infrastructure and other improvements for CAGE

CAGE is an internal Amazon retail service that does royalty aggregation for digital products, such as Kindle eBooks, MP3 songs, albums, and more. Like many other Amazon services, CAGE is also a customer of CodeGuru Profiler.

CAGE was experiencing latency delays and growing infrastructure cost, and wanted to reduce them. Thanks to CodeGuru Profiler’s always-on profiling capabilities, rich visualization and recommendations, the team was able to successfully diagnose the issues, determine the root cause and fix them.

Solution

With the help of CodeGuru Profiler, the CAGE team identified several reasons for their degraded service performance and increased hardware utilization:

  • Excessive garbage collection activity – The team reviewed the service flame graphs (see the following screenshot) and identified that a large amount of CPU time was spent on garbage collection activities, 65.07% of the total service CPU.

Excessive garbage collection activities for CAGE

  • Metadata overhead – The team followed a CodeGuru Profiler recommendation and identified that the service’s DynamoDB responses were consuming 2.86% of total CPU time. This was due to response metadata caching in the AWS SDK v1.x HTTP client, which is turned on by default and causes higher CPU overhead for high-throughput applications such as CAGE. The following screenshot shows the relevant recommendation.

Response metadata recommendation for CAGE

  • Excessive logging – The team also identified excessive logging of its internal Amazon ION structures. The team had initially added this logging for debugging purposes, but was unaware of its CPU cost, which amounted to 2.28% of the overall service CPU. The following screenshot is part of the flame graph that helped identify the logging impact.

Excessive logging in CAGE service

The team used these flame graphs and the recommendations provided by CodeGuru Profiler to determine the root cause of the issues, and systematically resolved them by doing the following:

  • Switching to a more efficient garbage collector
  • Removing excessive logging
  • Disabling metadata caching for DynamoDB responses

Improvements

After making these changes, the team was able to reduce their infrastructure cost by 25%, saving close to $2600 per month. Service latency also improved, with the service’s 99th percentile latency dropping from approximately 2,500 milliseconds to 250 milliseconds in their North America (NA) region, as shown below.

CAGE Latency Reduction

As a side benefit of the reduced log verbosity, the team also saw a 55% reduction in log size.

Event Analysis of increased checkout latency for Amazon.com

During one of the high-traffic periods, Amazon retail customers experienced higher than normal latency on the checkout page. The issue was due to one downstream service’s API experiencing high latency and CPU utilization. While the team quickly mitigated the issue by adding more servers to the service, the always-on CodeGuru Profiler came to the rescue to help diagnose and fix the issue permanently.

Solution

The team analyzed the flame graphs from CodeGuru Profiler at the time of the event and noticed excessive CPU consumption (69.47%) when logging exceptions using Log4j2. See the following screenshot, taken from an earlier version of the CodeGuru Profiler user interface.

Excessive CPU consumption when logging exceptions using Log4j2

With the CodeGuru Profiler flame graph and other metrics, the team quickly confirmed that the issue was due to excessive exception logging using Log4j2. This downstream service had recently upgraded to Log4j2 version 2.8, in which exception logging can be expensive because of the way Log4j2 handles class loading of certain stack frames. Log4j 2.x enables this class loading by default, whereas it was disabled in 1.x, and this caused the increased latency and CPU utilization. The team was not able to detect the issue in a pre-production environment, because the impact was observable only in high-traffic situations.

Improvement

After they understood the issue, the team rolled out a fix that removed the unnecessary exception trace logging. Such performance issues, and many others, are surfaced proactively as CodeGuru Profiler recommendations, so that you can learn about these issues in your own applications and quickly resolve them.

Conclusion

I hope this post provided a glimpse into various ways CodeGuru Profiler can benefit your business and applications. To get started using CodeGuru Profiler, see Setting up CodeGuru Profiler.
For more information about CodeGuru Profiler, see the following:

Investigating performance issues with Amazon CodeGuru Profiler

Optimizing application performance with Amazon CodeGuru Profiler

Find Your Application’s Most Expensive Lines of Code and Improve Code Quality with Amazon CodeGuru

 

Building a cross-account CI/CD pipeline for single-tenant SaaS solutions

Post Syndicated from Rafael Ramos original https://aws.amazon.com/blogs/devops/cross-account-ci-cd-pipeline-single-tenant-saas/

With the increasing demand from enterprise customers for a pay-as-you-go consumption model, more and more independent software vendors (ISVs) are shifting their business model towards software as a service (SaaS). Usually this kind of solution is architected using a multi-tenant model, meaning that the infrastructure resources and applications are shared across multiple customers, with mechanisms in place to isolate their environments from each other. However, you may not want to, or may not be able to, share resources for security or compliance reasons, in which case you need a single-tenant environment.

To achieve this higher level of segregation across tenants, it’s recommended to isolate the environments at the AWS account level. This strategy brings benefits, such as no network overlap, no shared account limits, and simplified usage tracking and billing, but it comes with challenges from an operational standpoint. Whereas multi-tenant solutions require management of a single shared production environment, single-tenant installations consist of dedicated production environments for each customer, without any shared resources across tenants. When the number of tenants starts to grow, delivering new features at a rapid pace becomes harder to accomplish, because each new version needs to be manually deployed on each tenant environment.

This post describes how to automate this deployment process to deliver software quickly, securely, and with fewer errors for each existing tenant. I demonstrate all the steps to build and configure a CI/CD pipeline using AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation. For each new version, the pipeline automatically deploys the same application version on the multiple tenant AWS accounts.

There are several caveats when building such cross-account CI/CD pipelines on AWS. Because of that, I use the AWS Command Line Interface (AWS CLI) to go through the process manually and demonstrate in detail the various configuration aspects you have to handle, such as artifact encryption, cross-account permission granting, and pipeline actions.

Single-tenancy vs. multi-tenancy

One of the first aspects to consider when architecting your SaaS solution is its tenancy model. Each model brings its own benefits and architectural challenges. On multi-tenant installations, each customer shares the same set of resources, including databases and applications. With this model, you can use the servers’ capacity more efficiently, which generally leads to significant cost-saving opportunities. On the other hand, you have to carefully secure your solution to prevent a customer from accessing sensitive data that belongs to another. Designing for high availability becomes even more critical on multi-tenant workloads, because more customers are affected in the event of downtime.

Because the environments are by definition isolated from each other, single-tenant solutions are simpler to design when it comes to security, networking isolation, and data segregation. Likewise, you can customize the applications per customer, and have different versions for specific tenants. You also have the advantage of eliminating the noisy-neighbor effect, and can plan the infrastructure for the customer’s scalability requirements. As a drawback, in comparison with multi-tenant, the single-tenant model is operationally more complex because you have more servers and applications to maintain.

Which tenancy model to choose ultimately depends on your customers’ needs. They might have specific governance requirements, be bound to a certain industry regulation, or have compliance criteria that influence which model they can choose. For more information about modeling your SaaS solutions, see SaaS on AWS.

Solution overview

To demonstrate this solution, I consider a fictitious single-tenant ISV with two customers: Unicorn and Gnome. It uses one central account where the tools reside (Tooling account), and two other accounts, each representing a tenant (Unicorn and Gnome accounts). As depicted in the following architecture diagram, when a developer pushes code changes to CodeCommit, Amazon CloudWatch Events triggers the CodePipeline CI/CD pipeline, which automatically deploys a new version on each tenant’s AWS account. This ensures that the fictitious ISV doesn’t have the operational burden of manually re-deploying the same version for each end customer.

Architecture diagram of a CI/CD pipeline for single-tenant SaaS solutions

For illustration purposes, the sample application I use in this post is an AWS Lambda function that returns a simple JSON object when invoked.

Prerequisites

Before getting started, you must have the following prerequisites:

Setting up the Git repository

Your first step is to set up your Git repository.

  1. Create a CodeCommit repository to host the source code.

The CI/CD pipeline is automatically triggered every time new code is pushed to that repository.
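If you prefer to stay in the terminal, a minimal sketch for creating the repository could look like the following. The repository name placeholder is the same one used later when deploying the pipeline stack:

# Create the CodeCommit repository that will host index.js and application.yaml
aws codecommit create-repository \
    --repository-name <YOUR_CODECOMMIT_REPOSITORY_NAME> \
    --region <YOUR_REGION>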

  2. Make sure Git is configured to use IAM credentials to access AWS CodeCommit over HTTPS by running the following commands from the terminal:
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
  3. Clone the newly created repository locally, and add two files in the root folder: index.js and application.yaml.

The first file is the JavaScript code for the Lambda function that represents the sample application. For our use case, the function returns a JSON response object with statusCode: 200 and the body Hello!\n. See the following code:

exports.handler = async (event) => {
    const response = {
        statusCode: 200,
        body: `Hello!\n`,
    };
    return response;
};
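As a quick local sanity check before pushing, assuming you have Node.js installed and run the command from the repository root, you could invoke the handler directly:

# Call the exported handler with an empty event and print the resolved response
node -e 'require("./index").handler({}).then(r => console.log(JSON.stringify(r)))'

This should print a JSON object with statusCode 200 and the Hello!\n body, matching the code above.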

The second file is where the infrastructure is defined using AWS CloudFormation. The sample application consists of a Lambda function, and we use AWS Serverless Application Model (AWS SAM) to simplify the resources creation. See the following code:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Sample Application.

Parameters:
    S3Bucket:
        Type: String
    S3Key:
        Type: String
    ApplicationName:
        Type: String
        
Resources:
    SampleApplication:
        Type: 'AWS::Serverless::Function'
        Properties:
            FunctionName: !Ref ApplicationName
            Handler: index.handler
            Runtime: nodejs12.x
            CodeUri:
                Bucket: !Ref S3Bucket
                Key: !Ref S3Key
            Description: Hello Lambda.
            MemorySize: 128
            Timeout: 10
  4. Push both files to the remote Git repository.

Creating the artifact store encryption key

By default, CodePipeline uses server-side encryption with an AWS Key Management Service (AWS KMS) managed customer master key (CMK) to encrypt the release artifacts. Because the Unicorn and Gnome accounts need to decrypt those release artifacts, you need to create a customer managed CMK in the Tooling account.

From the terminal, run the following command to create the artifact encryption key:

aws kms create-key --region <YOUR_REGION>

This command returns a JSON object with the key ARN property if run successfully. Its format is similar to arn:aws:kms:<YOUR_REGION>:<TOOLING_ACCOUNT_ID>:key/<KEY_ID>. Record this value to use in the following steps.
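As an alternative to copying the ARN by hand, you could capture it into a shell variable at creation time. This is only a convenience sketch; it assumes the default KeyMetadata output shape of the create-key call:

# Create the key and store its ARN for use in later commands
KEY_ARN=$(aws kms create-key --region <YOUR_REGION> \
    --query 'KeyMetadata.Arn' --output text)
echo "${KEY_ARN}"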

The encryption key has been created manually for educational purposes only, but it’s considered a best practice to have it as part of the Infrastructure as Code (IaC) bundle.

Creating an Amazon S3 artifact store and configuring a bucket policy

Our use case uses Amazon Simple Storage Service (Amazon S3) as the artifact store. Every release artifact is encrypted and stored as an object in an S3 bucket that lives in the Tooling account.

To create and configure the artifact store, follow these steps in the Tooling account:

  1. From the terminal, create an S3 bucket and give it a unique name:
aws s3api create-bucket \
    --bucket <BUCKET_UNIQUE_NAME> \
    --region <YOUR_REGION> \
    --create-bucket-configuration LocationConstraint=<YOUR_REGION>
  2. Configure the bucket to use the customer managed CMK created in the previous step, replacing <KEY_ARN> with the key ARN you recorded earlier. This makes sure the objects stored in this bucket are encrypted using that key:
aws s3api put-bucket-encryption \
    --bucket <BUCKET_UNIQUE_NAME> \
    --server-side-encryption-configuration \
        '{
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": "<KEY_ARN>"
                    }
                }
            ]
        }'
  3. The artifacts stored in the bucket need to be accessed from the Unicorn and Gnome accounts. Configure the bucket policy to allow cross-account access:
aws s3api put-bucket-policy \
    --bucket <BUCKET_UNIQUE_NAME> \
    --policy \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": [
                        "s3:GetBucket*",
                        "s3:List*"
                    ],
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [
                            "arn:aws:iam::<UNICORN_ACCOUNT_ID>:root",
                            "arn:aws:iam::<GNOME_ACCOUNT_ID>:root"
                        ]
                    },
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>"
                    ]
                },
                {
                    "Action": [
                        "s3:GetObject*"
                    ],
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [
                            "arn:aws:iam::<UNICORN_ACCOUNT_ID>:root",
                            "arn:aws:iam::<GNOME_ACCOUNT_ID>:root"
                        ]
                    },
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>/CrossAccountPipeline/*"
                    ]
                }
            ]
        }' 

This S3 bucket has been created manually for educational purposes only, but it’s considered a best practice to have it as part of the IaC bundle.
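Optionally, you can verify that the bucket policy was stored as expected. The following sketch simply reads the policy back:

# Read back the bucket policy to confirm the cross-account statements are in place
aws s3api get-bucket-policy \
    --bucket <BUCKET_UNIQUE_NAME> \
    --query 'Policy' --output text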

Creating a cross-account IAM role in each tenant account

Following the security best practice of granting least privilege, each action declared in CodePipeline should have its own IAM role. For this use case, the pipeline needs to perform changes in the Unicorn and Gnome accounts from the Tooling account, so you need to create a cross-account IAM role in each tenant account.

Repeat the following steps for each tenant account to allow CodePipeline to assume a role in that account:

  1. Configure a named CLI profile for the tenant account to allow running commands using the correct access keys.
  2. Create an IAM role that can be assumed from another AWS account, replacing <TENANT_PROFILE_NAME> with the profile name you defined in the previous step:
aws iam create-role \
    --role-name CodePipelineCrossAccountRole \
    --profile <TENANT_PROFILE_NAME> \
    --assume-role-policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::<TOOLING_ACCOUNT_ID>:root"
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }'
  3. Create an IAM policy that grants access to the artifact store S3 bucket and to the artifact encryption key:
aws iam create-policy \
    --policy-name CodePipelineCrossAccountArtifactReadPolicy \
    --profile <TENANT_PROFILE_NAME> \
    --policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": [
                        "s3:GetBucket*",
                        "s3:ListBucket"
                    ],
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>"
                    ],
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "s3:GetObject*",
                        "s3:Put*"
                    ],
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>/CrossAccountPipeline/*"
                    ],
                    "Effect": "Allow"
                },
                {
                    "Action": [ 
                        "kms:DescribeKey", 
                        "kms:GenerateDataKey*", 
                        "kms:Encrypt", 
                        "kms:ReEncrypt*", 
                        "kms:Decrypt" 
                    ], 
                    "Resource": "<KEY_ARN>",
                    "Effect": "Allow"
                }
            ]
        }'
  4. Attach the CodePipelineCrossAccountArtifactReadPolicy IAM policy to the CodePipelineCrossAccountRole IAM role:
aws iam attach-role-policy \
    --profile <TENANT_PROFILE_NAME> \
    --role-name CodePipelineCrossAccountRole \
    --policy-arn arn:aws:iam::<TENANT_ACCOUNT_ID>:policy/CodePipelineCrossAccountArtifactReadPolicy
  5. Create an IAM policy that allows the pipeline to pass the CloudFormationDeploymentRole IAM role to CloudFormation and to perform CloudFormation actions on the application stack:
aws iam create-policy \
    --policy-name CodePipelineCrossAccountCfnPolicy \
    --profile <TENANT_PROFILE_NAME> \
    --policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": [
                        "iam:PassRole"
                    ],
                    "Resource": "arn:aws:iam::<TENANT_ACCOUNT_ID>:role/CloudFormationDeploymentRole",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "cloudformation:*"
                    ],
                    "Resource": "arn:aws:cloudformation:<YOUR_REGION>:<TENANT_ACCOUNT_ID>:stack/SampleApplication*/*",
                    "Effect": "Allow"
                }
            ]
        }'
  6. Attach the CodePipelineCrossAccountCfnPolicy IAM policy to the CodePipelineCrossAccountRole IAM role:
aws iam attach-role-policy \
    --profile <TENANT_PROFILE_NAME> \
    --role-name CodePipelineCrossAccountRole \
    --policy-arn arn:aws:iam::<TENANT_ACCOUNT_ID>:policy/CodePipelineCrossAccountCfnPolicy
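As an optional sanity check, you can confirm that both customer managed policies are now attached to the cross-account role in the tenant account:

# List the policies attached to the cross-account role
aws iam list-attached-role-policies \
    --role-name CodePipelineCrossAccountRole \
    --profile <TENANT_PROFILE_NAME>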

Additional configuration is needed in the Tooling account to allow access, which you complete later on.

Creating a deployment IAM role in each tenant account

After CodePipeline assumes the CodePipelineCrossAccountRole IAM role in the tenant account, it triggers AWS CloudFormation to provision the infrastructure based on the template defined in the application.yaml file. For that, AWS CloudFormation needs to assume an IAM role that grants privileges to create resources in the tenant AWS account.

Repeat the following steps for each tenant account to allow AWS CloudFormation to create resources in those accounts:

  1. Create an IAM role that can be assumed by AWS CloudFormation:
aws iam create-role \
    --role-name CloudFormationDeploymentRole \
    --profile <TENANT_PROFILE_NAME> \
    --assume-role-policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "cloudformation.amazonaws.com"
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }'
  2. Create an IAM policy that grants permissions to create AWS resources:
aws iam create-policy \
    --policy-name CloudFormationDeploymentPolicy \
    --profile <TENANT_PROFILE_NAME> \
    --policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "iam:PassRole",
                    "Resource": "arn:aws:iam::<TENANT_ACCOUNT_ID>:role/*",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "iam:GetRole",
                        "iam:CreateRole",
                        "iam:DeleteRole",
                        "iam:AttachRolePolicy",
                        "iam:DetachRolePolicy"
                    ],
                    "Resource": "arn:aws:iam::<TENANT_ACCOUNT_ID>:role/*",
                    "Effect": "Allow"
                },
                {
                    "Action": "lambda:*",
                    "Resource": "*",
                    "Effect": "Allow"
                },
                {
                    "Action": "codedeploy:*",
                    "Resource": "*",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "s3:GetObject*",
                        "s3:GetBucket*",
                        "s3:List*"
                    ],
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>",
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>/*"
                    ],
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "kms:Decrypt",
                        "kms:DescribeKey"
                    ],
                    "Resource": "<KEY_ARN>",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "cloudformation:CreateStack",
                        "cloudformation:DescribeStack*",
                        "cloudformation:GetStackPolicy",
                        "cloudformation:GetTemplate*",
                        "cloudformation:SetStackPolicy",
                        "cloudformation:UpdateStack",
                        "cloudformation:ValidateTemplate"
                    ],
                    "Resource": "arn:aws:cloudformation:<YOUR_REGION>:<TENANT_ACCOUNT_ID>:stack/SampleApplication*/*",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "cloudformation:CreateChangeSet"
                    ],
                    "Resource": "arn:aws:cloudformation:<YOUR_REGION>:aws:transform/Serverless-2016-10-31",
                    "Effect": "Allow"
                }
            ]
        }'

The permissions granted in this IAM policy depend on the resources your application needs to provision. Because the application in our use case consists of a simple Lambda function, the IAM policy only needs permissions over Lambda. The other permissions declared are to access and decrypt the Lambda code from the artifact store, use AWS CodeDeploy to deploy the function, and create and attach the Lambda execution role.

  3. Attach the IAM policy to the IAM role:
aws iam attach-role-policy \
    --profile <TENANT_PROFILE_NAME> \
    --role-name CloudFormationDeploymentRole \
    --policy-arn arn:aws:iam::<TENANT_ACCOUNT_ID>:policy/CloudFormationDeploymentPolicy
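Optionally, you can verify the role’s trust relationship to confirm that AWS CloudFormation is allowed to assume it:

# Show the assume-role (trust) policy of the deployment role
aws iam get-role \
    --role-name CloudFormationDeploymentRole \
    --profile <TENANT_PROFILE_NAME> \
    --query 'Role.AssumeRolePolicyDocument'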

Configuring an artifact store encryption key

Even though the IAM roles created in the tenant accounts declare permissions to use the CMK encryption key, that’s not enough to grant access to the key. To allow access, you must also update the CMK key policy.

From the terminal, run the following command to update the key policy:

aws kms put-key-policy \
    --key-id <KEY_ARN> \
    --policy-name default \
    --region <YOUR_REGION> \
    --policy \
        '{
             "Id": "TenantAccountAccess",
             "Version": "2012-10-17",
             "Statement": [
                {
                    "Sid": "Enable IAM User Permissions",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::<TOOLING_ACCOUNT_ID>:root"
                    },
                    "Action": "kms:*",
                    "Resource": "*"
                },
                {
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [
                            "arn:aws:iam::<GNOME_ACCOUNT_ID>:role/CloudFormationDeploymentRole",
                            "arn:aws:iam::<GNOME_ACCOUNT_ID>:role/CodePipelineCrossAccountRole",
                            "arn:aws:iam::<UNICORN_ACCOUNT_ID>:role/CloudFormationDeploymentRole",
                            "arn:aws:iam::<UNICORN_ACCOUNT_ID>:role/CodePipelineCrossAccountRole"
                        ]
                    },
                    "Action": [
                        "kms:Decrypt",
                        "kms:DescribeKey"
                    ],
                    "Resource": "*"
                }
             ]
         }'
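If you want to confirm the key policy update, you can read the policy back with the following optional command:

# Retrieve the current (default) key policy for the artifact encryption key
aws kms get-key-policy \
    --key-id <KEY_ARN> \
    --policy-name default \
    --region <YOUR_REGION> \
    --query 'Policy' --output text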

Provisioning the CI/CD pipeline

Each CodePipeline workflow consists of two or more stages, which are composed of a series of parallel or serial actions. For our use case, the pipeline is made up of four stages:

  • Source – Declares CodeCommit as the source control for the application code.
  • Build – Using CodeBuild, it installs the dependencies and builds deployable artifacts. In this use case, the sample application is simple enough that this stage is included mainly for illustration purposes.
  • Deploy_Dev – Deploys the sample application on a sandbox environment. At this point, the deployable artifacts generated at the Build stage are used to create a CloudFormation stack and deploy the Lambda function.
  • Deploy_Prod – Similar to Deploy_Dev, at this stage the sample application is deployed on the tenant production environments. For that, it contains two actions (one per tenant) that are run in parallel. CodePipeline uses CodePipelineCrossAccountRole to assume a role on the tenant account, and from there, CloudFormationDeploymentRole is used to effectively deploy the application.

To provision your resources, complete the following steps from the terminal:

  1. Download the CloudFormation pipeline template:
curl -LO https://cross-account-ci-cd-pipeline-single-tenant-saas.s3.amazonaws.com/pipeline.yaml
  2. Deploy the CloudFormation stack using the pipeline template:
aws cloudformation deploy \
    --template-file pipeline.yaml \
    --region <YOUR_REGION> \
    --stack-name <YOUR_PIPELINE_STACK_NAME> \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides \
        ArtifactBucketName=<BUCKET_UNIQUE_NAME> \
        ArtifactEncryptionKeyArn=<KMS_KEY_ARN> \
        UnicornAccountId=<UNICORN_TENANT_ACCOUNT_ID> \
        GnomeAccountId=<GNOME_TENANT_ACCOUNT_ID> \
        SampleApplicationRepositoryName=<YOUR_CODECOMMIT_REPOSITORY_NAME> \
        RepositoryBranch=<YOUR_CODECOMMIT_MAIN_BRANCH>

This is the list of the required parameters to deploy the template:

    • ArtifactBucketName – The name of the S3 bucket where the deployment artifacts are to be stored.
    • ArtifactEncryptionKeyArn – The ARN of the customer managed CMK to be used as artifact encryption key.
    • UnicornAccountId – The AWS account ID for the first tenant (Unicorn) where the application is to be deployed.
    • GnomeAccountId – The AWS account ID for the second tenant (Gnome) where the application is to be deployed.
    • SampleApplicationRepositoryName – The name of the CodeCommit repository where source changes are detected.
    • RepositoryBranch – The name of the CodeCommit branch where source changes are detected. The default value is master if no value is provided.
  3. Wait for AWS CloudFormation to create the resources.

When stack creation is complete, the pipeline starts automatically.
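You can follow the pipeline execution from the CodePipeline console, or from the CLI. The following sketch assumes you first look up the generated pipeline name, because the exact name depends on the stack:

# List the pipelines in the Tooling account to find the generated name
aws codepipeline list-pipelines --region <YOUR_REGION>

# Then check the status of each stage of that pipeline
aws codepipeline get-pipeline-state \
    --name <PIPELINE_NAME> \
    --region <YOUR_REGION> \
    --query 'stageStates[].{stage:stageName,status:latestExecution.status}'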

For each existing tenant, an action is declared within the Deploy_Prod stage. The following code is a snippet of how these actions are configured to deploy the application on a different account:

RoleArn: !Sub arn:aws:iam::${UnicornAccountId}:role/CodePipelineCrossAccountRole
Configuration:
    ActionMode: CREATE_UPDATE
    Capabilities: CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND
    StackName: !Sub SampleApplication-unicorn-stack-${AWS::Region}
    RoleArn: !Sub arn:aws:iam::${UnicornAccountId}:role/CloudFormationDeploymentRole
    TemplatePath: CodeCommitSource::application.yaml
    ParameterOverrides: !Sub | 
        { 
            "ApplicationName": "SampleApplication-Unicorn",
            "S3Bucket": { "Fn::GetArtifactAtt" : [ "ApplicationBuildOutput", "BucketName" ] },
            "S3Key": { "Fn::GetArtifactAtt" : [ "ApplicationBuildOutput", "ObjectKey" ] }
        }

The code declares two IAM roles. The first one is the IAM role assumed by the CodePipeline action to access the tenant AWS account, whereas the second is the IAM role used by AWS CloudFormation to create AWS resources in the tenant AWS account. The ParameterOverrides configuration declares where the release artifact is located. The S3 bucket and key are in the Tooling account and encrypted using the customer managed CMK. That’s why it was necessary to grant access from external accounts using bucket and KMS key policies.

Besides the CI/CD pipeline itself, this CloudFormation template declares IAM roles that are used by the pipeline and its actions. The main IAM role is named CrossAccountPipelineRole, which is used by the CodePipeline service. It contains permissions to assume the action roles. See the following code:

{
    "Action": "sts:AssumeRole",
    "Effect": "Allow",
    "Resource": [
        "arn:aws:iam::<TOOLING_ACCOUNT_ID>:role/<PipelineSourceActionRole>",
        "arn:aws:iam::<TOOLING_ACCOUNT_ID>:role/<PipelineApplicationBuildActionRole>",
        "arn:aws:iam::<TOOLING_ACCOUNT_ID>:role/<PipelineDeployDevActionRole>",
        "arn:aws:iam::<UNICORN_ACCOUNT_ID>:role/CodePipelineCrossAccountRole",
        "arn:aws:iam::<GNOME_ACCOUNT_ID>:role/CodePipelineCrossAccountRole"
    ]
}

When you have more tenant accounts, you must add additional roles to the list.

After CodePipeline runs successfully, test the sample application by invoking the Lambda function on each tenant account:

aws lambda invoke --function-name SampleApplication --profile <TENANT_PROFILE_NAME> --region <YOUR_REGION> out

The output should be:

{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
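The function’s response payload is written to the out file. Inspecting it should show the JSON object returned by the sample application:

# Show the response payload returned by the Lambda function
cat out

The file should contain {"statusCode":200,"body":"Hello!\n"}, matching the handler defined in index.js.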

Cleaning up

Follow these steps to delete the components and avoid incurring future charges:

  1. Delete the production application stack from each tenant account:
aws cloudformation delete-stack --profile <TENANT_PROFILE_NAME> --region <YOUR_REGION> --stack-name SampleApplication-<TENANT_NAME>-stack-<YOUR_REGION>
  2. Delete the dev application stack from the Tooling account:
aws cloudformation delete-stack --region <YOUR_REGION> --stack-name SampleApplication-dev-stack-<YOUR_REGION>
  3. Delete the pipeline stack from the Tooling account:
aws cloudformation delete-stack --region <YOUR_REGION> --stack-name <YOUR_PIPELINE_STACK_NAME>
  4. Delete the customer managed CMK from the Tooling account:
aws kms schedule-key-deletion --region <YOUR_REGION> --key-id <KEY_ARN>
  5. Delete the S3 bucket from the Tooling account:
aws s3 rb s3://<BUCKET_UNIQUE_NAME> --force
  6. Optionally, delete the IAM roles and policies you created in the tenant accounts.

Conclusion

This post demonstrated what it takes to build a CI/CD pipeline for single-tenant SaaS solutions isolated on the AWS account level. It covered how to grant cross-account access to artifact stores on Amazon S3 and artifact encryption keys on AWS KMS using policies and IAM roles. This approach is less error-prone because it avoids human errors when manually deploying the exact same application for multiple tenants.

For this use case, we performed most of the steps manually to better illustrate all the steps and components involved. For even more automation, consider using the AWS Cloud Development Kit (AWS CDK) and its pipeline construct to create your CI/CD pipeline and have everything as code. Moreover, for production scenarios, consider having integration tests as part of the pipeline.

Author

Rafael Ramos

Rafael is a Solutions Architect at AWS, where he helps ISVs on their journey to the cloud. He spent over 13 years working as a software developer, and is passionate about DevOps and serverless. Outside of work, he enjoys playing tabletop RPG, cooking and running marathons.

Automate AWS Firewall Manager onboarding using AWS Centralized WAF and VPC Security Group Management solution

Post Syndicated from Satheesh Kumar original https://aws.amazon.com/blogs/security/automate-aws-firewall-manager-onboarding-using-aws-centralized-waf-and-vpc-security-group-management-solution/

Many customers—especially large enterprises—run workloads across multiple AWS accounts and in multiple AWS Regions. AWS Firewall Manager, launched in April 2018, enables customers to centrally configure and manage AWS WAF rules, audit Amazon VPC security group rules across accounts and applications in AWS Organizations, and protect resources against distributed denial of service (DDoS) attacks.

In this blog post, we show you how to onboard your accounts and your AWS Organizations into the AWS Firewall Manager service and start centrally managing the security policies of your AWS Organizations member accounts. We also show you examples of how to perform operations on the Firewall Manager policies after you’ve deployed the solution, so you can adjust your security posture over time.

As more and more customers began using Firewall Manager for centralized management, they gave us feedback on how it could be improved. We heard that the process of defining policies and configuring rule sets can be challenging and time consuming, especially in a multi-account, multi-region scenario. We built the AWS Centralized WAF and VPC Security Group Management solution to make it easier and faster.

Solution overview

The AWS Centralized WAF and VPC Security Group Management solution fully automates the deployment of Firewall Manager with a set of opinionated defaults for the policies. We call it “opinionated,” because there’s no set of security rules that is right for absolutely every customer. We’re providing an example opinion, but you might have a different opinion, given your unique circumstance. Our examples block traffic that you might have a reason to allow. We install the following policies when you deploy this solution:

  • AWS WAF global policy for Amazon CloudFront distributions and regional policy for Application Load Balancer and Amazon API Gateway: Both the AWS WAF policies include the following AWS Managed Rules for AWS WAF. You can further customize these rules to suit your WAF requirements.
    • AWSManagedRulesCommonRuleSet
    • AWSManagedRulesAdminProtectionRuleSet
    • AWSManagedRulesKnownBadInputsRuleSet
    • AWSManagedRulesSQLiRuleSet
  • Amazon VPC security group usage audit and content audit policy: These policies flag security groups that are unused or redundant. Automatic remediation is turned off by default, but you can turn it on by customizing the solution.
  • AWS Shield Advanced policy: If your account has enabled Shield Advanced, then Shield Advanced protection is enabled for CloudFront, Application Load Balancer, and Elastic IPs.

The core of the solution is the AWS CloudFormation template aws-centralized-waf-and-vpc-security-group-management. This template deploys the components shown in Figure 1:

Figure 1: Main solutions template - aws-centralized-waf-and-vpc-security-group-management.template

Figure 1: Main solutions template – aws-centralized-waf-and-vpc-security-group-management.template

  1. After deployment, you can update the three AWS Systems Manager parameters—/FMS/OUs, /FMS/Regions, and /FMS/Tags—with appropriate values to control the scope and applicability of the Firewall Manager security policies.
  2. On update of the values in the Systems Manager parameters, the Amazon EventBridge rule captures the parameter update event.
  3. Amazon EventBridge triggers an AWS Lambda function to deploy the Firewall Manager policies.
  4. The AWS Lambda function deploys the Firewall Manager security policies across the OUs and Regions specified in step 1.
  5. The Lambda function updates the Amazon DynamoDB table with the Firewall Manager policy metadata.

Configuring Prerequisites Automatically

There are a few important prerequisites that must be configured in your account before you deploy this solution. We have built a template called aws-fms-prereq that will launch the solution prerequisites.

When you execute the aws-fms-prereq template, it validates and completes the Firewall Manager prerequisites for you.

Note: If the Firewall Manager prerequisites described above are already met, you can skip this step and go directly to the next step: Deploy the solution template.

If you run this prerequisite template in the Organization primary account that is also your Firewall Manager admin account, the solution template will be deployed automatically. If you do this, you can skip the step of deploying the solution template and jump straight to Manage your Firewall Manager security policies.

Figure 2 shows how the prerequisite template creates a Lambda function. That function validates and installs the prerequisites and AWS CloudFormation stack sets to enable AWS Config across all member accounts in the organization.

Figure 2: Solution prerequisites

Figure 2: Solution prerequisites

Installing Prerequisites

If the Firewall Manager prerequisites are already met, skip this step and go directly to the next step, below: Deploy the solution template.

  1. Install the AWS CLI

    Note: If you already have the AWS Command Line Interface v2 installed on your workstation, you can skip this step.

    The template deployment can be done using the AWS Management Console or the AWS CLI. This procedure uses the AWS CLI to do the deployment. To get started, install the AWS CLI and configure it with credentials that have the required IAM permissions to create resources.

  2. Deploy the prerequisite template. If this is the first time you’re using Firewall Manager, download the aws-fms-prereq.template file and run the following AWS CLI command in the primary account of your AWS Organization to check for and complete the prerequisites. Replace the variable <your_FW_account_ID> with the account ID of your Firewall Manager admin account. This deployment typically takes 2–3 minutes, but sometimes a little longer if there are a large number of member accounts in your organization. If you want AWS Config to be enabled in your member accounts, set the EnableConfig parameter to Yes; however, if AWS Config is already enabled, set it to No.
    aws cloudformation create-stack \
    --stack-name fms-prereq-stack \
    --template-body file://aws-fms-prereq.template \
    --parameters ParameterKey=FMSAdmin,ParameterValue=<your_FW_account_ID> ParameterKey=EnableConfig,ParameterValue=Yes
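If you want the command line to block until the prerequisite stack has finished creating, you can optionally wait on it. This sketch assumes the same stack name used in the command above:

# Wait until the prerequisite stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete \
    --stack-name fms-prereq-stack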
    

Deploy the solution template

Deploy the solution using aws-centralized-waf-and-vpc-security-group-management.template—an AWS CloudFormation template—in the Firewall Manager admin account that you designated with the prerequisite template.

This template deploys the AWS Centralized WAF and VPC Security Group Management solution shown in Figure 1, with all the resources and integrations.

The following AWS CLI command will deploy the solution template:

aws cloudformation create-stack \
--stack-name fms-central-policy-mgmt-stack \
--template-body file://aws-centralized-waf-and-vpc-security-group-management.template \
--capabilities CAPABILITY_IAM

The command will print very basic output, simply identifying the stack’s name, as shown below.

{
 "StackId": "arn:aws:cloudformation:us-east-1:<your_FW_admin_account_ID>:stack/fms-central-policy-mgmt-stack/<stack ARN>"
}

You can run the following CLI command to check the status of the stack deployment. Once you see that “StackStatus” is set to “CREATE_COMPLETE” in the output, you can proceed to the next step.

aws cloudformation describe-stacks \
--stack-name fms-central-policy-mgmt-stack

Manage your Firewall Manager security policies

Once the solution is deployed, you deploy the Firewall Manager policies to the organization member accounts by updating the corresponding values in the Systems Manager Parameter Store. These changes to the Parameter Store are picked up by the EventBridge rule, which triggers the Lambda function to automatically create, delete, or modify the policies as required.

Note: All of the following commands must be executed in the Firewall Manager admin account, which might not be the root account of your AWS Organization.

  1. Add OUs to the scope of Firewall Manager policies. To begin, define the OUs that the Firewall Manager policies should apply to, and store the list of OUs in a parameter named /FMS/OUs. The following AWS CLI command stores a comma-separated list of OU IDs in that parameter.
    aws ssm put-parameter \
     --name "/FMS/OUs" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_OU_IDs>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
    "Version": 2,
     "Tier": "Standard"
    }
    

  2. Add AWS Regions to the scope of Firewall Manager policies. Next, add the AWS Regions where you want these policies to be applied. In the AWS CLI command that follows, you can customize the Region list depending on where you run your workloads. If, for example, you want to use us-west-2, us-east-1, and eu-west-1, you need to provide us-west-2,us-east-1,eu-west-1 as your value in the command below.
    aws ssm put-parameter \
     --name "/FMS/Regions" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_regions>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
    "Version": 2,
     "Tier": "Standard"
    }
    

  3. (Optional) Add resource tags. Resource tags are a way to apply these Firewall Manager policies to some resources, but not others. Imagine that we only want to apply these policies to resources that have the tag Environment set to the value Prod. The following command will do that:
    aws ssm put-parameter \
     --name "/FMS/Tags" \
     --type "String" \
     --value "{\"ResourceTags\":[{\"Key\":\"Environment\",\"Value\":\"Prod\"}],\"ExcludeResourceTags\":false}" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
     "Version": 2,
     "Tier": "Standard"
    }
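After setting the parameters, you can optionally confirm the values that the solution will act on:

# Read back the three Firewall Manager scope parameters
aws ssm get-parameters \
    --names "/FMS/OUs" "/FMS/Regions" "/FMS/Tags" \
    --query 'Parameters[].{Name:Name,Value:Value}'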
    

Test the solution

Test the solution by creating a global CloudFront distribution and an Amazon VPC security group in one of your member accounts, and configure these resources to be noncompliant with the policies enforced by Firewall Manager.

Deploy test resources in one of your member accounts

Run the following AWS CLI command to deploy the demo template in one of the member accounts to create a sample CloudFront distribution and an Amazon VPC security group that aren’t compliant with the default Firewall Manager policies.

aws cloudformation create-stack \
  --stack-name demo-stack \
  --template-body file://aws-fms-demo.template \
  --capabilities CAPABILITY_IAM

Check for web ACL and security group audit results

To check the webACL resource association in the member account, follow these steps:

  1. Log in to the management console of the member account where you deployed the demo template, and go to the WAF & Shield service page.
  2. You will see the web ACL with the prefix FMManagedWebACLV2FMS-WAF-01. Select that web ACL.
  3. Go to the Associated AWS resources tab.
  4. You should see the newly created CloudFront distribution listed there.

To check the compliance status of the newly created security group, follow these steps:

  1. Log in to the Firewall Manager admin account management console and go to the AWS Firewall Manager service page.
  2. In the Security policies section, select the FMS-SecGroup-02 policy.
  3. You will see that the member account (where the demo template was deployed) is marked as non-compliant. Select the member account number to see the newly created security group, along with the reason for the noncompliant finding.

Clean-up test resources in member account

Follow these steps to clean up the test resources that the demo template created in your member account:

  1. Log in to the member account management console, and go to the CloudFront service page.
  2. Select the CloudFront distribution. (The origin will have the stack name as its prefix.)
  3. Go to the Behaviors tab, select the behavior, and choose Edit.
  4. Remove the Lambda function association and save your changes.
  5. Run this CLI command to delete the stack that you had created earlier:
    aws cloudformation delete-stack \
      --stack-name demo-stack
    

Customizing the solution’s source code

This solution deploys Firewall Manager policies with some opinionated default rules defined in the manifest.json file. They probably aren’t all that you will need. You can customize these policies to fit your own needs, but it takes a bit of development skill, and the steps are too long to include here. If, for example, you want to add another AWS WAF rule group to the default AWS WAF security policy, you need to edit the manifest and redeploy the solution. See the solution implementation guide for how to do this customization and several others.

(Optional) Clean-up resources

Once you’ve tried the solution, if you want to clean up the stack you can do so in two steps:

  1. Go to the Firewall Manager admin account management console, navigate to the Parameter Store, and update the /FMS/OUs parameter with the value delete. This ensures that the Firewall Manager policies and their related resources are deleted.
  2. Run the following AWS CLI command in the Firewall Manager admin account to delete the solution stack you deployed earlier:
    aws cloudformation delete-stack --stack-name fms-central-policy-mgmt-stack
    

Conclusion

The AWS Centralized WAF and VPC Security Group Management solution addresses the feedback we heard from you, our customer. You told us the process of defining policies and configuring rule sets is challenging and time consuming, so we wrote this to make it easier and faster. In this post, we showed you how to deploy the solution following a two-step process using AWS CloudFormation. We also showed you ways that you can use the solution to easily manage your Firewall Manager policies by updating values in Systems Manager Parameter Store. Updating the values automates the deployment based on your choice of OUs, Regions, and resources. Finally, we showed you how to further customize the solution to suit your needs.

This post is just an introduction to what is possible and the overall objective. Before you start using this solution for anything important, we recommend that you review the solution implementation guide. It contains step-by-step directions and more example use cases. The guide also includes security recommendations and some cost estimates for the various supported scenarios.

To learn more about other AWS solutions, visit the AWS Solutions Library.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Satheesh Kumar

Satheesh is a Senior Solutions Architect based out of Bangalore, India. He helps large enterprise customers build solutions using AWS services.

Author

Ramanan Kannan

Ramanan is a Senior Solutions Architect and works with large enterprise customers in the financial domain. He is based out of Chennai, India.

Use AWS Firewall Manager to deploy protection at scale in AWS Organizations

Post Syndicated from Chamandeep Singh original https://aws.amazon.com/blogs/security/use-aws-firewall-manager-to-deploy-protection-at-scale-in-aws-organizations/

Security teams that are responsible for securing workloads in hundreds of Amazon Web Services (AWS) accounts in different organizational units aim for a consistent approach across AWS Organizations. Key goals include enforcing preventative measures to mitigate known security issues, having a central approach for notifying the SecOps team about potential distributed denial of service (DDoS) attacks, and continuing to maintain compliance obligations. AWS Firewall Manager works at the organizational level to help you achieve your intended security posture while it provides reporting for non-compliant resources in all your AWS accounts. This post provides step-by-step instructions to deploy and manage security policies across your AWS Organizations implementation by using Firewall Manager.

You can use Firewall Manager to centrally manage AWS WAF, AWS Shield Advanced, and Amazon Virtual Private Cloud (Amazon VPC) security groups across all your AWS accounts. Firewall Manager helps to protect resources across different accounts, and it can protect resources with specific tags or resources in a group of AWS accounts that are in specific organizational units (OUs). With AWS Organizations, you can centrally manage policies across multiple AWS accounts without having to use custom scripts and manual processes.

Architecture diagram

Figure 1 shows an example organizational structure in AWS Organizations, with several OUs that we’ll use in the example policy sets in this blog post.

Figure 1: AWS Organizations and OU structure

Figure 1: AWS Organizations and OU structure

Firewall Manager can be associated with either the AWS master payer account or one of the member AWS accounts that has appropriate permissions as a delegated administrator. Following the best practices for organizational units, in this post we use a dedicated Security Tooling AWS account (named Security in the diagram) to operate the Firewall Manager administrator deployment under the Security OU. The Security OU is used for hosting security-related access and services. The Security OU, its child OUs, and the associated AWS accounts should be owned and managed by your security organization.

Firewall Manager prerequisites

Firewall Manager has the following prerequisites that you must complete before you create and apply a Firewall Manager policy:

  1. AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. For more information, see Creating an organization and Enabling all features in your organization.
  2. A Firewall Manager administrator account: You must designate one of the AWS accounts in your organization as the Firewall Manager administrator for Firewall Manager. This gives the account permission to deploy security policies across the organization.
  3. AWS Config: You must enable AWS Config for all of the accounts in your organization so that Firewall Manager can detect newly created resources. To enable AWS Config for all of the accounts in your organization, use the Enable AWS Config template from the StackSets sample templates.

Deployment of security policies

In the following sections, we explain how to create AWS WAF rules, Shield Advanced protections, and Amazon VPC security groups by using Firewall Manager. We further explain how you can deploy these different policy types to protect resources across your accounts in AWS Organizations. Each Firewall Manager policy is specific to an individual resource type. If you want to enforce multiple policy types across accounts, you should create multiple policies. You can create more than one policy for each type. If you add a new account to an organization that you created with AWS Organizations, Firewall Manager automatically applies the policy to the resources in that account that are within scope of the policy. This is a scalable approach to assist you in deploying the necessary configuration when developers create resources. For instance, you can create an AWS WAF policy that will result in a known set of AWS WAF rules being deployed whenever someone creates an Amazon CloudFront distribution.

Policy 1: Create and manage security groups

You can use Firewall Manager to centrally configure and manage Amazon VPC security groups across all your AWS accounts in AWS Organizations. A previous AWS Security blog post walks you through how to apply common security group rules, audit your security groups, and detect unused and redundant rules in your security groups across your AWS environment.

Firewall Manager automatically audits new resources and rules as customers add resources or security group rules to their accounts. You can audit overly permissive security group rules, such as rules with a wide range of ports or Classless Inter-Domain Routing (CIDR) ranges, or rules that have enabled all protocols to access resources. To audit security group policies, you can use application and protocol lists to specify what’s allowed and what’s denied by the policy.

In this blog post, we use a security policy to audit the security groups for overly permissive rules and high-risk applications that are allowed to be open to local CIDR ranges (for example, 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). We created a custom application list named Bastion Host for port 22 and a custom protocol list named Allowed Protocols that allows the child account to create rules only for TCP protocols. Refer to the Firewall Manager documentation for how to create custom managed application and protocol lists.

To create audit security group policies

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, select Security policies.
  2. For Region, select the AWS Region where you would like to protect the resources. The Firewall Manager Region selection is in the drop-down on the service page. In this example, we selected the Sydney (ap-southeast-2) Region because we have all of our resources in the Sydney Region.
  3. Create the policy, and in Policy details, choose Security group. For Region, select a Region (we selected Sydney (ap-southeast-2)), and then choose Next.
  4. For Security group policy type, choose Auditing and enforcement of security group rules, and then choose Next.
  5. Enter a policy name. We named our policy AWS_FMS_Audit_SecurityGroup.
  6. For Policy rule options, for this example, we chose Configure managed audit policy rules.
  7. Under Policy rules, choose the following:
    1. For Security group rules to audit, choose Inbound Rules.
    2. For Rules, select the following:
      1. Select Audit overly permissive security group rules.
        • For Allowed security group rules, choose Add Protocol list and select the custom protocol list Allowed Protocols that we created earlier.
        • For Denied security group rules, select Deny rules with the allow ‘ALL’ protocol.
      2. Select Audit high risk applications.
        • Choose Applications that can only access local CIDR ranges. Then choose Add application list and select the custom application list Bastion Host that we created earlier.
  8. For Policy action, for the example in this post, we chose Auto remediate any noncompliant resources. Choose Next.

    Figure 2: Policy rules for the security group audit policy

  9. For Policy scope, choose the following options for this example:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational unit. For Included Organizational units, select OU (example – Non-Prod Accounts).
    2. For Resource type, select EC2 Instance, Security Group, and Elastic Network Interface.
    3. For Resources, choose Include all resources that match the selected resource type.
  10. You can create tags for the security policy. In the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to Audit_Security_group.

Important: Migrating an AWS account from one organizational unit to another won’t remove or detach the existing security group policy applied by Firewall Manager. For example, in the reference architecture in Figure 1, the AWS account Tenant-5 is under the Staging OU, and we created different Firewall Manager security group policies for the Pre-Prod OU and the Prod OU. If you move the Tenant-5 account from the Staging OU to the Prod OU, the resources associated with Tenant-5 will continue to have the security group policies defined for both the Prod and Staging OUs unless you select otherwise before relocating the account. Because moving accounts across OUs can have unintended impacts, such as loss of connectivity or protection, Firewall Manager won’t remove the security groups automatically; it supports detaching them only as part of policy deletion.

Policy 2: Managing AWS WAF v2 policy

A Firewall Manager AWS WAF policy contains the rule groups that you want to apply to your resources. When you apply the policy, Firewall Manager creates a Firewall Manager web access control list (web ACL) in each account that’s within the policy scope.

Note: Creating an Amazon Kinesis Data Firehose delivery stream in us-east-1 is a prerequisite for configuring web ACL logging in Step 8. The delivery stream name must begin with aws-waf-logs- (for example, aws-waf-logs-lab-waf-logs).
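
A minimal AWS CLI sketch for creating that delivery stream follows. The IAM role and S3 bucket ARNs are placeholders that you must replace with your own.

aws firehose create-delivery-stream \
  --region us-east-1 \
  --delivery-stream-name aws-waf-logs-lab-waf-logs \
  --extended-s3-destination-configuration RoleARN=arn:aws:iam::111122223333:role/firehose-waf-logs-role,BucketARN=arn:aws:s3:::DOC-EXAMPLE-BUCKET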

To create a Firewall Manager – AWS WAF v2 policy

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, choose Security policies.
  2. For Region, select a Region from the drop-down menu at the top of the Firewall Manager console page. For this example, we selected Global, because the policy protects CloudFront resources.
  3. Create the policy. Under Policy details, choose AWS WAF and for Region, choose Global. Then choose Next.
  4. Enter a policy name. We named our policy AWS_FMS_WAF_Rule.
  5. On the Policy rule page, under Web ACL configuration, add rule groups. AWS WAF supports custom rule groups (the customer creates the rules), AWS Managed Rules rule groups (AWS manages the rules), and AWS Marketplace managed rule groups. For this example, we chose AWS Managed Rules rule groups.
  6. For this example, for First rule groups, we chose the AWS Managed Rules rule group, AWS Core rule set. For Last rule groups, we chose the AWS Managed Rules rule group, Amazon IP reputation list.
  7. For Default web ACL action for requests that don’t match any rules in the web ACL, choose a default action. We chose Allow.
  8. Firewall Manager enables logging for a specific web ACL. This logging is applied to all the in-scope accounts and delivers the logs to a centralized single account. To enable centralized logging of AWS WAF logs:
    1. For Logging configuration status, choose Enabled.
    2. For IAM role, Firewall Manager creates an AWS WAF service-role for logging. Your security account should have the necessary IAM permissions. Learn more about access requirements for logging.
    3. Select the Kinesis Data Firehose delivery stream that you created earlier, aws-waf-logs-lab-waf-logs, in us-east-1, because we’re using CloudFront as a resource in the policy.
    4. For Redacted fields, for this example select HTTP method, Query String, URI, and Header. You can also add a new header. For more information, see Configure logging for an AWS Firewall Manager AWS WAF policy.
  9. For Policy action, for this example, we chose Auto remediate any noncompliant resources. To replace the existing web ACL that is currently associated with the resource, select Replace web ACLs that are currently associated with in-scope resources with the web ACLs created by this policy. Choose Next.

    Note: If a resource is already associated with a web ACL that is managed by a different active Firewall Manager policy, this setting doesn’t affect that resource.

    Figure 3: Policy rules for the AWS WAF security policy

  10. For Policy scope, choose the following options for this example:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational unit. For Included organizational units, select OU (example – Pre-Prod Accounts).
    2. For Resource type, choose CloudFront distribution.
    3. For Resources, choose Include all resources that match the selected resource type.
  11. You can create tags for the security policy. For the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to WAF_Policy.
  12. Review the security policy, and then choose Create Policy.

    Note: For the AWS WAF v2 policy, the web ACL that Firewall Manager pushes can’t be modified in the individual account. The account owner can only add new rule groups.

  13. In the policy’s first and last rule group sets, you can add additional rule groups at the linked AWS account level to provide additional security based on application requirements. You can use managed rule groups, which AWS Managed Rules and AWS Marketplace sellers create and maintain for you. For example, you can use the WordPress application rule group, which contains rules that block request patterns associated with the exploitation of vulnerabilities specific to a WordPress site. You can also manage and use your own rule groups. For more information about all of these options, see Rule groups. Another example could be using a rate-based rule that tracks the rate of requests for each originating IP address and triggers the rule action on IPs with rates that go over a limit. Learn more about rate-based rules.

Policy 3: Managing AWS Shield Advanced policy

AWS Shield Advanced is a paid service that provides additional protections for internet-facing applications. If you have Business or Enterprise support, you can engage the 24/7 AWS DDoS Response Team (DRT), who can write rules on your behalf to mitigate Layer 7 DDoS attacks. Refer to the Shield Advanced pricing page for more information before you proceed with a Shield Advanced Firewall Manager policy.

After you complete the prerequisites outlined earlier, you’ll create a Shield Advanced policy that contains the accounts and resources that you want to protect with Shield Advanced. The purpose of this policy is to activate Shield Advanced in the accounts within the OU’s scope and to add the selected resources to the Shield Advanced protection list.

To create a Firewall Manager – Shield Advanced policy

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, choose Security policies.
  2. For Region, select the AWS Region where you would like to protect the resources. The Region selector is in the drop-down menu at the top of the Firewall Manager console page. In this post, we’ve selected the Sydney (ap-southeast-2) Region because all of our resources are in the Sydney Region.

    Note: To protect CloudFront resources, select the Global option.

  3. Create the policy, and in Policy details, choose AWS Shield Advanced. For Region, select a Region (example – ap-southeast-2), and then choose Next.
  4. Enter a policy name. We named our policy AWS_FMS_ShieldAdvanced Rule.
  5. For Policy action, for the example in this post, we chose Auto remediate any non-compliant resources. Alternatively, if you choose Create but do not apply this policy to existing or new resources, Firewall Manager doesn’t apply Shield Advanced protection to any resources. You must apply the policy to resources later. Choose Next.
  6. For Policy scope, this example uses the OU structure as the container of multiple accounts with similar requirements:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational units. For Included organizational units, select OU (example – Staging Accounts OU).
    2. For Resource type, select Application Load Balancer and Elastic IP.
    3. For Resources, choose Include all resources that match the selected resource type.
      Figure 4: Policy scope page for creating the Shield Advanced security policy

      Note: If you want to protect only the resources with specific tags, or alternatively exclude resources with specific tags, choose Use tags to include/exclude resources, enter the tags, and then choose either Include or Exclude. Tags enable you to categorize AWS resources in different ways, for example by indicating an environment, owner, or team to include or exclude in Firewall Manager policy. Firewall Manager combines the tags with “AND” so that, if you add more than one tag to a policy scope, a resource must have all the specified tags to be included or excluded.

      Important: Shield Advanced supports protection for Amazon Route 53 and AWS Global Accelerator. However, protection for these resources cannot be deployed with the help of Firewall Manager security policy at this time. If you need to protect these resources with Shield Advanced, you should use individual AWS account access through the API or console to activate Shield Advanced protection for the intended resources.

  7. You can create tags for the security policy. In the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to Shield_Advanced_Policy. You can use the tags in the Resource element of IAM permission policy statements to either allow or deny users to make changes to security policy.
  8. Review the security policy, and then choose Create Policy.

Now you’ve successfully created a Firewall Manager security policy. Using the organizational units in AWS Organizations as a method to deploy the Firewall Manager security policy, when you add an account to the OU or to any of its child OUs, Firewall Manager automatically applies the policy to the new account.

Important: You don’t need to manually subscribe Shield Advanced on the member accounts. Firewall Manager subscribes Shield Advanced on the member accounts as part of creating the policy.

Operational visibility and compliance report

Firewall Manager offers a centralized incident notification for DDoS incidents that are reported by Shield Advanced. You can create an Amazon SNS topic to monitor the protected resources for potential DDoS activities and send notifications accordingly. Learn how to create an SNS topic. If you have resources in different Regions, the SNS topic needs to be created in the intended Region. You must perform this step from the Firewall Manager delegated AWS account (for example, Security Tooling) to receive alerts across your AWS accounts in that organization.
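
For example, you can create and subscribe to the topic with the AWS CLI; the topic name, account ID, and email address below are placeholders.

aws sns create-topic --name SNS_Topic_Syd --region ap-southeast-2

aws sns subscribe --region ap-southeast-2 \
  --topic-arn arn:aws:sns:ap-southeast-2:111122223333:SNS_Topic_Syd \
  --protocol email --notification-endpoint security-team@example.com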

As a best practice, you should set up notifications for all the Regions where you have a production workload under Shield Advanced protection.

To create an SNS topic in the Firewall Manager administrative console

  1. In the AWS Management Console, sign in to the Security Tooling account or the AWS Firewall Manager delegated administrator account. In the left navigation pane, under AWS Firewall Manager, choose Settings.
  2. Select the SNS topic that you created earlier to be used for the Firewall Manager central notification mechanism. For this example, we created a new SNS topic in the Sydney Region (ap-southeast-2) named SNS_Topic_Syd.
  3. For Recipient email address, enter the email address that the SNS topic will be sent to. Choose Configure SNS configuration.

After you create the SNS configuration, you can see the SNS topic in the appropriate Region, as in the following example.

Figure 5: An SNS topic for centralized incident notification

AWS Shield Advanced records metrics in Amazon CloudWatch to monitor the protected resources and can also create Amazon CloudWatch alarms. For simplicity, we use email notification in this example. In a security operations environment, you should integrate the SNS notifications with your existing ticketing system or paging service for real-time response.

Important: You can also use the CloudWatch dashboard to monitor potential DDoS activity. It collects and processes raw data from Shield Advanced into readable, near real-time metrics.

You can automatically enforce policies on AWS resources that currently exist or are created in the future, in order to promote compliance with firewall rules across the organization. For all policies, you can view the compliance status for in-scope accounts and resources by using the API or AWS Command Line Interface (AWS CLI) method. For content audit security group policies, you can also view detailed violation information for in-scope resources. This information can help you to better understand and manage your security risk.

View all the policies in the Firewall Manager administrative account

For our example, we created three security policies in the Firewall Manager delegated administrator account. We can check policy compliance status for all three policies by using the AWS Management Console, AWS CLI, or API methods. The AWS CLI example that follows can be further extended to build an automation for notifying the non-compliant resource owners.

To list all the policies in FMS

 aws fms list-policies --region ap-southeast-2
{
    "PolicyList": [
        {
            "PolicyName": "WAFV2-Test2", 
            "RemediationEnabled": false, 
            "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
            "PolicyArn": "arn:aws:fms:ap-southeast-2:222222222222:policy/78edcc79-c0b1-46ed-b7b9-d166b9fd3b58", 
            "SecurityServiceType": "WAFV2", 
            "PolicyId": "78edcc79-c0b1-46ed-b7b9-d166b9fd3b58"
        },
        {
            "PolicyName": "AWS_FMS_Audit_SecurityGroup", 
            "RemediationEnabled": true, 
            "ResourceType": "ResourceTypeList", 
            "PolicyArn": "arn:aws:fms:ap-southeast-2:<Account-Id>:policy/d44f3f38-ed6f-4af3-b5b3-78e9583051cf", 
            "SecurityServiceType": "SECURITY_GROUPS_CONTENT_AUDIT", 
            "PolicyId": "d44f3f38-ed6f-4af3-b5b3-78e9583051cf"
        }
    ]
}

Now that we have the policy ID, we can check the compliance status:

aws fms list-compliance-status --policy-id 78edcc79-c0b1-46ed-b7b9-d166b9fd3b58
{
    "PolicyComplianceStatusList": [
        {
            "PolicyName": "WAFV2-Test2", 
            "PolicyOwner": "222222222222", 
            "LastUpdated": 1601360994.0, 
            "MemberAccount": "444444444444", 
            "PolicyId": "78edcc79-c0b1-46ed-b7b9-d166b9fd3b58", 
            "IssueInfoMap": {}, 
            "EvaluationResults": [
                {
                    "ViolatorCount": 0, 
                    "EvaluationLimitExceeded": false, 
                    "ComplianceStatus": "COMPLIANT"
                }
            ]
        }
    ]
}

For the preceding policy, the member account 444444444444 associated with the policy is compliant. The following example shows the status for the second policy.

aws fms list-compliance-status --policy-id 44c0b677-e7d4-4d8a-801f-60be2630a48d
{
    "PolicyComplianceStatusList": [
        {
            "PolicyName": "AWS_FMS_WAF_Rule", 
            "PolicyOwner": "222222222222", 
            "LastUpdated": 1601361231.0, 
            "MemberAccount": "555555555555", 
            "PolicyId": "44c0b677-e7d4-4d8a-801f-60be2630a48d", 
            "IssueInfoMap": {}, 
            "EvaluationResults": [
                {
                    "ViolatorCount": 3, 
                    "EvaluationLimitExceeded": false, 
                    "ComplianceStatus": "NON_COMPLIANT"
                }
            ]
        }
    ]
}

For the preceding policy, the member account 555555555555 associated with the policy is non-compliant.

To provide detailed compliance information about the specified member account, the output includes resources that are in and out of compliance with the specified policy, as shown in the following example.

aws fms get-compliance-detail --policy-id 44c0b677-e7d4-4d8a-801f-60be2630a48d --member-account 555555555555
{
    "PolicyComplianceDetail": {
        "Violators": [
            {
                "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
                "ResourceId": "arn:aws:elasticloadbalancing:ap-southeast-2: 555555555555:loadbalancer/app/FMSTest2/c2da4e99d4d13cf4", 
                "ViolationReason": "RESOURCE_MISSING_WEB_ACL"
            }, 
            {
                "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
                "ResourceId": "arn:aws:elasticloadbalancing:ap-southeast-2:555555555555:loadbalancer/app/fmstest/1e70668ce77eb61b", 
                "ViolationReason": "RESOURCE_MISSING_WEB_ACL"
            }
        ], 
        "EvaluationLimitExceeded": false, 
        "PolicyOwner": "222222222222", 
        "ExpiredAt": 1601362402.0, 
        "MemberAccount": "555555555555", 
        "PolicyId": "44c0b677-e7d4-4d8a-801f-60be2630a48d", 
        "IssueInfoMap": {}
    }
}

In the preceding example, two Application Load Balancers (ALBs) are not associated with a web ACL. You can further introduce automation by using AWS Lambda functions to isolate the non-compliant resources or trigger an alert for the account owner to launch manual remediation.
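
As a starting point for such automation, the following shell sketch loops over your Firewall Manager policies, finds member accounts with a NON_COMPLIANT evaluation, and publishes a notification to an SNS topic. The Region and topic ARN are example values.

#!/bin/bash
REGION=ap-southeast-2
TOPIC_ARN=arn:aws:sns:ap-southeast-2:111122223333:SNS_Topic_Syd

for POLICY_ID in $(aws fms list-policies --region "$REGION" \
    --query 'PolicyList[].PolicyId' --output text); do
  # Member accounts with at least one NON_COMPLIANT evaluation result
  ACCOUNTS=$(aws fms list-compliance-status --policy-id "$POLICY_ID" --region "$REGION" \
    --query "PolicyComplianceStatusList[?EvaluationResults[?ComplianceStatus=='NON_COMPLIANT']].MemberAccount" \
    --output text)
  if [ -n "$ACCOUNTS" ]; then
    aws sns publish --topic-arn "$TOPIC_ARN" --region "$REGION" \
      --subject "Firewall Manager policy $POLICY_ID is non-compliant" \
      --message "Non-compliant member accounts: $ACCOUNTS"
  fi
done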

Resource cleanup

You can delete a Firewall Manager policy by performing the following steps.

To delete a policy (console)

  1. In the navigation pane, choose Security policies.
  2. Choose the option next to the policy that you want to delete. We created three policies, which need to be removed one by one.
  3. Choose Delete.

Important: When you delete a Firewall Manager Shield Advanced policy, the policy is deleted, but your accounts remain subscribed to Shield Advanced.

Conclusion

In this post, you learned how you can use Firewall Manager to enforce required preventative policies from a central delegated AWS account managed by your security team. You can extend this strategy to all AWS OUs to meet your future needs as new AWS accounts or resources get added to AWS Organizations. A central notification delivery to your Security Operations team is crucial from a visibility perspective, and with the help of Firewall Manager you can build a scalable approach to stay protected, informed, and compliant. Firewall Manager simplifies your AWS WAF, AWS Shield Advanced, and Amazon VPC security group administration and maintenance tasks across multiple accounts and resources.

For further reading and updates, see the Firewall Manager Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chamandeep Singh

Chamandeep is a Senior Technical Account Manager and member of the Global Security field team at AWS. He works with financial sector enterprise customers to support operations and security, and also designs scalable cloud solutions. He lives in Australia and enjoys travelling around the world.

Author

Prabhakaran Thirumeni

Prabhakaran is a Cloud Architect with AWS, specializing in network security and cloud infrastructure. His focus is helping customers design and build solutions for their enterprises. Outside of work he stays active with badminton, running, and exploring the world.

How to add authentication to a single-page web application with Amazon Cognito OAuth2 implementation

Post Syndicated from George Conti original https://aws.amazon.com/blogs/security/how-to-add-authentication-single-page-web-application-with-amazon-cognito-oauth2-implementation/

In this post, I’ll be showing you how to configure Amazon Cognito as an OpenID provider (OP) with a single-page web application.

This use case describes using Amazon Cognito to integrate with an existing authorization system following the OpenID Connect (OIDC) specification. OIDC is an identity layer on top of the OAuth 2.0 protocol to enable clients to verify the identity of users. Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Some key reasons customers select Amazon Cognito include:

  • Simplicity of implementation: The console is very intuitive; it takes a short time to understand how to configure and use Amazon Cognito. Amazon Cognito also has key out-of-the-box functionality, including social sign-in, multi-factor authentication (MFA), forgotten password support, and infrastructure as code (AWS CloudFormation) support.
  • Ability to customize workflows: Amazon Cognito offers the option of a hosted UI where users can sign-in directly to Amazon Cognito or sign-in via social identity providers such as Amazon, Google, Apple, and Facebook. The Amazon Cognito hosted UI and workflows help save your team significant time and effort.
  • OIDC support: Amazon Cognito can securely pass user profile information to an existing authorization system following the OIDC authorization code flow. The authorization system uses the user profile information to secure access to the app.

Amazon Cognito overview

Amazon Cognito follows the OIDC specification to authenticate users of web and mobile apps. Users can sign in directly through the Amazon Cognito hosted UI or through a federated identity provider, such as Amazon, Facebook, Apple, or Google. The hosted UI workflows include sign-in and sign-up, password reset, and MFA. Since not all customer workflows are the same, you can customize Amazon Cognito workflows at key points with AWS Lambda functions, allowing you to run code without provisioning or managing servers. After a user authenticates, Amazon Cognito returns standard OIDC tokens. You can use the user profile information in the ID token to grant your users access to your own resources or you can use the tokens to grant access to APIs hosted by Amazon API Gateway. You can also exchange the tokens for temporary AWS credentials to access other AWS services.

Figure 1: Amazon Cognito sign-in flow

OAuth 2.0 and OIDC

OAuth 2.0 is an open standard that allows a user to delegate access to their information to other websites or applications without handing over credentials. OIDC is an identity layer on top of OAuth 2.0 that uses OAuth 2.0 flows. OAuth 2.0 defines a number of flows to manage the interaction between the application, user, and authorization server. The right flow to use depends on the type of application.

The client credentials flow is used in machine-to-machine communications. You can use the client credentials flow to request an access token to access your own resources, which means you can use this flow when your app is requesting the token on its own behalf, not on behalf of a user. The authorization code grant flow is used to return an authorization code that is then exchanged for user pool tokens. Because the tokens are never exposed directly to the user, they are less likely to be shared broadly or accessed by an unauthorized party. However, a custom application is required on the back end to exchange the authorization code for user pool tokens. For security reasons, we recommend the authorization code flow with Proof Key for Code Exchange (PKCE) for public clients, such as single-page apps or native mobile apps.

The following table shows recommended flows per application type.

Application | Flow | Description
Machine | Client credentials | Use this flow when your application is requesting the token on its own behalf, not on behalf of the user
Web app on a server | Authorization code grant | A regular web app on a web server
Single-page app | Authorization code grant with PKCE | An app running in the browser, such as JavaScript
Mobile app | Authorization code grant with PKCE | iOS or Android app

Securing the authorization code flow

Amazon Cognito can help you achieve compliance with regulatory frameworks and certifications, but it’s your responsibility to use the service in a way that remains compliant and secure. You need to determine the sensitivity of the user profile data in Amazon Cognito; adhere to your company’s security requirements, applicable laws and regulations; and configure your application and corresponding Amazon Cognito settings appropriately for your use case.

Note: You can learn more about regulatory frameworks and certifications at AWS Services in Scope by Compliance Program. You can download compliance reports from AWS Artifact.

We recommend that you use the authorization code flow with PKCE for single-page apps. Applications that use PKCE generate a random code verifier that’s created for every authorization request. Proof Key for Code Exchange by OAuth Public Clients has more information on use of a code verifier. In the following sections, I will show you how to set up the Amazon Cognito authorization endpoint for your app to support a code verifier.

The authorization code flow

In OpenID terms, the app is the relying party (RP) and Amazon Cognito is the OP. The flow for the authorization code flow with PKCE is as follows:

  1. The user enters the app home page URL in the browser and the browser fetches the app.
  2. The app generates the PKCE code challenge and redirects the request to the Amazon Cognito OAuth2 authorization endpoint (/oauth2/authorize).
  3. Amazon Cognito responds back to the user’s browser with the Amazon Cognito hosted sign-in page.
  4. The user signs in with their user name and password, signs up as a new user, or signs in with a federated sign-in. After a successful sign-in, Amazon Cognito returns the authorization code to the browser, which redirects the authorization code back to the app.
  5. The app sends a request to the Amazon Cognito OAuth2 token endpoint (/oauth2/token) with the authorization code, its client credentials, and the PKCE verifier.
  6. Amazon Cognito authenticates the app with the supplied credentials, validates the authorization code, validates the request with the code verifier, and returns the OpenID tokens, access token, ID token, and refresh token.
  7. The app validates the OpenID ID token and then uses the user profile information (claims) in the ID token to provide access to resources. (Optional) The app can use the access token to retrieve the user profile information from the Amazon Cognito user information endpoint (/userInfo).
  8. Amazon Cognito returns the user profile information (claims) about the authenticated user to the app. The app then uses the claims to provide access to resources.
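
To make steps 5 and 6 concrete, the token exchange is a plain HTTPS POST to the token endpoint. The following curl sketch uses the same placeholder names that appear in the sample app later in this post; the authorization code and code verifier come from steps 4 and 2.

curl -X POST "https://YOUR_COGNITO_DOMAIN_PREFIX.auth.YOUR_REGION.amazoncognito.com/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=authorization_code" \
  -d "client_id=YOUR_APPCLIENT_ID" \
  -d "code=AUTHORIZATION_CODE" \
  -d "code_verifier=CODE_VERIFIER" \
  -d "redirect_uri=YOUR_REDIRECT_URI"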

The following diagram shows the authorization code flow with PKCE.

Figure 2: Authorization code flow

Implementing an app with Amazon Cognito authentication

Now that you’ve learned about Amazon Cognito OAuth implementation, let’s create a working example app that uses Amazon Cognito OAuth implementation. You’ll create an Amazon Cognito user pool along with an app client, the app, an Amazon Simple Storage Service (Amazon S3) bucket, and an Amazon CloudFront distribution for the app, and you’ll configure the app client.

Step 1. Create a user pool

Start by creating your user pool with the default configuration.

Create a user pool:

  1. Go to the Amazon Cognito console and select Manage User Pools. This takes you to the User Pools Directory.
  2. Select Create a user pool in the upper corner.
  3. Enter a Pool name, select Review defaults, and select Create pool.
  4. Copy the Pool ID, which will be used later to create your single-page app. It will be something like region_xxxxx. You will use it to replace the variable YOUR_USERPOOL_ID in a later step. (Optional) You can add additional features to the user pool, but this demonstration uses the default configuration. For more information, see the Amazon Cognito documentation.
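
If you prefer the AWS CLI, a minimal equivalent with default settings looks like the following; the pool name is an example. The UserPool.Id value in the response is the Pool ID.

aws cognito-idp create-user-pool --pool-name my-blog-user-pool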

The following figure shows you entering the user pool name.

Figure 3: Enter a name for the user pool

The following figure shows the resulting user pool configuration.

Figure 4: Completed user pool configuration

Step 2. Create a domain name

The Amazon Cognito hosted UI lets you use your own domain name or you can add a prefix to the Amazon Cognito domain. This example uses an Amazon Cognito domain with a prefix.

Create a domain name:

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and select your user pool.
  2. Under App integration, select Domain name.
  3. In the Amazon Cognito domain section, add your Domain prefix (for example, myblog).
  4. Select Check availability. If your domain isn’t available, change the domain prefix and try again.
  5. When your domain is confirmed as available, copy the Domain prefix to use when you create your single-page app. You will use it to replace the variable YOUR_COGNITO_DOMAIN_PREFIX in a later step.
  6. Choose Save changes.
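
The AWS CLI equivalent is a single call; the domain prefix is an example and YOUR_USERPOOL_ID is the Pool ID from Step 1.

aws cognito-idp create-user-pool-domain --domain myblog --user-pool-id YOUR_USERPOOL_ID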

The following figure shows creating an Amazon Cognito hosted domain.

Figure 5: Creating an Amazon Cognito hosted UI domain

Step 3. Create an app client

Now create an app client in the user pool. An app client is where you register your app with the user pool. Generally, you create an app client for each app platform. For example, you might create an app client for a single-page app and another app client for a mobile app. Each app client has its own ID, authentication flows, and permissions to access user attributes.

Create an app client:

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and select your user pool.
  2. Under General settings, select App clients.
  3. Choose Add an app client.
  4. Enter a name for the app client in the App client name field.
  5. Uncheck Generate client secret and accept the remaining default configurations.

    Note: The client secret is used to authenticate the app client to the user pool. Generate client secret is unchecked because you don’t want to send the client secret on the URL using client-side JavaScript. The client secret is used by applications that have a server-side component that can secure the client secret.

  6. Choose Create app client as shown in the following figure.

    Figure 6: Create and configure an app client

  7. Copy the App client ID. You will use it to replace the variable YOUR_APPCLIENT_ID in a later step.
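
The AWS CLI equivalent looks like the following; passing --no-generate-secret corresponds to unchecking Generate client secret in the console, and the ClientId value in the response is the App client ID.

aws cognito-idp create-user-pool-client --user-pool-id YOUR_USERPOOL_ID \
  --client-name my-spa-client --no-generate-secret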

The following figure shows the App client ID which is automatically generated when the app client is created.

Figure 7: App client configuration

Step 4. Create an Amazon S3 website bucket

Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. We use Amazon S3 here to host a static website.

Create an Amazon S3 bucket with the following settings:

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. Choose Create bucket to start the Create bucket wizard.
  3. In Bucket name, enter a DNS-compliant name for your bucket. You will use this in a later step to replace the YOURS3BUCKETNAME variable.
  4. In Region, choose the AWS Region where you want the bucket to reside.

    Note: It’s recommended to create the Amazon S3 bucket in the same AWS Region as Amazon Cognito.

  5. Look up the region code from the region table (for example, US-East [N. Virginia] has a region code of us-east-1). You will use the region code to replace the variable YOUR_REGION in a later step.
  6. Choose Next.
  7. Select the Versioning checkbox.
  8. Choose Next.
  9. Choose Next.
  10. Choose Create bucket.
  11. Select the bucket you just created from the Amazon S3 bucket list.
  12. Select the Properties tab.
  13. Choose Static website hosting.
  14. Choose Use this bucket to host a website.
  15. For the index document, enter index.html and then choose Save.
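
If you want to script these steps instead, the following AWS CLI sketch creates the bucket, turns on versioning, and enables static website hosting. The bucket name and Region are placeholders; omit the LocationConstraint when you use us-east-1.

aws s3api create-bucket --bucket YOURS3BUCKETNAME --region YOUR_REGION \
  --create-bucket-configuration LocationConstraint=YOUR_REGION
aws s3api put-bucket-versioning --bucket YOURS3BUCKETNAME \
  --versioning-configuration Status=Enabled
aws s3 website s3://YOURS3BUCKETNAME/ --index-document index.html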

Step 5. Create a CloudFront distribution

Amazon CloudFront is a fast content delivery network service that helps securely deliver data, videos, applications, and APIs to customers globally with low latency and high transfer speeds—all within a developer-friendly environment. In this step, we use CloudFront to set up an HTTPS-enabled domain for the static website hosted on Amazon S3.

Create a CloudFront distribution (web distribution) with the following modified default settings:

  1. Sign into the AWS Management Console and open the CloudFront console.
  2. Choose Create Distribution.
  3. On the first page of the Create Distribution Wizard, in the Web section, choose Get Started.
  4. Choose the Origin Domain Name from the dropdown list. It will be YOURS3BUCKETNAME.s3.amazonaws.com.
  5. For Restrict Bucket Access, select Yes.
  6. For Origin Access Identity, select Create a New Identity.
  7. For Grant Read Permission on Bucket, select Yes, Update Bucket Policy.
  8. For the Viewer Protocol Policy, select Redirect HTTP to HTTPS.
  9. For Cache Policy, select Managed-Caching Disabled.
  10. Set the Default Root Object to index.html. (Optional) Add a comment. Comments are a good place to describe the purpose of your distribution, for example, “Amazon Cognito SPA.”
  11. Select Create Distribution. The distribution will take a few minutes to create and update.
  12. Copy the Domain Name. This is the CloudFront distribution domain name, which you will use in a later step as the DOMAINNAME value in the YOUR_REDIRECT_URI variable.

Step 6. Create the app

Now that you’ve created the Amazon S3 bucket for static website hosting and the CloudFront distribution for the site, you’re ready to use the code that follows to create a sample app.

Use the following information from the previous steps:

  1. YOUR_COGNITO_DOMAIN_PREFIX is from Step 2.
  2. YOUR_REGION is the AWS region you used in Step 4 when you created your Amazon S3 bucket.
  3. YOUR_APPCLIENT_ID is the App client ID from Step 3.
  4. YOUR_USERPOOL_ID is the Pool ID from Step 1.
  5. YOUR_REDIRECT_URI, which is https://DOMAINNAME/index.html, where DOMAINNAME is your domain name from Step 5.

Create userprofile.js

Use the following text to create the userprofile.js file. Substitute the values from the previous steps for the variables in the text.

var myHeaders = new Headers();
myHeaders.set('Cache-Control', 'no-store');
var urlParams = new URLSearchParams(window.location.search);
var tokens;
var domain = "YOUR_COGNITO_DOMAIN_PREFIX";
var region = "YOUR_REGION";
var appClientId = "YOUR_APPCLIENT_ID";
var userPoolId = "YOUR_USERPOOL_ID";
var redirectURI = "YOUR_REDIRECT_URI";

//Convert Payload from Base64-URL to JSON
const decodePayload = payload => {
  const cleanedPayload = payload.replace(/-/g, '+').replace(/_/g, '/');
  const decodedPayload = atob(cleanedPayload)
  const uriEncodedPayload = Array.from(decodedPayload).reduce((acc, char) => {
    const uriEncodedChar = ('00' + char.charCodeAt(0).toString(16)).slice(-2)
    return `${acc}%${uriEncodedChar}`
  }, '')
  const jsonPayload = decodeURIComponent(uriEncodedPayload);

  return JSON.parse(jsonPayload)
}

//Parse JWT Payload
const parseJWTPayload = token => {
    const [header, payload, signature] = token.split('.');
    const jsonPayload = decodePayload(payload)

    return jsonPayload
};

//Parse JWT Header
const parseJWTHeader = token => {
    const [header, payload, signature] = token.split('.');
    const jsonHeader = decodePayload(header)

    return jsonHeader
};

//Generate a Random String
const getRandomString = () => {
    const randomItems = new Uint32Array(28);
    crypto.getRandomValues(randomItems);
    const binaryStringItems = randomItems.map(dec => `0${dec.toString(16).substr(-2)}`)
    return binaryStringItems.reduce((acc, item) => `${acc}${item}`, '');
}

//Encrypt a String with SHA256
const encryptStringWithSHA256 = async str => {
    const PROTOCOL = 'SHA-256'
    const textEncoder = new TextEncoder();
    const encodedData = textEncoder.encode(str);
    return crypto.subtle.digest(PROTOCOL, encodedData);
}

//Convert Hash to Base64-URL
const hashToBase64url = arrayBuffer => {
    const items = new Uint8Array(arrayBuffer)
    const stringifiedArrayHash = items.reduce((acc, i) => `${acc}${String.fromCharCode(i)}`, '')
    const decodedHash = btoa(stringifiedArrayHash)

    const base64URL = decodedHash.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
    return base64URL
}

// Main Function
async function main() {
  var code = urlParams.get('code');

  //If code not present then request code else request tokens
  if (code == null){

    // Create random "state"
    var state = getRandomString();
    sessionStorage.setItem("pkce_state", state);

    // Create PKCE code verifier
    var code_verifier = getRandomString();
    sessionStorage.setItem("code_verifier", code_verifier);

    // Create code challenge
    var arrayHash = await encryptStringWithSHA256(code_verifier);
    var code_challenge = hashToBase64url(arrayHash);
    sessionStorage.setItem("code_challenge", code_challenge)

    // Redirect user-agent to /authorize endpoint
    location.href = "https://"+domain+".auth."+region+".amazoncognito.com/oauth2/authorize?response_type=code&state="+state+"&client_id="+appClientId+"&redirect_uri="+redirectURI+"&scope=openid&code_challenge_method=S256&code_challenge="+code_challenge;
  } else {

    // Verify state matches
    state = urlParams.get('state');
    if (sessionStorage.getItem("pkce_state") != state) {
      alert("Invalid state");
    } else {

      // Fetch OAuth2 tokens from Cognito
      code_verifier = sessionStorage.getItem('code_verifier');
      await fetch("https://"+domain+".auth."+region+".amazoncognito.com/oauth2/token?grant_type=authorization_code&client_id="+appClientId+"&code_verifier="+code_verifier+"&redirect_uri="+redirectURI+"&code="+code, {
        method: 'post',
        headers: {
          'Content-Type': 'application/x-www-form-urlencoded'
        }})
        .then((response) => {
          return response.json();
        })
        .then((data) => {

          // Verify id_token
          tokens = data;
          var idVerified = verifyToken(tokens.id_token);
          Promise.resolve(idVerified).then(function(value) {
            if (value.localeCompare("verified")){
              alert("Invalid ID Token - " + value);
              return;
            }
          });

          // Display tokens
          document.getElementById("id_token").innerHTML = JSON.stringify(parseJWTPayload(tokens.id_token), null, '\t');
          document.getElementById("access_token").innerHTML = JSON.stringify(parseJWTPayload(tokens.access_token), null, '\t');
        });

      // Fetch from /user_info
      await fetch("https://"+domain+".auth."+region+".amazoncognito.com/oauth2/userInfo", {
        method: 'post',
        headers: {
          'authorization': 'Bearer ' + tokens.access_token
        }})
        .then((response) => {
          return response.json();
        })
        .then((data) => {
          // Display user information
          document.getElementById("userInfo").innerHTML = JSON.stringify(data, null, '\t');
        });
    }
  }
}
main();

Create the verifier.js file

Use the following text to create the verifier.js file.

var key_id;
var keys;
var key_index;

//verify token
async function verifyToken (token) {
  //get Cognito keys
  keys_url = 'https://cognito-idp.'+ region +'.amazonaws.com/' + userPoolId + '/.well-known/jwks.json';
  await fetch(keys_url)
    .then((response) => {
      return response.json();
    })
    .then((data) => {
      keys = data['keys'];
    });

  //Get the kid (key id)
  var tokenHeader = parseJWTHeader(token);
  key_id = tokenHeader.kid;

  //search for the kid key id in the Cognito Keys
  const key = keys.find(key => key.kid === key_id)
  if (key === undefined){
    return "Public key not found in Cognito jwks.json";
  }

  //verify JWT Signature
  var keyObj = KEYUTIL.getKey(key);
  var isValid = KJUR.jws.JWS.verifyJWT(token, keyObj, {alg: ["RS256"]});
  if (isValid){
  } else {
    return("Signature verification failed");
  }

  //verify token has not expired
  var tokenPayload = parseJWTPayload(token);
  if (Date.now() >= tokenPayload.exp * 1000) {
    return("Token expired");
  }

  //verify app_client_id
  var n = tokenPayload.aud.localeCompare(appClientId)
  if (n != 0){
    return("Token was not issued for this audience");
  }
  return("verified");
};

Create an index.html file

Use the following text to create the index.html file.

<!doctype html>

<html lang="en">
<head>
<meta charset="utf-8">

<title>MyApp</title>
<meta name="description" content="My Application">
<meta name="author" content="Your Name">
</head>

<body>
<h2>Cognito User</h2>

<p style="white-space:pre-line;" id="token_status"></p>

<p>Id Token</p>
<p style="white-space:pre-line;" id="id_token"></p>

<p>Access Token</p>
<p style="white-space:pre-line;" id="access_token"></p>

<p>User Profile</p>
<p style="white-space:pre-line;" id="userInfo"></p>
<script language="JavaScript" type="text/javascript"
src="https://kjur.github.io/jsrsasign/jsrsasign-latest-all-min.js">
</script>
<script src="js/verifier.js"></script>
<script src="js/userprofile.js"></script>
</body>
</html>

Upload the files into the Amazon S3 Bucket you created earlier

Upload the files you just created to the Amazon S3 bucket that you created in Step 4. If you’re using the Chrome or Firefox browser, you can choose the folders and files to upload and then drag and drop them into the destination bucket. Dragging and dropping is the only way to upload folders in the console.

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. In the Bucket name list, choose the name of the bucket that you created earlier in Step 4.
  3. In a window other than the console window, select the index.html file to upload. Then drag and drop the file into the console window that lists the destination bucket.
  4. In the Upload dialog box, choose Upload.
  5. Choose Create Folder.
  6. Enter the name js and choose Save.
  7. Choose the js folder.
  8. In a window other than the console window, select the userprofile.js and verifier.js files to upload. Then drag and drop the files into the console window js folder.

    Note: The Amazon S3 bucket root will contain the index.html file and a js folder. The js folder will contain the userprofile.js and verifier.js files.
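
Alternatively, you can upload the files with the AWS CLI; the bucket name is a placeholder.

aws s3 cp index.html s3://YOURS3BUCKETNAME/
aws s3 cp userprofile.js s3://YOURS3BUCKETNAME/js/
aws s3 cp verifier.js s3://YOURS3BUCKETNAME/js/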

Step 7. Configure the app client settings

Use the Amazon Cognito console to configure the app client settings, including identity providers, OAuth flows, and OAuth scopes.

Configure the app client settings:

  1. Go to the Amazon Cognito console.
  2. Choose Manage your User Pools.
  3. Select your user pool.
  4. Select App integration, and then select App client settings.
  5. Under Enabled Identity Providers, select Cognito User Pool. (Optional) You can add federated identity providers. Adding User Pool Sign-in Through a Third-Party has more information about how to add federation providers.
  6. Enter the Callback URL(s) where the user is to be redirected after successfully signing in. The callback URL is the URL of your web app that will receive the authorization code. In our example, this will be the Domain Name for the CloudFront distribution you created earlier. It will look something like https://DOMAINNAME/index.html where DOMAINNAME is xxxxxxx.cloudfront.net.

    Note: HTTPS is required for the Callback URLs. For this example, I used CloudFront as a HTTPS endpoint for the app in Amazon S3.

  7. Next, select Authorization code grant from the Allowed OAuth Flows and OpenID from Allowed OAuth Scopes. The OpenID scope will return the ID token and grant access to all user attributes that are readable by the client.
  8. Choose Save changes.

Step 8. Show the app home page

Now that the Amazon Cognito user pool is configured and the sample app is built, you can test using Amazon Cognito as an OP from the sample JavaScript app you created in Step 6.

View the app’s home page:

  1. Open a web browser and enter the app’s home page URL using the CloudFront distribution to serve your index.html page created in Step 6 (https://DOMAINNAME/index.html) and the app will redirect the browser to the Amazon Cognito /authorize endpoint.
  2. The /authorize endpoint redirects the browser to the Amazon Cognito hosted UI, where the user can sign in or sign up. The following figure shows the user sign-in page.

    Figure 8: User sign-in page

Step 9. Create a user

You can use the Amazon Cognito user pool to manage your users or you can use a federated identity provider. Users can sign in or sign up from the Amazon Cognito hosted UI or from a federated identity provider. If you configured a federated identity provider, users will see a list of federated providers that they can choose from. When a user chooses a federated identity provider, they are redirected to the federated identity provider sign-in page. After signing in, the browser is directed back to Amazon Cognito. For this post, Amazon Cognito is the only identity provider, so you will use the Amazon Cognito hosted UI to create an Amazon Cognito user.

Create a new user using Amazon Cognito hosted UI:

  1. Create a new user by selecting Sign up and entering a username, password, and email address. Then select the Sign up button. The following figure shows the sign up screen.

    Figure 9: Sign up with a new account

  2. The Amazon Cognito sign up workflow will verify the email address by sending a verification code to that address. The following figure shows the prompt to enter the verification code.

    Figure 10: Enter the verification code

  3. Enter the code from the verification email in the Verification Code text box.
  4. Select Confirm Account.

Step 10. Viewing the Amazon Cognito tokens and profile information

After authentication, the app displays the tokens and user information. The following figure shows the OAuth2 access token and OIDC ID token that are returned from the /token endpoint and the user profile returned from the /userInfo endpoint. Now that the user has been authenticated, the application can use the user’s email address to look up the user’s account information in an application data store. Based on the user’s account information, the application can grant/restrict access to paid content or show account information like order history.

Figure 11: Token and user profile information

Note: Many browsers will cache redirects. If your browser is repeatedly redirecting to the index.html page, clear the browser cache.

Summary

In this post, we’ve shown you how easy it is to add user authentication to your web and mobile apps with Amazon Cognito.

We created a Cognito User Pool as our user directory, assigned a domain name to the Amazon Cognito hosted UI, and created an application client for our application. Then we created an Amazon S3 bucket to host our website. Next, we created a CloudFront distribution for our Amazon S3 bucket. Then we created our application and uploaded it to our Amazon S3 website bucket. From there, we configured the client app settings with our identity provider, OAuth flows, and scopes. Then we accessed our application and used the Amazon Cognito sign-in flow to create a username and password. Finally, we logged into our application to see the OAuth and OIDC tokens.

Amazon Cognito saves you time and effort when implementing authentication with an intuitive UI, OAuth2 and OIDC support, and customizable workflows. You can now focus on building features that are important to your core business.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

George Conti

George is a Solution Architect for the AWS Financial Services team. He is passionate about technology and helping financial services companies build solutions with AWS services.

Architecting for database encryption on AWS

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/architecting-for-database-encryption-on-aws/

In this post, I review the options you have to protect your customer data when migrating or building new databases in Amazon Web Services (AWS). I focus on how you can support sensitive workloads in ways that help you maintain compliance and regulatory obligations, and meet security objectives.

Understanding transparent data encryption

I commonly see enterprise customers migrating existing databases straight from on-premises to AWS without reviewing their design. This might seem simpler and faster, but they miss the opportunity to review the scalability, cost savings, and feature capability of native cloud services. A straight lift-and-shift migration can also create unnecessary operational overhead, carry over unneeded complexity, and result in more time spent troubleshooting and responding to events over time.

One example is when enterprise customers who are using Transparent Data Encryption (TDE) or Extensible Key Management (EKM) technologies want to reuse the same technologies in their migration to AWS. TDE and EKM are database technologies that encrypt and decrypt database records as the records are written and read to the underlying storage medium. Customers use TDE features in Microsoft SQL Server, Oracle 10g and 11g, and Oracle Enterprise Edition to meet requirements for data-at-rest encryption. This shouldn’t mean that TDE is the requirement. It’s infrequent that an organizational policy or compliance framework specifies a technology such as TDE in the actual requirement. For example, the Payment Card Industry Data Security Standard (PCI-DSS) standard requires that sensitive data must be protected using “Strong cryptography with associated key-management processes and procedures.” Nowhere does PCI-DSS endorse or require the use of a specific technology.

Understanding risks

It’s important that you understand the risks that encryption-at-rest mitigates before selecting a technology to use. Encryption-at-rest, in the context of databases, generally manages the risk that one of the disks used to store database data is physically stolen and thus compromised. In on-premises scenarios, TDE is an effective technology used to manage this risk. All data from the database—up to and including the disk—is encrypted. The database manages all key management and cryptographic operations. You can also use TDE with a hardware security module (HSM) so that the keys and cryptography for the database are managed outside of the database itself. In TDE implementations, the HSM is used only to manage the key encryption keys (KEK), and not the data encryption keys (DEK) themselves. The DEKs are in volatile memory in the database at runtime, and so the cryptographic operations occur on the database itself.

You can also use native operating system encryption technologies such as dm-crypt or LUKS (Linux Unified Key Setup). Dm-crypt is a full disk encryption (FDE) subsystem in Linux kernel version 2.6 and beyond. Dm-crypt can be used on its own or with LUKS as an extension to add more features. When using dm-crypt, the operating system kernel is responsible for encrypting and decrypting data as it’s written and read from the attached volumes. This would achieve the same outcome as TDE—data written and read to the disk volume is encrypted, and the risk related to physical disk compromise is managed. DEKs are in runtime memory of the machine running the database.
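
As an illustration of that approach, the following sketch initializes LUKS on an attached EBS volume and mounts the mapped device for the database. The device name and mount point are examples and will differ in your environment.

# Initialize LUKS on the raw device and open it as a mapped device
sudo cryptsetup luksFormat /dev/xvdf
sudo cryptsetup luksOpen /dev/xvdf dbdata

# Create a file system on the mapped device and mount it as the database data directory
sudo mkfs.ext4 /dev/mapper/dbdata
sudo mount /dev/mapper/dbdata /var/lib/mysql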

With some TDE implementations, you can encrypt tables, rows, columns, and cells with different DEKs to achieve granular separation of duties between operators. Customers can then configure TDE to authorize access to each DEK based on database login credentials and job function, helping to manage risks associated with unauthorized access. However, the most common configuration I’ve seen is to rely on whole database encryption when using TDE. This configuration gives similar protection against the identified risks as dm-crypt with LUKS used without an HSM, since the DEKs and KEKs are stored within the instance in both cases and the result is that the database data on disk is encrypted.

Using encryption to manage data at rest risks in AWS

When you move to AWS, you gain additional security capabilities that can simplify your security implementations. Since the announcement of the AWS Key Management Service (AWS KMS) in 2014, it has been tightly integrated with Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), and dozens of other services on AWS. This means that data is encrypted on disk by checking a single check box. Furthermore, you get the benefits of AWS KMS for key management and cryptographic operations, while being transparent to the Amazon Elastic Compute Cloud (Amazon EC2) instance where the data is being encrypted and decrypted. For simplicity, the authorization for access to the data is managed entirely by AWS Identity and Access Management (IAM) and AWS KMS key resource policies.
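
For example, you can opt in to EBS encryption by default for a Region, or create an individual encrypted volume against a specific KMS key. The Availability Zone and key alias below are example values.

# Encrypt all new EBS volumes in this Region by default
aws ec2 enable-ebs-encryption-by-default --region ap-southeast-2

# Or create a single encrypted volume with a specific KMS key
aws ec2 create-volume --region ap-southeast-2 --availability-zone ap-southeast-2a \
  --size 100 --volume-type gp3 --encrypted --kms-key-id alias/my-db-key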

If you need more granular access control to the data, you can use the AWS Encryption SDK to encrypt data at the application layer. That provides the same effect as TDE cell-level protection, with a FIPS140-2 Level 2 validated HSM, as might be required by a recognized standard.

If you must use a FIPS140-2 Level 3 validated HSM to meet more stringent compliance standards or regulations, then you can use the Custom Key Store capability of AWS KMS to achieve that—again in a transparent way. This option has a trade-off, as there is additional operational overhead in terms of managing an AWS CloudHSM cluster.

Many customers choose to migrate their database into the managed Amazon Relational Database Service (Amazon RDS), rather than managing the database instance themselves. Like the Amazon EC2 service, RDS uses Amazon EBS volumes for its data storage, and so can seamlessly use AWS KMS for encryption at rest functionality. When you do so, your management overhead for the protection of data-at-rest reduces to almost zero. This lets you focus on business value while AWS is responsible for the management of your database and the protection of the underlying data. The next section reviews this option and others in more detail.

You can review the available Amazon RDS database engines and versions via the Amazon RDS User Guide documentation, or by running the following AWS Command Line Interface (AWS CLI) command:

aws rds describe-db-engine-versions --query "DBEngineVersions[].DBEngineVersionDescription" --region <regionIdentifier>

Recommended solutions

If you’re moving an existing database to AWS, you have the following solutions for data at rest encryption. I go into more detail for each option below.

Table 1 – Encryption options

| Option | Database management | Host       | Encryption | Key management           |
|--------|---------------------|------------|------------|--------------------------|
| 1      | Amazon managed      | Amazon RDS | Amazon EBS | AWS KMS                  |
| 2      | Amazon managed      | Amazon RDS | Amazon EBS | AWS KMS Custom Key Store |
| 3      | Customer managed    | Amazon EC2 | Amazon EBS | AWS KMS                  |
| 4      | Customer managed    | Amazon EC2 | Amazon EBS | AWS KMS Custom Key Store |
| 5      | Customer managed    | Amazon EC2 | Amazon EBS | LUKS                     |
| 6      | Customer managed    | Amazon EC2 | Database   | Database TDE             |
| 7      | Customer managed    | Amazon EC2 | Database   | CloudHSM                 |

Option 1 – Using Amazon RDS with Amazon EBS encryption and key management provided by AWS KMS

This approach uses the Amazon RDS service where AWS manages the operating system and database engine. You can configure this service to be a highly scalable resource spanning multiple Availability Zones within an AWS Region to provide resiliency. AWS KMS manages the keys that are used to encrypt the attached Amazon EBS volumes at rest.

Note: This configuration is recommended as your default database encryption approach.
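As a minimal sketch, the following AWS CLI command creates an encrypted Amazon RDS instance; the identifier, engine, instance class, credentials, and key alias are placeholders for your own values.

aws rds create-db-instance \
    --db-instance-identifier my-encrypted-db \
    --engine mysql \
    --db-instance-class db.t3.medium \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password '<strong-password>' \
    --storage-encrypted \
    --kms-key-id alias/my-database-key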

Benefits

  • No key management requirement on host; key management is automated and performed by AWS KMS
  • Meets FIPS140-2 Level 2 validation requirements
  • Simple vertical and horizontal scalability
  • Snapshots for recovery are encrypted automatically
  • AWS manages the patching, maintenance, and configuration of the operating system and database engine
  • Well-recognized configuration, with support offered through AWS Support
  • AWS KMS costs are comparatively low

Challenges

  • Dependent on Amazon RDS supported engines and versions
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 2 – Using Amazon RDS with Amazon EBS encryption and key management provided by AWS KMS custom key store

This approach uses the Amazon RDS service where AWS manages the operating system and database engine. You can configure this service to be a highly scalable resource spanning multiple Availability Zones within a Region to provide resiliency. CloudHSM keys are used via AWS KMS service integration to encrypt the Amazon EBS volumes at rest.

Note: This configuration is recommended where FIPS140-2 Level 3 validation is a specified compliance requirement.

Benefits

  • No key management requirement on host; key management is performed by AWS KMS
  • Meets FIPS140-2 Level 3 validation requirements
  • Simple vertical and horizontal scalability
  • Snapshots for recovery are encrypted automatically
  • AWS manages the patching, maintenance, and configuration of the database engine
  • Well-recognized configuration with support offered through AWS Support

Challenges

  • Dependent on Amazon RDS supported engines and versions
  • You are responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 3 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by KMS

In this approach, the key difference is that you’re responsible for managing the EC2 instances, operating systems, and database engines. You can still configure your databases to be highly scalable resources spanning multiple Availability Zones within a Region to provide resiliency, but it takes more effort. AWS KMS manages the keys that are used to encrypt the attached Amazon EBS volumes at rest.

Note: This configuration is recommended when Amazon RDS doesn’t support the desired database engine type or version.
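For example, you can encrypt the database's data volume at launch time; the AMI ID, instance type, device name, and key alias below are placeholders.

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --block-device-mappings '[{
        "DeviceName": "/dev/xvdf",
        "Ebs": { "VolumeSize": 200, "Encrypted": true, "KmsKeyId": "alias/my-database-key" }
    }]'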

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Key rotation and management is handled transparently by AWS
  • Data encryption keys are managed by the hypervisor, not by your EC2 instance
  • AWS KMS costs are comparatively low

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 4 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by KMS custom key store

In this approach, you are again responsible for managing the EC2 instances, operating systems, and database engines. You can still configure your databases to be highly scalable resources spanning multiple Availability Zones within a Region to provide resiliency, but it takes more effort. And similar to Option 2, CloudHSM keys are used via AWS KMS service integration to encrypt the Amazon EBS volumes at rest.

Note: This configuration is recommended when Amazon RDS doesn’t support the desired database engine type or version and when FIPS140-2 Level 3 compliance is required.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Data encryption keys managed by the hypervisor, not by your EC2 instance
  • Keys managed by FIPS140-2 Level 3 validated HSM

Challenges

  • You’re responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • You’re responsible for patching and updates of the database engine and OS
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 5 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by LUKS

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. You also need to install LUKS onto the Linux instance to manage the encryption of data on Amazon EBS.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Transparent encryption is managed by OS with LUKS

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Data encryption keys are managed directly on the EC2 instance, and not a dedicated key management system
  • Scaling must be vertical, which is slow and costly
  • LUKS support comes from the open-source community rather than a commercial vendor
  • Support for backup and recovery is LUKS specific, and requires additional consideration
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Note: This approach limits you to only Linux instances and requires the most technical knowledge and effort on your part. Options, such as BitLocker and SQL Server Always Encrypted, exist for Windows hosts, and the complexity and challenges are similar to those of LUKS.

Option 6 – Customer-managed database platform hosted on Amazon EC2 with database encryption and key management provided by TDE

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. However, instead of encrypting the Amazon EBS volume where the database is stored, you use TDE wallet keys managed by the database engine to encrypt and decrypt records as they are stored and retrieved.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Table, row, column, and cell level encryption are managed by TDE, reducing endpoint risks related to unauthorized access

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Costly license for TDE feature
  • Data encryption keys are managed directly on the EC2 instance
  • Scaling is dependent on TDE functionality and Amazon EC2 scaling
  • Support is split between AWS and a third-party database vendor
  • Cannot share snapshots

Note: This approach is not available with Amazon RDS.

Option 7 – Customer-managed database platform hosted on Amazon EC2 with database encryption performed by TDE and key management provided by CloudHSM

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. However, instead of encrypting the Amazon EBS volume where the database is stored, you use TDE wallet keys managed by a CloudHSM cluster to encrypt and decrypt records as they are stored and retrieved.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Wallet keys (KEK) are managed by a FIPS140-2 Level 3 validated HSM
  • Table, row, column, and cell level encryption are managed by TDE, reducing endpoint risks related to unauthorized access

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Costly license for TDE feature
  • You are responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • Integration and support of CloudHSM with TDE might vary
  • Scaling is dependent on TDE functionality, Amazon EC2 scaling, and the CloudHSM cluster
  • Data encryption keys are managed on the EC2 instance
  • Support is split between AWS and a third-party database vendor
  • Cannot share snapshots

Note: This approach is not available with Amazon RDS.

Summary

While you can operate in AWS much as you operate in your on-premises environment, the preceding configurations and recommendations show how you can significantly reduce your challenges and increase your benefits by using cloud-native security services like AWS KMS, Amazon RDS, and CloudHSM. Specifically, using Amazon RDS with Amazon EBS volumes encrypted by AWS KMS provides a highly scalable, resilient, and secure way to manage your keys in AWS.

While there might be some architectural redesign and configuration work needed to move an on-premises database into Amazon RDS, you can leverage AWS services to help you meet your compliance requirements with less effort. By offloading the OS and database maintenance responsibility to AWS, you simultaneously reduce operational friction and increase security. By migrating this way, you can benefit from the scalability and resilience of the AWS global infrastructure and expertise. Lastly, to get started with migrating your database to AWS, I encourage you to use the AWS Database Migration Service.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Author

Scott Conklin

Scott is a Senior Security Consultant with AWS Professional Services (Global Specialty Practice). Based out of Chicago with 4 years tenure, he is an avid distance runner, crypto nerd, lover of unicorns, and enjoys camping, nature, playing Minecraft with his 3 kids, and binge watching Amazon Prime with his wife.

AWS Firewall Manager helps automate security group management: 3 scenarios

Post Syndicated from Sonakshi Pandey original https://aws.amazon.com/blogs/security/aws-firewall-manager-helps-automate-security-group-management-3-scenarios/

In this post, we walk you through scenarios that use AWS Firewall Manager to centrally manage security groups across your AWS Organizations implementation. Firewall Manager is a security management tool that helps you centralize, configure, and maintain AWS WAF rules, AWS Shield Advanced protections, and Amazon Virtual Private Cloud (Amazon VPC) security groups across AWS Organizations.

A multi-account strategy provides the highest level of resource isolation, and helps you to efficiently track costs and avoid running into any API limits. Creating a separate account for each project, business unit, and development stage also enforces logical separation of your resources.

As organizations innovate, developers are constantly updating applications and, in the process, setting up new resources. Managing security groups for new resources across multiple accounts becomes complex as the organization grows. To enable developers to have control over the configuration of their own applications, you can use Firewall Manager to automate the auditing and management of VPC security groups across multiple Amazon Web Services (AWS) accounts.

Firewall Manager enables you to create security group policies and automatically implement them. You can do this across your entire organization, or limit it to specified accounts and organizational units (OU). Also, Firewall Manager lets you use AWS Config to identify and review resources that don’t comply with the security group policy. You can choose to view the accounts and resources that are out of compliance without taking corrective action, or to automatically remediate noncompliant resources.

Scenarios where AWS Firewall Manager can help manage security groups

Scenario 1: Central security group management for required security groups

Let’s consider an example where you’re running an ecommerce website. You’ve decided to use Organizations to centrally manage billing and several aspects of access, compliance, security, and sharing resources across AWS accounts. As shown in the following figure, AWS accounts that belong to the same team are grouped into OUs. In this example, the organization has a foundational OU, and multiple business OUs—ecommerce, digital marketing, and product.

Figure 1: Overview of ecommerce website

Figure 1: Overview of ecommerce website

The business OUs contain the development, test, and production accounts. Each of these accounts is managed by the developers in charge of development, test, and production stages used for the launch of the ecommerce website.

The product teams are responsible for configuring and maintaining the AWS environment according to the guidance from the security team. An intrusion detection system (IDS) has been set up to monitor infrastructure for security activity. The IDS architecture requires that an agent be installed on instances across multiple accounts. The IDS agent running on the Amazon Elastic Compute Cloud (Amazon EC2) instances protects their infrastructure from common security issues. The agent collects telemetry data used for analysis, and communicates with the central IDS instance that sits in the AWS security account. The central IDS instance analyzes the telemetry data and notifies the administrators with its findings.

For the host-based agent to communicate with the central system correctly, each Amazon EC2 instance must have specific inbound and outbound ports and specific destinations defined as allowed. To enable our product teams to focus on their applications, we want to use automation to ensure that the right network configuration is implemented so that instances can communicate with the central IDS.

You can address the preceding problem with Firewall Manager by implementing a common security group policy for required accounts. With Firewall Manager, you create a common IDS security group in the central security account and replicate it across other accounts in the ecommerce OU, as shown in the following figure.

Figure 2: Security groups central management with Firewall Manager

Figure 2: Security groups central management with Firewall Manager

Changes made to these security groups can be seamlessly propagated to all the accounts. The changes can be tracked from the Firewall Manager console as shown in figure 3. Firewall Manager propagates changes to the security groups based on the tags attached to the Amazon EC2 instance.

As shown in figure 3, with Firewall Manager you can quickly view the compliance status for each policy by looking at how many accounts are included in the scope of the policy and how many out of those are compliant or non-compliant. Firewall Manager is also integrated with AWS Security Hub, which can trigger security automation based on findings.

Figure 3: Firewall Manager findings

Figure 3: Firewall Manager findings
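If you prefer to check compliance from the command line, a sketch like the following works from the Firewall Manager administrator account; the policy ID and member account ID are placeholders.

# List the compliance status of each member account in scope of the policy
aws fms list-compliance-status --policy-id <policy-id>

# Drill into the noncompliant resources for a specific member account
aws fms get-compliance-detail \
    --policy-id <policy-id> \
    --member-account 111122223333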

Scenario 2: Clean-up of unused and redundant security groups

Firewall Manager can also help manage the clean-up of unused and redundant security groups. In a development environment, instances are often terminated post testing, but the security groups associated with those instances might remain. We want to only remove the security groups that are no longer in use to avoid causing issues with running applications.

Figure 4: Ecommerce OU, accounts, and security groups

Figure 4: Ecommerce OU, accounts, and security groups

In our example, developers are testing features in a test account. In this scenario, once the testing is completed, the instances are terminated and the security groups remain in the account. The preceding figure shows unused security groups like Test1, Test2, and Test3 in the test account.

A Firewall Manager usage audit security group policy monitors your organization for unused and redundant security groups. You can configure Firewall Manager to automatically notify you of unused, redundant, or non-compliant security groups, and to automatically remove them. These actions are applied to existing and new accounts that are added to your organization.

Scenario 3: Audit and remediate overly permissive security groups across all AWS accounts

The security team is responsible for maintaining the security of the AWS environment and must monitor and remediate overly permissive security groups across all AWS accounts. Auditing security groups for overly permissive access is a critical security function and can become inefficient and time consuming when done manually.

You can use a Firewall Manager content audit security group policy to audit and enforce your organization's security policy for risky security groups by defining allowed or blocked security group rules. This enables you to set guardrails and monitor for overly permissive rules centrally. For example, we set an allow list policy that permits Secure Shell (SSH) access only from authorized IP addresses on the corporate network.

Firewall Manager enables you to create security group policies to protect all accounts across your organization. These policies are applied to accounts or to OUs that contain specific tags, as shown in figure 5. Using the Firewall Manager console, you can get a quick view of the non-compliant security groups across accounts in your organization. Additionally, Firewall Manager can be configured to send notifications to the security administrators or automatically remove non-compliant security groups.

In the policy scope, you can choose the AWS accounts this policy applies to, the resource type, and which resource to include based on the resource tags, as shown in figure 5.

Figure 5: Edit tags for policy scope

Figure 5: Edit tags for policy scope

Conclusion

This post shares a few core use cases that enable security practitioners to build the capability to centrally manage security groups across AWS Organizations. Developers can focus on building applications, while the audit and configuration of network controls is automated by Firewall Manager. The key use cases we discussed are:

  1. Common security group policies
  2. Content audit security groups policies
  3. Usage audit security group policies

Firewall Manager is useful in a dynamic and growing multi-account AWS environment. Follow the Getting Started with Firewall Manager guide to learn more about implementing this service in your AWS environment.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sonakshi Pandey

Sonakshi is a Solutions Architect at Amazon Web Services. She helps customers migrate and optimize workloads on AWS. Sonakshi is based in Seattle and enjoys cooking, traveling, blogging, reading thriller novels, and spending time with her family.

Author

Laura Reith

Laura is a Solutions Architect at Amazon Web Services. Before AWS, she worked as a Solutions Architect in Taiwan focusing on physical security and retail analytics.

Author

Kevin Moraes

Kevin is a Partner Solutions Architect with AWS. Kevin enjoys working with customers and helping to build them in areas of Network Infrastructure, Security, and Migration conforming to best practices. When not at work, Kevin likes to travel, watch sports, and listen to music.

Isolating network access to your AWS Cloud9 environments

Post Syndicated from Brandon Wu original https://aws.amazon.com/blogs/security/isolating-network-access-to-your-aws-cloud9-environments/

In this post, I show you how to create isolated AWS Cloud9 environments for your developers without requiring ingress (inbound) access from the internet. I also walk you through optional steps to further isolate your AWS Cloud9 environment by removing egress (outbound) access. Until recently, AWS Cloud9 required you to allow ingress Secure Shell (SSH) access from authorized AWS Cloud9 IP addresses. Now AWS Cloud9 allows you to create and run your development environments within your isolated Amazon Virtual Private Cloud (Amazon VPC), without direct connectivity from the internet, adding an additional layer of security.

AWS Cloud9 is an integrated development environment (IDE) that lets you write, run, edit, and debug code using only a web browser. Developers who use AWS Cloud9 have access to an isolated environment where they can innovate, experiment, develop, and perform early testing without impacting the overall security and stability of other environments. By using AWS Cloud9, you can store your code securely in a version control system (like AWS CodeCommit), configure your AWS Cloud9 EC2 development environments to use encrypted Amazon Elastic Block Store (Amazon EBS) volumes, and share your environments within the same account.

Solution overview

Before enhanced virtual private cloud (VPC) support was available, AWS Cloud9 required you to allow ingress Secure Shell (SSH) access from authorized AWS Cloud9 IP addresses in order to use the IDE. The addition of private VPC support enables you to create and run AWS Cloud9 environments in private subnets without direct connectivity from the internet. You can use VPC security groups to configure the ingress and egress traffic that you allow, or choose to disallow all traffic.

Since this feature uses AWS Systems Manager to support using AWS Cloud9 in private subnets, it's worth taking a minute to read and understand a bit about it before you continue. Systems Manager Session Manager provides an interactive shell connection between AWS Cloud9 and its associated Amazon Elastic Compute Cloud (Amazon EC2) instance in the Amazon Virtual Private Cloud (Amazon VPC). The AWS Cloud9 instance initiates an egress connection to the Session Manager service using the pre-installed Systems Manager agent. To use this feature, your developers' IAM policies must grant access to the instances managed by Session Manager.

When you create an AWS Cloud9 no-ingress EC2 instance (with access via Systems Manager) in a private subnet, its security group doesn't have an ingress rule to allow incoming network traffic. The security group does, however, have an egress rule that permits egress traffic from the instance. AWS Cloud9 requires this to download packages and libraries to keep the AWS Cloud9 IDE up to date.

If you want to prevent egress connectivity in addition to ingress traffic for the instance, you can configure Systems Manager to use an interface VPC endpoint. This allows you to restrict egress connections from your environment and ensure the encrypted connections between the AWS Cloud9 EC2 instance and Systems Manager are carried over the AWS global network. The architecture of accessing your AWS Cloud9 instance using Systems Manager and interface VPC endpoints is shown in Figure 1.
 

Figure 1: Accessing AWS Cloud9 environment via AWS Systems Manager and Interface VPC Endpoints

Figure 1: Accessing AWS Cloud9 environment via AWS Systems Manager and Interface VPC Endpoints

Note: The use of interface VPC endpoints incurs an additional charge for each hour your VPC endpoints remain provisioned. This is in addition to the AWS Cloud9 EC2 instance cost.

Prerequisites

You must have a VPC configured with an attached internet gateway, public and private subnets, and a network address translation (NAT) gateway created in your public subnet. Your VPC must also have DNS resolution and DNS hostnames options enabled. To learn more, you can visit Working with VPCs and subnets, Internet gateways, and NAT gateways.

You must also give your developers access to their AWS Cloud9 environments managed by Session Manager.

AWS Cloud9 requires egress access to the internet for some features, including downloading required libraries or packages needed for updates to the IDE and running AWS Lambda functions. If you don’t want to allow egress internet access for your environment, you can create your VPC without an attached internet gateway, public subnet, and NAT gateway.

Implement the solution

To set up AWS Cloud9 with access via Systems Manager:

  1. Optionally, if no egress access is required, set up interface VPC endpoints for Session Manager
  2. Create a no-ingress Amazon EC2 instance for your AWS Cloud9 environment

(Optional) Set up interface VPC endpoints for Session Manager

Note: For no-egress environments only.

You can skip this step if you don’t need your VPC to restrict egress access. If you need your environment to restrict egress access, continue.

Start by using the AWS Management Console to configure Systems Manager to use an interface VPC endpoint (powered by AWS PrivateLink). If you’d prefer, you can use this custom AWS CloudFormation template to configure the VPC endpoints.

Interface endpoints allow you to privately access Amazon EC2 and Systems Manager APIs by using a private IP address. This also restricts all traffic between your managed instances, Systems Manager, and Amazon EC2 to the Amazon network. Using the interface VPC endpoint, you don't need to set up an internet gateway, a NAT device, or a virtual private gateway.

To set up interface VPC endpoints for Session Manager

  1. Create a VPC security group to allow ingress access over HTTPS (port 443) from the subnet where you will deploy your AWS Cloud9 environment. This is applied to your interface VPC endpoints to allow connections from your AWS Cloud9 instance to use Systems Manager.
  2. Create a VPC endpoint.
  3. In the list of Service Names, select com.amazonaws.<region>.ssm service as shown in Figure 2.
     
    Figure 2: AWS PrivateLink service selection filter

    Figure 2: AWS PrivateLink service selection filter

  4. Select your VPC and the private subnets that you want to associate the interface VPC endpoint with.
  5. Choose Enable for this endpoint for the Enable DNS name setting.
  6. Select the security group you created in Step 1.
  7. Add any optional tags for the interface VPC endpoint.
  8. Choose Create endpoint.
  9. Repeat Steps 2 through 8 to create interface VPC endpoints for the com.amazonaws.<region>.ssmmessages and com.amazonaws.<region>.ec2messages services.
  10. When all three interface VPC endpoints have a status of available, you can move to the next procedure.
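If you'd rather script these steps, a minimal AWS CLI sketch for the first endpoint looks like the following; the VPC, subnet, and security group IDs and the Region are placeholders, and you repeat the command for the ssmmessages and ec2messages service names.

aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0abc1234def567890 \
    --service-name com.amazonaws.us-east-1.ssm \
    --subnet-ids subnet-0abc1234def567890 \
    --security-group-ids sg-0abc1234def567890 \
    --private-dns-enabled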

Create a no-ingress Amazon EC2 instance for your AWS Cloud9 environment

Deploy a no-ingress Amazon EC2 instance for your AWS Cloud9 environment using the console. Optionally, you can use this custom AWS CloudFormation template to create the no-ingress Amazon EC2 instance. You can also use the AWS Command Line Interface, or AWS Cloud9 API to set up your AWS Cloud9 environment with access via Systems Manager.
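For reference, a minimal AWS CLI sketch of that option is shown below; the environment name, instance type, and subnet ID are placeholders, and newer CLI versions might also require an --image-id parameter.

aws cloud9 create-environment-ec2 \
    --name my-private-ide \
    --instance-type t3.small \
    --subnet-id subnet-0abc1234def567890 \
    --connection-type CONNECT_SSM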

As part of this process, AWS Cloud9 automatically creates three IAM resources pre-configured with the appropriate permissions:

  • An IAM service-linked role (AWSServiceRoleForAWSCloud9)
  • A service role (AWSCloud9SSMAccessRole)
  • An instance profile (AWSCloud9SSMInstanceProfile)

The AWSCloud9SSMAccessRole and AWSCloud9SSMInstanceProfile are attached to your AWS Cloud9 EC2 instance. This service role for Amazon EC2 is configured with the minimum permissions required to integrate with Session Manager. By default, AWS Cloud9 makes managed temporary AWS access credentials available to you in the environment. If you need to grant additional permissions to your AWS Cloud9 instance to access other services, you can create a new role and instance profile and attach it to your AWS Cloud9 instance.

By default, your AWS Cloud9 environment is created with a VPC security group with no ingress access and allowing egress access so the AWS Cloud9 IDE can download required libraries or packages needed for urgent updates to IDE plugins. You can optionally configure your AWS Cloud9 environment to restrict egress access by removing the egress rules in the security group. If you restrict egress access, some features won’t work (for example, the AWS Lambda plugin and updates to IDE plugins).

To use the console to create your AWS Cloud9 environment

  1. Navigate to the AWS Cloud9 console.
  2. Select Create environment on the top right of the console.
  3. Enter a Name and Description.
  4. Select Next step.
  5. Select Create a new no-ingress EC2 instance for your environment (access via Systems Manager) as shown in Figure 3.
     
    Figure 3: AWS Cloud9 environment settings

    Figure 3: AWS Cloud9 environment settings

  6. Select your preferred Instance type, Platform, and Cost-saving setting.
  7. You can optionally configure the Network settings to select the Network (VPC) and private Subnet to create your AWS Cloud9 instance.
  8. Select Next step.

Your AWS Cloud9 environment is ready to use. You can access your AWS Cloud9 environment console via Session Manager using encrypted connections over the AWS global network as shown in Figure 4.
 

Figure 4: AWS Cloud9 instance console access

Figure 4: AWS Cloud9 instance console access

You can see that this AWS Cloud9 connection is using Session Manager by navigating to the Session Manager console and viewing the active sessions as shown in Figure 5.
 

Figure 5: AWS Systems Manager Session Manager active sessions

Figure 5: AWS Systems Manager Session Manager active sessions

Summary

Security teams are charged with providing secure operating environments without inhibiting developer productivity. With the ability to deploy your AWS Cloud9 environment instances in a private subnet, you can provide a seamless experience for developing applications using the AWS Cloud9 IDE while enabling security teams to enforce key security controls to protect their corporate networks and intellectual property.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Cloud9 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Brandon Wu

Brandon is a security solutions architect helping financial services organizations secure their critical workloads on AWS. In his spare time, he enjoys exploring outdoors and experimenting in the kitchen.

On-Demand SCIM provisioning of Azure AD to AWS SSO with PowerShell

Post Syndicated from Natalie Doerr original https://aws.amazon.com/blogs/security/on-demand-scim-provisioning-of-azure-ad-to-aws-sso-with-powershell/

In this post, I will demonstrate how you can use a PowerShell script to initiate an on-demand synchronization between Azure Active Directory and AWS Single Sign-On (AWS SSO) and avoid the default 40-minute synchronization schedule between both identity providers. This solution helps enterprises quickly synchronize changes made to users, groups, or permissions within Azure AD with AWS SSO. This allows user or permission changes to be quickly reflected in associated AWS accounts.

Prerequisites

You need the following to complete this session:

This post focuses on the steps needed to set up the on-demand sync solution. You can find specifics on how to set up and use PowerShell and the Azure PowerShell modules at Installing Azure PowerShell.
 

Figure 1: Triggering the SCIM Endpoint to sync all users and groups

Figure 1: Triggering the SCIM Endpoint to sync all users and groups

Grant permission to the Graph API to access the Default Directory in Azure AD

To get started, grant the permissions needed for the application to have access to the directory endpoint.

To grant permissions

  1. Sign in to the Azure Portal and navigate to the Azure AD dashboard.
  2. From the left navigation pane, select App registrations. If you don’t see your application listed, select the All applications tab.
    For this example, I’m using an application named AWS.
     
    Figure 2: Select the AWS app registration

    Figure 2: Select the AWS app registration

  3. Choose API permissions from the navigation pane.
  4. Choose the Add a permission option.
     
    Figure 3: Select the Add API permission

    Figure 3: Select the Add API permission

  5. From the settings page that opens, choose the Microsoft Graph option.
     
    Figure 4: Request API permissions

    Figure 4: Request API permissions

    Under What type of permissions does your application require, select Delegated permissions and enter directory.readwrite.all in the permissions search field. Select Directory.ReadWrite.All and choose Add permissions at the bottom of the page.
     

    Figure 5: Request API permissions - Add permissions

    Figure 5: Request API permissions – Add permissions

  6. On the API permissions page, choose Grant admin consent for Default Directory and select Yes.
     
    Figure 6: Grant permission for the account to have administrator permissions

    Figure 6: Grant permission for the account to have administrator permissions

Create a certificate and secret to access the application

To get started, create a certificate and secret which grants secure access to the AWS application.

To create a certificate and secret

  1. Choose Certificates & secrets from the left navigation menu and then choose New client secret.
     
    Figure 7: Creating a client secret for 1 year

    Figure 7: Creating a client secret for 1 year

  2. Select the desired expiration for the client secret.
  3. Provide a description and choose Add.
    1. Copy the value of the client secret that's generated and save it to use later in this process.
    2. After you’ve saved the value to use later, select Home from the top left corner of the screen.
    Figure 8: Make sure you click Copy to clipboard to store the value of the secret

    Figure 8: Make sure you click Copy to clipboard to store the value of the secret

Create a user with permissions to run the code

Now that you’ve given your application access to the directory, let’s create a user and assign the proper permissions to run the code.

To create a user and assign permissions

  1. Choose Azure Active Directory from the Azure services list.
  2. Choose Users and select New user. The User name, First name, and Last name fields are required. In this example, I set the User name and First name to Auth and the Last name to User.
    1. Take note of the password that is set for this user and save it to use later.
    2. Once completed, choose Create.
    Figure 9: Create a user in Azure AD

    Figure 9: Create a user in Azure AD

  3. Select the newly created user from the list.
    1. On the left navigation pane, select Assigned roles.
    2. Choose Add assignments.
    3. Choose Hybrid identity administrator and select Add.
    Figure 10: Assign the user the role to trigger the API

    Figure 10: Assign the user the role to trigger the API

  4. Select Default Directory from the top of the navigation pane.
    1. Choose Enterprise applications.
    2. Choose the AWS application.
    3. Select Assign users and groups.
    Figure 11: Azure Enterprise applications - Assign users and groups

    Figure 11: Azure Enterprise applications – Assign users and groups

  5. Choose + Add user at the top of the window.
    1. Select the user you created earlier. I select Auth as that was the user I created earlier.
    2. Choose Select and then Assign.
    Figure 12: Select the user we created earlier from Figure 9

    Figure 12: Select the user we created earlier from Figure 9

     

    Figure 13: Assign the user to the application

    Figure 13: Assign the user to the application

  6. Now that you’ve added the user, you can see that the user is assigned to the application.
     
    Figure 14: Screen now showing that the user has been assigned to the application

    Figure 14: Screen now showing that the user has been assigned to the application

  7. It’s recommended to log in to the Azure portal as the user you just created in a new incognito or private browser session. As part of the first log in, you’ll be prompted to change the password.

Prerequisites to trigger the SCIM endpoint

You need the following items to run the PowerShell code that triggers the endpoint.

  1. From the application registration, retrieve the items shown below. Note that you must use the client secret value you saved earlier when you created it.
    • Tenant ID
    • Display name
    • Application ID
    • Client secret
    • User name
    • Password
  2. Copy the items to a notepad in the preceding order so you can enter all of them through a single copy and paste action while running the script.
  3. From the menu, select Azure Active Directory.
  4. Choose App registrations and select the AWS App that was set up.
  5. Copy the Application (client) ID and the Directory (tenant) ID.
Figure 15: App registration contains all the items needed for the PowerShell script

Figure 15: App registration contains all the items needed for the PowerShell script

Trigger the SCIM endpoint with PowerShell

Now that you've completed all of the previous steps, you need to copy the code from the GitHub repository to your local machine and run it. We've configured the code to run manually, but you can also automate it, for example by triggering an Azure Automation runbook when users are added to Azure AD (through alerts), or by configuring CloudWatch Events to run a Lambda function at periodic intervals.
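If you take the CloudWatch Events route, a sketch like the following schedules a Lambda function (assumed here to wrap the same synchronization logic) every 15 minutes; the rule name, function name, account ID, and Region are placeholders.

# Create a scheduled rule
aws events put-rule \
    --name azure-ad-scim-sync \
    --schedule-expression "rate(15 minutes)"

# Allow the rule to invoke the function
aws lambda add-permission \
    --function-name scim-sync-function \
    --statement-id scim-sync-schedule \
    --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:111122223333:rule/azure-ad-scim-sync

# Point the rule at the function
aws events put-targets \
    --rule azure-ad-scim-sync \
    --targets '[{"Id": "1", "Arn": "arn:aws:lambda:us-east-1:111122223333:function:scim-sync-function"}]'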

To trigger the SCIM endpoint

  1. Copy the code from the GitHub repository.
  2. Save the code using the code editor of your choice, or you can download Visual Studio Code. Give the file a user-friendly name, such as Sync.ps1.
  3. Navigate to the location where you saved the file and run ./sync.ps1.
  4. When prompted, enter the values from the notepad. You can paste these all at one time so you don’t have to copy and paste each individual item.

    Note: When copying and pasting in Windows, choose the PowerShell icon, then Edit > Paste.

     

    Figure 16: Windows Command Prompt – Select Paste to copy all items needed to trigger the sync

    Figure 16: Windows Command Prompt – Select Paste to copy all items needed to trigger the sync

After you paste the values into the PowerShell window, you see the script input as shown in the following screenshot. The client secret and password are secure values and are masked for security purposes.
 

Figure 17: PowerShell script with input values pasted in

Figure 17: PowerShell script with input values pasted in

After the job has started in PowerShell, two messages are displayed: one indicating that synchronization is starting, followed by a second message when synchronization has completed. Both are shown in the following figure.
 

Figure 18: Output from a successful run of the PowerShell script

Figure 18: Output from a successful run of the PowerShell script

View the synchronization status and logs

To verify that the job ran successfully, you can check the completed time from the Azure portal. You can verify the time the script ran by viewing the completion time along with the current status.

To view the status and logs

  1. From the menu, choose Azure Active Directory.
  2. Choose Enterprise applications and select the AWS App.
  3. From the left navigation menu, choose Provisioning and then choose View provisioning details. This displays the last time the sync completed.
     
    Figure 19: View the Provisioning details about the job

    Figure 19: View the Provisioning details about the job

Summary

In this post, I demonstrate how you can use a PowerShell script to trigger the SCIM endpoint to on-demand synchronize Azure AD with AWS Single Sign-On. You can find the code in this GitHub repository and use it to synchronize user and group changes on demand.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Aidan Keane

Aidan is a Senior Technical Account Manager for AWS Enterprise Support. He has been working with Cloud technologies for more than 5 years. Outside of technology, he is a sports enthusiast who enjoys golf, biking, and watching Liverpool FC. He spends his free time with his family and enjoys traveling to Ireland and South America.

Cross-account and cross-region deployment using GitHub actions and AWS CDK

Post Syndicated from DAMODAR SHENVI WAGLE original https://aws.amazon.com/blogs/devops/cross-account-and-cross-region-deployment-using-github-actions-and-aws-cdk/

GitHub Actions is a feature on GitHub’s popular development platform that helps you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

A cross-account deployment strategy is a CI/CD pattern or model in AWS. In this pattern, you have a designated AWS account called tools, where all CI/CD pipelines reside. Deployment is carried out by these pipelines across other AWS accounts, which may correspond to dev, staging, or prod. For more information about a cross-account strategy in reference to CI/CD pipelines on AWS, see Building a Secure Cross-Account Continuous Delivery Pipeline.

In this post, we show you how to use GitHub Actions to deploy an AWS Lambda-based API to an AWS account and Region using the cross-account deployment strategy.

Using GitHub Actions may have associated costs in addition to the cost associated with the AWS resources you create. For more information, see About billing for GitHub Actions.

Prerequisites

Before proceeding any further, you need to identify and designate two AWS accounts required for the solution to work:

  • Tools – Where you create an AWS Identity and Access Management (IAM) user for GitHub Actions to use to carry out deployment.
  • Target – Where deployment occurs. You can call this as your dev/stage/prod environment.

You also need to create two AWS account profiles in ~/.aws/credentials for the tools and target accounts, if you don’t already have them. These profiles need to have sufficient permissions to run an AWS Cloud Development Kit (AWS CDK) stack. They should be your private profiles and only be used during the course of this use case. So, it should be fine if you want to use admin privileges. Don’t share the profile details, especially if it has admin privileges. I recommend removing the profile when you’re finished with this walkthrough. For more information about creating an AWS account profile, see Configuring the AWS CLI.

Solution overview

You start by building the necessary resources in the tools account (an IAM user with permissions to assume a specific IAM role from the target account to carry out deployment). For simplicity, we refer to this IAM role as the cross-account role, as specified in the architecture diagram.

You also create the cross-account role in the target account that trusts the IAM user in the tools account and provides the required permissions for AWS CDK to bootstrap and initiate creating an AWS CloudFormation deployment stack in the target account. GitHub Actions uses the tools account IAM user credentials to assume the cross-account role and carry out deployment.

In addition, you create an AWS CloudFormation execution role in the target account, which AWS CloudFormation service assumes in the target account. This role has permissions to create your API resources, such as a Lambda function and Amazon API Gateway, in the target account. This role is passed to AWS CloudFormation service via AWS CDK.

You then configure your tools account IAM user credentials in your Git secrets and define the GitHub Actions workflow, which triggers upon pushing code to a specific branch of the repo. The workflow then assumes the cross-account role and initiates deployment.

The following diagram illustrates the solution architecture and shows AWS resources across the tools and target accounts.

Architecture diagram

Creating an IAM user

You start by creating an IAM user called git-action-deployment-user in the tools account. The user needs to have only programmatic access.

  1. Clone the GitHub repo aws-cross-account-cicd-git-actions-prereq and navigate to folder tools-account. Here you find the JSON parameter file src/cdk-stack-param.json, which contains the parameter CROSS_ACCOUNT_ROLE_ARN, which represents the ARN for the cross-account role we create in the next step in the target account. In the ARN, replace <target-account-id> with the actual account ID for your designated AWS target account.                                             Replace <target-account-id> with designated AWS account id
  2. Run deploy.sh by passing the name of the tools AWS account profile you created earlier. The script compiles the code, builds a package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code:
cd aws-cross-account-cicd-git-actions-prereq/tools-account/
./deploy.sh "<AWS-TOOLS-ACCOUNT-PROFILE-NAME>"

You should now see two stacks in the tools account: CDKToolkit and cf-GitActionDeploymentUserStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an Amazon Simple Storage Service (Amazon S3) bucket needed to hold deployment assets such as a CloudFormation template and Lambda code package. cf-GitActionDeploymentUserStack creates the IAM user with permission to assume git-action-cross-account-role (which you create in the next step). On the Outputs tab of the stack, you can find the user access key and the AWS Secrets Manager ARN that holds the user secret. To retrieve the secret, you need to go to Secrets Manager. Record the secret to use later.

Stack that creates IAM user with its secret stored in secrets manager

Creating a cross-account IAM role

In this step, you create two IAM roles in the target account: git-action-cross-account-role and git-action-cf-execution-role.

git-action-cross-account-role provides required deployment-specific permissions to the IAM user you created in the last step. The IAM user in the tools account can assume this role and perform the following tasks:

  • Upload deployment assets such as the CloudFormation template and Lambda code package to a designated S3 bucket via AWS CDK
  • Create a CloudFormation stack that deploys API Gateway and Lambda using AWS CDK

AWS CDK passes git-action-cf-execution-role to AWS CloudFormation to create, update, and delete the CloudFormation stack. It has permissions to create API Gateway and Lambda resources in the target account.

To deploy these two roles using AWS CDK, complete the following steps:

  1. In the already cloned repo from the previous step, navigate to the folder target-account. This folder contains the JSON parameter file cdk-stack-param.json, which contains the parameter TOOLS_ACCOUNT_USER_ARN, which represents the ARN for the IAM user you previously created in the tools account. In the ARN, replace <tools-account-id> with the actual account ID for your designated AWS tools account.                                             Replace <tools-account-id> with designated AWS account id
  2. Run deploy.sh by passing the name of the target AWS account profile you created earlier. The script compiles the code, builds the package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code:
cd ../target-account/
./deploy.sh "<AWS-TARGET-ACCOUNT-PROFILE-NAME>"

You should now see two stacks in your target account: CDKToolkit and cf-CrossAccountRolesStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an S3 bucket to hold deployment assets such as the CloudFormation template and Lambda code package. The cf-CrossAccountRolesStack creates the two IAM roles we discussed at the beginning of this step. The IAM role git-action-cross-account-role now has the IAM user added to its trust policy. On the Outputs tab of the stack, you can find these roles’ ARNs. Record these ARNs as you conclude this step.

Stack that creates IAM roles to carry out cross account deployment

Configuring secrets

One of the GitHub actions we use is aws-actions/configure-aws-credentials@v1. This action configures AWS credentials and Region environment variables for use in the GitHub Actions workflow. The AWS CDK CLI detects the environment variables to determine the credentials and Region to use for deployment.

For our cross-account deployment use case, aws-actions/configure-aws-credentials@v1 takes three pieces of sensitive information besides the Region: AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY_SECRET, and CROSS_ACCOUNT_ROLE_TO_ASSUME. Secrets are recommended for storing sensitive pieces of information in the GitHub repo. It keeps the information in an encrypted format. For more information about referencing secrets in the workflow, see Creating and storing encrypted secrets.

Before we continue, you need your own empty GitHub repo to complete this step. Use an existing repo if you have one, or create a new repo. You configure secrets in this repo. In the next section, you check in the code provided by the post to deploy a Lambda-based API CDK stack into this repo.

  1. On the GitHub console, navigate to your repo settings and choose the Secrets tab.
  2. Add a new secret with name as TOOLS_ACCOUNT_ACCESS_KEY_ID.
  3. Copy the access key ID from the output OutGitActionDeploymentUserAccessKey of the stack GitActionDeploymentUserStack in tools account.
  4. Enter the ID in the Value field.                                                                                                                                                                Create secret
  5. Repeat this step to add two more secrets:
    • TOOLS_ACCOUNT_SECRET_ACCESS_KEY (value retrieved from the AWS Secrets Manager in tools account)
    • CROSS_ACCOUNT_ROLE (value copied from the output OutCrossAccountRoleArn of the stack cf-CrossAccountRolesStack in target account)

You should now have three secrets as shown below.

All required git secrets

Deploying with GitHub Actions

As the final step, first clone your empty repo where you set up your secrets. Download and copy the code from the GitHub repo into your empty repo. The folder structure of your repo should mimic the folder structure of source repo. See the following screenshot.

Folder structure of the Lambda API code

We can take a detailed look at the code base. First and foremost, we use TypeScript to deploy our Lambda API, so we need an AWS CDK app and AWS CDK stack. The app is defined in app.ts under the repo root folder location. The stack definition is located under the stack-specific folder src/git-action-demo-api-stack. The Lambda code is located under the Lambda-specific folder src/git-action-demo-api-stack/lambda/git-action-demo-lambda.

We also have a deployment script deploy.sh, which compiles the app and Lambda code, packages the Lambda code into a .zip file, bootstraps the app by copying the assets to an S3 bucket, and deploys the stack. To deploy the stack, AWS CDK has to pass CFN_EXECUTION_ROLE to AWS CloudFormation; this role is configured in src/params/cdk-stack-param.json. Replace <target-account-id> with your own designated AWS target account ID.

Update cdk-stack-param.json in git-actions-cross-account-cicd repo with TARGET account id

Finally, we define the Git Actions workflow under the .github/workflows/ folder per the specifications defined by GitHub Actions. GitHub Actions automatically identifies the workflow in this location and triggers it if conditions match. Our workflow .yml file is named in the format cicd-workflow-<region>.yml, where <region> in the file name identifies the deployment Region in the target account. In our use case, we use us-east-1 and us-west-2, which is also defined as an environment variable in the workflow.

The GitHub Actions workflow has a standard hierarchy. The workflow is a collection of jobs, which are collections of one or more steps. Each job runs on a virtual machine called a runner, which can either be GitHub-hosted or self-hosted. We use the GitHub-hosted runner ubuntu-latest because it works well for our use case. For more information about GitHub-hosted runners, see Virtual environments for GitHub-hosted runners. For more information about the software preinstalled on GitHub-hosted runners, see Software installed on GitHub-hosted runners.

The workflow also has a trigger condition specified at the top. You can schedule the trigger based on the cron settings or trigger it upon code pushed to a specific branch in the repo. See the following code:

name: Lambda API CICD Workflow
# This workflow is triggered on pushes to the repository branch master.
on:
  push:
    branches:
      - master

# Initializes environment variables for the workflow
env:
  REGION: us-east-1 # Deployment Region

jobs:
  deploy:
    name: Build And Deploy
    # This job runs on Linux
    runs-on: ubuntu-latest
    steps:
      # Checkout code from git repo branch configured above, under folder $GITHUB_WORKSPACE.
      - name: Checkout
        uses: actions/checkout@v2
      # Sets up AWS profile.
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.TOOLS_ACCOUNT_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.TOOLS_ACCOUNT_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.REGION }}
          role-to-assume: ${{ secrets.CROSS_ACCOUNT_ROLE }}
          role-duration-seconds: 1200
          role-session-name: GitActionDeploymentSession
      # Installs CDK and other prerequisites
      - name: Prerequisite Installation
        run: |
          sudo npm install -g aws-cdk
          cdk --version
          aws s3 ls
      # Build and Deploy CDK application
      - name: Build & Deploy
        run: |
          cd $GITHUB_WORKSPACE
          ls -a
          chmod 700 deploy.sh
          ./deploy.sh

For more information about triggering workflows, see Triggering a workflow with events.

We have configured a single job workflow for our use case that runs on ubuntu-latest and is triggered upon a code push to the master branch. When you create an empty repo, master branch becomes the default branch. The workflow has four steps:

  1. Check out the code from the repo, for which we use a standard Git action actions/checkout@v2. The code is checked out into a folder defined by the variable $GITHUB_WORKSPACE, so it becomes the root location of our code.
  2. Configure AWS credentials using aws-actions/configure-aws-credentials@v1. This action is configured as explained in the previous section.
  3. Install your prerequisites. In our use case, the only prerequisite we need is AWS CDK. Upon installing AWS CDK, we can do a quick test using the AWS Command Line Interface (AWS CLI) command aws s3 ls. If cross-account access was successfully established in the previous step of the workflow, this command should return a list of buckets in the target account.
  4. Navigate to root location of the code $GITHUB_WORKSPACE and run the deploy.sh script.
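If you want to verify the cross-account setup locally before pushing code, a quick sketch with the AWS CLI and your tools account profile looks like the following; use the cross-account role ARN you recorded from the cf-CrossAccountRolesStack outputs.

# Assume the cross-account role with the tools account credentials
aws sts assume-role \
    --role-arn arn:aws:iam::<target-account-id>:role/git-action-cross-account-role \
    --role-session-name local-verify \
    --profile <AWS-TOOLS-ACCOUNT-PROFILE-NAME>

# Export the returned temporary credentials as environment variables, then
# confirm the assumed identity and that you can list buckets in the target account
aws sts get-caller-identity
aws s3 ls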

You can check in the code into the master branch of your repo. This should trigger the workflow, which you can monitor on the Actions tab of your repo. The commit message you provide is displayed for the respective run of the workflow.

Workflow for region us-east-1 Workflow for region us-west-2

You can choose the workflow link and monitor the log for each individual step of the workflow.

Git action workflow steps

In the target account, you should now see the CloudFormation stack cf-GitActionDemoApiStack in us-east-1 and us-west-2.

Lambda API stack in us-east-1
Lambda API stack in us-west-2

The API resource URL DocUploadRestApiResourceUrl is located on the Outputs tab of the stack. You can invoke your API by opening this URL in the browser.

API Invocation Output

Clean up

To remove all the resources from the target and tools accounts, complete the following steps in their given order:

  1. Delete the CloudFormation stack cf-GitActionDemoApiStack from the target account. This step removes the Lambda and API Gateway resources and their associated IAM roles.
  2. Delete the CloudFormation stack cf-CrossAccountRolesStack from the target account. This removes the cross-account role and CloudFormation execution role you created.
  3. Go to the CDKToolkit stack in the target account and note the BucketName on the Outputs tab. Empty that bucket and then delete the stack.
  4. Delete the CloudFormation stack cf-GitActionDeploymentUserStack from the tools account. This removes the cross-account-deploy-user IAM user.
  5. Go to the CDKToolkit stack in the tools account and note the BucketName on the Outputs tab. Empty that bucket and then delete the stack.

Security considerations

Cross-account IAM roles are very powerful and need to be handled carefully. For this post, we strictly limited the cross-account IAM role to specific Amazon S3 and CloudFormation permissions, which ensures that the cross-account role can perform only those actions. The actual creation of Lambda, API Gateway, and Amazon DynamoDB resources happens via the AWS CloudFormation IAM role, which AWS CloudFormation assumes in the target AWS account.

Make sure that you use secrets to store your sensitive workflow configurations, as specified in the section Configuring secrets.

Conclusion

In this post, we showed how you can use GitHub's popular software development platform to securely deploy to AWS accounts and Regions by using GitHub Actions and the AWS CDK.

Build your own GitHub Actions CI/CD workflow as shown in this post.

About the author

 

Damodar Shenvi Wagle is a Cloud Application Architect at AWS Professional Services. His areas of expertise include architecting serverless solutions, CI/CD, and automation.

Role-based access control using Amazon Cognito and an external identity provider

Post Syndicated from Eran Medan original https://aws.amazon.com/blogs/security/role-based-access-control-using-amazon-cognito-and-an-external-identity-provider/

Amazon Cognito simplifies the development process by helping you manage identities for your customer-facing applications. As your application grows, some of your enterprise customers may ask you to integrate with their own Identity Provider (IdP) so that their users can sign in to your app by using their company's identity, and have role-based access control (RBAC) based on their company's directory group membership.

For your own workforce identities, you can use AWS Single Sign-On (SSO) to enable single sign-on to your cloud applications or AWS resources.

For your customers who would like to integrate your application with their own IdP, you can use Amazon Cognito user pools’ external identity provider integration.

In this post, you’ll learn how to integrate Amazon Cognito with an external IdP by deploying a demo web application that integrates with an external IdP via SAML 2.0. You will use directory groups (for example, Active Directory or LDAP) for authorization by mapping them to Amazon Cognito user pool groups that your application can read to make access decisions.

Architecture

The demo application is implemented using Amazon Cognito, AWS Amplify, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, Amazon Simple Storage Service (S3), and Amazon CloudFront to achieve a serverless architecture. You will make use of infrastructure-as-code by using AWS CloudFormation and the AWS Cloud Development Kit (CDK) to model and provision your cloud application resources, using familiar programming languages.

The following diagram shows an overview of this architecture and the steps in the login flow, which should help clarify what you are going to deploy.
 

Figure 1: Architecture Diagram

Figure 1: Architecture Diagram

First visit

When a user visits the web application for the first time, the flow is as follows:

  1. The client side of the application (also referred to as the front end) uses the AWS Amplify JavaScript library (Amplify.js) to simplify authentication and authorization. Using Amplify, the application detects that the user is unauthenticated and redirects to Amazon Cognito, which then sends a SAML request to the IdP.
  2. The IdP authenticates the user and sends a SAML response back to Amazon Cognito. The SAML response includes common attributes and a multi-value attribute for group membership.
  3. Amazon Cognito handles the SAML response, and maps the SAML attributes to a just-in-time user profile. The SAML groups attribute is mapped to a custom user pool attribute named custom:groups.
  4. An AWS Lambda function named PreTokenGeneration reads the custom:groups custom attribute and converts it to a JSON Web Token (JWT) claim named cognito:groups. This associates the user to a group, without creating a group.

    This attribute conversion is optional and is implemented to demonstrate how you can use the Pre Token Generation Lambda trigger to customize your JWT claims, mapping the IdP groups to the attributes your application recognizes. You can also use this trigger to make additional authorization decisions. For example, if a user is a member of multiple groups, you may choose to map only one of them.

  5. Amazon Cognito returns the JWT tokens to the front end.
  6. The Amplify client library stores the tokens and handles refreshes.
  7. The front end makes a call to a protected API in Amazon API Gateway.
  8. API Gateway uses an Amazon Cognito user pools authorizer to validate the JWT’s signature and expiration. If this is successful, API Gateway passes the JWT to the application’s Lambda function (also referred to as the backend).
  9. The backend application code reads the cognito:groups claim from the JWT and decides if the action is allowed. If the user is a member of the right group then the action is allowed, otherwise the action is denied.

We will go into more detail about these steps after describing a bit more about the implementation details.

For more information about JWT tokens and claims, see Introduction to JSON Web Tokens.

Prerequisites

The following are the prerequisites for the solution described in this post:

Cost estimate

For an account under the 12-month Free Tier period, there should be no cost associated with running this example. However, to avoid any unexpected costs you should terminate the example stack after it’s no longer needed. For more information, see AWS Free Tier and AWS Pricing.

Running the demo application

In this section, you will go over the steps to set up and run the demo application. All the example code in this solution can be found on the amazon-cognito-example-for-external-idp code repository on GitHub.

To deploy the application without an IdP integration

  1. Open a bash-compatible command-line terminal and navigate to a directory of your choice. For Windows users: install Git for Windows and open Git BASH from the start menu.
  2. To get the code from the GitHub repository, enter the following:
    git clone https://github.com/aws-samples/amazon-cognito-example-for-external-idp 
    cd amazon-cognito-example-for-external-idp
    

  3. The template env.sh.template contains configuration settings for the application that you will modify later when you configure the IdP. To copy env.sh.template to env.sh, enter the following:
    cp env.sh.template env.sh

    Figure 2: Cloning the example repository and copying the template configuration file

    Figure 2: Cloning the example repository and copying the template configuration file

  4. The install.sh script will install the AWS CDK toolkit with the dependencies and will configure and bootstrap your environment:
    ./install.sh
    

    Figure 3: Installing dependencies

    Figure 3: Installing dependencies

    You may be prompted to agree to sending Angular analytics. You will also be notified if there are package vulnerabilities. If this is the case, run npm audit fix --prod in all subdirectories to resolve them.

  5. Once the environment has been successfully bootstrapped you need to deploy the CloudFormation stack:
    ./deploy.sh 
    

    Figure 4: Deploying the CloudFormation stack

    Figure 4: Deploying the CloudFormation stack

  6. You will be prompted to accept the IAM changes. These changes will allow the API Gateway service to call the demo application Lambda function (APIFunction), allow Amazon Cognito to invoke the Pre Token Generation Lambda function, allow the demo application Lambda function to access the DynamoDB users table (used to implement users' global sign-out), and more. You'll need to review these changes according to your current security approval level and confirm them to continue.

    Under Do you wish to deploy these changes (y/n)?, type y and press Enter.

    Figure 5: Reviewing and confirming changes

    Figure 5: Reviewing and confirming changes

  7. A few moments after deploying the application’s CloudFormation stack, the terminal displays the IdP settings, which should look like the following:
     
    Figure 6: IdP settings

    Figure 6: IdP settings

    Make a note of these values; you will use them later to configure the IdP.

Configure the IdP

Every IdP is different, but there are some common steps you will need to follow. To configure the IdP, do the following:

  1. Provide the IdP with the values for the following two properties, which you made note of in the previous section:
    • Single sign on URL / Assertion Consumer Service URL / ACS URL:
      https://<domainPrefix>.auth.<region>.amazoncognito.com/saml2/idpresponse
      

    • Audience URI / SP Entity ID / Entity ID:
      urn:amazon:cognito:sp:<yourUserPoolID>
      

  2. Configure the field mapping for the SAML response in the IdP. Map the first name, last name, email, and groups (as a multivalue attribute) into SAML response attributes with the names firstName, lastName, email, and groups, respectively.

    Recommended: Filter the mapped groups to only those that are relevant to the application (for example, by a prefix filter). There is a 2,048-character limit on the custom attribute, so filtering avoids exceeding the character limit, and also avoids passing irrelevant information to the application.

  3. In the IdP, create two demo groups called pet-app-users and pet-app-admins, and create two demo users, for example, user@example.com and admin@example.com, and then assign one to each group, respectively.

See the following specific instructions for some popular IdPs, or see the documentation for your customer’s specific IdP:

Get the IdP SAML metadata URL or file

Get the metadata URL or file from the IdP: you will use this later to configure your Cognito user pool integration with the IdP. For more information, see Integrating Third-Party SAML Identity Providers with Amazon Cognito User Pools.

To update the application with the SAML metadata URL or file

The following will configure the SAML IdP in the Amazon Cognito User Pool using the IdP metadata above:

  1. Using your favorite text editor, open the env.sh file.
  2. Uncomment the line starting with # export IDENTITY_PROVIDER_NAME (remove the # sign).
  3. Uncomment the line starting with # export IDENTITY_PROVIDER_METADATA.
  4. If you have a metadata URL from the IdP, enter it following the = sign:
    export IDENTITY_PROVIDER_METADATA=REPLACE_WITH_URL
    

    Or, if you downloaded the metadata as a file, enter $(cat path/to/downloaded-metadata.xml):

    export IDENTITY_PROVIDER_METADATA=$(cat REPLACE_WITH_PATH)
    

    Figure 7: Editing the identity provider metadata in the env.sh configuration file

    Figure 7: Editing the identity provider metadata in the env.sh configuration file

To re-deploy the application

  1. Run ./diff.sh to see the changes to the CloudFormation stack (added metadata URL).
     
    Figure 8: Run ./diff.sh

    Figure 8: Run ./diff.sh

  2. Run ./deploy.sh to deploy the update.

To launch the UI

There are both an Angular version and a React version of the same UI; both have the same functionality. You can use either version, depending on your preference.

  1. Start the front end application with your chosen version of the UI with one of the following:
    • React: cd ui-react && npm start
    • Angular: cd ui-angular && npm start
  2. To simulate a new session, in your web browser, open a new window in private browsing or incognito mode, then for the URL, enter http://localhost:3000. You should see a screen similar to the following:
     
    Figure 9: Private browsing sign-in screen

    Figure 9: Private browsing sign-in screen

  3. Choose Single Sign On to be taken to the IdP’s sign-in page, where you will sign in if needed. After you are authenticated by the IdP, you’ll be redirected back to the application.

    If you have multiple IdPs, or if you have both internal and external users that will authenticate directly with the user pool, you can choose the Sign In / Sign Up button instead. This redirects you to the Amazon Cognito hosted UI sign in page, rather than taking you directly to the IdP. For more information, see Using the Amazon Cognito Hosted UI for Sign-Up and Sign-In.

  4. Using a new private browsing session (to clear any state), sign in with the user associated with the group pet-app-users and create some sample entries. Then, sign out. Open another private browsing session, and sign in with the user associated with the pet-app-admins group. Notice that you can see the other user’s entries. Now, create a few entries as an admin, then sign out. Open another new private browsing session, sign in again as the pet-app-users user, and notice that you can’t see the entries created by the admin user.
     
    Figure 10: Example view for a user who is only a member of the pet-app-users group

    Figure 10: Example view for a user who is only a member of the pet-app-users group

     

    Figure 11: Example view for a user who is also a member of the pet-app-admins group

    Figure 11: Example view for a user who is also a member of the pet-app-admins group

Implementation

Next, review the details of what each part of the demo application does, so that you can modify it and use it as a starting point for your own application.

Infrastructure

Take a look at the code in the cdk.ts file—a sample CDK file that creates the infrastructure. You can find it in the amazon-cognito-example-for-external-idp/cdk/src directory in the cloned GitHub repo. The key resources it creates are the following:

  1. A Cognito user pool (new cognito.UserPool…). This is where just-in-time provisioning creates the users who federate in from the IdP. It also creates a custom attribute named groups, which you can see as custom:groups in the console.
     
    Figure 12: Custom attribute named groups

    Figure 12: Custom attribute named groups

  2. IdP integration which provides the mapping between the attributes in the SAML assertion from the IdP and Amazon Cognito attributes. For more information, see Specifying Identity Provider Attribute Mappings for Your User Pool.
    (new cognito.CfnUserPoolIdentityProvider…).
  3. An authorizer (new apigateway.CfnAuthorizer…). The authorizer is linked to an API resource method (authorizer: {authorizerId: cfnAuthorizer.ref}).

    It ensures that the user must be authenticated and must have a valid JWT token to make API calls to this resource. It uses Lambda proxy integration to intercept requests.

  4. The PreTokenGeneration Lambda trigger, which is used for the mapping between a user’s Active Directory or LDAP groups (passed on the SAML response from the IdP) to user pool groups (const preTokenGeneration = new lambda.Function…). For the PreTokenGeneration Lambda trigger code used in this solution, see the index.ts file on GitHub.
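
If you're curious how that conversion works, the following is a minimal TypeScript sketch of the Pre Token Generation logic described above. It assumes that the SAML groups arrive as a single string in the custom:groups attribute (for example, "[pet-app-users, pet-app-admins]"); the actual index.ts in the repository may differ in its details.

export const handler = async (event: any) => {
  // The SAML mapping writes the groups into the custom:groups user attribute
  // as a single string, for example "[pet-app-users, pet-app-admins]".
  const rawGroups: string = event.request.userAttributes["custom:groups"] || "";
  const groups = rawGroups
    .replace(/^\[|\]$/g, "")          // strip the surrounding brackets
    .split(",")
    .map((g: string) => g.trim())
    .filter((g: string) => g.length > 0);

  // Overriding the groups injects them into the cognito:groups claim of the
  // issued tokens, without creating user pool groups.
  event.response = {
    claimsOverrideDetails: {
      groupOverrideDetails: {
        groupsToOverride: groups,
      },
    },
  };
  return event;
};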

The application

Backend

The example application in this solution uses a serverless backend, but you can modify it to use Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, AWS Elastic Beanstalk, or even an on-premises server as the backend. To configure your API gateway to point to a server-based application, see Set up HTTP Integrations in API Gateway or Set up API Gateway Private Integrations.

Middleware

Take a look at the code in the express.js sample in the app.ts file on GitHub. You’ll notice some statements starting with app.use. These are interceptors that are invoked for all requests.

app.use(eventContext());

app.use(authorizationMiddleware({
  authorizationHeaderName: authorizationHeaderName,
  supportedGroups: [adminsGroupName, usersGroupName],
  forceSignOutHandler: forceSignOutHandler,
  allowedPaths: ["/"],
}));

Some explanation:

  1. eventContext: the example application in this solution uses AWS Serverless Express, which allows you to run the Express framework for Node.js directly on AWS Lambda.
  2. authorizationMiddleware is a helper middleware that does the following:
    1. It enriches the Express.js request object with several syntactic sugars, such as req.groups and req.username (shortcuts to get the respective claims from the JWT).
    2. It ensures that the currently signed-in user is a member of at least one of the supportedGroups provided. If not, it returns a 403 response.
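
As an illustration of that check, the following is a simplified sketch (not the exact middleware from the repository) of how such a group check can be implemented in Express.js. The req.groups property and the 403 behavior follow the description above; the helper name and signature are assumptions.

import { NextFunction, Request, Response } from "express";

// Simplified sketch of the group check: reject the request with a 403 unless
// the caller is a member of at least one supported group.
export function groupCheckMiddleware(supportedGroups: string[], allowedPaths: string[] = []) {
  return (req: Request & { groups?: Set<string> }, res: Response, next: NextFunction) => {
    if (allowedPaths.includes(req.path)) {
      return next(); // public path, no group membership required
    }
    const groups = req.groups ?? new Set<string>();
    if (!supportedGroups.some((g) => groups.has(g))) {
      return res.status(403).json({ error: "Forbidden: user is not in a supported group" });
    }
    return next();
  };
}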

Endpoints

Still in the Express.js app.ts file on GitHub, take a closer look at one of the API’s endpoints (GET /pets).

app.get("/pets", async (req: Request, res: Response) => {

  if (req.groups.has(adminsGroupName)) {
    // if the user has the admin group, we return all pets
    res.json(await storageService.getAllPets());
  } else {
    // else, just owned pets (the middleware ensures that the user has at least one group)
    res.json(await storageService.getAllPetsByOwner(req.username));
  }
});

With the groups claim information, your application can now make authorization decisions based on the user’s role (show all items if they are an admin, otherwise just items they own). Having this logic as part of the application also allows you to unit test your authorization logic, and run it locally, or offline, before deploying it.
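
For example, if you extract the decision into a small pure function (canSeeAllPets below is a hypothetical helper, not part of the sample repository), you can test it offline with a test runner such as Jest:

import { expect, test } from "@jest/globals";

// Hypothetical helper: returns true when the caller should see all pets.
function canSeeAllPets(groups: Set<string>, adminsGroupName: string): boolean {
  return groups.has(adminsGroupName);
}

test("admins see all pets, regular users see only their own", () => {
  expect(canSeeAllPets(new Set(["pet-app-admins"]), "pet-app-admins")).toBe(true);
  expect(canSeeAllPets(new Set(["pet-app-users"]), "pet-app-admins")).toBe(false);
});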

Front end

The front end can be built in your framework of choice. You can start with the sample UIs provided for either React or Angular. In both, the AWS Amplify client library handles the integration with Amazon Cognito and API Gateway for you. For more information about AWS Amplify, see the Amplify Framework page on GitHub.

Note: You can use AWS Amplify to create the infrastructure in a wizard-like way, without writing CloudFormation. In our example, because we used the AWS CDK for the infrastructure, we needed a configuration file to point Amplify to the created infrastructure.
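
As a rough sketch of that wiring, the front end passes the generated values (from autoGenConfig.js, described below) to Amplify at startup. The import path, OAuth scopes, redirect URLs, and endpoint name here are assumptions for illustration, based on the classic Amplify.configure shape used by the aws-amplify v3/v4 client library; the exact code in the sample repo may differ.

import Amplify from "aws-amplify";
// Path is illustrative; the repo generates autoGenConfig.js with the fields shown below.
import config from "./autoGenConfig";

Amplify.configure({
  Auth: {
    region: config.region,
    userPoolId: config.cognitoUserPoolId,
    userPoolWebClientId: config.cognitoUserPoolAppClientId,
    oauth: {
      domain: config.cognitoDomain,
      scope: ["openid", "email", "profile"],   // assumed scopes
      redirectSignIn: "http://localhost:3000/",
      redirectSignOut: "http://localhost:3000/",
      responseType: "code",
    },
  },
  API: {
    endpoints: [{ name: "main", endpoint: config.apiUrl }], // "main" is a placeholder name
  },
});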

The following are some notable files, and explanations of what they do:

  • generateConfig.ts reads the CloudFormation stack output parameters, and creates a file named autoGenConfig.js, which looks like the following:
    // this file is auto generated, do not edit it directly
    export default {
      cognitoDomain: "youruniquecognitodomain.auth.region.amazoncognito.com",
      region: "region",
      cognitoUserPoolId: "youruserpoolid",
      cognitoUserPoolAppClientId: "youruserpoolclientid",
      apiUrl: "https://yourapigwapiid.execute-api.region.amazonaws.com/prod/",
    };
    

    The file generateConfig.ts is triggered after calling ./deploy.sh, or ./config-ui.sh.

  • APIService.ts: calls the backend API, passing the user's token. For example, calling the GET /pets API:
    public async getAllPets(): Promise<Pet[]> {
      const authorizationHeader = await this.getAuthorizationHeader();
      return await this.api.get(REST_API_NAME, '/pets', {headers: authorizationHeader});
    }
    

Step-by-step example

Now that you have an understanding of the solution, we will take you through a step-by-step example. You can see how everything works together in sequence, and how the tokens are passed between Amazon Cognito, your demo application, and API Gateway.

  1. Create a new browser session by starting a private/incognito session.
  2. Launch the UI by using the Angular example from the To launch the UI section:
    cd ui-angular && npm start
    

  3. Open the developer tools in your browser. In most browsers, you can do this by pressing F12 (Chrome and Firefox on Windows), or Option+Command+I (Chrome, Firefox, or Safari on a Mac).
  4. In the developer tools panel, navigate to the Network tab, and ensure that it is in recording mode and logs are persisting. For more details for various browsers, see How to View a SAML Response in Your Browser for Troubleshooting.
  5. When the page loads, the following happens behind the scenes in the front end (example code available for either Angular or React):
    1. Using Amplify.js, AWS Amplify checks if the user is currently logged in
      let cognitoUser = await Auth.currentAuthenticatedUser();
      

      Because this is a new browsing session, the user is not logged in, and the Sign In / Sign Up and Single Sign On buttons will appear.

    2. Choose Single Sign On, and AWS Amplify will redirect the browser to the IdP.
      Auth.federatedSignIn(idpName)
      

  6. In the IdP sign-in page, sign in as one of the users created earlier (for example, user@example.com or admin@example.com).
  7. In the Network tab of your browser's developer tools panel, locate the request to Amazon Cognito's /saml2/idpresponse endpoint.
  8. The following is an example using Chrome, but you can do it similarly using other browsers. In the Form Data section, you can see the SAMLResponse field that was sent back from the IdP after you authenticated.
     
    Figure 13: Inspecting the SAML response

    Figure 13: Inspecting the SAML response

  9. Copy the SAMLResponse value (drag to select the area marked in green above, and make sure you don’t include the RelayState field).
  10. At the command line, use the following example to decode the SAMLResponse value. Be sure to replace SAMLResponse by pasting the text copied in the previous step:
    echo "SAMLResponse" | base64 --decode > saml_response.xml 
    

  11. Open the saml_response.xml file, and look at the part that starts with <saml2:Attribute Name="groups". This is the attribute that contains the groups that your user belongs to, according to the IdP. For more ways to inspect and troubleshoot the SAML response, see How to View a SAML Response in Your Browser for Troubleshooting.
  12. Amazon Cognito applies the mapping defined in the CloudFormation stack to these attributes. For example, the IdP SAML response attribute named groups is mapped to the user pool custom attribute named custom:groups.
    • In order to modify the mapping, edit your local copy of the cdk.ts file.
    • In order to view the mapped attribute for a user, do the following:
      1. Sign into the AWS Management Console using the same account you used for the demo setup.
      2. Select Manage User Pools.
      3. Select the pool you created for this demo and choose Users and groups.
      4. Search for the user account you just signed in with, and choose its username.

        As you can see in the following example, the custom:groups attribute is set automatically (the custom: prefix is added to all custom attributes automatically):

    Figure 14: Mapped user attributes

    Figure 14: Mapped user attributes

  13. The PreTokenGeneration Lambda function then reads the mapped custom:groups attribute value, parses it, converts it to an array, and then stores it in the cognito:groups claim. To customize the mapping, edit the Lambda function's code in your local copy of the index.ts file and run ./deploy.sh to redeploy your application.
  14. Now that the front end has the JWT, when the page loads, it requests all the items (a call to a protected API, passing the token as an authorization header).
  15. Look at the Network tab again, under the GET request that starts with pets.

    Under Request Headers, look at the authorization header. The long value you see is the encoded token passed as part of the request. The following is an example of how the decoded JWT will look:

    {
      "cognito:groups": [
        "pet-app-users",
        "pet-app-admins"
      ], // <- this is what the PreTokenGeneration Lambda added

      "cognito:username": "IdP_Alice",
      "custom:groups": "[pet-app-users, pet-app-admins]", // what we got via SAML
      "email": "alice@example.com"
      …
    
    }
    

  16. Optionally, if you’d like to modify requests or add requests to new API paths, edit your local copy of the APIService.ts file by using one of the following examples.
    • Sending the request with the authorization header:
      public async getAllPets(): Promise<Pet[]> {
        const authorizationHeader = await this.getAuthorizationHeader();
        return await this.api.get(REST_API_NAME, '/pets', 
          {headers: authorizationHeader});
      }
      

    • The authorization header is obtained using this helper function:
      private async getAuthorizationHeader() {
        const session = await this.auth.currentSession();
        const idToken = session.getIdToken().getJwtToken();
        return {Authorization: idToken}
      }
      

  17. After the previous request is sent to Amazon API Gateway, the Amazon Cognito user pool authorizer validates the JWT based on its signature and expiration, to ensure that it was not tampered with and is still valid. You can see how the authorizer is set up in the cdk.ts file on GitHub.
  18. Based on which user you signed in as previously, you’ll see either all items or only the items you own. How does it work? As mentioned earlier, the backend application code reads the groups claim from the validated token and decides whether the action is allowed: if the user is a member of the required group or has a specific attribute, the action is allowed; otherwise it is denied. The relevant code that makes that decision can be seen in the Express.js example in the app.ts file on GitHub.

Customizing the application

The following are some important issues to consider when customizing the app to your needs:

  • If you modify the app client, do not add the aws.cognito.signin.user.admin scope to it. The aws.cognito.signin.user.admin scope grants access to Amazon Cognito User Pool API operations that require access tokens, such as UpdateUserAttributes and VerifyUserAttribute. The demo application makes authorization decisions based on the custom:group attribute populated from the IdP. Because the IdP is the single source of truth for its users, they should not be able to modify any attribute, particularly the custom:groups attribute.
  • We recommend that you do not change the mapped attribute after the stack is deployed. The reason is that the attribute gets persisted in the user profile after it is mapped. For example, if you first map groups to custom:groups, and a user signs in, then later you change the mapping of groups to custom:groups2, the next time the user signs in, their profile will have both attributes: custom:groups (with the last value that was mapped to it) as well as custom:groups2 (with the current value). To avoid having to clear old mapped attributes, we recommend not changing the mapping after it is created.
  • This solution utilizes Amazon Cognito’s OAuth 2.0 flows to provide federated sign-in from an external IdP (and optionally also sign-in directly with the user pool via the hosted UI, in case you would like to support both use cases). It is not applicable for non-OAuth 2.0 flows (for example, a custom UI using InitiateAuth/SRP).

Conclusion

You can integrate your application with your customer’s IdP of choice for authentication and authorization for your application, without integrating with LDAP, or Active Directory directly. Instead, you can map read-only, need-to-know information from the IdP to the application. By using Amazon Cognito, you can normalize the structure of the JWT token, so that you can add multiple IdPs, social login providers, and even regular username and password-based users (stored in user pools). And you can do all this without changing any application code. Amazon API Gateway’s native integration with Amazon Cognito user pools authorizer streamlines your validation of the JWT integrity, and after it has been validated, you can use it to make authorization decisions in your application’s backend. Using this example, you can focus on what differentiates your application, and let AWS do the undifferentiated heavy lifting of identity management for your customer-facing applications.

For all the code examples described in this post, see the amazon-cognito-example-for-external-idp code repository on GitHub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Eran Medan

Eran is a Software Development Manager based in Atlanta and leads the AWS Jam team, which uses Amazon Cognito and other services mentioned in this post to run their service. Other than jamming on AWS, Eran likes to jam on his guitar or fly airplanes in virtual reality.

Yuri Duchovny

Yuri is a New York-based Solutions Architect specializing in cloud security, identity, and compliance. He supports cloud transformations at large enterprises, helping them make optimal technology and organizational decisions. Prior to his AWS role, Yuri’s areas of focus included application and networking security, DoS, and fraud protection. Outside of work, he enjoys skiing, sailing, and traveling the world.

Integrating AWS CloudFormation security tests with AWS Security Hub and AWS CodeBuild reports

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/integrating-aws-cloudformation-security-tests-with-aws-security-hub-and-aws-codebuild-reports/

The concept of infrastructure as code, by using pipelines for continuous integration and delivery, is fundamental for the development of cloud infrastructure. Including code quality and vulnerability scans in the pipeline is essential for the security of this infrastructure as code. In one of our previous posts, How to build a CI/CD pipeline for container vulnerability scanning with Trivy and AWS Security Hub, you learned how to scan containers to efficiently identify Common Vulnerabilities and Exposures (CVEs) and work with your developers to address them.

In this post, we’ll continue this topic, and also introduce a method for integrating open source tools that find potentially insecure patterns in your AWS CloudFormation templates with both AWS Security Hub and AWS CodeBuild reports. We’ll be using Stelligent’s open source tool CFN-Nag. We also show you how you can extend the solution to use AWS CloudFormation Guard (currently in preview).

One reason to use this integration is that it gives both security and development teams visibility into potential security risks, and resources that are insecure or non-compliant to your company policy, before they’re deployed.

Solution benefit and deliverables

In this solution, we provide you with a ready-to-use template for performing scanning of your AWS CloudFormation templates by using CFN-Nag. This tool has more than 140 predefined patterns, such as AWS Identity and Access Management (IAM) rules that are too permissive (wildcards), security group rules that are too permissive (wildcards), access logs that aren’t enabled, or encryption that isn’t enabled. You can additionally define your own rules to match your company policy as described in the section later in this post, by using custom profiles and exceptions, and suppressing false positives.

Our solution enables you to do the following:

  • Integrate CFN-Nag in a CodeBuild project, scanning the infrastructure code for more than 140 possible insecure patterns, and classifying them as warnings or a failing test.
  • Learn how to integrate AWS CloudFormation Guard (CFN-Guard). You need to define your scanning rules in this case.
  • Generate CodeBuild reports, so that developers can easily identify failed security tests. In our sample, the build process fails if any critical findings are identified.
  • Import to Security Hub the aggregated finding per code branch, so that security professionals can easily spot vulnerable code in repositories and branches. For every branch, we import one aggregated finding.
  • Store the original scan report in an Amazon Simple Storage Service (Amazon S3) bucket for auditing purposes.

Note: in this solution, the AWS CloudFormation scanning tools won’t scan application code that runs in AWS Lambda functions, Amazon Elastic Container Service (Amazon ECS), or Amazon Elastic Compute Cloud (Amazon EC2) instances.

Architecture

Figure 1 shows the architecture of the solution. The main steps are as follows:

  1. Your pipeline is triggered when new code is pushed to CodeCommit (which isn’t part of the template) to start a new build.
  2. The build process scans the AWS CloudFormation templates by using the cfn_nag_scan or cfn-guard command as defined by the build job.
  3. A Lambda function is invoked, and the scan report is sent to it.
  4. The scan report is published in an S3 bucket via the Lambda function.
  5. The Lambda function aggregates the findings report per repository and git branch and imports the report to Security Hub. The Lambda function also suppresses any previous findings related to this current repo and branch. The severity of the finding is calculated by the number of findings and a weight coefficient that depends on whether the finding is designated as warning or critical.
  6. Finally, the Lambda function generates the CodeBuild test report in JUnit format and returns it to CodeBuild. This report only includes information about any failed tests.
  7. CodeBuild creates a new test report from the new findings under the SecurityReports test group.
Figure 1: Solution architecture

Figure 1: Solution architecture

Walkthrough

To get started, you need to set up the sample solution that scans one of your repositories by using CFN-Nag or CFN-Guard.

To set up the sample solution

  1. Log in to your AWS account if you haven’t done so already. Choose Launch Stack to launch the AWS CloudFormation console with the prepopulated AWS CloudFormation demo template. Choose Next.

    Select the Launch Stack button to launch the template. Additionally, you can find the latest code on GitHub.

  2. Fill in the stack parameters as shown in Figure 2:
    • CodeCommitBranch: The name of the branch to be monitored, for example refs/heads/master.
    • CodeCommitUrl: The clone URL of the CodeCommit repo that you want to monitor. It must be in the same Region as the stack being launched.
    • TemplateFolder: The folder in your repo that contains the AWS CloudFormation templates.
    • Weight coefficient for failing: The weight coefficient for a failing violation in the template.
    • Weight coefficient for warning: The weight coefficient for a warning in the template.
    • Security tool: The static analysis tool that is used to analyze the templates (CFN-Nag or CFN-Guard).
    • Fail build: Whether to fail the build when security findings are detected.
    • S3 bucket with sources: This bucket contains all sources, such as the Lambda function and templates. You can keep the default text if you’re not customizing the sources.
    • Prefix for S3 bucket with sources: The prefix for all objects. You can keep the default if you’re not customizing the sources.
Figure 2: AWS CloudFormation stack

Figure 2: AWS CloudFormation stack

View the scan results

After you execute the CodeBuild project, you can view the results in three different ways depending on your preferences: CodeBuild report, Security Hub, or the original CFN-Nag or CFN-Guard report.

CodeBuild report

In the AWS Management Console, go to CodeBuild and choose Report Groups. You can find the report you are interested in under SecurityReports. Both failures and warnings are represented as failed tests and are prefixed with F (failure) or W (warning), respectively, as shown in Figure 3. Successful tests aren’t part of the report because they aren’t provided by CFN-Nag reports.

Figure 3: AWS CodeBuild report

Figure 3: AWS CodeBuild report

In the CodeBuild navigation menu, under Report groups, you can see an aggregated view of all scans. There you can see a historical view of the pass rate of your tests, as shown in Figure 4.

Figure 4: AWS CodeBuild Group

Figure 4: AWS CodeBuild Group

Security Hub findings

In the AWS Management Console, go to Security Hub and select the Findings view. The aggregated finding per branch has the title CFN scan repo:name:branch, where name and branch are placeholders for the repository and branch name, with the Company field set to Personal and the Product field set to Default. There is one active finding per repo and branch. All previous reports for this repo and branch are suppressed, so that by default you see only the latest ones. If necessary, you can see the previous reports by removing the selection filter in the Security Hub findings console. Figure 5 shows an example of the Security Hub findings.

Figure 5: Security Hub findings

Figure 5: Security Hub findings

Original scan report

Lastly, you can find the original scan report in the S3 bucket aws-sec-build-reports-hash. You can also find a reference to this object in the associated Security Hub finding source URL. The S3 object key is constructed as follows.


cfn-nag-report/repo:source_repository/branch:branch_short/cfn-nag-createdAt.json

where source_repository is the name of the repository, branch_short is the name of the branch, and createdAt is the report date.

The following screen capture shows a sample view of the content.

Figure 6: CFN_NAG report sample

Figure 6: CFN_NAG report sample

Security Hub severity and weight coefficients

The Lambda function aggregates CFN-Nag findings to one Security Hub finding per branch and repo. We believe that this gives you the best visibility without getting lost in too many findings if you have a large code base.

The Security Hub finding severity is calculated as follows:

  • CFN-Nag critical findings are weighted (multiplied) by 20 and the warnings by 1.
  • The sum of all CFN-Nag findings multiplied by their weight coefficient results in the severity of the Security Hub finding.

The severity label or normalized severity (from 0 to 100) (see AWS Security Finding Format (ASFF) for more information) is calculated from the summed severity. We implemented the following convention:

  • If the severity is more than 100 points, the label is set as CRITICAL (100).
  • If the severity is lower than 100, the normalized severity and label are mapped as described in AWS Security Finding Format (ASFF).

Your company might have a different way to calculate the severity. If you want to adjust the weight coefficients, change the stack parameters. If you want to adjust the mapping of the CFN-Nag findings to Security Hub severity, then you’ll need to adapt the Lambda’s calculateSeverity Python function.
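
To illustrate the convention, the following sketch expresses the calculation in TypeScript (the actual Lambda implements it in Python as calculateSeverity; the label bands below are taken from the ASFF documentation, and the weights are the assumed stack defaults).

// Illustrative sketch only; not the Lambda's actual implementation.
function calculateSeverity(failures: number, warnings: number,
                           failWeight = 20, warnWeight = 1) {
  const score = failures * failWeight + warnings * warnWeight;
  if (score > 100) {
    // Anything above 100 points is capped and labeled CRITICAL.
    return { normalized: 100, label: "CRITICAL" };
  }
  // At or below 100, the score maps to the ASFF normalized severity bands.
  const label =
    score >= 90 ? "CRITICAL" :
    score >= 70 ? "HIGH" :
    score >= 40 ? "MEDIUM" :
    score >= 1 ? "LOW" : "INFORMATIONAL";
  return { normalized: score, label };
}

For example, two failing findings and five warnings give a score of 2 * 20 + 5 * 1 = 45, which would map to a MEDIUM finding under this convention.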

Using custom profiles and exceptions, and suppressing false positives

You can customize CFN-Nag to use a certain rule set by including the specific list of rules to apply (called a profile) within the repository. Customizing rule sets is useful because developers or applications might have different security considerations or risk profiles in specific applications. Additionally the operator might prefer to exclude rules that are prone to introducing false positives.

To add a custom profile, you can modify the cfn_nag_scan command specified in the CodeBuild buildspec.yml file. Use the --profile-path command argument to point to the file that contains the list of rules to use, as shown in the following code sample.


cfn_nag_scan --fail-on-warnings --profile-path .cfn_nag.profile --input-path ${TemplateFolder} -o json > ./report/cfn_nag.out.json

Where the .cfn_nag.profile file contains one rule identifier per line:


F2
F3
F5
W11

You can find the full list of available rules by using the cfn_nag_rules command.

You can also choose instead to use a global deny list of rules, or directly suppress findings per resource by using Metadata tags in each AWS CloudFormation resource. For more information, see the CFN-Nag GitHub repository.

Integrating with AWS CloudFormation Guard

The integration with AWS CloudFormation Guard (CFN-Guard) follows the same architecture pattern as CFN-Nag. The ImportToSecurityHub Lambda function can process both CFN-Nag and CFN-Guard results to import to Security Hub and generate a CodeBuild report.

To deploy the CFN-Guard tool

  1. In the AWS Management Console, go to CloudFormation, select the stack you deployed previously, and then choose Update.
  2. Choose Next, and then change the SecurityTool parameter to cfn-guard.
  3. Continue to navigate through the console and deploy the stack.

This creates a new buildspec.yml file that uses the cfn-guard command line interface (CLI) to scan all AWS CloudFormation templates in the source repository. The scans use an example rule set found in the CFN-Guard repository.

You can choose to generate the rule set required by the scanning engine from your AWS CloudFormation templates and add the rule set to your repository, as described on the GitHub page for AWS CloudFormation Guard. The rule set must reflect your company security policy. This can be one set for all templates, or dependent on the security profile of the application.

You can use your own rule set by modifying the cfn-guard --rule_set parameter to point to a file from within your repository, as follows.


cfn-guard --rule_set .cfn_guard.ruleset --template  "$template" > ./report/template_report

Troubleshooting

If the build fails, you can find the CodeBuild run logs in the CodeBuild build history. The build will fail if critical security findings are detected in the templates.

Additionally, the Lambda function execution logs can be found in the CloudWatch Logs log group /aws/lambda/ImportToSecurityHub.

Summary

In this post, you learned how to scan AWS CloudFormation templates for resources that are potentially insecure or not compliant to your company policy in a CodeBuild project, import the findings to Security Hub, and generate CodeBuild test reports. Integrating this solution into your pipelines can help multiple teams within your organization detect potential security risks in your infrastructure code before it’s deployed to your AWS environments. If you would like to extend the solution further and need support, contact AWS Professional Services or an Amazon Partner Network (APN) Partner. If you have technical questions, please use the AWS Security Hub or AWS CodeBuild forums.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU-Darmstadt and a M.S. in electrical engineering from Bochum University in Germany.

Author

Joaquin Manuel Rinaudo

Joaquin is a Senior Security Consultant with AWS Professional Services. He is passionate about building solutions that help developers improve their software quality. Prior to AWS, he worked across multiple domains in the security industry, from mobile security to cloud and compliance related topics. In his free time, Joaquin enjoys spending time with family and reading science-fiction novels.

How to use trust policies with IAM roles

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/

November 3, 2022: We updated this post to fix some syntax errors in the policy statements and to add additional use cases.

August 30, 2021: This post is currently being updated. We will post another note when it’s complete.


AWS Identity and Access Management (IAM) roles are a significant component of the way that customers operate on Amazon Web Services (AWS). In this post, we will dive into the details of how role trust policies work and how you can use them to restrict how your roles are assumed.

There are several different scenarios where you might use IAM roles on AWS:

  • An AWS service or resource accesses another AWS resource in your account – When an AWS resource needs access to other AWS services, functions, or resources, you can create a role that has appropriate permissions for use by that AWS resource. Services like AWS Lambda and Amazon Elastic Container Service (Amazon ECS) assume roles to deliver temporary credentials to your code that’s running in them.
  • An AWS service generates AWS credentials to be used by devices running outside AWS – AWS IAM Roles Anywhere, AWS IoT Core, and AWS Systems Manager hybrid instances can deliver role session credentials to applications, devices, and servers that don’t run on AWS.
  • An AWS account accesses another AWS account – This use case is commonly referred to as a cross-account role pattern. It allows human or machine IAM principals from one AWS account to assume this role and act on resources within a second AWS account. A role is assumed to enable this behavior when the resource in the target account doesn’t have a resource-based policy that could be used to grant cross-account access.
  • An end user authenticated with a web identity provider or OpenID Connect (OIDC) needs access to your AWS resources – This use case allows identities from Facebook or OIDC providers such as GitHub, Amazon Cognito, or other generic OIDC providers to assume a role to access resources in your AWS account.
  • A customer performs workforce authentication using SAML 2.0 federation – This occurs when customers federate their users into AWS from their corporate identity provider (IdP) such as Okta, Microsoft Azure Active Directory, or Active Directory Federation Services (ADFS), or from AWS Single Sign-On (AWS SSO).

An IAM role is an IAM principal whose entitlements are assumed in one of the preceding use cases. An IAM role differs from an IAM user as follows:

  • An IAM role can’t have long-term AWS credentials associated with it. Rather, an authorized principal (an IAM user, AWS service, or other authenticated identity) assumes the IAM role and inherits the permissions assigned to that role.
  • Credentials associated with an IAM role are temporary and expire.
  • An IAM role has a trust policy that defines which conditions must be met to allow other principals to assume it.

Managing access to IAM roles

Let’s dive into how you can control access to IAM roles by understanding the policy types that you can apply to an IAM role.

There are three circumstances where policies are used for an IAM role:

  • Trust policy – The trust policy defines which principals can assume the role, and under which conditions. A trust policy is a specific type of resource-based policy for IAM roles. The trust policy is the focus of the rest of this blog post.
  • Identity-based policies (inline and managed) – These policies define the permissions that the user of the role is able to perform (or is denied from performing), and on which resources.
  • Permissions boundary – A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions for a role. A principal’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based permissions policies and its permissions boundaries. You can use permissions boundaries to delegate permissions management tasks, such as IAM role creation, to non-administrators so that they can create roles in self-service.

For the rest of this post, you’ll learn how to enforce the conditions under which roles can be assumed by configuring their trust policies.

An example of a simple trust policy

A common use case is when you need to allow principals in account A to assume a role in account B. To facilitate this, you add an entry to the trust policy of the role in account B that allows authenticated principals from account A to assume the role through the sts:AssumeRole API call.

Important: If you reference :root in an IAM role’s trust policy, you might allow more principals to assume your role than you intended, so it’s a best practice to use the Principal element or conditions to only allow specific principals or paths to assume a role. Later in this post, we show you how to limit this access to more specific principals.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

This trust policy has the same structure as other IAM policies with Effect, Action, and Condition components. It also has the Principal element, but no Resource element. This is because the resource is the IAM role itself. For the same reason, the Action element will only ever be set to relevant actions for role assumption.

Note: The suffix :root in the policy’s Principal element equates to the principals in the account, not the root user of that account.

Using the Principal element to limit who can assume a role

In a trust policy, the Principal element indicates which other principals can assume the IAM role. In the preceding example, 111122223333 represents the AWS account number of the trusted account (account A in the earlier scenario). This allows any principal in the 111122223333 account that has sts:AssumeRole permissions to assume this role.

To allow a specific IAM role to assume a role, you can add that role within the Principal element. For example, the following trust policy would allow only the IAM role LiJuan from the 111122223333 account to assume the role it is attached to.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/LiJuan"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The principals included in the Principal element can be a principal defined within the IAM documentation, and can refer to an AWS or a federated principal. You can’t use a wildcard (“*” or “?”) within the Principal element for a trust policy, other than one case which we will cover later. You must define precisely which principal you are referring to because there is a translation that occurs when you submit your trust policy that ties it to each principal’s ID. For more information, see Why is there an unknown principal format in my IAM resource-based policy?

If an IAM role has a principal from the same account in its trust policy directly, that principal doesn’t need an explicit entitlement in its identity-attached policy to assume the role.

Using the Condition element in a trust policy

The Condition element in a role trust policy sets additional requirements for the Principal trying to assume the role. The Condition element is a flexible way to reduce the set of users that are able to assume the role without necessarily specifying the principals.

Condition elements of role trust policies behave identically to condition elements in identity-based policies and other resource policies on AWS.

Using SAML identity federation on AWS

Federated users from a SAML 2.0 compliant IdP are given permissions to access AWS accounts through the use of IAM roles. The mapping of which enterprise users get which roles is established within the directory used by the SAML 2.0 IdP and is placed inside the signed SAML assertion by the IdP.

The Principal element of a role trust policy for SAML federation contains the ARN of the SAML IdP in the same AWS account. IdPs in other accounts can’t be referenced. Roles assumed by SAML federation can use SAML-specific condition keys in their role trust policy.

A role trust policy for a role to be assumed by SAML places the ARN of the SAML IdP in the Principal element, and checks the intended audience (SAML:aud) of the SAML assertion. Setting the audience condition is important because it means that only SAML assertions intended for AWS can be used to assume a role:

{
    "Version": "2012-10-17",
    "Statement": {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithSAML",
      "Principal": {"Federated": "arn:aws:iam::account-id:saml-provider/PROVIDER-NAME"},
      "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}}
    }
  }

The AWS documentation covers creating roles for SAML 2.0 federation in detail. For information about how to manage the role trust policies of roles assumed by SAML from multiple AWS Regions for resiliency, see the blog post How to use regional SAML endpoints for failover.

For federating workforce access to AWS, you can use AWS IAM Identity Center (successor to AWS Single Sign-On) to broker access to IAM roles through SAML. Roles managed by IAM Identity Center can’t have their trust policy modified by IAM directly.

SAML IdPs used in a role trust policy must be in the same account as the role.

Assuming a role with WebIdentity

Roles can also be assumed with tokens issued by web identity providers and OpenID Connect (OIDC) compliant providers.

After you’ve created an OpenID Connect identity provider in your account, you can configure roles to be assumed by that OpenID Connect identity provider.

The following is a trust policy that allows a role to be assumed by the identity provider auth.example.com where the value of the sub claim is equal to Administrator and the aud is equal to MyappWebIdentity.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111122223333:oidc-provider/auth.example.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "auth.example.com:sub": "Administrator",
                    "auth.example.com:aud": "MyappWebIdentity"
                }
            }
        }
    ]
}

The condition keys used for roles assumed by OIDC identity providers are always prefixed with the name of the OIDC identity provider (for example, auth.example.com). So to use claims in the ID Token like aud, sub, and amr, they are prefixed to become auth.example.com:aud, auth.example.com:sub, and auth.example.com:amr, respectively, in a trust policy to be evaluated as a condition key. Only ID Token claims listed in the STS documentation can be used in role trust policies as condition keys.

It’s important to set the :aud condition in role trust policies to help verify that the tokens being used to assume roles in your AWS accounts are tokens that are intended to be used for that purpose, and are for your application or tenant if your web identity provider is a public or multi-tenant identity provider, such as Google or GitHub.

Amazon Elastic Kubernetes Service (Amazon EKS) clusters have OIDC identity provider capabilities and can be used to assume roles in AWS accounts.

OIDC identity providers used to assume a role must be in the same AWS account as the role.

Limiting role use based on an identifier

At times you might need to give a third-party access to your AWS resources. Suppose that you hired a third-party company, Example Corp, to monitor your AWS account and help optimize costs. To track your daily spending, Example Corp needs access to your AWS resources, so you allow them to assume an IAM role in your account. However, Example Corp also tracks spending for other customers, and there could be a configuration issue in the Example Corp environment that allows another customer to compel Example Corp to attempt to take an action in your AWS account, even though that customer should only be able to take the action in their own account. This is referred to as the cross-account confused deputy problem. This section shows you a way to mitigate this risk.

The following trust policy requires that principals from the Example Corp AWS account, 444455556666, provide a special string, called an external ID, when making their request to assume the role. Adding this condition reduces the risk that someone from the 444455556666 account will assume this role by mistake. You configure this string by using the sts:ExternalId condition key.

External IDs should be generated by the third party that assumes your role, such as Example Corp, and associated with each customer so that the external ID is used only when assuming that customer’s roles. This way, other Example Corp customers can’t compel Example Corp to assume your roles on their behalf, because they can’t force Example Corp to use your external ID from their tenant, even if they learn your external ID.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::444455556666:role/ExampleCorpRole"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "ExampleUniquePhrase"
        }
      }
    }
  ]
}

The external IDs should be unique for every customer of a service provider. AWS doesn’t treat external IDs as secrets—they can be seen by anyone with entitlements to view a role’s trust policy.

If you assume roles in your customers’ accounts, it’s a best practice to generate unique external ID values on behalf of your customers and associate them with your customers, and you shouldn’t allow your customers to specify an external ID.

Roles with the sts:ExternalId condition can’t be assumed through the AWS console, unless there is another Allow statement without that condition.

Limiting role use based on IP addresses or CIDR ranges

You can put IP address conditions into a role trust policy to limit the networks from which a role can be assumed. For example, you can limit role assumption to a corporate network or VPN range. The following example trust policy will only allow the role to be assumed if the call is made from within the 203.0.113.0/24 CIDR range.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/IpBoundedRole"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}

By using aws:SourceIp in the trust policy, you limit where the role can be assumed from, but this doesn’t limit where the credentials can be used after they are assumed. To restrict where the credentials can be used, you can apply aws:SourceIp as a condition in the principal’s identity-based policy or in the service control policies that apply to it. For more information on restricting where credentials can be used from, see Establishing a data perimeter on AWS.
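
As a simplified sketch only (production data perimeter policies typically need additional exceptions, as described in that post), a service control policy along the following lines denies the use of credentials from outside an assumed corporate CIDR range while exempting calls that AWS services make on your behalf:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "203.0.113.0/24"
                },
                "Bool": {
                    "aws:ViaAWSService": "false"
                }
            }
        }
    ]
}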

Limiting role use based on tags

You can use IAM tagging capabilities to build flexible and adaptive trust policies. You can use an attribute-based access control (ABAC) model for assuming IAM roles in the same way that you can for accessing objects in an Amazon Simple Storage Service (Amazon S3) bucket. You can build trust policies that only permit principals that have already been tagged with a specific key and value to assume a specific role. The following example trust policy requires that IAM principals in the AWS account 111122223333 have the value of their principal tag department match the value of the IAM role’s tag owningDepartment.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/department": "${aws:ResourceTag/owningDepartment}"
        }
      }
    }
  ]
}

As an example, in the preceding policy, if the role is tagged with an owningDepartment of finance, then only principals within account 111122223333 who have a tag department with a value of finance will be able to assume the role.

When using ABAC, it’s important to have governance around who can set tags on resources, principals, and sessions. If someone can change or modify tags on principals, resources, or sessions, they might be able to access resources that you didn’t intend them to. Principals from AWS accounts outside of your control might have different tag governance practices than your own organization, and you should take this into account when using principal tags as part of cross-account role assumption. You can use tag policies to help govern tags within your organization, and later in this blog post, we show how to manage tags set on assumption by using role trust policies.
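 
For example, the following is a minimal sketch of an AWS Organizations tag policy that standardizes the department tag key used in the preceding ABAC example; the allowed values shown here (finance, engineering, marketing) are illustrative assumptions.

{
    "tags": {
        "department": {
            "tag_key": {
                "@@assign": "department"
            },
            "tag_value": {
                "@@assign": [
                    "finance",
                    "engineering",
                    "marketing"
                ]
            }
        }
    }
}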

For more information, see the Attribute-Based Access Control (ABAC) for AWS page.

Limiting role assumption to only principals within your organization

Since its announcement in 2016, AWS Organizations has been adopted by the vast majority of the enterprise customers that we work with. Organizations lets you create a structure for your accounts by grouping them into organizational units (OUs), logical boundaries that let you apply common guardrails to the accounts that need them. You can use the aws:PrincipalOrgID condition key to limit the use of roles solely to principals within your organization in AWS Organizations.

The following example shows a policy that denies assumption of this role except by AWS services or by principals that are a member of the o-abcd12efg1 organization. This statement can be broadly applied to prevent someone outside your AWS organization from assuming your roles.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalOrgID": "o-abcd12efg1"
                },
                "Bool": {
                    "aws:PrincipalIsAWSService": "false"
                }
            }
        }
    ]
}

In the preceding example, the StringNotEquals operator denies access to this role by a principal that doesn’t belong to a member account of the specified organization.

AWS roles that you intend to use with AWS services must be assumable by those services. In the preceding example, we added the aws:PrincipalIsAWSService condition key so that AWS service principals aren’t affected by the explicit Deny statement. All principals, including AWS services, still require an explicit Allow statement in a role’s trust policy to assume that role. When an AWS service principal makes a request to your resources, the aws:PrincipalIsAWSService condition key is set to true, so the preceding Deny statement won’t apply to a service principal, while an Allow statement can still let a service principal assume the role.
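
For example, a trust policy statement along the following lines is still required for a service to assume the role; AWS Lambda is used here only as an assumed example of a service principal.

{
    "Effect": "Allow",
    "Principal": {
        "Service": "lambda.amazonaws.com"
    },
    "Action": "sts:AssumeRole"
}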

You can also use the aws:PrincipalOrgPaths condition key to limit role assumption to member accounts within a specific OU of an organization if you want role assumption to be more fine-grained.
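
Because aws:PrincipalOrgPaths is a multivalued condition key, you use it with a set operator such as ForAnyValue:StringLike. The following condition fragment is a minimal sketch; the organization, root, and OU IDs are placeholders that you would replace with your own values.

            "Condition": {
                "ForAnyValue:StringLike": {
                    "aws:PrincipalOrgPaths": ["o-abcd12efg1/r-ab12/ou-ab12-11111111/*"]
                }
            }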

Enforcing invariants with Deny statements

Only allowing principals in your organization to assume your roles is an example of a security invariant. Security invariants are security principles that you want to apply at all times. Deny statements are useful in trust policies for blocking conditions under which you never want a role to be assumable. In AWS authorization, an applicable Deny statement overrides an applicable Allow statement, so a Deny statement whose conditions describe situations that should never occur, such as a role being assumed by a principal outside of your organization, is a powerful control.

Setting the source identity on role sessions to help trace actions in CloudTrail

You can configure a role session to have a source identity when it is assumed. This is most common when customers federate users into IAM through SAML 2.0 or web identity (OpenID Connect) to assume roles. You can configure your IdP to set the SourceIdentity attribute on the role session. Setting the source identity causes the AWS CloudTrail logs for actions taken by the role session to contain the source identity, so that you can trace actions taken by roles back to the user that assumed them. The SourceIdentity attribute also follows the role session if it assumes another role.

To set a source identity, you need to grant the IdP the sts:SetSourceIdentity entitlement in the role’s trust policy.

{
    "Version": "2012-10-17",
    "Statement": {
      "Effect": "Allow",
      "Action": ["sts:AssumeRoleWithSAML","sts:SetSourceIdentity"],
      "Principal": {"Federated": "arn:aws:iam::111122223333:saml-provider/PROVIDER-NAME"},
      "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}}
    }
  }

For a role session that has a SourceIdentity set to assume a second role, the second role’s trust policy must also grant the sts:SetSourceIdentity action to the calling principal. If it doesn’t, the first role won’t be able to assume the second role.
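
For example, the second role’s trust policy would need a statement along these lines, where FirstRole is a hypothetical name for the calling role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/FirstRole"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:SetSourceIdentity"
            ]
        }
    ]
}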

You can also use the sts:SourceIdentity condition key to enforce that the SourceIdentity attribute that is being set conforms to an expected standard:

            "Condition": {
                "StringLike": {"sts:SourceIdentity": "*@example.org"}
            }

In the preceding example, the Condition element requires that the SourceIdentity value being set ends with @example.org.

Setting tags on role sessions

You can set tags on role sessions, which can then be used in IAM and resource policy authorization decisions. Tags on role sessions are evaluated with the same condition key that tags on IAM roles are: aws:PrincipalTag/TagKey. Tag values that are set when a role is assumed have precedence over tag values that are attached to the role.

If you’re basing authorization on principal tags in your AWS accounts, it’s important that you control who can set the session tags and principal tags in your accounts so that access isn’t granted to unintended parties.

The ability to tag a role session must be granted in a role’s trust policy using the sts:TagSession permission, and you can use conditions and condition keys to restrict which tags can be set to which values.

The following is an example statement for a role trust policy that allows a principal from account 111122223333 to assume the role and requires that the three session tags Project, CostCenter, and Department are set. The Department tag must have a value of either Engineering or Marketing. The third condition allows the Project and Department tags to be set as transitive when the role is assumed. Because the conditions for the tags are in the same Allow statement as the sts:AssumeRole action, these tags must be set for the role to be assumed.

        {
            "Effect": "Allow",
            "Action": ["sts:TagSession","sts:AssumeRole"],
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/Project": "*",
                    "aws:RequestTag/CostCenter": "*"
                },
                "StringEquals": {
                    "aws:RequestTag/Department": [
                        "Engineering",
                        "Marketing"
                    ]
                },
                "ForAllValues:StringEquals": {
                    "sts:TransitiveTagKeys": [
                        "Project",
                        "Department"
                    ]
                }
            }
        }

When a role session assumes another role, transitive tags from the calling role session are set to the same value within the subsequent role session. For more information, see Chaining roles with session tags.

You can use Deny statements with the sts:TagSession operation to restrict certain tags from being set. In the following example, attempts to tag a session with an Admin tag would be denied:

{
    "Effect": "Deny",
    "Action": "sts:TagSession",
    "Principal": {"AWS": "*"},
    "Condition": {
        "Null": {
            "aws:RequestTag/Admin": false
        }
    }
}

In the following example statements, we deny tagging operations on role sessions where the Team tag is equal to Admin, but we allow the setting of a different tag value.

{
    "Effect": "Deny",
    "Action": "sts:TagSession",
    "Principal": {"AWS": "*"},
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/Team": "Admin"
        }
    }
},
{
    "Effect": "Allow",
    "Action": "sts:TagSession",
    "Principal": {"AWS": "*"},
    "Condition": {
        "StringLike": {
            "aws:RequestTag/Team": "*"
        }
    }
}

What happens when a role in a trust policy is deleted

When you specify a role in the Principal element of a trust policy, AWS uses that role’s unique RoleId to make the authorization decision. If the ExampleCorpRole role from the earlier policy examples was deleted and re-created in account 111122223333, then the unique RoleId would be different, and the new ExampleCorpRole wouldn’t be able to assume the roles that trusted it in the Principal element.

When a role is deleted, the trust policy of the remaining roles that referenced this now-deleted role will show the unique RoleId it trusted in the Principal element when viewed:

"Principal": {
				"AWS": "AROA1234567123456D"
			}

Because the policy references a now-invalid RoleID, it can’t be modified until the invalid RoleID is removed from it. You can retrieve the original role ARNs by looking at CloudTrail logs for UpdateAssumeRolePolicy and CreateRole events for a role and reading the trust policy from the log entries.

For more information about using the Principal element in policy statements, see IAM role principals.

Principals referenced inside the Condition block of a trust policy statement are matched by the ARN of the role rather than by its RoleId. The following trust policy statement would allow the ExampleCorpRole to assume the role that trusted it, even if the ExampleCorpRole role was deleted and re-created.

  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
    "ArnEquals": {
      "aws:PrincipalArn": "arn:aws:iam::111122223333:role/ExampleCorpRole"
    }
  }
    }
  ]
}

When creating a role trust policy, you should determine the behavior that you want to occur when a role is deleted. Your organization’s security posture might dictate that a deleted and re-created role should no longer be able to assume a role in your account, so using a specific principal in the Principal element is appropriate. Or you might want to allow the role to be assumed in the event that a given principal is deleted and re-created.

If you use the aws:PrincipalArn condition with the account root ARN (ending in :root) as the principal to allow role assumption within the same account, the principal doing the assuming will also need the sts:AssumeRole action in its identity-based policy.
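
For example, an identity-based policy along these lines, attached to the assuming principal, grants that entitlement; the target role name AssumableRole is a hypothetical placeholder.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::111122223333:role/AssumableRole"
        }
    ]
}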

Wildcarding principals

Earlier we noted that wildcards can’t be placed in the Principal element of a policy as part of an ARN. However, wildcards can be used in the Condition block of a policy, so wildcarding is possible with the ArnLike and StringLike condition operators. This is useful when you don’t know the specific roles in advance but have other controls that limit the path where roles are created, such as a delegated administrator model. The following policy allows any role from account 111122223333 under the /OpsRoles/ path to assume it.

  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
    "ArnLike": {
      "aws:PrincipalArn": "arn:aws:iam::111122223333:role/OpsRoles/*"
    }
  }
    }
  ]
}

It’s a best practice to restrict role assumption to specific paths or principals instead of allowing an entire account where possible.

Using multiple statements

So far, the examples in this post have been single policy statements. Trust policies, like other policies on AWS, can have multiple statements up to the quota for role trust policy length.

You can combine multiple statements together to create complex role trusts like the following, which allows ExampleRole to assume a role and tag the session, but only from the network range 203.0.113.0/24 while forbidding that the Admin tag be set:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:role/ExampleRole"
                ]
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "203.0.113.0/24"
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": "sts:TagSession",
            "Principal": {
                "AWS": "*"
            },
            "Condition": {
                "Null": {
                    "aws:RequestTag/Admin": false 
                }
            }
        }
    ]
}

Although it’s possible to combine multiple statements, it’s a best practice not to use a single role for unrelated purposes or to share it across different AWS services. Use different IAM roles for different use cases and AWS services, and avoid situations where different principals have access to the same IAM role.

Working with services that deliver role-session credentials

IAM Roles Anywhere, AWS IoT Core, and AWS Systems Manager can deliver AWS role session credentials to devices, servers, and applications running outside of AWS. These services assume the roles on your behalf and deliver the resulting session credentials to your devices, servers, and applications after they authenticate to the respective service.

For more information about these services and their requirements, see the documentation for each service.
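
As an illustration, the following is a minimal sketch of a trust policy for a role used with IAM Roles Anywhere. The Region, account ID, and trust anchor ID in the aws:SourceArn condition are placeholder assumptions; scoping the trust to a specific trust anchor is optional but narrows which anchors can deliver credentials for this role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rolesanywhere.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession",
                "sts:SetSourceIdentity"
            ],
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:rolesanywhere:us-east-1:111122223333:trust-anchor/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
                }
            }
        }
    ]
}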

Role chaining

When a role assumes another role, it’s called role chaining. Sessions created by role chaining have a maximum lifetime of 1 hour, regardless of the maximum session duration that a role is configured to allow.

Roles that are assumed by other means are not considered role chaining and are not subject to this restriction.

Conclusion

In this post, you learned how to craft trust policies for your IAM roles to restrict their assumption by specific principals and under certain conditions, and to combine multiple statements with different conditions. You also learned how to use features like source identity and session tags, how to protect against the cross-account confused deputy problem, and the nuances of the Principal element. You now have the tools that you need to build robust and effective trust policies for roles in your organization.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Liam Wadman

Liam is a Solutions Architect with the Identity Solutions team. When he’s not building exciting solutions on AWS or helping customers, he’s often found in the hills of British Columbia on his Mountain Bike. Liam points out that you cannot spell LIAM without IAM.