Tag Archives: Security, Identity & Compliance

Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/identify-unintended-resource-access-with-aws-identity-and-access-management-iam-access-analyzer/

Today I get to share my favorite kind of announcement. It’s the sort of thing that will improve security for just about everyone that builds on AWS, it can be turned on with almost no configuration, and it costs nothing to use. We’re launching a new, first-of-its-kind capability called AWS Identity and Access Management (IAM) Access Analyzer. IAM Access Analyzer mathematically analyzes access control policies attached to resources and determines which resources can be accessed publicly or from other accounts. It continuously monitors all policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues. With IAM Access Analyzer, you have visibility into the aggregate impact of your access controls, so you can be confident your resources are protected from unintended access from outside of your account.

Let’s look at a couple of examples. An IAM Access Analyzer finding might indicate that an S3 bucket named my-bucket-1 is accessible to an AWS account with the ID 123456789012 when requests originate from the source IP range 11.0.0.0/15. Or IAM Access Analyzer may detect a KMS key policy that allows users from another account to delete the key, identifying a data loss risk you can fix by adjusting the policy. If the findings show intentional access paths, you can archive them.

So how does it work? Using the kind of math that shows up on unexpected final exams in my nightmares, IAM Access Analyzer evaluates your policies to determine how a given resource can be accessed. Critically, this analysis is not based on historical events, pattern matching, or brute force tests. Instead, IAM Access Analyzer understands your policies semantically. All possible access paths are verified by mathematical proofs, and thousands of policies can be analyzed in a few seconds. This is done using automated reasoning, an area of computer science that applies mathematical logic to prove properties of systems. IAM Access Analyzer is the first service powered by automated reasoning available to builders everywhere, offering functionality unique to AWS. To start learning about automated reasoning, I highly recommend this short video explainer. If you are interested in diving a bit deeper, check out this re:Invent talk on automated reasoning from Byron Cook, Director of the AWS Automated Reasoning Group. And if you’re really interested in understanding the methodology, make yourself a nice cup of chamomile tea, grab a blanket, and get cozy with a copy of Semantic-based Automated Reasoning for AWS Access Policies using SMT.

Turning on IAM Access Analyzer is way less stressful than an unexpected nightmare final exam. There’s just one step. From the IAM Console, select Access analyzer from the menu on the left, then click Create analyzer.

Creating an Access Analyzer

Analyzers generate findings in the account from which they are created. Analyzers also work within the region defined when they are created, so create one in each region for which you’d like to see findings.
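
If you'd rather script this step across Regions, the same operation is available through the API. Here's a minimal sketch using boto3 (the analyzer name and Region are placeholders):

import boto3

# Create an account-level analyzer in a given Region (name and Region are placeholders)
client = boto3.client("accessanalyzer", region_name="us-east-1")
response = client.create_analyzer(
    analyzerName="my-account-analyzer",
    type="ACCOUNT",
)
print("Analyzer ARN:", response["arn"])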

Once our analyzer is created, findings that show accessible resources appear in the Console. My account has a few findings that are worth looking into, such as KMS keys and IAM roles that are accessible by other accounts and federated users.

Viewing Access Analyzer Findings

I’m going to click on the first finding and take a look at the access policy for this KMS key.

An Access Analyzer Finding

From here we can see the open access paths and details about the resources and principals involved. I went over to the KMS console and confirmed that this is intended access, so I archived this particular finding.

All IAM Access Analyzer findings are visible in the IAM Console, and can also be accessed using the IAM Access Analyzer API. Findings related to S3 buckets can be viewed directly in the S3 Console. Bucket policies can then be updated right in the S3 Console, closing the open access pathway.
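
If your team triages findings programmatically, a sketch of the equivalent calls with boto3 might look like the following (the analyzer ARN and finding ID are placeholders):

import boto3

client = boto3.client("accessanalyzer")

# List active findings for an analyzer (ARN is a placeholder)
analyzer_arn = "arn:aws:access-analyzer:us-east-1:111122223333:analyzer/my-account-analyzer"
response = client.list_findings(
    analyzerArn=analyzer_arn,
    filter={"status": {"eq": ["ACTIVE"]}},
)
for finding in response["findings"]:
    print(finding["id"], finding["resourceType"], finding.get("resource"))

# Archive a finding after confirming the access is intended (ID is a placeholder)
client.update_findings(
    analyzerArn=analyzer_arn,
    ids=["finding-id-to-archive"],
    status="ARCHIVED",
)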

An Access Analyzer finding in S3

You can also see high-priority findings generated by IAM Access Analyzer in AWS Security Hub, ensuring a comprehensive, single source of truth for your compliance and security-focused team members. IAM Access Analyzer also integrates with CloudWatch Events, making it easy to automatically respond to or send alerts regarding findings through the use of custom rules.

Now that you’ve seen how IAM Access Analyzer provides a comprehensive overview of cloud resource access, you should probably head over to IAM and turn it on. One of the great advantages of building in the cloud is that the infrastructure and tools continue to get stronger over time and IAM Access Analyzer is a great example. Did I mention that it’s free? Fire it up, then send me a tweet sharing some of the interesting things you find. As always, happy building!

— Brandon

Use AWS Fargate and Prowler to send security configuration findings about AWS services to Security Hub

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/use-aws-fargate-prowler-send-security-configuration-findings-about-aws-services-security-hub/

In this blog post, I’ll show you how to integrate Prowler, an open-source security tool, with AWS Security Hub. Prowler provides dozens of security configuration checks related to services such as Amazon Redshift, Amazon ElastiCache, Amazon API Gateway, and Amazon CloudFront. Integrating Prowler with Security Hub will provide posture information about resources not currently covered by existing Security Hub integrations or compliance standards. You can use Prowler checks to supplement the CIS AWS Foundations compliance standard that Security Hub already provides, as well as other compliance-related findings you may be ingesting from partner solutions.

In this post, I’ll show you how to containerize Prowler using Docker and host it on the serverless container service AWS Fargate. By running Prowler on Fargate, you no longer have to provision, configure, or scale infrastructure, and it will only run when needed. Containers provide a standard way to package your application’s code, configurations, and dependencies into a single object that can run anywhere. Serverless applications automatically run and scale in response to events you define, rather than requiring you to provision, scale, and manage servers.

Solution overview

The following diagram shows the flow of events in the solution I describe in this blog post.
 

Figure 1: Prowler on Fargate Architecture

The integration works as follows:

  1. A time-based CloudWatch Event starts the Fargate task on a schedule you define.
  2. Fargate pulls a Prowler Docker image from Amazon Elastic Container Registry (ECR).
  3. Prowler scans your AWS infrastructure and writes the scan results to a CSV file.
  4. Python scripts in the Prowler container convert the CSV to JSON and load an Amazon DynamoDB table with formatted Prowler findings.
  5. A DynamoDB stream invokes an AWS Lambda function.
  6. The Lambda function maps Prowler findings into the AWS Security Finding Format (ASFF) before importing them into Security Hub (a simplified sketch of this mapping follows the list).
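
For illustration, here's a simplified sketch of what the mapping in step 6 might look like. The actual Lambda function is deployed by the CloudFormation template in Step 3; the severity mapping, the finding Types value, and the Id format below are assumptions made for this example.

import datetime
import os

import boto3

securityhub = boto3.client("securityhub")

def lambda_handler(event, context):
    # Derive the account ID and Region for the ASFF fields
    account_id = context.invoked_function_arn.split(":")[4]
    region = os.environ["AWS_REGION"]
    now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")

    findings = []
    for record in event.get("Records", []):
        # Only map newly inserted Prowler items from the DynamoDB stream
        if record.get("eventName") != "INSERT":
            continue
        item = record["dynamodb"]["NewImage"]
        title_id = item["TITLE_ID"]["S"]
        title_text = item["TITLE_TEXT"]["S"]
        result = item["RESULT"]["S"]
        notes = item["NOTES"]["S"]

        findings.append({
            "SchemaVersion": "2018-10-08",
            "Id": "prowler/{}/{}".format(region, title_id),
            "ProductArn": "arn:aws:securityhub:{}:{}:product/{}/default".format(region, account_id, account_id),
            "GeneratorId": title_id,
            "AwsAccountId": account_id,
            "Types": ["Software and Configuration Checks"],
            "CreatedAt": now,
            "UpdatedAt": now,
            # Illustrative severity mapping: passing checks are informational
            "Severity": {"Normalized": 0 if result == "PASS" else 40},
            "Title": title_text,
            "Description": notes if notes else title_text,
            "Resources": [{"Type": "AwsAccount", "Id": "AWS::::Account:{}".format(account_id)}],
        })

    if findings:
        # BatchImportFindings accepts up to 100 findings per request
        securityhub.batch_import_findings(Findings=findings)

    return {"ImportedFindings": len(findings)}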

Except for an ECR repository, you’ll deploy all of the above via AWS CloudFormation. You’ll also need the following prerequisites to supply as parameters for the CloudFormation template.

Prerequisites

  • A VPC with at least two subnets that have access to the internet, plus a security group that allows access on port 443 (HTTPS).
  • An ECS task role with the permissions that Prowler needs to complete its scans. You can find more information about these permissions on the official Prowler GitHub page.
  • An ECS task execution IAM role to allow Fargate to publish logs to CloudWatch and to download your Prowler image from Amazon ECR.

Step 1: Create an Amazon ECR repository

In this step, you’ll create an ECR repository. This is where you’ll upload your Docker image for Step 2.

  1. Navigate to the Amazon ECR Console and select Create repository.
  2. Enter a name for your repository (I’ve named my example securityhub-prowler, as shown in figure 2), then choose Mutable as your image tag mutability setting, and select Create repository.
     
    Figure 2: ECR Repository Creation

Keep the browser tab in which you created the repository open so that you can easily reference the Docker commands you’ll need in the next step.

Step 2: Build and push the Docker image

In this step, you’ll create a Docker image that contains scripts that will map Prowler findings into DynamoDB. Before you begin step 2, ensure your workstation has the necessary permissions to push images to ECR.

  1. Create a Dockerfile via your favorite text editor, and name it Dockerfile.
    
    FROM python:latest
    
    # Declare Env Vars
    ENV MY_DYANMODB_TABLE=MY_DYANMODB_TABLE
    ENV AWS_REGION=AWS_REGION
    
    # Install Dependencies
    RUN \
        apt update && \
        apt upgrade -y && \
        pip install awscli && \
        apt install -y python3-pip
    
    # Place scripts
    ADD converter.py /root
    ADD loader.py /root
    ADD script.sh /root
    
    # Installs prowler, moves scripts into prowler directory
    RUN \
        git clone https://github.com/toniblyx/prowler && \
        mv root/converter.py /prowler && \
        mv root/loader.py /prowler && \
        mv root/script.sh /prowler
    
    # Runs prowler, ETLs output with converter and loads DynamoDB with loader
    WORKDIR /prowler
    RUN pip3 install boto3
    CMD bash script.sh
    

  2. Create a new file called script.sh and paste in the code below. This script calls the remaining scripts, which you’re about to create, in a specific order.

    Note: Change the AWS Region in the Prowler command on line 3 to the region in which you’ve enabled Security Hub.

    
    #!/bin/bash
    echo "Running Prowler Scans"
    ./prowler -b -n -f us-east-1 -g extras -M csv > prowler_report.csv
    echo "Converting Prowler Report from CSV to JSON"
    python converter.py
    echo "Loading JSON data into DynamoDB"
    python loader.py
    

  3. Create a new file called converter.py and paste in the below code. This Python script will convert the Prowler CSV report into JSON, and both versions will be written to the local storage of the Prowler container.
    
    import csv
    import json
    
    # Set variables for within container
    CSV_PATH = 'prowler_report.csv'
    JSON_PATH = 'prowler_report.json'
    
    # Reads prowler CSV output
    csv_file = csv.DictReader(open(CSV_PATH, 'r'))
    
    # Create empty JSON list, read out rows from CSV into it
    json_list = []
    for row in csv_file:
        json_list.append(row)
    
    # Writes row into JSON file, writes out to docker from .dumps
    open(JSON_PATH, 'w').write(json.dumps(json_list))
    
    # open newly converted prowler output
    with open('prowler_report.json') as f:
        data = json.load(f)
    
    # remove data not needed for Security Hub BatchImportFindings    
    for element in data: 
        del element['PROFILE']
        del element['SCORED']
        del element['LEVEL']
        del element['ACCOUNT_NUM']
        del element['REGION']
    
    # writes out to a new file, prettified
    with open('format_prowler_report.json', 'w') as f:
        json.dump(data, f, indent=2)
    

  4. Create your last file, called loader.py, and paste in the code below. This Python script will read values from the JSON file and send them to DynamoDB.
    
    from __future__ import print_function # Python 2/3 compatibility
    import boto3
    import json
    import decimal
    import os
    
    awsRegion = os.environ['AWS_REGION']
    prowlerDynamoDBTable = os.environ['MY_DYANMODB_TABLE']
    
    dynamodb = boto3.resource('dynamodb', region_name=awsRegion)
    
    table = dynamodb.Table(prowlerDynamoDBTable)
    
    # CHANGE FILE AS NEEDED
    with open('format_prowler_report.json') as json_file:
        findings = json.load(json_file, parse_float = decimal.Decimal)
        for finding in findings:
            TITLE_ID = finding['TITLE_ID']
            TITLE_TEXT = finding['TITLE_TEXT']
            RESULT = finding['RESULT']
            NOTES = finding['NOTES']
    
            print("Adding finding:", TITLE_ID, TITLE_TEXT)
    
            table.put_item(
               Item={
                   'TITLE_ID': TITLE_ID,
                   'TITLE_TEXT': TITLE_TEXT,
                   'RESULT': RESULT,
                   'NOTES': NOTES,
                }
            )
    

  5. From the ECR console, within your repository, select View push commands to get operating system-specific instructions and additional resources to build, tag, and push your image to ECR. See Figure 3 for an example.
     
    Figure 3: ECR Push Commands

    Note: If you’ve built Docker images previously within your workstation, pass the --no-cache flag with your docker build command.

  6. After you’ve built and pushed your Image, note the URI within the ECR console (such as 12345678910.dkr.ecr.us-east-1.amazonaws.com/my-repo), as you’ll need this for a CloudFormation parameter in step 3.

Step 3: Deploy CloudFormation template

Download the CloudFormation template from GitHub and create a CloudFormation stack. For more information about how to create a CloudFormation stack, see Getting Started with AWS CloudFormation in the CloudFormation User Guide.

You’ll need the values you noted in Step 2 and during the “Solution overview” prerequisites. The description of each parameter is provided on the Parameters page of the CloudFormation deployment (see Figure 4).
 

Figure 4: CloudFormation Parameters

After the CloudFormation stack finishes deploying, click the Resources tab to find your Task Definition (called ProwlerECSTaskDefinition). You’ll need this during Step 4.
 

Figure 5: CloudFormation Resources

Step 4: Manually run ECS task

In this step, you’ll run your ECS Task manually to verify the integration works. (Once you’ve tested it, this step will be automatic based on CloudWatch events.)

  1. Navigate to the Amazon ECS console and from the navigation pane select Task Definitions.
  2. As shown in Figure 6, select the check box for the task definition you deployed via CloudFormation, then select the Actions dropdown menu and choose Run Task.
     
    Figure 6: ECS Run Task

  3. Configure the following settings (shown in Figure 7), then select Run Task:
    1. Launch Type: FARGATE
    2. Platform Version: Latest
    3. Cluster: Select the cluster deployed by CloudFormation
    4. Number of tasks: 1
    5. Cluster VPC: Enter the VPC of the subnets you provided as CloudFormation parameters
    6. Subnets: Select 1 or more subnets in the VPC
    7. Security groups: Enter the same security group you provided as a CloudFormation parameter
    8. Auto-assign public IP: ENABLED
       
      Figure 7: ECS Task Settings

  4. Depending on the size of your account and the resources within it, your task can take up to an hour to complete. Follow the progress by selecting your task and looking at the Logs tab within the Task view (Figure 8). The stdout from Prowler will appear in the logs.

    Note: Once the task has completed it will automatically delete itself. You do not need to take additional actions for this to happen during this or subsequent runs.

     

    Figure 8: ECS Task Logs

  5. Under the Details tab, monitor the status. When the status reads Stopped, navigate to the DynamoDB console.
  6. Select your table, then select the Items tab. Your findings will be indexed under the primary key NOTES, as shown in Figure 9. From here, the Lambda function will trigger each time new items are written into the table from Fargate and will load them into Security Hub.
     
    Figure 9: DynamoDB Items

  7. Finally, navigate to the Security Hub console, select the Findings menu, and wait for findings from Prowler to arrive in the dashboard as shown in figure 10.
     
    Figure 10: Prowler Findings in Security Hub

If you run into errors when running your Fargate task, refer to the Amazon ECS Troubleshooting guide. Log errors commonly come from missing permissions or disabled Regions; refer back to the Prowler GitHub page for troubleshooting information.

Conclusion

In this post, I showed you how to containerize Prowler, run it manually, create a schedule with CloudWatch Events, and use custom Python scripts along with DynamoDB streams and Lambda functions to load Prowler findings into Security Hub. By using Security Hub, you can centralize and aggregate security configuration information from Prowler alongside findings from AWS and partner services.

From Security Hub, you can use custom actions to send one or a group of findings from Prowler to downstream services such as ticketing systems or to take custom remediation actions. You can also use Security Hub custom insights to create saved searches from your Prowler findings. Lastly, you can use Security Hub in a master-member format to aggregate findings across multiple accounts for centralized reporting.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Security Hub forum.

Want more AWS Security news? Follow us on Twitter.

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

How to get started with security response automation on AWS

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-get-started-security-response-automation-aws/

At AWS, we encourage you to use automation to help quickly detect and respond to security events within your AWS environments. In addition to increasing the speed of detection and response, automation also helps you scale your security operations as you expand your workloads running on AWS. For these reasons, security automation is a key principle outlined in both the Well-Architected and Cloud Adoption frameworks as well as in the AWS Security Incident Response Guide.

In this blog post, you’ll learn to implement automated security response mechanisms within your AWS environments. This post will include common patterns, implementation considerations, and an example solution. Security response automation is a broad topic that spans many areas. The goal of this blog post is to introduce you to core concepts and help you get started.

A word from our lawyers: Please note that you are responsible for making your own independent assessment of the information in this post. This post: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors.

What is security response automation?

Security response automation is a planned and programmed action taken to achieve a desired state for an application or resource based on a condition or event. When you implement security response automation, you should adopt an approach that draws from existing security frameworks. Frameworks are published materials that consist of standards, guidelines, and best practices that help organizations manage cybersecurity-related risk. Using frameworks helps you achieve consistency and scalability and enables you to focus more on the strategic aspects of your security program. You should work with compliance professionals within your organization to understand any specific security frameworks that may also be relevant for your AWS environment.

Our example solution is based on the NIST Cybersecurity Framework (CSF), which is designed to help organizations assess and improve their ability to prevent, detect, and respond to security events. According to the CSF, “cybersecurity incident response” supports your ability to contain the impact of potential cybersecurity incidents. Although automation is not a CSF requirement, automating responses to events enables you to create repeatable, predictable approaches to monitoring and responding to threats.

The five main steps in the CSF are identify, protect, detect, respond and recover. We’ve expanded the detect and respond steps to include automation and investigation activities.
 

Figure 1: The five steps in the CSF

The following definitions for each step in the diagram above are based on the CSF but have been adapted for our example in this blog post. Although we will focus on the detect, automate and respond steps, it’s important to understand the entire process flow.

  • Identify: Identify and understand the resources, applications, and data within your AWS environment.
  • Protect: Develop and implement appropriate controls and safeguards to ensure delivery of services.
  • Detect: Develop and implement appropriate activities to identify the occurrence of a cybersecurity event. This step includes the implementation of monitoring capabilities which will be discussed further in the next section.
  • Automate: Develop and implement planned, programmed actions that will achieve a desired state for an application or resource based on a condition or event.
  • Investigate: Perform a systematic examination of the security event to establish the root cause.
  • Respond: Develop and implement appropriate activities to take automated or manual actions regarding a detected security event.
  • Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a security event.

Security response automation on AWS

AWS CloudTrail, AWS Config, and Amazon EventBridge continuously record details about the resources and configuration changes in your AWS account. You can use this information to automatically detect resource changes and to react to deviations from your desired state.
 

Figure 2: Automated remediation flow

As shown in the diagram above, an automated remediation flow on AWS has three stages:

  • Monitor: Your automated monitoring tools collect information about resources and applications running in your AWS environment. For example, they might collect AWS CloudTrail information about activities performed in your AWS account, usage metrics from your Amazon EC2 instances, or flow log information about the traffic going to and from network interfaces in your Amazon Virtual Private Cloud (VPC).
  • Detect: When a monitoring tool detects a predefined condition—such as a breached threshold, anomalous activity, or configuration deviation—it raises a flag within the system. A triggering condition might be an anomalous activity detected by Amazon GuardDuty, a resource becoming out of compliance with an AWS Config Rule, or a high rate of blocked requests on an Amazon VPC security group or AWS WAF web access control list.
  • Respond: When a condition is flagged, an automated response is triggered that performs an action you’ve predefined—something intended to remediate or mitigate the flagged condition. Examples of automated response actions might include modifying a VPC security group, patching an Amazon EC2 instance, or rotating credentials.

You can use the event-driven flow described above to achieve many automated response patterns with varying degrees of complexity. Your response pattern could be as simple as invoking a single AWS Lambda function, or it could be a complex series of AWS Step Functions tasks with advanced logic. In this blog post, we’ll use two simple Lambda functions in our example solution.

How to define your response automation

Now that we’ve introduced the concept of security response automation, start thinking about security requirements within your environment that you’d like to enforce through automation. These design requirements might come from general best practices you’d like to follow, or they might be specific controls from compliance frameworks relevant for your business. Regardless, your objectives should be quantitative, not qualitative. Here are some examples of quantitative objectives:

  • Remote administrative network access to servers should be limited.
  • Server storage volumes should be encrypted.
  • AWS console logins should be protected by multi-factor authentication.

As an optional step, you can expand these objectives into user stories that define the conditions and remediation actions when there is an event. User stories are informal descriptions that briefly document a feature within a software system. User stories may be global and span across multiple applications or they may be specific to a single application. For example:

“Remote administrative network access to servers should be limited. Remote access ports include SSH TCP port 22 and RDP TCP port 3389. If open remote access ports are detected within the environment, they should be automatically closed and the owner will be notified.”

Once you’ve completed your user story, you can determine how to use automated remediation to help achieve these objectives in your AWS environment. User stories should be stored in a location that provides versioning support and can reference the associated automation code.

You should carefully consider the effect of your remediation mechanisms in order to prevent unintended impact on your resources and applications. Remediation actions such as instance termination, credential revocation, and security group modification can adversely affect application availability. Depending on the level of risk that’s acceptable to your organization, your automated mechanism might only provide a notification which can then be manually investigated prior to remediation. Once you’ve identified an automated remediation mechanism, you can build out the required components and test them in a non-production environment.
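
For instance, a remediation for the remote-access user story above could revoke world-open SSH and RDP ingress rules on a flagged security group. Here's a minimal sketch, assuming the security group ID is supplied by your detection workflow; the ports and CIDR come from the user story, and everything else is illustrative:

import boto3

ec2 = boto3.client("ec2")

def revoke_open_remote_access(security_group_id):
    # Remove world-open SSH (22) and RDP (3389) ingress rules from the flagged group
    for port in (22, 3389):
        try:
            ec2.revoke_security_group_ingress(
                GroupId=security_group_id,
                IpProtocol="tcp",
                FromPort=port,
                ToPort=port,
                CidrIp="0.0.0.0/0",
            )
        except ec2.exceptions.ClientError as error:
            # The rule may not exist on this group; log and continue
            print("Skipping port {}: {}".format(port, error))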

Sample response automation walkthrough

In the following section, we’ll walk you through an automated remediation for a simulated event that indicates potential unauthorized activity—the unintended disabling of CloudTrail logging. Outside parties might want to disable logging to prevent detection and recording of their unauthorized activity. Our response is to re-enable the CloudTrail logging and immediately notify the security contact. Here’s the user story for this scenario:

“CloudTrail logging should be enabled for all AWS accounts and regions. If CloudTrail logging is disabled, it will automatically be enabled and the security operations team will be notified.”

Note: The sample response automation below references Amazon EventBridge which extends and builds upon CloudWatch Events. Amazon EventBridge uses the same Amazon CloudWatch Events API, so the event structure and rules configuration are the same. This blog post uses base functionality that is identical in both EventBridge and CloudWatch Events.

Prerequisites

In order to use our sample remediation, you will need to enable Amazon GuardDuty and AWS Security Hub in the AWS Region you have selected. Both of these services include a 30-day free trial. See the AWS Security Hub pricing page and the Amazon GuardDuty pricing page for additional details.

Important: You’ll use AWS CloudTrail to test the sample remediation. Running more than one CloudTrail trail in your AWS account will result in charges based on the number of events processed while the trail is running. Charges for additional copies of management events recorded in a Region are applied based on the published pricing plan. To minimize the charges, follow the clean-up steps that we provide later in this post to remove the sample automation and delete the trail.

Deploy the sample response automation

In this section, we’ll show you how to deploy and test the CloudTrail logging remediation sample. Amazon GuardDuty generates the finding Stealth:IAMUser/CloudTrailLoggingDisabled when CloudTrail logging is disabled, and AWS Security Hub collects findings from GuardDuty using the standardized finding format mentioned earlier. We recommend that you deploy this sample into a non-production AWS account.

Select the Launch Stack button below to deploy a CloudFormation template with an automation sample in the us-east-1 Region. You can also download the template and implement it in another Region. The template consists of an Amazon EventBridge rule, an AWS Lambda function and the IAM permissions necessary for both components to execute. It takes several minutes for the CloudFormation stack build to complete.

Select this image to open a link that starts building the CloudFormation stack

  1. In the CloudFormation console, choose the Select Template form, and then select Next.
  2. On the Specify Details page, provide the email address for a security contact. (For the purpose of this walkthrough, it should be an email address you have access to.) Then select Next.
  3. On the Options page, accept the defaults, then select Next.
  4. On the Review page, confirm the details, then select Create.
  5. While the stack is being created, check the inbox of the email address you provided in step 2. Look for an email message with the subject AWS Notification – Subscription Confirmation. Select the link in the body of the email to confirm your subscription to the Amazon Simple Notification Service (Amazon SNS) topic. You should see a success message similar to the screenshot below:
     
    Figure 3: SNS subscription confirmation

  6. Return to the CloudFormation console. Once the Status field for the CloudFormation stack changes to CREATE_COMPLETE (as shown in figure 4), the solution is implemented and is ready for testing.
     
    Figure 4: CREATE_COMPLETE status

Test the sample automation

You’re now ready to test the automated response by creating a test trail in CloudTrail, then trying to stop it.

  1. From the AWS Management Console, choose Services > CloudTrail.
  2. Select Trails, then select Create Trail.
  3. On the Create Trail form:
    1. Enter a value for Trail name. We use test-trail in our example below.
    2. Under Management events, select Write-only (to minimize event volume).
       
      Figure 5: Create a CloudTrail trail

    3. Under Storage location, choose an existing S3 bucket or create a new one. Note that since S3 bucket names are globally unique, you must add characters (such as a random string) to the name. For example: my-test-trail-bucket-<random-string>.
  4. On the Trails page of the CloudTrail console, verify that the new trail has started. You should see a green checkmark in the Status column, as shown in figure 6.
     
    Figure 6: Verify new trail has started

  5. You’re now ready to act like an unauthorized user trying to cover their tracks! Stop the logging for the trail you just created:
    1. Select the new trail name to display its configuration page.
    2. Toggle the Logging switch in the top-right corner to OFF.
    3. When prompted with a warning dialog box, select Continue.
    4. Verify that the Logging switch is now off, as shown below.
       
      Figure 7: Verify logging switch is off

      You have now simulated a security event by disabling logging for one of the trails in the CloudTrail service. Within the next few seconds, the near real-time automated response will detect the stopped trail, restart it, and send an email notification. You can refresh the Trails page of the CloudTrail console to verify that the trail’s status is ON again.

      Within the next several minutes, the investigatory automated response will also begin. GuardDuty will detect the action that stopped the trail and enrich the data about the source of unexpected behavior. Security Hub will then ingest that information and optionally correlate with other security events.

      Follow the steps below to watch Security Hub for a finding with the type TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled:

  6. In the AWS Management Console, choose Services > Security Hub.
    1. Select Findings in the left pane.
    2. Select the Add filters field, then select Type.
    3. Select EQUALS, paste TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled into the field, then select Apply.
    4. Refresh your browser periodically until the finding is generated.
      Figure 8: Monitor Security Hub for your finding

While you wait on that detection, let’s dig into the components of automation.

How the sample automation works

This example incorporates two automated responses: a near real-time workflow and an investigatory workflow. The near real-time workflow provides a rapid response to an individual event, in this case the stopping of a trail. The goal is to restore the trail to a functioning state and alert security responders as quickly as possible. The investigatory workflow still includes a response to provide defense in depth and also uses services that support a more in-depth investigation of the incident.

Figure 9: Sample automation workflow

In the near real-time workflow, Amazon EventBridge monitors for the undesired activity. When a trail is stopped, AWS CloudTrail publishes an event on the EventBridge bus. An EventBridge rule detects the trail-stopping event and invokes a Lambda function to respond to the event by restarting the trail and notifying the security contact via an Amazon Simple Notification Service (SNS) topic.

In the investigative workflow, CloudTrail logs are monitored for undesired activities. For example, if a trail is stopped, there will be a corresponding log record. GuardDuty detects this activity and retrieves additional data points about the source IP that executed the API call. Two common examples of those additional data points in GuardDuty findings include whether the API call came from an IP address on a threat list, or whether it came from a network not commonly used in your AWS account. An AWS Lambda function responds by restarting the trail and notifying the security contact. Finally, the finding is imported into AWS Security Hub for additional investigation.

AWS Security Hub imports findings from AWS security services such as GuardDuty, Amazon Macie and Amazon Inspector, plus from any third-party product integrations you’ve enabled. All findings are provided to Security Hub in AWS Security Finding Format, which eliminates the need for data conversion. Security Hub correlates these findings to help you identify related security events and determine a root cause. Security Hub also publishes its findings to Amazon EventBridge to enable further processing by other AWS services such as AWS Lambda.

Respond step deep dive

Amazon EventBridge and AWS Lambda work together to respond to a security finding. Amazon EventBridge is a service that provides real-time access to changes in data in AWS services, your own applications, and Software-as-a-Service (SaaS) applications without writing code. In this example, EventBridge identifies a Security Hub finding that requires action and invokes a Lambda function that performs remediation. As shown in figure 10, the Lambda function both notifies the security operator via SNS and restarts the stopped CloudTrail.

Figure 10: Sample "respond" workflow

To set this response up, we looked for an event to indicate that a trail had stopped or was disabled. We knew that the GuardDuty finding Stealth:IAMUser/CloudTrailLoggingDisabled is raised when CloudTrail logging is disabled. Therefore, we configured the default event bus to look for this event. You can learn more about all of the available GuardDuty findings in the user guide.

How the code works

When Security Hub publishes a finding to EventBridge, it includes full details of the incident as discovered by GuardDuty. The finding is published in JSON format. If you review the details of the sample finding, note that it has several fields helping you identify the specific events that you’re looking for. Here are some of the relevant details:


{
   …
   "source": "aws.securityhub",
   …
   "detail": {
      "findings": [{
         …
         "Types": [
            "TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled"
         ],
         …
      }]
   }
}

You can build an event pattern using these fields, which an EventBridge filtering rule can then use to identify events and to invoke the remediation Lambda function. Below is a snippet from the CloudFormation template we provided earlier that defines that event pattern for the EventBridge filtering rule:


# pattern matches the nested JSON format of a specific Security Hub finding
      EventPattern:
        source:
        - aws.securityhub
        detail-type:
          - "Security Hub Findings - Imported"
        detail:
          findings:
            Types:
              - "TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled"

Once the rule is in place, EventBridge continuously scans the event bus for this pattern. When EventBridge finds a match, it invokes the remediating Lambda function and passes the full details of the event to the function. The Lambda function then parses the JSON fields in the event so that it can act as shown in this Python code snippet:


# extract trail ARN by parsing the incoming Security Hub finding (in JSON format)
trailARN = event['detail']['findings'][0]['ProductFields']['action/awsApiCallAction/affectedResources/AWS::CloudTrail::Trail']   

# description contains useful details to be sent to security operations
description = event['detail']['findings'][0]['Description']

The code also issues a notification to security operators so they can review the findings and insights in Security Hub and other services to better understand the incident and to decide whether further manual actions are warranted. Here’s the code snippet that uses SNS to send out a note to security operators:


#Sending the notification that the AWS CloudTrail has been disabled.
snspublish = snsclient.publish(
	TargetArn = snsARN,
	Message="Automatically restarting CloudTrail logging.  Event description: \"%s\" " %description
	)

While notifications to human operators are important, the Lambda function will not wait to take action. It immediately remediates the condition by restarting the stopped trail in CloudTrail. Here’s a code snippet that restarts the trail to reenable logging:


#Enabling the AWS CloudTrail logging
try:
	client = boto3.client('cloudtrail')
	enablelogging = client.start_logging(Name=trailARN)
	logger.debug("Response on enable CloudTrail logging- %s" %enablelogging)
except ClientError as e:
	logger.error("An error occured: %s" %e)

After the trail has been restarted, API activity is once again logged and can be audited. This can help provide relevant data for the remaining steps in the incident response process. The data is especially important for the post-incident phase, when your team analyzes lessons learned to prevent future incidents. You can also use this phase to identify additional steps to automate in your incident response.
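
For orientation, here's how these snippets might fit together in a single handler. The environment variable name and logging setup below are assumptions made for this sketch; the actual function is available in the GitHub repository linked at the end of this post.

import logging
import os

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

snsclient = boto3.client('sns')
snsARN = os.environ['SNS_TOPIC_ARN']  # assumption: topic ARN supplied by the CloudFormation stack

def lambda_handler(event, context):
    # Extract the trail ARN and description from the incoming Security Hub finding
    trailARN = event['detail']['findings'][0]['ProductFields']['action/awsApiCallAction/affectedResources/AWS::CloudTrail::Trail']
    description = event['detail']['findings'][0]['Description']

    # Notify security operations
    snsclient.publish(
        TargetArn=snsARN,
        Message="Automatically restarting CloudTrail logging. Event description: \"%s\" " % description
    )

    # Re-enable CloudTrail logging
    try:
        client = boto3.client('cloudtrail')
        enablelogging = client.start_logging(Name=trailARN)
        logger.debug("Response on enable CloudTrail logging - %s" % enablelogging)
    except ClientError as e:
        logger.error("An error occurred: %s" % e)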

Clean up

After you’ve completed the sample security response automation, we recommend that you remove the resources created in this walkthrough example from your account in order to minimize any charges associated with the trail in CloudTrail and data stored in S3.

Important: Deleting resources in your account can negatively impact the applications running in your AWS account. Verify that applications and AWS account security do not depend on the resources you’re about to delete.

Here are the clean-up steps:

  1. Delete the CloudFormation stack.
  2. Delete the trail you created in CloudTrail.
  3. If you created an S3 bucket for CloudTrail logs, you can also delete that S3 bucket.
  4. New accounts can try GuardDuty at no cost for 30 days. You can suspend or disable GuardDuty before the free trial period ends to avoid charges.
  5. Security Hub comes with a 30-day free trial. You can avoid charges by disabling the service before the trial period is over.

Summary

You’ve learned the basic concepts and considerations behind security response automation on AWS and how to use Amazon EventBridge, Amazon GuardDuty and AWS Security Hub to automatically re-enable AWS CloudTrail when it becomes disabled unexpectedly. As a next step, you may want to start building your own response automations and dive deeper into the AWS Security Incident Response Guide, NIST Cybersecurity Framework (CSF) or the AWS Cloud Adoption Framework (CAF) Security Perspective. You can explore additional automatic remediation solutions on the AWS Security Blog. You can find the code used in this example on GitHub.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about using this solution, start a thread in the Amazon EventBridge, Amazon GuardDuty, or AWS Security Hub forums, or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Cameron Worrell

Cameron is a Solutions Architect with a passion for security and enterprise transformation. He joined AWS in 2015.

Alex Tomic

Alex is an AWS Enterprise Solutions Architect focused on security and compliance. He joined AWS in 2014.

Nathan Case

Nathan is a Senior Security Strategist, and joined AWS in 2016. He is always interested to see where our customers plan to go and how we can help them get there. He is also interested in intel, combined data lake sharing opportunities, and open source collaboration. In the end Nathan loves technology and that we can change the world to make it a better place.

Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/rely-employee-attributes-from-corporate-directory-create-fine-grained-permissions-aws/

In my earlier post Simplify granting access to your AWS resources by using tags on AWS IAM users and roles, I explained how to implement attribute-based access control (ABAC) in AWS to simplify permissions management at scale. In that scenario, I talked about relying on attributes on your IAM users and roles for access control in AWS. But more often, customers manage workforce user identities with an identity provider (IdP) and want to use identity attributes from their IdP for fine-grained permissions in AWS. In this post I introduce a new capability that enables you to do just that.

In AWS, you can configure your IdP to allow workforce users federated access to AWS resources using credentials from your corporate directory. Along with user credentials, your directory also stores user attributes such as cost center, department, and email address. Now you can configure your IdP to pass in user attributes as tags in federated AWS sessions. These are called session tags. You can then control access to AWS resources based on these session tags. Moreover, when user attributes change or new users are added to your directory, permissions automatically apply based on these attributes. For example, developers can federate into AWS using an IAM role, but can only access resources specific to their project. This is because you define permissions that require the project attribute from their IdP to match the project tag on AWS resources. Additionally, AWS logs these attributes in AWS CloudTrail, enabling security administrators to track the user identity for a given role session.

In this post, I introduce session tags and walk you through an example of how to use session tags for ABAC and tracking user activity.

What are session tags?

Session tags are attributes passed in the AWS session. You can use session tags for access control in IAM policies and for monitoring. These tags are not stored in AWS and are valid only for the duration of the session. You define session tags just like tags in AWS—consisting of a customer-defined key and an optional value.

How do you pass session tags in an AWS session?

One of the most widely used mechanisms for requesting a session in AWS is by assuming an IAM role. For user identities stored in an external directory, you can configure your SAML IdP in IAM to allow your users federated access to AWS using IAM roles. To understand how to set up SAML federation using an IdP, read AWS Federated Authentication with Active Directory Federation Services (ADFS). If you’re using IAM users, you can also request a session in AWS using AssumeRole and GetFederationToken APIs or using AssumeRoleWithWebIdentity API for applications that require access to AWS resources.

For session tags, you can use all of the above-mentioned APIs to pass tags into your AWS session based on your use case. For details on how to use these APIs to pass session tags, please visit Tags in AWS Sessions.
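
For example, a minimal sketch of passing session tags through the AssumeRole API with boto3 might look like this. The role ARN, session name, and tag values are placeholders, and the caller needs both sts:AssumeRole and sts:TagSession permissions on the role:

import boto3

sts = boto3.client("sts")

# Assume a role and pass session tags (ARN, session name, and values are placeholders)
response = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/MyProjectResources",
    RoleSessionName="bob-session",
    Tags=[
        {"Key": "project", "Value": "Automation"},
        {"Key": "jobfunction", "Value": "SystemsEngineer"},
    ],
)

# Temporary credentials for the tagged session
credentials = response["Credentials"]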

What permissions do I need to use session tags?

To perform any action in AWS, developers need permissions. For example, to assume a role, your developers need sts:AssumeRole permission. Similarly with session tags, we’re introducing a new action, sts:TagSession, that is required to pass session tags in the session. Additionally, you can require and control session tags using existing AWS conditions:

Action: sts:TagSession
Use case: Required to pass attributes as session tags when using the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, or GetFederationToken APIs.
Where to add: The role's trust policy or the IAM user's permissions policy, based on the API you are using to pass session tags.

Condition key: aws:RequestTag
Use case: Use this condition to require specific tags in the session.
Actions that support the condition key: sts:TagSession

Condition key: aws:TagKeys
Use case: Use this condition key to control the tag keys that are allowed in the session.
Actions that support the condition key: sts:TagSession

Condition key: aws:PrincipalTag*
Use case: Use this condition in IAM policies to compare tags on AWS resources.
Actions that support the condition key: AWS global condition keys (all actions across all services support this condition key)

Note: The table above explains only the additional use cases that the keys now support. Support for existing use cases, such as IAM users and roles remains unchanged. For details please visit AWS Global Condition Keys.

Now, I’ll show you how to create fine-grained permissions based on user attributes from your directory and how permissions automatically apply based on attributes when employees switch projects within your organization.

Example: Grant employees access to their project resources in AWS based on their job function

Consider a scenario where your organization has deployed Amazon EC2 and Amazon RDS instances in your AWS account for your company’s production web applications. Your systems engineers manage the EC2 instances and database engineers manage the RDS instances. They both access AWS by federating into your AWS account from a SAML IdP. Your organization’s security policy requires employees to have access to manage only the resources related to their job function and the project they work on.

To meet these requirements, your cloud administrator, Michelle, implements attribute-based access control (ABAC) using the jobfunction and project attributes as session tags by following three steps:

  1. Michelle tags all existing EC2 and RDS instances with the corresponding project attribute.
  2. She creates a MyProjectResources IAM role and an IAM permissions policy for this role so that employees can access only the resources that match their jobfunction and project tags.
  3. She then configures your SAML IdP to pass the jobfunction and project attributes in the federated session when employees federate into AWS using the MyProjectResources role.

Let’s have a look at these steps in detail.

Step 1: Tag all the project resources

Michelle tags all the project resources with the appropriate project tag. This is important since she wants to create permission rules based on this tag to implement ABAC. To learn how to tag resources in EC2 and RDS, read tagging your Amazon EC2 resources and tagging Amazon RDS resources.
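
As a hedged illustration of what this tagging could look like programmatically (the instance ID and database ARN below are placeholders):

import boto3

# Tag an EC2 instance with its project (instance ID is a placeholder)
ec2 = boto3.client("ec2")
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "project", "Value": "Automation"}],
)

# Tag an RDS instance with the same project (ARN is a placeholder)
rds = boto3.client("rds")
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:999999999999:db:my-database",
    Tags=[{"Key": "project", "Value": "Automation"}],
)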

Step 2: Create an IAM role with permissions based on attributes

Next, Michelle creates an IAM role called MyProjectResources using the AWS Management Console or CLI. This is the role that your systems engineers and database engineers will assume when they federate into AWS to access and manage the EC2 and RDS instances respectively. To grant this role permissions, Michelle creates the following IAM policy and attaches it to the MyProjectResources role.

IAM Permissions Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": "rds:DescribeDBInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"rds:RebootDBInstance",
				"rds:StartDBInstance",
				"rds:StopDBInstance"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "DatabaseEngineer",
					"rds:db-tag/project": "${aws:PrincipalTag/project}"
				}
			}
		},
		{
			"Effect": "Allow",
			"Action": "ec2:DescribeInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"ec2:StartInstances",
				"ec2:StopInstances",
				"ec2:RebootInstances",
				"ec2:TerminateInstances"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "SystemsEngineer",
					"ec2:ResourceTag/project": "${aws:PrincipalTag/project}"
				}
			}
		}
	]
}

In the policy above, Michelle allows specific actions related to EC2 and RDS that the systems engineers and database engineers need to manage their project instances. In the condition element of the policy statements, Michelle adds a condition based on the jobfunction and project attributes to ensure engineers can access only the instances which belong to their jobfunction and have a matching project tag.

To ensure your systems engineers and database engineers can assume this role when they federate into AWS from your IdP, Michelle modifies the role’s trust policy to trust your SAML IdP as shown in the policy statement below. Since we also want to include session tags when engineers federate in, Michelle adds the new action sts:TagSession in the policy statement as shown below. She also adds a condition that requires the jobfunction and project attributes to be included as session tags when engineers assume this role.

Role Trust Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"Federated": "arn:aws:iam::999999999999:saml-provider/ExampleCorpProvider"
			},
			"Action": [
				"sts:AssumeRoleWithSAML",
				"sts:TagSession"
			],
			"Condition": {
				"StringEquals": {
					"SAML:aud": "https://signin.aws.amazon.com/saml"
				},
				"StringLike": {
					"aws:RequestTag/project": "*",
					"aws:RequestTag/jobfunction": [
						"SystemsEngineer",
						"DatabaseEngineer"
					]
				}
			}
		}
	]
}

Step 3: Configuring your SAML IdP to pass the jobfunction and project attributes as session tags

Once Michelle creates the role and permissions policy in AWS, she configures her SAML IdP to include the jobfunction and project attributes as session tags in the SAML assertion when engineers federate into AWS using this role.

To pass attributes as session tags in the federated session, the SAML assertion must contain the attributes with the following prefix:

https://aws.amazon.com/SAML/Attributes/PrincipalTag

The example given below shows a part of the SAML assertion generated from my IdP with two attributes (project:Automation and jobfunction:SystemsEngineer) that we want to pass as session tags.


<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:project">
		< AttributeValue >Automation<AttributeValue>
</ Attribute>
<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:jobfunction">
		< AttributeValue >SystemsEngineer<AttributeValue>
</ Attribute>

Note: This sample only contains the new properties in the SAML assertion. There are additional required fields in the SAML assertion that must be present to successfully federate into AWS. To learn more about creating SAML assertions with session tags, visit configuring SAML assertions for the authentication response.

AWS identity partners such as Ping Identity, OneLogin, Auth0, ForgeRock, IBM, Okta, and RSA have validated the end-to-end experience for this new capability with their identity solutions, and we look forward to additional partners validating this capability. To learn more about how to use these identity providers for configuring session tags, please visit integrating third-party SAML solution providers with AWS. If you are using Active Directory Federation Services (ADFS) for SAML federation with AWS, then please visit Configuring ADFS to start using session tags for attribute-based access control.

Now, when your systems engineers and database engineers federate into AWS using the MyProjectResources role, they only get access to their project resources based on the project and jobfunction attributes passed in their federated session. Session tags enabled Michelle to define unique permissions based on user attributes without having to create and manage multiple roles and policies. This helps simplify permissions management in her company.

Permissions automatically apply when employees change projects

Consider the same example with a scenario where your systems engineer, Bob, switches from the automation project to the integration project. Due to this switch, Michelle sets Bob’s project attribute in the IdP to integration. Now, the next time Bob federates into AWS, he automatically has access to resources in the integration project. Using session tags, permissions automatically apply when you update attributes or create new AWS resources with the appropriate attributes, without requiring any permission updates in AWS.

Track user identity using session tags

When developers federate into AWS with session tags, AWS CloudTrail logs these tags to make it easier for security administrators to track the user identity of the session. To view session tags in CloudTrail, your administrator Michelle looks for the AssumeRoleWithSAML event in the eventName filter of CloudTrail. In the example below, Michelle has configured the SAML IdP to pass three session tags: project, jobfunction, and userID. When developers federate into your account, Michelle views the AssumeRoleWithSAML event in CloudTrail to track the user identity of the session using the session tags project, jobfunction, and userID as shown below:
 

Figure 1: Search for the logged events

Note: You can use session tags in conjunction with the instructions to track account activity to its origin using AWS CloudTrail to trace the identity of the session.

Summary

You can use session tags to rely on employee attributes from your corporate directory to create fine-grained permissions at scale in AWS and simplify your permissions management workflows. To learn more about session tags, please visit Tags in AWS Sessions.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon IAM forum.

Want more AWS Security news? Follow us on Twitter.

Sulay Shah

Sulay is a Senior Product Manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Additional on-premises option for data localization with AWS

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/additional-on-premises-option-for-data-localization-with-aws/

Today, AWS released an updated resource — AWS Policy Perspectives-Data Residency — to provide an additional option for you if you need to store and process your data on premises. This white paper update discusses AWS Outposts, which offers a hybrid solution for customers that might find that certain workloads are better suited for on-premises management — whether for lower latency or other local processing needs.

Before this capability, you would select the nearest AWS Region to keep data as close as possible. By extending AWS infrastructure and services to your on-premises environments, you can now support workloads, including sensitive workloads, that need to remain on premises while leveraging the security and operational capabilities of AWS cloud services.

Read the updated whitepaper to learn why data residency doesn’t mean better data security, and to learn about an additional option available for you to address various data protection needs, including data localization.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Min Hyun

Min is the Global Lead for Growth Strategies at AWS. Her team’s mission is to set the industry bar in thought leadership for security and data privacy assurance in emerging technology, trends and strategy to advance customers’ journeys to AWS. View her other Security Blog publications here.

Digital signing with the new asymmetric keys feature of AWS KMS

Post Syndicated from Raj Copparapu original https://aws.amazon.com/blogs/security/digital-signing-asymmetric-keys-aws-kms/

AWS Key Management Service (AWS KMS) now supports asymmetric keys. You can create, manage, and use public/private key pairs to protect your application data using the new APIs via the AWS SDK. Similar to the symmetric key features we’ve been offering, asymmetric keys can be generated as customer master keys (CMKs), where the private portion never leaves the service, or as data key pairs, where the private portion is returned to your calling application encrypted under a CMK. The private portion of an asymmetric CMK is used only within AWS KMS hardware security modules (HSMs), which are designed so that no one, including AWS employees, can access the plaintext key material. AWS KMS supports the following asymmetric key types: RSA 2048, RSA 3072, RSA 4096, ECC NIST P-256, ECC NIST P-384, ECC NIST P-521, and ECC SECG P-256k1.

We’ve talked with customers and know that one popular use case for asymmetric keys is digital signing. In this post, I will walk you through an example of signing and verifying files using some of the new APIs in AWS KMS.

Background

A common way to ensure the integrity of a digital message as it passes between systems is to use a digital signature. A sender uses a secret along with cryptographic algorithms to create a data structure that is appended to the original message. A recipient with access to that secret can cryptographically verify that the message hasn’t been modified since the sender signed it. In cases where the recipient doesn’t have access to the same secret used by the sender for verification, a digital signing scheme that uses asymmetric keys is useful. The sender can make the public portion of the key available to any recipient to verify the signature, but the sender retains control over creating signatures using the private portion of the key. Asymmetric keys are used for digital signature applications such as trusted source code, authentication/authorization tokens, document e-signing, e-commerce transactions, and secure messaging.

AWS KMS supports what are known as raw digital signatures, where there is no identity information about the signer embedded in the signature object. A common way to attach identity information to a digital signature is to use digital certificates. If your application relies on digital certificates for signing and signature verification, we recommend you look at AWS Certificate Manager and Private Certificate Authority. These services allow you to programmatically create and deploy certificates with keys to your applications for digital signing operations. A common application of digital certificates is TLS termination on a web server to secure data in transit.

Signing and verifying files with AWS KMS

Assume that you have an application A that sends a file to application B in your AWS account. You want the file to be digitally signed so that the receiving application B can verify it hasn’t been tampered with in transit. You also want to make sure only application A can digitally sign files using the key because you don’t want application B to receive a file thinking it’s from application A when it was really from a different sender that had access to the signing key. Because AWS KMS is designed so that the private portion of the asymmetric key pair used for signing cannot be used outside the service or by unauthenticated users, you’re able to define and enforce policies so that only application A can sign with the key.

To start, application A will submit either the file itself or a digest of the file to the AWS KMS Sign API under an asymmetric CMK. If the file is less than 4KB, AWS KMS will compute a digest for you as a part of the signing operation. If the file is greater than 4KB, you must send only the digest you created locally and you must tell AWS KMS that you’re passing a digest in the MessageType parameter of the request. You can use any of several hashing functions in your local environment to create a digest of the file, but be aware that the receiving application in account B will need to be able to compute the digest using the same hash function in order to verify the integrity of the file. In my example, I’m using SHA256 as the hash function. Once the digest is created, AWS KMS uses the private portion of the asymmetric CMK to encrypt the digest using the signing algorithm specified in the API request. The result is a binary data object, which we’ll refer to as “the signature” throughout this post.
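
For illustration, here is one way to create that local digest using Python’s standard library; the input file name is a placeholder, and the output file matches the ExampleDigest file referenced in the CLI examples later in this post.

import hashlib

# Hash the file locally with SHA-256; the raw digest bytes are what you pass
# to the Sign API with MessageType=DIGEST.
with open("example-file.zip", "rb") as f:
    digest = hashlib.sha256(f.read()).digest()

with open("ExampleDigest", "wb") as f:
    f.write(digest)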

Once application B receives the file with the signature, it must create a digest of the file. It then passes this newly generated digest, the signature object, the signing algorithm used, and the CMK keyId to the Verify API. AWS KMS uses the corresponding public key of the CMK with the signing algorithm specified in the request to verify the signature. Instead of submitting the signature to the Verify API, application B could verify the signature locally by acquiring the public key. This might be an attractive option if application B didn’t have a way to acquire valid AWS credentials to make a request of AWS KMS. However, this method requires application B to have access to the necessary cryptographic algorithms and to have previously received the public portion of the asymmetric CMK. In my example, application B is running in the same account as application A, so it can acquire AWS credentials to make the Verify API request. I’ll describe how to verify signatures using both methods in a bit more detail later in the post.

Creating signing keys and setting up key policy permissions

To start, you need to create an asymmetric CMK. When calling the CreateKey API, you’ll pass one of the asymmetric values for the CustomerMasterKeySpec parameter. In my example, I’m choosing a key spec of ECC_NIST_P256, both because keys used with elliptic curve algorithms tend to be more efficient than those used with RSA-based algorithms and because AWS KMS requires the signing algorithm to match the key spec (ECC_NIST_P256 pairs with the ECDSA_SHA_256 algorithm used later in this post).
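
If you prefer the AWS SDK for Python (boto3) over the console or CLI, a minimal sketch of the CreateKey call might look like the following. The description is illustrative, and newer SDK versions also accept KeySpec in place of CustomerMasterKeySpec.

import boto3

kms = boto3.client("kms")

# Create an asymmetric CMK that can only be used for signing and verification.
response = kms.create_key(
    Description="Asymmetric CMK for signing files",
    KeyUsage="SIGN_VERIFY",
    CustomerMasterKeySpec="ECC_NIST_P256",
)
print(response["KeyMetadata"]["KeyId"])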

As a part of creating your asymmetric CMK, you need to attach a resource policy to the key to control which cryptographic operations the AWS principals representing applications A and B can use. A best practice is to use a different IAM principal for each application in order to scope down permissions. In this case, you want application A to only be able to sign files, and application B to only be able to verify them. I will assume each of these applications are running in Amazon EC2, and so I’ll create a couple of IAM roles.

  • The IAM role for application A (SignRole) will be given kms:Sign permission in the CMK key policy
  • The IAM role for application B (VerifyRole) will be given kms:Verify permission in the CMK key policy

The stanza in the CMK key policy document to allow signing should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow use of the key for digital signing",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/SignRole"},
	"Action": "kms:Sign",
	"Resource": "*"
}

The stanza in the CMK key policy document to allow verification should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow use of the key for verification",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/VerifyRole"},
	"Action": "kms:Verify",
	"Resource": "*"
}

Signing Workflow

Once you have created the asymmetric CMK and IAM roles, you’re ready to sign your file. Application A will create a message digest of the file and make a Sign request to AWS KMS with the digest, the asymmetric CMK keyId, and the signing algorithm. The CLI command to do this is shown below. Replace the key-id parameter with your CMK’s specific keyId.


aws kms sign \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab> \
	--message-type DIGEST \
	--signing-algorithm ECDSA_SHA_256 \
	--message fileb://ExampleDigest

I chose the ECDSA_SHA_256 signing algorithm for this example. See the Sign API specification for a complete list of supported signing algorithms.

After validating that the API call is authorized by the credentials available to SignRole, AWS KMS generates a signature over the digest and returns the CMK keyId, the signature, and the signing algorithm.
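
The same Sign request can be made from application A’s code with boto3. This is a minimal sketch rather than part of the original example; the key ID is the same placeholder used in the CLI command, and the digest comes from the local hashing step shown earlier.

import boto3

kms = boto3.client("kms")

with open("ExampleDigest", "rb") as f:
    digest = f.read()

# Ask AWS KMS to sign the locally computed digest.
sign_response = kms.sign(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    Message=digest,
    MessageType="DIGEST",
    SigningAlgorithm="ECDSA_SHA_256",
)

# Persist the binary signature so it can be sent to application B along with the file.
with open("Signature", "wb") as f:
    f.write(sign_response["Signature"])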

Verify Workflow 1 — Calling the verify API

Once application B receives the file and the signature, it computes the SHA-256 digest over the copy of the file it received. It then makes a Verify request to AWS KMS, passing this new digest, the signature it received from application A, the signing algorithm, and the CMK keyId. The CLI command to do this is shown below. Replace the key-id parameter with your CMK’s specific keyId.


aws kms verify \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab> \
	--message-type DIGEST \
	--signing-algorithm ECDSA_SHA_256 \
	--message fileb://ExampleDigest \
	--signature fileb://Signature

After validating that the verify request is authorized, AWS KMS uses the public portion of the CMK and the signing algorithm specified in the request to check that the signature is valid for the digest it received. If it is, AWS KMS returns a SignatureValid boolean of True, indicating that the original digest created by the sender matches the digest created by the recipient. Because the original digest is unique to the original file, the recipient can know that the file was not tampered with during transit.

One advantage of using the AWS KMS Verify API is that the caller doesn’t have to keep track of the specific public key matching the private key used to create the signature; the caller only has to know the CMK keyId and the signing algorithm used. Also, because all requests to AWS KMS are logged to AWS CloudTrail, you can audit that the signature and verification operations were both executed as expected. See the Verify API specification for more detail on available parameters.
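
For completeness, here is a sketch of the same Verify request from application B using boto3; the key ID is a placeholder, and the digest and signature files come from the earlier steps.

import boto3

kms = boto3.client("kms")

with open("ExampleDigest", "rb") as f:
    digest = f.read()
with open("Signature", "rb") as f:
    signature = f.read()

# AWS KMS checks the signature against the digest using the CMK's public key.
verify_response = kms.verify(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    Message=digest,
    MessageType="DIGEST",
    Signature=signature,
    SigningAlgorithm="ECDSA_SHA_256",
)
print(verify_response["SignatureValid"])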

Verify Workflow 2 — Verifying locally using the public key

Apart from using the Verify API directly, you can choose to retrieve the public key in the CMK using the AWS KMS GetPublicKey API and verify the signature locally. You might want to do this if application B needs to verify multiple signatures at a high rate and you don’t want to make a network call to the Verify API each time. In this method, application B makes a GetPublicKey request to AWS KMS to retrieve the public key. The CLI command to do this is below. Replace the key-id parameter with your CMK’s specific keyId.

aws kms get-public-key \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab>

Note that application B will need permission to make a GetPublicKey request to AWS KMS. The stanza in the CMK key policy document to allow the VerifyRole identity to download the public key should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow retrieval of the public key for verification",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/VerifyRole"},
	"Action": "kms:GetPublicKey",
	"Resource": "*"
}

Once application B has the public key, it can use your preferred cryptographic provider to perform the signature verification locally. Application B needs to keep track of the public key and signing algorithm used for each signature object it will verify locally. Using the wrong public key or the wrong signing algorithm will cause the verification of application A’s signature to fail.
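
As an illustration of local verification, the following sketch retrieves the public key with boto3 and verifies the signature with the third-party cryptography package (an assumption on my part; any provider that understands DER-encoded ECDSA keys and signatures will work). The key ID and file names are the same placeholders used above.

import boto3
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils
from cryptography.hazmat.primitives.serialization import load_der_public_key

kms = boto3.client("kms")

# GetPublicKey returns the DER-encoded public portion of the asymmetric CMK.
der_public_key = kms.get_public_key(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab"
)["PublicKey"]
public_key = load_der_public_key(der_public_key)

with open("ExampleDigest", "rb") as f:
    digest = f.read()
with open("Signature", "rb") as f:
    signature = f.read()

try:
    # Prehashed tells the library we are passing a digest rather than the raw file.
    public_key.verify(signature, digest, ec.ECDSA(utils.Prehashed(hashes.SHA256())))
    print("Signature is valid")
except InvalidSignature:
    print("Signature is NOT valid")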

Availability and pricing

Asymmetric keys and operations in AWS KMS are available now in the Northern Virginia, Oregon, Sydney, Ireland, and Tokyo AWS Regions with support for other regions planned. Pricing information for the new feature can be found at the AWS KMS pricing page.

Summary

I showed you a simple example of how you can use the new AWS KMS APIs to digitally sign and verify an arbitrary file. By having AWS KMS generate and store the private portion of the asymmetric key, you can limit use of the key for signing only to IAM principals you define. OIDC ID tokens, OAuth 2.0 access tokens, documents, configuration files, system update messages, and audit logs are but a few of the types of objects you might want to sign and verify using this feature.

You can also perform encrypt and decrypt operations under asymmetric CMKs in AWS KMS as an alternative to using the symmetric CMKs available since the service launched. Similar to how you can ask AWS KMS to generate symmetric keys for local use in your application, you can ask AWS KMS to generate and return asymmetric key pairs for local use to encrypt and decrypt data. Look for a future AWS Security Blog post describing these use cases. For more information about asymmetric key support, see the AWS KMS documentation page.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about the asymmetric key feature, please start a new thread on the AWS KMS Discussion Forum.

Want more AWS Security news? Follow us on Twitter.

Raj Copparapu

Raj Copparapu is a Senior Product Manager Technical. He’s a member of the AWS KMS team and focuses on defining the product roadmap to satisfy customer requirements. He has spent over five years innovating on behalf of customers to deliver products that help them secure their data in the cloud. Raj received his MBA from Duke University’s Fuqua School of Business and spent his early career working as an engineer and a business intelligence consultant. In his spare time, Raj enjoys yoga and spending time with his kids.

Announcing AWS Managed Rules for AWS WAF

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/announcing-aws-managed-rules-for-aws-waf/

Building and deploying secure applications is critical work, and the threat landscape is always shifting. We’re constantly working to reduce the pain of maintaining a strong cloud security posture. Today we’re launching a new capability called AWS Managed Rules for AWS WAF that helps you protect your applications without needing to create or manage the rules directly. We’ve also made multiple improvements to AWS WAF with the launch of a new, improved console and API that makes it easier than ever to keep your applications safe.

AWS WAF is a web application firewall. It lets you define rules that give you control over which traffic to allow or deny to your application. You can use AWS WAF to help block common threats like SQL injections or cross-site scripting attacks, and you can use it with Amazon API Gateway, Amazon CloudFront, and Application Load Balancer. Today it’s getting a number of exciting improvements. Creating rules is more straightforward with the introduction of the OR operator, allowing evaluations that would previously require multiple rules. The API experience has been greatly improved, and complex rules can now be created and updated with a single API call. We’ve removed the limit of ten rules per web access control list (ACL) with the introduction of the WAF Capacity Unit (WCU). The switch to WCUs allows the creation of hundreds of rules: each rule added to a web ACL consumes capacity based on the type of rule being deployed, and each web ACL has a defined WCU limit.

Using the New AWS WAF

Let’s take a look at some of the changes and turn on AWS Managed Rules for AWS WAF. First, I’ll go to AWS WAF and switch over to the new version.

Next I’ll create a new web ACL and add it to an existing API Gateway resource on my account.

Now I can start adding some rules to our web ACL. With the new AWS WAF, the rules engine has been improved. Statements can be combined with AND, OR, and NOT operators, allowing for more complex rule logic.

Screenshot of the WAF v2 Boolean operators

I’m going to create a simple rule that blocks any request that uses the HTTP method POST. Another cool feature is support for multiple text transformations, so for example, you could have all your requests transformed to decode HTML entities, and then made lowercase.

JSON objects now define web ACL rules (and web ACLs themselves), making them versionable assets you can match with your application code. You can also use these JSON documents to create or update rules with a single API call.
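
As a rough sketch of what that looks like outside the console (not a step from this walkthrough), the rules can be expressed as JSON-style structures and passed to a single CreateWebACL call using boto3’s wafv2 client. The web ACL name, metric names, and REGIONAL scope are placeholders; the managed rule group included here is the core rule set discussed in the next section.

import boto3

wafv2 = boto3.client("wafv2")

# The simple demo rule: block any request that uses the HTTP method POST.
block_post_rule = {
    "Name": "BlockHttpPost",
    "Priority": 0,
    "Statement": {
        "ByteMatchStatement": {
            "SearchString": b"POST",
            "FieldToMatch": {"Method": {}},
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            "PositionalConstraint": "EXACTLY",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockHttpPost",
    },
}

# A managed rule group; managed rules use OverrideAction instead of Action.
core_rule_set = {
    "Name": "AWSManagedRulesCommonRuleSet",
    "Priority": 1,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CoreRuleSet",
    },
}

wafv2.create_web_acl(
    Name="demo-web-acl",
    Scope="REGIONAL",  # use CLOUDFRONT for Amazon CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[block_post_rule, core_rule_set],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "demo-web-acl",
    },
)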

Using AWS Managed Rules for AWS WAF

Now let’s play around with something totally new: AWS Managed Rules. AWS Managed Rules give you instant protection. The AWS Threat Research Team maintains the rules, with new ones being added as additional threats are identified. Additional rule sets are available on the AWS Marketplace. Choose a managed rule group, add it to your web ACL, and AWS WAF immediately helps protect against common threats.

I’ve selected a rule group that protects against SQL attacks, and also enabled the core rule set, which covers some of the common threats and security risks described in the OWASP Top 10 publication. As soon as I create the web ACL and the changes are propagated, my app will be protected from a whole range of attacks such as SQL injections. Now let’s look at both rules that I’ve added to our ACL and see how things are shaping up.

Since my demo rule was quite simple, it doesn’t require much capacity. The managed rules use a bit more, but we’ve got plenty of room to add many more rules to this web ACL.

Things to Know

That’s a quick tour of the benefits of the new and improved AWS WAF. Before you head to the console to turn it on, there are a few things to keep in mind.

  • The new AWS WAF supports AWS CloudFormation, allowing you to create and update your web ACL and rules using CloudFormation templates.
  • There is no additional charge for using AWS Managed Rules. If you subscribe to managed rules from an AWS Marketplace seller, you will be charged the managed rules price set by the seller.
  • Pricing for AWS WAF has not changed.

As always, happy (and secure) building, and I’ll see you at re:Invent or on the re:Invent livestreams soon!

— Brandon

Ramp-Up Learning Guide available for AWS Cloud Security, Governance, and Compliance

Post Syndicated from Sireesh Pachava original https://aws.amazon.com/blogs/security/ramp-up-learning-guide-cloud-security-governance-compliance/

Cloud security is the top priority for AWS and for our customers around the world. It’s important that professionals have a way to keep up with this dynamically evolving area of cloud computing. Often, customers seek AWS guidance on cloud-specific security, governance, and compliance best practices, including skills upgrade plans. To address this need, AWS recently released the AWS Ramp-Up Learning Guide for AWS Cloud Security, Governance, and Compliance (PDF). AWS experts curated this learning plan to teach in-demand cloud skills and real-world knowledge that you can rely on to keep up with cloud security, governance, and compliance developments and grow your career.

The guide starts with AWS Cloud fundamentals and progresses all the way through the AWS Certified Security Specialty certification. You can use the guide to find answers to questions such as: What resources are available? How do I earn AWS credentials? In what order should I consume learning resources and training? Where do I find information on AWS events, blogs, and user groups to enhance my learning?

The list of resources includes free digital training offerings, classroom courses, videos, whitepapers, certifications, and other materials. The AWS Ramp-Up Guide is an extension to the AWS Training and Certification security learning path. To enroll in training and certification exams and track your progress, visit aws.training and set up a free account.

For a training plan customized for your requirements, contact your AWS Account Manager or contact us at aws.amazon.com/contact-us/aws-training/.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Sireesh Pachava

Sai Sireesh Pachava is a senior leader in the AWS Professional Services, Global Security, Risk & Compliance (SRC) practice. He specializes in solving complex issues across strategy, business risk, compliance, security, digital, and cloud. Prior to AWS, he held leadership roles at Russell Investments, Microsoft, Reuters, and KPMG. He holds an M.S. and MBA. His certifications include CRISC, CCP, US FedRAMP Level 300, and Internet Law. He’s a pro-bono Seattle director for the risk professional non-profit association PRMIA.

Kate Behbehani

Kate is a Senior Product Marketing leader at AWS. Previously, Kate served as Director, Product Marketing for Veritas’s Hybrid Cloud Data Management & Symantec’s SMB Data Protection solutions, where she developed go-to-market strategy & marketing programs. She’s held numerous senior positions in enterprise marketing & channel marketing throughout the UK, EMEA, & APJ. She has a bachelor’s degree from Southampton University and a Post Graduate Diploma in Marketing from the Chartered Institute of Marketing.

How to set up Sign in with Apple for Amazon Cognito

Post Syndicated from Jason Cai original https://aws.amazon.com/blogs/security/how-to-set-up-sign-in-with-apple-for-amazon-cognito/

Amazon Cognito user pools enables you to add user sign-in and sign-up to your mobile and web applications using a secure and scalable user directory. With Amazon Cognito user pools, your end users can sign in using a user name or password, or with a third-party identity service, such as Facebook or Google. The process of using a third-party identity service is called federation. With federation, you can build applications that retrieve information about your end users that they have provided to another service and have consented to give to your applications.

Amazon Cognito user pools now supports Sign in with Apple as an identity provider (IdP). You can now federate users using the Sign in with Apple service, map these users to a user directory, and retrieve standard authentication tokens from a user pool after the user authenticates with Apple using their Apple ID credentials.

Much like login with Facebook or Google, Sign in with Apple acts as an authorization server and verifies an end user with their Apple ID credentials. Sign in with Apple is built on the OpenID Connect (OIDC) protocol. As of this writing, there are a few notable differences between Sign in with Apple and other OpenID providers.

  • Using Sign in with Apple, an end user can choose whether to share the email linked to their Apple ID or use a generated one provided by Apple. The generated email will be of the form “<randomstring>@privaterelay.appleid.com”.
  • Unlike other identity providers, Sign in with Apple only honors the scopes requested for an end user on their first authentication through the service for the app configured on Apple’s developer portal. In other words, if you start requesting name after an end user has authenticated, for example, that information will not be returned.
  • Sign in with Apple returns the requested scopes in the initial return from their authorization endpoint for the first user authentication; however, only the email associated with the Apple ID is returned in a trusted form via the ID token.

How to set up Sign in with Apple and associate it with an Amazon Cognito user pool

The prerequisites for setting up the IdP end-to-end are:

  • An Amazon Cognito user pool with an application client
  • A domain that is associated with the user pool
  • An Apple ID with two-factor authentication enabled

Step 1: Set up Sign in with Apple service in Apple’s Developer portal

  1. Enroll in the Apple Developer Program with an Apple ID and then sign in using it.
  2. On the main developer portal page, select Certificates, IDs, & Profiles.
  3. On the left navigation bar, select Identifiers.
  4. On the Identifiers page, select the + icon.
  5. On the Register a New Identifier page, select App IDs.
  6. On the Register an App ID page, under App ID Prefix, take note of the Team ID value.
  7. Select the operating system the app will be run on (choose macOS for web-based apps).
  8. Provide a description in the Description text box.
  9. Provide a string for identifying the app under Bundle ID.
  10. Under Capabilities, select Sign in with Apple, and then select either Enable as a primary App ID (default) for use in a single Apple app or Group with an existing primary App ID for use in multiple Apple apps.
  11. Select Continue, review the configuration, and then select Register.
  12. On the Identifiers page, on the right, select App IDs, and then select Services ID.
  13. Select the + icon and, on the Register a New Identifier page, select Services IDs.
  14. On the Register a Services ID page, select the Sign in with Apple checkbox to enable the service, and then select Configure.
  15. Select the App ID that you created in step 1.1.
  16. Under Web Domain, put the domain associated with your user pool.

    NOTE: You do not have to verify the domain because the verification is required for a transaction method that Amazon Cognito does not use.

  17. Under Return URLs, type https://<your domain>/oauth2/idpresponse, select Add, and then select Save.
  18. Provide a description in the Description text box.
  19. Provide an identifier in the Identifier text box.

    Important: Make a note of this identifier because you will need it later.

    Figure 1: Provide an identifier

  20. Select Continue, review the information, and then select Register.
  21. On the left navigation bar, select Keys, and on the new page, select the + icon.
  22. On the Register a New Key page, select the check box next to Sign in with Apple.
  23. Select the App ID you created in 1.1 and then select Save.
  24. Provide a key name (can be anything).
  25. Select Continue, review the information, and then select Register.
  26. On the page you are redirected to, take note of the Key ID and download the .p8 file containing the private key.

Step 2: Set up the Sign in with Apple IdP in Amazon Cognito user pools console

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and then select the user pool that you will be using with Sign in with Apple.
  2. Under Federation, under the Identity providers tab, select Sign in with Apple.
  3. Provide the Apple Services ID, Team ID, Key ID, and private key for the Sign in with Apple application along with the desired scopes.

    Note: The private key is provided in the .p8 file; the contents are plain text. You can provide either the file or the contents within the file for the private key.

  4. Select the Attribute mapping tab, and then select the Apple tab.
  5. Under Capture, select the checkboxes next to the Apple attributes, and under User pool attribute, select the user pool attribute that will receive the value from each Apple attribute and that you want returned in the tokens from Amazon Cognito.

    Figure 2: Select checkboxes and user pool attribute

  6. To enable your app client to allow federation through the Sign in with Apple IdP, go to the App client settings tab under App Integration, find the app client that you want to allow to use Sign in with Apple, and select the Sign in with Apple check box.

Step 3: Get started with your application

  1. To test that you have everything configured correctly, under the configured app client, select the Launch Hosted UI link to bring you to a sample login page. Your configured Sign in with Apple provider will be displayed on this page through a button labelled Continue with Apple.

    Figure 3: “Continue with Apple” button

  2. (Optional) Perform a test authentication to ensure you have everything configured correctly on Apple’s and the Amazon Cognito side.

When a user federates using Sign in with Apple, the interactions between the end user, Amazon Cognito App Client, and Sign in with Apple looks like this:

Figure 4: Federation flow

  1. New user goes to app and selects Sign in with Apple
  2. App redirects to Apple authentication web page
  3. Apple requests Apple ID credentials
  4. User provides credentials
  5. Apple requests consent for information
  6. User chooses share/don’t share email (if requested)
  7. Redirect back to Cognito app with Authorization code
  8. Requests ID token using Authorization code, client ID, and generated client secret
  9. ID token response containing requested scopes
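
Amazon Cognito handles the code-for-token exchange in steps 7 through 9 for you. Your application only needs to send the user to the hosted UI with the Apple provider preselected, for example by building the authorization URL shown in the sketch below; the domain, client ID, and callback URL are placeholders, and SignInWithApple is the provider name Amazon Cognito uses for Apple.

from urllib.parse import urlencode

# Build a hosted UI URL that takes the user straight to the Sign in with Apple flow.
user_pool_domain = "https://example-domain.auth.us-east-1.amazoncognito.com"
params = {
    "response_type": "code",
    "client_id": "YOUR_APP_CLIENT_ID",
    "redirect_uri": "https://www.example.com/callback",
    "identity_provider": "SignInWithApple",
    "scope": "openid email",
}
login_url = f"{user_pool_domain}/oauth2/authorize?{urlencode(params)}"
print(login_url)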

Tips for using Sign in with Apple in your application

  • If you want to revoke the private key associated with the Sign in with Apple service, create a new private key in the Apple developer portal and provide it to Amazon Cognito prior to revoking the old key. Doing so will ensure that you do not invalidate any ongoing end-user authentication on Apple’s side.
  • If you decide to increase the requested scopes and want the additional information from existing users, those users will have to go to appleid.apple.com and, under Apps & Websites Using Apple ID, select the application, select Stop using Apple ID, and then federate again using Sign in with Apple.
  • The name provided by Sign in with Apple is not verified in any manner and should only be used for non-essential features; for example, a welcome message on the landing UI of your app after an end user logs in.
  • If you get an “invalid redirect_url” error message on Apple’s authentication page and the redirect URL in the request is correct, check that you’ve provided the Service Identifier and not the Application Identifier for the Sign in with Apple IdP settings in Amazon Cognito user pools.

For more information, see Adding Social Identity Providers to a User Pool in the Amazon Cognito Developer Guide. You can reach us by posting to the Amazon Cognito forums. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Jason Cai

Jason is a Software Development Engineer for Amazon Cognito. He holds a Master of Science degree in Electrical and Computer Engineering with a focus in Software Engineering and Systems from The University of Texas at Austin.

Use attribute-based access control with AD FS to simplify IAM permissions management

Post Syndicated from Louay Shaat original https://aws.amazon.com/blogs/security/attribute-based-access-control-ad-fs-simplify-iam-permissions-management/

AWS Identity and Access Management (IAM) allows customers to provide granular access control to resources in AWS. One approach to granting access to resources is to use attribute-based access control (ABAC) to centrally govern and manage access to your AWS resources across accounts. ABAC lets you scale your authorization strategy by granting access to groups of resources, as specified by tags, instead of managing long lists of individual resources. The new ability to include tags in sessions, combined with the ability to tag IAM users and roles, means that you can now incorporate user attributes from your AD FS environment as part of your tagging and authorization strategy.

In other words, you can use ABAC to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, as an administrator you can use a single IAM policy that grants developers in your organization access to AWS resources that match the developers’ project tag. As the team of developers adds resources to projects, permissions are automatically applied based on attributes (tags, in this case). As a result, each new resource that gets added requires no update to the IAM permissions policy.

In this blog post, I walk you through how to enable AD FS to pass tags as part of the SAML 2.0 token, so that you can enable ABAC for your AWS resources.

AD FS federated authentication process

The following diagram describes the process that a user follows to authenticate to AWS by using Active Directory and AD FS as the identity provider:
 

Figure 1: AD FS federation to AWS

  1. A corporate user accesses the corporate Active Directory Federation Services (AD FS) portal sign-in page and provides their Active Directory authentication credentials.
  2. AD FS authenticates the user against Active Directory.
  3. Active Directory returns the user’s information, including Active Directory group membership information.
  4. AD FS dynamically builds a list of Amazon Resource Names (ARNs) for IAM Roles in one or more AWS accounts; these mappings are defined in advance by the administrator and rely on user attributes and Active Directory group memberships.
  5. AD FS sends a signed SAML 2.0 token to the user’s browser with a redirect to post the token to AWS Security Token Service (AWS STS), including the attributes that you define in the claim rules.
  6. Temporary credentials are returned using STS AssumeRoleWithSAML.
  7. The user is authenticated and provided access to the AWS Management Console.

Prerequisites

Attribute-based access control in AWS relies on the use of tags for access-control decisions. Therefore, it’s important to have in place a tagging strategy for your resources. Please see AWS Tagging Strategies.

Implementing ABAC enables organizations to enhance the use of tags from an operational and billing construct to a security construct. Ensuring that tagging is enforced and secure is essential to an enterprise-wide strategy.

For more information about enforcing a tagging policy, see the blog post Enforce Centralized Tag Compliance Using AWS Service Catalog, DynamoDB, Lambda, and CloudWatch Events.

AD FS session tagging setup

After you’ve set up AD FS federation to AWS, you can enable additional attributes to be sent as part of the SAML token. For information about how to enable AD FS, see the blog post AWS Federated Authentication with Active Directory Federation Services (AD FS).

Follow these steps to send standard Active Directory attributes to AWS in the SAML token:

  1. Open Server Manager, choose Tools, then choose AD FS Management.
  2. Under Relying Party Trusts, choose AWS.
  3. Choose Edit Claim Issuance Policy, choose Add Rule, choose Send LDAP Attributes as Claims, then choose Next.
  4. On the Edit Rule page, add the requested details.
    For example, to create a Department Attribute claim rule, add the following details:

    • Claim rule name: Department Attribute
    • Attribute Store: Active Directory
    • LDAP Attribute: Department
    • Outgoing Claim Type:
      https://aws.amazon.com/SAML/Attributes/PrincipalTag:<department> (where <department> is the tag that will be passed in the session)

     

    Figure 2: Claim Rule

  5. Repeat the previous step for each attribute you want to send, modifying the details as necessary. For more information about how to configure claim rules, see Configuring Claim Rules in the Windows Server 2012 AD FS Deployment Guide.

Send custom attributes to AWS as part of a federated session

Session tags in AWS can be derived from Active Directory custom attributes as well as from the standard attributes demonstrated in the example above. For more information on custom attributes, please see How to Create a Custom Attribute in Active Directory.

To confirm that AD FS is configured correctly, follow these steps:

  1. Perform an identity provider (IdP) initiated authentication. Go to the following URL: https://<your.domain.name>/adfs/ls/idpinitiatedsignon.aspx, where <your.domain.name> is the DNS name of your AD FS server.
  2. Select Sign in to one of the following sites, then choose AWS from the dropdown:
     
    Figure 3: Choose AWS

  3. Choose Sign in, then enter your credentials.
  4. After you’ve successfully logged into AWS, navigate to AWS CloudTrail events within 15 minutes of your login event.
  5. Filter on AssumeRoleWithSAML, then locate your login event and look under principalTags to ensure you see the tags that you configured.
     
    Figure 4: CloudTrail – AssumeRoleWithSAML Event

    After you see the tags were sent, you’re ready to use the tags to build your IAM policies.

The following is an example policy that uses multiple tags.

Example: grant IAM users access to your AWS resources by using tags

In this example, I assume that two tags are passed from AD FS to enable you to build your ABAC tagging strategy: project and department. The example also assumes that you have multiple teams of developers who need permissions to start and stop specific Amazon Elastic Compute Cloud (Amazon EC2) instances, based on their project allocation. Because these EC2 instances are part of Amazon EC2 Auto Scaling groups, they come and go depending on scaling conditions, so it would be impractical to write policies that refer to lists of individual EC2 instances; writing policies against the tags on the EC2 instances is a more manageable approach. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element, and all resources in the Resource element of the policy. In the Condition element of the policy, I use two conditions:

  • A matching statement where the resource tag ec2:ResourceTag/project matches the principal tag aws:PrincipalTag/project.
  • A matching statement where the resource tag ec2:ResourceTag/department matches the principal tag aws:PrincipalTag/department.

This ensures that the principal can start and stop an instance only if the project and department tags on the instance match the values of the tags on the principal. Attaching this policy to your developer roles or groups simplifies permissions management, because you only need to manage a single policy for all your development teams that require permissions to start and stop instances, and you can rely on tag values to specify the resources.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/project": "${aws:PrincipalTag/project}",
                    "ec2:ResourceTag/department": "${aws:PrincipalTag/department}"
                }
            }
        }
    ]
}

This policy will ensure that the user can only start and stop EC2 instances for the resources that are assigned to their department and project.

Conclusion

In this post, I’ve shown how you can enable AD FS to pass tags as part of the SAML token, so that you can enable ABAC for your AWS resources to simplify permissions management at scale.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Single Sign-On forum.

Want more AWS Security news? Follow us on Twitter.

Louay Shaat

Louay is a Solutions Architect with AWS based out of Melbourne. He spends his days working with customers, from startups to the largest of enterprises, helping them build cool new capabilities and accelerate their cloud journey. He has a strong focus on security and automation, helping customers improve their security, risk, and compliance in the cloud.

New for Identity Federation – Use Employee Attributes for Access Control in AWS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-for-identity-federation-use-employee-attributes-for-access-control-in-aws/

When you manage access to resources on AWS or many other systems, you most probably use Role-Based Access Control (RBAC). When you use RBAC, you define access permissions to resources, group these permissions in policies, assign policies to roles, and assign roles to entities such as a person, a group of people, a server, or an application. Many AWS customers told us they do this to simplify granting access permissions to related entities, such as people sharing similar business functions in the organization.

For example, you might create a role for a finance database administrator and give that role access to the tables and compute resources necessary for finance. When Alice, a database admin, moves into that department, you assign her the finance database administrator role.

On AWS, you use AWS Identity and Access Management (IAM) permissions policies and IAM roles to implement your RBAC strategy.

The multiplication of resources makes it difficult to scale. When a new resource is added to the system, system administrators must add permissions for that new resource to all relevant policies. How do you scale this to thousands of resources and thousands of policies? How do you verify that a change in one policy does not grant unnecessary privileges to a user or application?

Attribute-Based Access Control
To simplify the management of permissions, in the context of an ever-growing number of resources, a new paradigm emerged: Attribute-Based Access Control (ABAC). When you use ABAC, you define permissions based on matching attributes. You can use any type of attribute in the policies: user attributes, resource attributes, or environment attributes. Policies are IF … THEN rules, for example: IF user attribute role == manager THEN she can access file resources having attribute sensitivity == confidential.

Using ABAC permission control allows you to scale your permission system, because you no longer need to update policies when adding resources. Instead, you ensure that resources have the proper attributes attached to them. ABAC allows you to manage fewer policies because you do not need to create policies per job role.

On AWS, attributes are called tags. You can attach tags to resources such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, AWS Identity and Access Management (IAM) users, and many others. The ability to tag resources, combined with the ability to define permission conditions on tags, effectively allows you to adopt the ABAC paradigm to control access to your AWS resources.
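
As a small, hedged illustration of attaching those attributes in practice (the instance ID, user name, and tag value below are placeholders), you can tag an EC2 instance and an IAM user with the same key so that ABAC conditions can match on it:

import boto3

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

# Tag an EC2 instance with the attribute used for access decisions.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "CostCenter", "Value": "blue"}],
)

# Tag an IAM user with the same attribute so principal and resource can be matched.
iam.tag_user(
    UserName="example-user",
    Tags=[{"Key": "CostCenter", "Value": "blue"}],
)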

You can learn more about how to use ABAC permissions on AWS by reading the new ABAC section of the documentation, taking the tutorial, or watching Brigid’s session at re:Inforce.

This was a big step, but it only worked if your user attributes were stored in AWS. Many AWS customers manage identities (and their attributes) in another source and use federation to manage AWS access for their users.

Pass in Attributes for Federated Users
We’re excited to announce that you can now pass user attributes in the AWS session when your users federate into AWS, using standards-based SAML. You can now use attributes defined in external identity systems as part of attribute-based access control decisions within AWS. Administrators of the external identity system manage user attributes and define the attributes to pass in during federation. The attributes you pass in are called “session tags”. Session tags are temporary tags that are only valid for the duration of the federated session.

Granting access to cloud resources using ABAC has several advantages. One of them is you have fewer roles to manage. For example, imagine a situation where Bob and Alice share the same job function, but different cost centers; and you want to grant access only to resources belonging to each individual’s cost center. With ABAC, only one role is required, instead of two roles with RBAC. Alice and Bob assume the same role. The policy will grant access to resources where their cost center tag value matches the resource cost center tag value. Imagine now you have over 1,000 people across 20 cost centers. ABAC can reduce the cost center roles from 20 to 1.

Let us consider another example. Let’s say your systems engineer configures your external identity system to include CostCenter as a session tag when developers federate into AWS using an IAM role. All federated developers assume the same role, but are granted access only to AWS resources belonging to their cost center, because permissions apply based on the CostCenter tag included in their federated session and on the resources.

Let’s illustrate this example with the diagram below:


In the figure above, blue, yellow, and green represent the three cost centers my workforce users are attached to. To set up ABAC, I first tag all project resources with their respective CostCenter tags and configure my external identity system to include the CostCenter tag in the developer session. The IAM role in this scenario grants access to project resources based on the CostCenter tag. The IAM permissions might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:DescribeInstances"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances","ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            }
        }
    ]
}

The access will be granted (Allow) only when the condition matches: when the value of the resources’ CostCenter tag matches the value of the principal’s CostCenter tag. Now, whenever my workforce users federate into AWS using this role, they only get access to the resources belonging to their cost center based on the CostCenter tag included in the federated session.

If a user switches from cost center green to blue, your system administrator updates the external identity system with CostCenter = blue, and permissions in AWS automatically apply to grant access to the blue cost center’s AWS resources, without requiring any permissions updates in AWS. Similarly, when your system administrator adds a new workforce user in the external identity system, that user immediately gets access to the AWS resources belonging to their cost center.

We have worked with Auth0, ForgeRock, IBM, Okta, OneLogin, Ping Identity, and RSA to ensure the attributes defined in their systems are correctly propagated to AWS sessions. You can refer to their published guidelines on configuring session tags for AWS for more details. If you are using another identity provider, you may still be able to configure session tags as long as it supports the industry standards SAML 2.0 or OpenID Connect (OIDC). We look forward to working with additional identity providers to certify session tags with their identity solutions.

Session tags are available in all AWS Regions today at no additional cost. You can read our new session tags documentation page to follow step-by-step instructions to configure an ABAC-based permission system.

— seb

15 additional AWS services receive DoD Impact Level 4 and 5 authorization

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/15-additional-aws-services-receive-dod-impact-level-4-and-5-authorization/

I’m pleased to announce that the Defense Information Systems Agency (DISA) has extended the Provisional Authorization to Operate (P-ATO) of AWS GovCloud (US) Regions for Department of Defense (DoD) workloads at DoD Impact Levels (IL) 4 and 5 under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG). Our authorizations at DoD IL 4 and IL 5 allow DoD Mission Owners to process unclassified, mission critical workloads for National Security Systems in the AWS GovCloud (US) Regions.

AWS successfully completed an independent, third-party evaluation that confirmed we effectively implement over 400 security controls using applicable criteria from NIST SP 800-53 Rev 4, the US General Services Administration’s FedRAMP High baseline, the DoD CC SRG, and the Committee on National Security Systems Instruction No. 1253 at the High Confidentiality, High Integrity, and High Availability impact levels.

In addition to a P-ATO extension through November 2020 for the existing 24 services, DISA authorized an additional 15 AWS services at DoD Impact Levels 4 and 5 as listed below.

Newly authorized AWS services at DoD Impact Levels 4 and 5

With these additional 15 services, we can now offer AWS customers a total of 39 services, authorized to store, process, and analyze DoD mission critical data at Impact Level 4 and Impact Level 5.

To learn more about AWS solutions for DoD, please see our AWS solution offerings. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Tyler Harding

Tyler is the DoD Compliance Program Manager within AWS Security Assurance. He has over 20 years of experience providing information security solutions to federal civilian, DoD, and intelligence agencies.

re:Invent 2019 guide to AWS Cryptography sessions, workshops, and chalk talks at AWS

Post Syndicated from Phil Lin original https://aws.amazon.com/blogs/security/reinvent-2019-guide-to-aws-cryptography-sessions-workshops-and-chalk-talks-at-aws/

AWS re:Invent 2019 is just over a week away! We have many Security, Identity, and Compliance sessions planned, and this post covers the AWS Cryptography-related breakout sessions, workshops, builders sessions, and chalk talks at AWS re:Invent 2019.

The AWS Cryptography mission is to help you get encryption right. We build tools that help you navigate this process, whether we’re helping you secure the encryption keys that you use in algorithms or the certificates used in asymmetric cryptography.

AWS Certificate Manager

SEC218-R – Deploying private certificates using ACM Private CA
Organizations are looking at projects requiring a private certificate infrastructure like service meshes for microservices, full path encryption of traffic, device manufacturing, and app development and deployment. In this session, we discuss how to deploy AWS Certificate Manager Private Certificate Authority to provide certificate infrastructure and walk through a few examples of projects like these. During the session, learn how to build a CA hierarchy, choose the correct CA templates, configure IAM permission options, and manage certificate lifecycle. Participants will be able to apply these lessons and use cases to their own PKI infrastructure to accelerate their projects. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Chalk Talk
Todd Cignetti, Josh Rosenthol

SEC314-R – Building and operating a private certificate authority on AWS
In this workshop, we cover private certificate management on AWS employing the concepts of least privilege, separation of duties, monitoring for privileged actions, and automation. You learn operational aspects of creating a complete certificate-authority (CA) hierarchy, building a simple web app, and issuing a private certificate. You learn how job functions, including CA Admins, application developers, and security admins, can follow the principle of least privilege to perform various functions associated with certificate management. The workshop includes quizzes throughout with information to enhance your understanding of the AWS Certificate Manager Private Certificate Authority capability. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Workshop
Ram Ramani

AWS CloudHSM

SEC305-R – Achieving security goals with AWS CloudHSM
In this talk, we compare AWS CloudHSM with other AWS cryptography services for common use cases. We dive deep on how to build scalable, reliable workloads with CloudHSM, and we teach you how to configure the service for performance, error resilience, and cross-region redundancy. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Session
Avni Rambhia

SEC406-R – Deep dive on AWS CloudHSM
Organizations building applications that handle confidential or sensitive data are subject to many types of regulatory requirements. They also often rely on hardware security modules (HSMs) to provide validated control of encryption keys and cryptographic operations. AWS CloudHSM is a cloud-based HSM that enables you to easily generate and use your own encryption keys on the AWS Cloud using FIPS 140-2 Level 3 validated HSMs. In this talk, we demonstrate best practices in configuring and scaling your CloudHSM cluster, implementing cross-region disaster recovery, and optimizing throughput. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Chalk Talk
Rohit Mathur, Avni Rambhia

AWS Key Management Service

SEC340-R – Using AWS KMS for data protection, access control, and audit
This session focuses on how customers are using AWS Key Management Service (AWS KMS) to raise the bar for security and compliance with their workloads. Along with a detailed explanation of how AWS KMS fits into the AWS suite of services, we walk you through popular and sophisticated examples of how AWS KMS can be deployed in the context of access control, separation of duties, data protection, and auditability. We also cover the latest developments in AWS KMS functionality that will further expand the range of use cases to include additional cryptographic capabilities and system integrations. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Session
Raj Copparapu, Peter O’Donnell

SEC322-R – Deep dive into AWS KMS
In this session, learn the dos and don’ts of using AWS Key Management Service (AWS KMS). We cover topics such as envelope encryption, encryption context, and permissions. We also dig into common scenarios that customers encounter. At the end of this presentation, you leave with a working knowledge of how to use the permissions and authorization systems built into AWS KMS and with an understanding of how to appropriately encrypt data using AWS KMS. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Chalk Talk
Paul Radulovic, Jim Irving

SEC337 – Toyota Motor North America: Securing the cloud with AWS KMS
Imagine being tasked with collecting, analyzing, and securing data from hundreds of sources around the world, in multiple cloud and on-premises environments. Toyota Motor North America, along with Booz Allen Hamilton, has created a secure, cloud-native solution to analyze billions of messages per day using AWS Key Management Service (AWS KMS). We discuss how AWS KMS with AWS native services provides granular access and secures corporate assets with data segregation using AWS KMS encryption. Toyota uses AWS Glue, Amazon Athena, and Amazon SageMaker to generate actionable intelligence in its corporate IT and vehicle telematics environments to solve its business and analytics challenges.

Session
Raj Copparapu, Matthew Costello (Booz Allen Hamilton), Kell Rozman (Toyota)

SEC401-R – Using the AWS Encryption SDK for multi-master key encryption
In this workshop, learn the basics of client-side encryption, perform encrypt/decrypt operations using AWS Key Management Service (AWS KMS) and the AWS Encryption SDK, and discuss security and performance considerations when implementing client-side encryption in your software. We cover the basic challenges of this domain: a best practice for protecting data end-to-end with client-side encryption; KMS-style services and their uses, including AWS KMS; the open-source, open-format AWS Encryption SDK; and considerations for advanced integrations, such as performance trade-offs and high-availability strategies. All attendees need a laptop, an active AWS account, an AWS IAM administrator, and familiarity with core AWS services. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Workshop
Liz Roth, Jamie Angell

AWS Secrets Manager

SEC354-R – How the BBC uses AWS Secrets Manager to manage secrets
Join this chalk talk to hear from the BBC about their journey adopting AWS Secrets Manager for managing the lifecycle of their secrets such as database passwords, API keys, and third-party keys. In this session, you learn the key features and benefits of Secrets Manager and what factors to consider while adopting Secrets Manager across your enterprise. You will also learn how the BBC chose to go all in on Secrets Manager to meet their secrets management needs. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Chalk Talk
Divya Sridhar, Andrew Carlson

SEC302-R – DevSecOps: Integrating security into pipelines
In this workshop, you practice running an environment with a test and production deployment pipeline. Along the way, we cover topics such as static code analysis, dynamic infrastructure review, and workflow types. You also learn how to update your process in response to security events. We write new AWS Lambda functions and incorporate them into the pipeline, and we consider capabilities such as AWS Systems Manager Parameter Store and AWS Secrets Manager. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Workshop
Jonathan VanKim, Nathan Case

GPSTEC418-R – Securing your .NET container secrets
Although this Global Partner Summit builders session is open to anyone, it is geared toward current and potential AWS Partner Network Partners. As customers move .NET workloads to the cloud, many start to consider containerizing their applications because of the agility and cost savings that containers provide. Combine those compelling drivers with the multi-OS capabilities that come with .NET Core, and customers have an exciting reason to migrate their applications. A primary question is how they can safely store secrets and sensitive configuration values in containerized workloads. In this builders session, learn how to safely containerize an ASP.NET Core application while leveraging services like AWS Secrets Manager and AWS Fargate. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Builders Session
Carmen Puccio

MOB318-R – AWS AppSync does that: Support for alternative data sources
AWS AppSync supports a number of data sources out of the box, but can also support a variety of alternative data sources, including Amazon ElastiCache and Amazon Neptune. During this chalk talk, we discuss how to GraphQL-ify subscriptions to alternative data sources, including AWS services such as AWS Secrets Manager and AWS Step Functions. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Chalk Talk
Josh Kahn, Sarah Vine

Other cryptography-related sessions you might be interested in

AIM327 – Security for ML environments with Amazon SageMaker, featuring Vanguard
Amazon SageMaker is a modular, fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. In this session, we dive deep into the security configurations of Amazon SageMaker components, including notebooks, training, and hosting endpoints. A representative from Vanguard joins us to discuss the company’s use of Amazon SageMaker and its implementation of key controls in a highly regulated environment, including fine-grained access control, end-to-end encryption in transit, encryption at rest with customer master keys (CMKs), private connectivity to all Amazon SageMaker API operations, and comprehensive audit trails for resource and data access. If you want to build secure ML environments, this session is for you.

Session
Ilya Epshteyn, Ritesh Shah

CMP335 – Streamlining Amazon EC2 instance provisioning and management
Provisioning and managing instances is fundamental to creating a secure, scalable environment for your application. This session guides you through recommended practices for selecting instance types, provisioning resources, connecting to instances, building automation and governance, and monitoring and optimizing instance usage for your workloads. Learn how to move seamlessly from a proof of concept to an automated production environment using launch templates and newly launched features. We also cover some best practices and share tips on how you can simplify your instance launch experience.

Chalk Talk
Saloni Sonpal, Laura Thomson

CON205-R – Deploying applications using Amazon EKS
Amazon Elastic Kubernetes Service (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. In this hands-on workshop, we cover how to set up Amazon EKS to run common production applications, including how to build a deployment pipeline, perform code updates and rollbacks with health checks, run batch workloads, set up load balancing, and manage secrets. This is the second of three workshops for running Kubernetes on AWS. Come prepared to build with a laptop; AWS credits are provided. (Note that this session is repeated three more times during the week and the additional session(s) is denoted with a suffix of “-R1, -R2, -R3”.)

Workshop
Michael Hausenblas, Theodore Salvo

DAT303 – Data security best practices on Amazon DynamoDB
In this session, learn about the security features built into Amazon DynamoDB and how you can best use them to protect your data. We show you how customers are using the available options for controlling access to their tables and the content stored within those tables. We also show you how customers are protecting the contents of their tables with encryption, and how they monitor access to their data.

Chalk Talk
Somu Perianayagam, Padma Malligarjunan

DOP409-R – Faster Cryptography in Java with Amazon Corretto Crypto Provider (ACCP)
In this session, learn how to integrate Amazon Corretto Crypto Provider (ACCP) into a sample Java application to significantly speed up the common cryptographic algorithms it performs. Then use Amazon CloudWatch to measure how ACCP improves both the latency and the throughput of the sample application. Please bring your laptop. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Builders Session
Petr Praus

MGT406-R – Eliminate bastion hosts with AWS Systems Manager Session Manager
AWS Systems Manager Session Manager improves a customer’s security posture for instance access with a browser-based and CLI interactive shell experience that requires no open inbound ports or access/jump servers, and enables customer key encryption using AWS KMS. With IAM access control, sessions audited using AWS CloudTrail, and session output logged to Amazon S3 or Amazon CloudWatch Logs, Session Manager makes it easy to control and secure access to instances in operational scenarios while complying with corporate policies and security best practices. Dive deep with the Session Manager team to see how it works for Linux or Windows instances, in the cloud, or on-premises. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Builders Session (various speakers, each with 1 session)
Spiros Liolis, Nitika Goyal

SEC205-R – The fundamentals of AWS cloud security
The services that make up AWS are many and varied, but the set of concepts you need to secure your data and infrastructure is simple and straightforward. By the end of this session, you know the fundamental patterns that you can apply to secure any workload you run in AWS with confidence. We cover the basics of network security, the process of reading and writing access management policies, and data encryption. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Session
Becky Weiss

SEC319-R – Deep dive on security in Amazon S3
At AWS, security is our top priority, and Amazon Simple Storage Service (Amazon S3) provides some of the most advanced data-security features available in the cloud today to help you mitigate security risks. In this chalk talk, learn directly from the AWS engineering team that builds and maintains Amazon S3 security functionality such as encryption, block public access, and much more. Bring your feedback, questions, and expertise to discuss innovative ways to ensure that your data is available only to the users and applications that need it. (Note that this session is repeated once more during the week and the additional session(s) is denoted with a suffix of “-R1”.)

Chalk Talk
Sam Parmett, Felix Davis

SEC348-R – Protecting sensitive data in your AWS workloads
As you start moving your data to AWS, you want to employ the appropriate controls and mechanisms to protect it. In this builders session, learn how to protect data on AWS using services such as AWS Identity and Access Management (IAM), AWS Key Management Service (AWS KMS), AWS CloudHSM, and AWS Secrets Manager. In particular, learn about data protection best practices that you can incorporate into your AWS architecture and use in the pursuit of your security and compliance objectives. (Note that this session is repeated three more times during the week and the additional session(s) is denoted with a suffix of “-R1, -R2, -R3”.)

Builders Session (various speakers, each with 1 session)
Ben Eichorst, Nigel Harris, Somasundaram Subbu, Soumya Sagiri

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Phil Lin

Phil Lin is a Senior Manager, Product Marketing for Security, Identity, and Compliance. Outside of work you’ll find him enjoying time with his wife and kids, reading the occasional fix-it-yourself book, and finally learning D&D with his kids.

How to use CI/CD to deploy and configure AWS security services with Terraform

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/how-use-ci-cd-deploy-configure-aws-security-services-terraform/

Like the infrastructure your applications are built on, security infrastructure can be handled using infrastructure as code (IAC) and continuous integration/continuous deployment (CI/CD). In this post, I’ll show you how to build a CI/CD pipeline using AWS Developer Tools and HashiCorp’s Terraform platform as an IAC tool for AWS Web Application Firewall (WAF) deployments. AWS WAF is a web application firewall that helps protect your applications from common web exploits that could affect availability, compromise security, or consume excessive resources.

Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently. With Terraform, you can manage AWS services and custom defined provisioning logic. You create a configuration file that describes to Terraform the components needed to run a single application or your entire AWS footprint. When Terraform consumes the configuration file, it generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure.

In this solution, you’ll use Terraform configuration files to build your WAF, deploy it automatically through a CI/CD pipeline, and retain the WAF state files to be later referenced, changed, or destroyed through subsequent deployments in a durable backend. The CI/CD solution is flexible enough to deploy many other AWS services, security or otherwise, using Terraform. For a full list of supported services, see HashiCorp’s documentation.

Note: This post assumes you’re comfortable with Terraform and its core concepts, such as state management, syntax, and command terms. You can learn about Terraform here.

Solution Overview

Figure 1: Architecture diagram


For this solution, you’ll use AWS CodePipeline, an automated continuous delivery service, as the foundation of the CI/CD pipeline. CodePipeline automates your release pipeline through build, test, and deployment stages. For the purpose of this post, I will not demonstrate how to configure any test or deployment stages.

The source stage uses AWS CodeCommit, the fully managed, Git-based source code management service from AWS that you can interact with through the console and the CLI. CodeCommit encrypts the source at rest and in transit, and integrates with AWS Identity and Access Management (IAM) so you can apply fine-grained access controls to the source.

Note: CodePipeline supports different sources, such as S3 or GitHub – if you’re comfortable with those services, feel free to substitute them as you walk through the solution.

For the build stage, you’ll use AWS CodeBuild, a fully managed CI service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, or scale your own build servers. CodeBuild runs each build according to a build specification (buildspec) file, a YAML file that contains the build commands, variables, and related settings.

Finally, you’ll create a new Amazon Simple Storage Service (S3) bucket and Amazon DynamoDB table to durably store the Terraform state files outside of the CI/CD pipeline. These files are used by Terraform to map real world resources to your configuration, keep track of metadata, and to improve performance for large infrastructures.

For the purpose of this post, the security infrastructure resource deployed through the pipeline will be an AWS WAF, specifically a Global Web ACL that can attach to an Amazon CloudFront distribution, with a sample SQL Injection and Blacklist filtering rule.

The deployment steps will be as shown in Figure 1:

  1. Push artifacts, Terraform configuration files and a build specification to a CodePipeline source.
  2. CodePipeline automatically invokes CodeBuild and downloads the source files.
  3. CodeBuild installs and executes Terraform according to your build specification.
  4. Terraform stores the state files in S3 and a record of the deployment in DynamoDB.
  5. The WAF Web ACL is deployed and ready for use by your application teams.

Step 1: Set-up

In this step, you’ll create a new CodeCommit repository, S3 bucket, and DynamoDB table.

Create a CodeCommit repository

  1. Navigate to the AWS CodeCommit console, and then choose Create repository.
  2. Enter a name, description, and then choose Create. You will be taken to your repository after creation.
  3. Scroll down, and then choose Create file, as shown in Figure 2:
     
    Figure 2: CodeCommit create file


  4. You will be taken to a new screen to create a sample file, write readme into the text body, name the file readme.md, and then choose Commit changes, as shown in Figure 3:
     
    Figure 3: CodeCommit editing files


Note: You need to create a sample file to initialize your Master branch that will not interfere with the build process. You can safely delete this file later.

Create a DynamoDB table

  1. Navigate to the Amazon DynamoDB console, and then choose Create table.
  2. Give your table a name like terraform-state-lock-dynamo.
  3. Enter LockID as your Primary key, keep the box checked for Use default settings, and then choose Create, as shown in Figure 4.

Note: Copy the name and ARN of the DynamoDB table because you will need it later when configuring your Terraform backend and CodeBuild service role.

 

Figure 4: Create DynamoDB table


Create an S3 bucket

  1. Navigate to the Amazon S3 console, and then choose Create bucket.
  2. Enter a unique name and choose the Region you have built the rest of your resources in, and then choose Next.
  3. Enable Versioning and Default encryption, and then choose Next.
  4. Select Block all public access, choose Next, and then choose Create bucket.

Note: Copy the name and ARN of the S3 bucket because you will need it later when configuring your Terraform backend and CodeBuild service role.
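
If you’d rather script these backend resources than create them in the console, the following boto3 sketch creates an equivalent DynamoDB table and S3 bucket. The Region, table name, and bucket name are placeholders you should replace with your own values.

import boto3

REGION = 'us-west-2'  # placeholder; match the Region you use for the rest of the solution
TABLE_NAME = 'terraform-state-lock-dynamo'
BUCKET_NAME = 'my-terraform-state-bucket-123456789012'  # placeholder; bucket names must be globally unique

dynamodb = boto3.client('dynamodb', region_name=REGION)
s3 = boto3.client('s3', region_name=REGION)

# DynamoDB table with LockID as the partition key, matching the console defaults above
dynamodb.create_table(
    TableName=TABLE_NAME,
    AttributeDefinitions=[{'AttributeName': 'LockID', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'LockID', 'KeyType': 'HASH'}],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
)

# Versioned, encrypted S3 bucket with all public access blocked
# (omit CreateBucketConfiguration if you use us-east-1)
s3.create_bucket(
    Bucket=BUCKET_NAME,
    CreateBucketConfiguration={'LocationConstraint': REGION}
)
s3.put_bucket_versioning(Bucket=BUCKET_NAME, VersioningConfiguration={'Status': 'Enabled'})
s3.put_bucket_encryption(
    Bucket=BUCKET_NAME,
    ServerSideEncryptionConfiguration={
        'Rules': [{'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}}]
    }
)
s3.put_public_access_block(
    Bucket=BUCKET_NAME,
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True
    }
)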

Step 2: Create the CI/CD pipeline

In this step, you will create the rest of your pipeline using CodePipeline and CodeBuild. If you have decided to not use CodeCommit, read CodePipeline’s documentation here about other sources.

  1. Navigate to the AWS CodePipeline console, and then choose Create pipeline.
  2. Enter a Pipeline name, select New service role, and then choose Next, as shown in Figure 5:
     
    Figure 5: CodePipeline settings


  3. Select AWS CodeCommit as the Source provider, select the name of the repository you created, and then choose master as your Branch name.
  4. Choose Amazon CloudWatch Events (recommended) as your detection option, and then choose Next, as shown in Figure 6:
     
    Figure 6: CodePipeline source stage


  5. For Build provider, choose AWS CodeBuild and change your region as needed, and then choose Create project.

    Important: Selecting Create Project will open a new screen in your browser with the AWS CodeBuild console; do not close the browser because you will need it!

  6. Enter a Project name and description, and then scroll to the Environment section.
  7. For Environment image, choose Managed image, and then configure the following sub-selections, as shown in Figure 7:
    1. Operating system: Ubuntu
    2. Runtime(s): Standard
    3. Image: aws/codebuild/standard:1.0
    4. Image version: Always use the latest image for this runtime version
       
      Figure 7: CodeBuild environment image


  8. Select the checkbox under Privileged, select New service role, and take note of this Role name because you will be modifying it later.
     
    Figure 8: CodeBuild service role


  9. Choose the dropdown menu named Additional configuration (shown in Figure 8), scroll down to Environment variables, and then enter the following values, as shown in Figure 9:
    1. Name: TF_COMMAND
    2. Value: apply (this is case sensitive)
    3. Type: Plaintext
       
      Figure 9: CodeBuild variables


      Note: The build specification uses these values to inject Terraform commands into the build at runtime.

  10. In the Buildspec section, choose Use a buildspec file. You don’t need to provide a name because buildspec.yaml in your ZIP package is the default value CodeBuild will look for.
  11. In the Logs section, choose the checkbox next to CloudWatch logs – optional, and then choose Continue to CodePipeline (see Figure 10).
     
    Figure 10: CodeBuild logging


    Note: The separate window will close at this point and you will be back in the CodePipeline console.

  12. Now, back in the CodePipeline console, choose Next, choose Skip deploy stage, and then choose Skip when prompted, as shown in Figure 11.
     
    Figure 11: CodePipeline skip deploy stage


  13. Confirm your details are correct in the Review screen, and then choose Create pipeline.

After creation, you will be taken to the Pipeline Status view for the pipeline you just created. This interface allows you to monitor the status of CodePipeline in near real time. You can pivot to your Source repository and Build project by selecting the Details link, as shown in Figure 12.
 

Figure 12: CodePipeline status


You can also see previous CodePipeline runs by choosing the History view on the navigation pane on the left, as shown in Figure 13. This view is also useful for viewing multiple concurrent CodePipeline runs.
 

Figure 13: CodePipeline History


Step 3: Modify the CodeBuild service role

In this section, you will add an additional policy to your CodeBuild service role to allow Terraform to deploy your WAF and write state information to DynamoDB and S3.

  1. Navigate to the IAM Console, and then choose Roles from the navigation pane.
  2. Search for the CodeBuild service role, select it, and then choose Add inline policy.

    Note: The inline policy is used to avoid accidental deletions or modifications, and provide a one-to-one relationship between the permissions and the service role.

  3. Choose the JSON tab and paste in the following policy. Populate the empty Resource fields in the S3SID and DDBSID statements with the ARNs of the S3 bucket and DynamoDB table you created in Step 1, as shown in Figure 14.
    
    	{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "WafSID",
            "Action": [
                "waf:CreateIPSet",
                "waf:CreateRule",
                "waf:CreateRuleGroup",
                "waf:CreateSqlInjectionMatchSet",
                "waf:CreateWebACL",
                "waf:DeleteIPSet",
                "waf:DeleteLoggingConfiguration",
                "waf:DeletePermissionPolicy",
                "waf:DeleteRule",
                "waf:DeleteRuleGroup",
                "waf:DeleteSqlInjectionMatchSet",
                "waf:DeleteWebACL",
                "waf:GetChangeToken",
                "waf:GetChangeTokenStatus",
                "waf:GetGeoMatchSet",
                "waf:GetIPSet",
                "waf:GetLoggingConfiguration",
                "waf:GetPermissionPolicy",
                "waf:GetRule",
                "waf:GetRuleGroup",
                "waf:GetSampledRequests",
                "waf:GetSqlInjectionMatchSet",
                "waf:GetWebACL",
                "waf:ListActivatedRulesInRuleGroup",
                "waf:ListGeoMatchSets",
                "waf:ListIPSets",
                "waf:ListLoggingConfigurations",
                "waf:ListRuleGroups",
                "waf:ListRules",
                "waf:ListSqlInjectionMatchSets",
                "waf:ListSubscribedRuleGroups",
                "waf:ListTagsForResource",
                "waf:ListWebACLs",
                "waf:PutLoggingConfiguration",
                "waf:PutPermissionPolicy",
                "waf:TagResource",
                "waf:UntagResource",
                "waf:UpdateIPSet",
                "waf:UpdateRule",
                "waf:UpdateRuleGroup",
                "waf:UpdateSqlInjectionMatchSet",
                "waf:UpdateWebACL"
              ],
            "Effect": "Allow",
            "Resource": "*"
          },
          {
            "Sid": "S3SID",
            "Action": [
              "s3:GetObject",
              "s3:ListBucket",
              "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": ""
          },
          {
            "Sid": "DDBSID",
            "Action": [
              "dynamodb:DeleteItem",
              "dynamodb:GetItem",
              "dynamodb:PutItem"
            ],
            "Effect": "Allow",
            "Resource": ""
          }
        ]
      }
    

     

    Figure 14: IAM resource edits


  4. Choose Review policy, enter a name for the inline policy, and then choose Create policy.

You now have the required permissions to deploy, modify, and delete your WAF, as needed. For pipelines that will be deploying multiple services, or using different backends for the state files, the permissions will need to be much more broadly defined.
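
If you prefer to manage IAM with code, you could attach the same permissions as an inline policy using boto3. This is only a sketch: the role name, policy name, and resource ARNs are placeholders, and the abbreviated policy document stands in for the full JSON policy above (including the WafSID statement).

import json
import boto3

iam = boto3.client('iam')

# Placeholder name of the CodeBuild service role you noted in Step 2
CODEBUILD_ROLE_NAME = 'codebuild-terraform-waf-service-role'

# Abbreviated policy; add the WafSID statement from above and your real ARNs
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3SID",
            "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::my-terraform-state-bucket-123456789012",      # placeholder bucket ARN
                "arn:aws:s3:::my-terraform-state-bucket-123456789012/*"
            ]
        },
        {
            "Sid": "DDBSID",
            "Action": ["dynamodb:DeleteItem", "dynamodb:GetItem", "dynamodb:PutItem"],
            "Effect": "Allow",
            "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/terraform-state-lock-dynamo"  # placeholder table ARN
        }
    ]
}

# Attach the permissions as an inline policy on the CodeBuild service role
iam.put_role_policy(
    RoleName=CODEBUILD_ROLE_NAME,
    PolicyName='terraform-waf-pipeline-permissions',
    PolicyDocument=json.dumps(policy_document)
)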

Step 4: Deploy the WAF with CodePipeline

With all permissions and supporting infrastructure set up, you can now deploy your WAF. Navigate to this GitHub repository and clone it; there are five files you will need:

  • provider.tf
  • variables.tf
  • waf-conditions.tf
  • waf-rules.tf
  • buildspec.yaml
  1. Open the file named provider.tf in a text editor and modify the following values, as shown in Figure 15:
    1. region=: Enter your preferred AWS Region (on lines 3 & 13)
    2. bucket=: Name of your S3 bucket (on line 10)
    3. dynamodb_table=: Name of your DynamoDB table (on line 11)
       
      Figure 15: provider.tf modification


  2. Save and close this file, navigate to the AWS CodeCommit console, and then select your repository.
  3. Choose the drop-down menu named Add file, and then select Upload file (see Figure 16).
     
    Figure 16: CodeCommit Upload files


  4. Using the Console, upload all five files downloaded from GitHub. Alternatively, you can learn how to do this using the CLI in the AWS CodeCommit User Guide.
  5. After you’ve uploaded the last file, navigate to the CodePipeline console, and then select your pipeline.

    Note: If the source message shown in the UI doesn’t match the commit message from your last upload, use the History tab to find the execution that includes all five files; earlier executions will fail because of the missing files.

  6. To access the build logs for your Build project, choose Details in the Build section, as shown in Figure 17.
     
    Figure 17: CodePipeline status details


  7. Choose Tail logs to view logs in near real time from the CodeBuild environment. You will be able to see the output from Terraform, as well as other information from the CodeBuild service, such as errors and environment logs, as shown in Figure 18. This view is useful for debugging missing Terraform permissions: a missing permission causes the run to fail, and Terraform logs which IAM actions were denied.
     
    Figure 18: CodeBuild tail logs


  8. After a successful deployment, navigate to the AWS WAF Web ACL Console, and then choose the Web ACL that was deployed.
  9. Choose the Rules tab, and then select each rule’s hyperlink to inspect how it was created, as shown in Figure 19.
     
    Figure 19: Web ACL views


From here, you can associate the Global Web ACL with a CloudFront distribution to test the efficacy. This AWS Samples GitHub repository contains a more in-depth demo on how to effectively tune a WAF.

Important clean up

You will now clean up your deployed Web ACL. Doing this is important because you will be charged $5.00 USD per Web ACL, and $1.00 per rule per Web ACL, per month, on top of other related charges. Read the AWS WAF Pricing page for more details around AWS WAF pricing.

  1. Navigate to the AWS CodeBuild console, and then choose your CodeBuild project.
  2. Choose the Build details tab, scroll to the Environment section, and then choose Edit.
  3. Expand the Additional configuration drop-down menu, and then scroll to Environment variables.
  4. Under the Value of your previously created variable, replace the value with destroy, and then choose Update environment.
  5. Navigate back to the Pipelines menu in the AWS CodePipeline console, and then select your pipeline.
  6. Choose Release Change, and then choose Release when prompted. Wait for the Build stage to report success to confirm deletion of your WAF resources. (A scripted alternative to these clean-up steps is sketched below.)
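
If you’d like to script the teardown instead of clicking through the consoles, the following boto3 sketch switches the TF_COMMAND variable to destroy and releases a change through the pipeline. The project and pipeline names are placeholders for the resources you created earlier.

import boto3

codebuild = boto3.client('codebuild')
codepipeline = boto3.client('codepipeline')

PROJECT_NAME = 'terraform-waf-build'      # placeholder CodeBuild project name
PIPELINE_NAME = 'terraform-waf-pipeline'  # placeholder CodePipeline name

# Fetch the current build environment and switch TF_COMMAND from apply to destroy
project = codebuild.batch_get_projects(names=[PROJECT_NAME])['projects'][0]
environment = project['environment']
for variable in environment['environmentVariables']:
    if variable['name'] == 'TF_COMMAND':
        variable['value'] = 'destroy'

codebuild.update_project(name=PROJECT_NAME, environment=environment)

# Release a change so CodeBuild runs terraform destroy
codepipeline.start_pipeline_execution(name=PIPELINE_NAME)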

Conclusion

In this post, you learned how to use AWS Developer Tools to create a serverless CI/CD pipeline that automates infrastructure deployments with Terraform. By using Terraform and CI/CD, your security engineers can deploy security infrastructure such as AWS WAF through a clearly defined and immutable process.

To further extend this solution, you can include manual confirmation stages via Amazon Simple Notification Service (SNS) to enforce approvals before all CI/CD pipelines deploy resources into your accounts. You can also choose to isolate your CI/CD pipelines by placing them in a VPC. Finally, you can select the WAF Rules deployed by Terraform as the starting point for a Rule group in AWS Firewall Management Service (FMS), which allows you to define multi-account WAF deployments for accounts in AWS Organizations.

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

AWS Security Profiles: Dan Plastina, VP of Security Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-dan-plastina-vp-security-services/

In the weeks leading up to re:Invent 2019, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do as the VP of Security Services?

I’ve been at Amazon for just over two years. I lead the External Security Services organization—our team builds AWS services that help customers improve the security of their workloads. Our services include Amazon Macie, Amazon GuardDuty, Amazon Inspector, and AWS Security Hub.

What drew me to Amazon is the culture of ownership and accountability. I wake up every day and get to help AWS customers do things that transform their world—and I get to do that work with a whole bunch of people who feel the same way and take the same level of ownership. It’s very energizing.

What’s your favorite part of your job?

That’s hard! I love most aspects of my job. Forced to pick one, I’d have to say my favorite part is helping customers. Our Shared Responsibility Model says that AWS is accountable for the security of AWS, and customers are responsible for the resources and workloads they manage in AWS. My job allows me to sit on the customer side of the shared responsibility model. Our team builds the services that help customers improve the security of their workloads on AWS. Being able to help in that way is very rewarding.

One of Amazon’s widely-known leadership principles is Customer Obsession. Can you speak to what that looks like in the context of your work?

Being customer obsessed means that you’re in tune with the needs of the customer you’re working with. In the case of external security services, “customer obsessed” requires you to deeply understand what it means for individual customers to protect their assets in AWS, to empathize with those needs, and then to help them figure out how to get from where they are, to where they want to be. Because of this, I spend a lot of time with customers.

Our team participates in many in-person executive customer briefings. We hold a lot of conference calls. I’m flying to the UK on Monday to meet with customers—and I was there three weeks ago. I’ve spent over six weeks this fall traveling to talk with customers.  That much travel time can be hard, but it’s necessary to be in front of customers and listen to what they tell us. I’m fortunate to have a really strong team and so when I’m not traveling, I’m still able to spend a lot of time thinking about customer needs and about what my team should do next.

You’re on an elevator packed with CISOs, and they want you to explain the difference between Security Hub, GuardDuty, Macie, and Inspector before the doors open. What do you say?

First, I would tell them that the services are best understood as a suite of security services, and that AWS Security Hub offers a single pane of glass [Editor: a management tool that integrates information and offers a unified view] into everything else: Use it to understand the severity and sensitivity of findings across the other services you’re using.

Amazon GuardDuty is a continuous security monitoring and threat detection service. You simply choose to have it on or off in your AWS accounts. When it’s enabled, it detects highly suspicious activity and unauthorized access across the entirety of your AWS workloads. While GuardDuty alerts you to potential threats, Amazon Inspector helps you ensure that you address publicly known software vulnerabilities in a timely manner, removing them as a potential entry point for unauthorized users. Amazon Macie offers a particular focus on protecting your sensitive data by giving you a highly scalable and cost effective way to scan AWS for sensitive data and report back what is found and how it is being protected with access controls and encryption.

Then, I’d invite the entire elevator to come to re:Invent, to learn more about the new work my team is doing.

What can you tell us about your team’s re:Invent plans?

We have some exciting things planned for re:Invent this year. I can’t go into specifics yet, but we’re excited about it. A lot of my team will be present, and we’re looking forward to speaking with customers and learning more about what we should work on for next year.

We’ve got a variety of sessions about Security Hub, GuardDuty, and Inspector. If you can only make it to three security-specific sessions, I recommend Threat management in the cloud (SEC206-R), Automating threat detection and response in AWS (SEC301-R), and Use AWS Security Hub to act on your compliance and security posture (SEC342-R).

Is there some connecting thread to all of the various projects that your teams are working on right now?

I see a few threads. One is the concept of security being priority zero. It’s a theme that we live by at AWS, but customers ask us to stretch a little bit further and include their workloads in our security considerations. So workload security is now priority zero too. We’re spending a lot of time working that out and looking for ways to improve our services.

Another thread is that customers are asking us for prescriptive guidance. They’re saying, “Just tell me how I can ensure that my environment is safe. I promise you won’t offend me. Guide me as much as you can, and I’ll disregard anything that isn’t relevant to my environment.”

What’s one currently available security feature that you wish more customers were aware of?

A service, not a feature: AWS Security Hub. It has the ability to bring together security findings from many different AWS, partner, and customer security detection services. Security Hub takes security findings and normalizes them into our Amazon Security Findings Format, ASFF, and then sends them all back out through Amazon CloudWatch events to many partners that are capable of consuming them.

I think customers underestimate the value of having all of these security events normalized into a format that they can use to write a Splunk Phantom runbook, for example, or a Demisto runbook, or a Lambda function, or to send it to Rapid7 or cut a ticket in Jira. There’s a lot of power in what Security Hub does and it’s very cost effective. Many customers have started to use these capabilities, but I know that not everyone knows about it yet.

How do you stay up to date on important cloud security developments across the industry?

I get a lot of insight from customers. Customers have a lot of questions, and I can take these questions as a good indicator of what’s on peoples’ minds. I then do the research needed to get them smart answers, and in the process I learn things myself.

I also subscribe to a number of newsletters, such as Last Week In AWS, that give some interesting information about what’s trending. Reading our AWS blogs also helps because just keeping up with AWS is hard. There’s a lot going on! Listening to the various feeds and channels that we have is very informative.

And then there’s tinkering. I tinker with home automation / Internet of Things projects and with vendor-provided offerings, such as those Splunk, Palo Alto, and Check Point have provided to me recently. It’s been fun learning partner offerings by building out VLANs, site-to-site tunnels, VPNs, DNS filters, SSL inspection, gateway-level anonymizers, central logging, and intrusion detection systems. You know, the home networking ‘essentials’ we all need.

You’re into riding Superbikes as a hobby. What’s the appeal?

I ride fast bikes on well-known race tracks all around the US several times a year. I love how speed and focus must come together. Going through different corners requires orchestrating all kinds of different motor and mental skills. It flushes the brain and clears your thoughts like nothing else. So, I appreciate the hobby as a way of escaping from normal day-to-day routine. Honestly, there’s nothing like doing 160 mph down a straightaway to teach you how to focus on what is needed, now.

You’re originally from Montreal. What’s one thing a visitor should eat on a trip there?

Let me give you two, eh. If you find yourself in a small rural Quebec restaurant, you must have poutine, the local ‘delicacy’. If you find yourself downtown, near my alma mater, Concordia University, you must enjoy our local student staple, Kojax. That said, it’s honestly hard to make a mistake when you’re eating in Montreal. They have a lot of good food there.

Want more AWS Security news? Follow us on Twitter.

The AWS External Security Services team is hiring! Want to find out more? Check out our career page.

Dan Plastina

Dan is Vice President, Head of Security Services for Amazon Web Services (AWS). He’s most often seen working alongside his team leaders on product design, management and engineering development efforts to enable business and government customers to secure themselves, when using AWS. He has travelled extensively, meeting c-suite and security leaders at all corners of the globe.

Use IAM to share your AWS resources with groups of AWS accounts in AWS Organizations

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/iam-share-aws-resources-groups-aws-accounts-aws-organizations/

You can now reference Organizational Units (OUs), which are groups of AWS accounts in AWS Organizations, in AWS Identity and Access Management (IAM) policies, making it easier to define access for your IAM principals (users and roles) to the AWS resources in your organization. AWS Organizations lets you organize your accounts into OUs to align them with your business or security purposes. Now, you can use a new condition key, aws:PrincipalOrgPaths, in your policies to allow or deny access based on a principal’s membership in an OU. This makes it easier than ever to share resources between accounts you own in your AWS environments.

For example, you might have an Amazon S3 bucket you need to share with developers and applications from accounts that are members of a specific OU. To accomplish this, you can specify the aws:PrincipalOrgPaths condition and set the value to the organizational unit ID of the caller in the resource-based policy attached to the bucket. When a principal tries to access the bucket, AWS verifies that their account’s OU matches the one specified in the policy. With this condition, permissions automatically apply when you add accounts to the OU without any additional updates to the policy.

In this post, I introduce the new condition key, and show you how to use it in two examples. In the first example you will see how to use the aws:PrincipalOrgPaths condition key to grant multiple AWS accounts access to a resource, without needing to maintain a list of account IDs in your policy. In the second example, you will see how to add a guardrail to your administrative roles that prevents access to powerful actions unless coming from a protected OU in your organization.

AWS Organizations Concepts

Before I walk through the condition, let’s review some important concepts from AWS Organizations.

AWS Organizations allows you to group a set of AWS accounts into an organization that you can manage centrally. Once the accounts have joined the organization, you can group them into organizational units (OUs), allowing you to set policies that help you meet your security and compliance requirements. You can create multiple OUs within a single organization, and you can create OUs within other OUs to form hierarchical relationships between your accounts. When you create an organization, AWS Organizations creates your first account container automatically. It has a special name, called a root. All OUs you create exist inside the root.

Organizations, roots, and OUs use a different format for their identifiers. You can see the differences in the table below:

Resource            | ID Format                    | Example Value    | Globally Unique
Organization        | o-exampleorgid               | o-p8iu8lkook     | Yes
Root                | r-examplerootid              | r-tkh7           | No
Organizational Unit | ou-examplerootid-exampleouid | ou-tkh7-pbevdy6h | No

Organization IDs are globally unique, meaning no organizations share Organization IDs. OU and Root IDs are not globally unique. This means another customer’s organization OU may have the same ID as those from your organization. OU and Root IDs are unique within an organization. Therefore, you should always include the organization identifier when specifying an OU to make sure it is unique to your organization.

Control access to resources based on OU

You use condition keys in the condition element of an IAM policy. A condition is an optional IAM policy element you can use to specify circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition.

Condition key         | Description                                            | Operator(s)          | Value(s)
aws:PrincipalOrgPaths | The paths of the principals’ OU from AWS Organizations | All string operators | Paths of AWS Organization IDs and organizational unit IDs

The aws:PrincipalOrgPaths condition key is a global condition, meaning you can use it in conjunction with any AWS action. When you use it in the condition element of your IAM policy, it validates the organization, root, and OUs of the principal performing the action on the resource. For example, let’s say a principal was a member of an OU with the id ou-abcd-zzyyxxww inside a root r-abcd in the organization o-1122334455. When the principal makes a request on the resource, its aws:PrincipalOrgPaths value is:

["o-1122334455/r-abcd/ou-abcd-zzyyxxww/"]

The path includes the organization ID to ensure global uniqueness. This ensures only principals from your organization can access your AWS resources. You can use any string operator, such as StringEquals, with the condition. You can also use the wildcard characters (* and ?) when providing a path.

Aws:PrincipalOrgPaths is a multi-value condition key. Multi-value keys allow you to provide multiple values in a list format. Here’s a sample condition statement from a policy that uses the key to validate that a principal is from either ou-1 or ou-2:


"Condition":{
	"ForAnyValue:StringLike":{
		"aws:PrincipalOrgPaths":[
		  "o-1122334455/r-abcd/ou-1/",
		  "o-1122334455/r-abcd/ou-2/"
		]
	}
}

For all multi-value condition keys, you must provide the value as a JSON-formatted list as shown above, even if you’re only specifying one value. As shown in the example above, you also must use the ForAnyValue qualifier in your conditions to specify you’re checking membership of one OU path. For more information, see Creating a Condition That Tests Multiple Key Values in the IAM documentation.

In the next section, I’ll go over an example of how to use the new condition key to protect resources in your account from access outside of a given OU.

Example: Grant S3 bucket access to all principals in an OU in your organization

This example demonstrates how you can use the new condition key to share resources with groups of accounts. By placing the accounts into an OU and granting access based on membership, you can grant targeted access without having to list and maintain all the AWS account IDs in your permission policies.

Consider an example where I want to grant my Machine Learning team permissions to access an S3 bucket training-data that contains images that the team will use to train their machine learning models. I’ve set up my organization such that all AWS accounts owned by my Machine Learning team are part of a specific OU with the ID ou-machinelearn. For the purpose of this example, my organization ID is o-myorganization.

For this example, I want to allow users and applications from the Machine Learning OU or any OU beneath it to have permissions to read the training-data S3 bucket. Any other AWS accounts should not have the ability to view the resource.

To grant these permissions, I author an S3 bucket policy for my training-data resource as shown below.


{
	"Version":"2012-10-17",
	"Statement":{
		"Sid":"TrainingDataS3ReadOnly",
		"Effect":"Allow",
		"Principal": "*",
		"Action":"s3:GetObject",
		"Resource":"arn:aws:s3:::training-data/*",
		"Condition":{
			"ForAnyValue:StringLike":{
				"aws:PrincipalOrgPaths":["o-myorganization/*/ou-machinelearn/*"]
			}
		}
	}
}

In the policy above, I assert that principals trying to read the contents of the training-data bucket must be either a member of the OU that corresponds to the ou-machinelearn ID I provided (my Machine Learning OU Identifier), or a member of any OUs that are children of it. For the aws:PrincipalOrgPaths value, I used two asterisk (*) wildcards. I used the first asterisk (*) between my organization ID and my OU ID because OU IDs are unique within my organization. This means specifying the full path is not necessary to select the OU I need. The second asterisk (*), at the end of the path, is used to specify that I want to allow all child OUs to be included in my string comparison. If I didn’t want to include the child OUs, I could remove the wildcard character.

With this policy on the bucket, any principals in the Machine Learning OU may read objects inside the bucket if the user or role has the appropriate S3 permissions. Note that if this policy did not have the condition statement, it would be accessible by any AWS account. As a best practice, AWS recommends only granting access to the principals that need it. As for next steps, I could edit the Principal section of the policy to restrict access to specific principals in my Machine Learning accounts. For more information, see Specifying a Principal in a Policy in the S3 documentation.
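
If you manage bucket policies programmatically, a minimal boto3 sketch of applying the policy above might look like the following. The bucket name, organization ID, and OU ID are the placeholder values from this example; replace them with your own.

import json
import boto3

s3 = boto3.client('s3')

# The same OU-scoped policy shown above, expressed as a Python dictionary
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "TrainingDataS3ReadOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::training-data/*",
        "Condition": {
            "ForAnyValue:StringLike": {
                "aws:PrincipalOrgPaths": ["o-myorganization/*/ou-machinelearn/*"]
            }
        }
    }
}

# Attach the policy to the bucket; S3 stores the policy as a JSON string
s3.put_bucket_policy(Bucket='training-data', Policy=json.dumps(bucket_policy))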

Example: Restrict access to an IAM role to only accounts in an OU in my organization

The next example will show how to use aws:PrincipalOrgPaths to add another layer of security to your existing IAM role trust policies, ensuring only members of specific OUs may assume your roles.

For this example, say my company requires that only network security engineers can create or manage AWS Virtual Private Cloud (VPC) resources in my accounts. The network security team has a dedicated OU, ou-netsec, for their workloads. I have the same organization ID as the previous example, o-myorganization.

Each account in my organization has a dedicated IAM role, VPCManager, with the permissions needed to manage VPCs. I want to ensure that only my network security team, who use principals that are tagged as such, has access to the role. To do this, I edited the role trust policy for VPCManager, which defines who can access an IAM role. In this case, I added a condition to the policy to require that anyone assuming the role must come from an account in ou-netsec.

This is the trust policy I created for VPCManager:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "123456789012",
          "345678901234",
          "567890123456"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/JobRole": "NetworkAdmin"
        },
        "ForAnyValue:StringLike": {
          "aws:PrincipalOrgPaths": ["o-myorganization/*/ou-netsec/"]
        }
      }
    }
  ]
}

I started by adding the Effect, Principal, and Action to allow principals from three network security accounts to assume the role. To ensure they have the right job role, I added a condition to require the JobRole=NetworkAdmin tag must be applied to principals before they can assume the role. Finally, as an added layer of security, I added the second condition that requires anyone assuming the role must come from an account in the network security OU. This final step ensures that I specified the correct account IDs for my network security accounts—even if I accidentally provided an account that is not part of my organization, members of that account won’t be able to assume the role because they aren’t part of ou-netsec.
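
If you update role trust policies with code rather than the console, a minimal boto3 sketch of applying this trust policy might look like the following. The role name, account IDs, organization ID, and OU ID are the placeholder values from the example above.

import json
import boto3

iam = boto3.client('iam')

# The trust policy shown above, expressed as a Python dictionary
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["123456789012", "345678901234", "567890123456"]},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:PrincipalTag/JobRole": "NetworkAdmin"},
                "ForAnyValue:StringLike": {
                    "aws:PrincipalOrgPaths": ["o-myorganization/*/ou-netsec/"]
                }
            }
        }
    ]
}

# Replace the assume role (trust) policy on the existing VPCManager role
iam.update_assume_role_policy(
    RoleName='VPCManager',
    PolicyDocument=json.dumps(trust_policy)
)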

Though only members of the network security team may assume the role, it’s still possible for any principals with IAM permissions to modify it. As next steps, I could apply a Service Control Policy (SCP) that protects the role from modification and prevents other roles in the account from modifying VPCs. For more information, see How to use service control policies to set permission guardrails in the AWS Security Blog.

Summary

AWS offers tools to control access for individual principals, accounts, OUs, or entire organizations—this helps you manage permissions at the appropriate scale for your business. You can now use the aws:PrincipalOrgPaths condition key to control access to your resources based on OUs configured in AWS Organizations. For more information about these global condition keys and policy examples, read the IAM documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Identity and Access Management forum.

Want more AWS Security news? Follow us on Twitter.

Michael Switzer, Senior Product Manager AWS Identity

Michael Switzer

Mike Switzer is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. Mike holds a master’s degree in computational mathematics from the University of Washington.

Continuously monitor unused IAM roles with AWS Config

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/continuously-monitor-unused-iam-roles-aws-config/

Developing in the cloud encourages you to iterate frequently as your applications and resources evolve. You should also apply this iterative approach to the AWS Identity and Access Management (IAM) roles you create. Periodically ensuring that all the resources you’ve created are still being used can reduce operational complexity by eliminating the need to track unnecessary resources. It also improves security: identifying unused IAM roles helps reduce the potential for improper or unintended access to your critical infrastructure and workloads.

The IAM API now provides you with information about when a role has last been used to make an AWS request. In this post, I demonstrate how you can identify inactive roles using role last used information. Additionally, I’ll show you how to implement continuous monitoring of role activity using AWS Config.
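
To see what this role last used information looks like, here’s a minimal boto3 sketch that prints when a single role was last used; the role name is a placeholder.

import boto3

iam = boto3.client('iam')

# GetRole returns a RoleLastUsed block with the last-used timestamp and Region
role = iam.get_role(RoleName='MyExampleRole')['Role']
last_used = role.get('RoleLastUsed', {})

if 'LastUsedDate' in last_used:
    print(f"Last used on {last_used['LastUsedDate']} in {last_used.get('Region')}")
else:
    print("No recorded activity for this role within IAM's tracking period")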

AWS services and features used

This solution uses the following services and features:

  • AWS IAM: This service enables you to manage access to AWS services and resources securely. It provides an API to retrieve the timestamp of an IAM role’s last use when making an AWS request, and the region where the request was made.
  • AWS Config: This service allows you to continuously monitor and record your AWS resource configurations. It will periodically trigger your AWS Config rule (see next bullet) and will record compliance status.
  • AWS Config Rule: This resource represents your desired configuration settings for specific AWS resources or for an entire AWS account. This resource will check the compliance status of your AWS resources. You can provide the logic that determines compliance, which enables you to mark IAM roles in use as “compliant” and inactive roles as “non-compliant.”
  • AWS Lambda: This service lets you run code without provisioning or managing servers. Lambda will be used to execute API calls to retrieve role last used information and to provide compliance evaluations to AWS Config.
  • Amazon Simple Storage Service (Amazon S3): This is a highly available and durable object store. You’ll use it to store your Lambda code in .zip format prior to deploying your Lambda function.
  • AWS CloudFormation: This service provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. You’ll use it to provision all the resources described in this solution.

Solution logic

This solution identifies unused IAM roles within your account. First, you’ll identify unused roles based on a time window (last number of days) you set. I use 60 days in my example, but this range is configurable. Second, you’ll use AWS Lambda to process all the roles in your account. Third, you’ll determine if they’re compliant based on their creation time and role last used information. Last, you’ll send your evaluations to AWS Config, which records the results and reports if each role is compliant or not. If not, you can take steps to remediate, such as denying all actions that the role can perform.

Prerequisites

This solution has the following prerequisites:

Solution architecture

 

Figure 1: Solution architecture


As shown in the diagram, AWS Config (1) executes the AWS Config custom rule (2) daily (this frequency is configurable), which in turn invokes the Lambda function (3). The Lambda function enumerates each role and determines its creation date and role last used timestamp, both of which are provided via IAM’s GetAccountAuthorizationDetails API (4). When the Lambda function has determined the compliance of all your roles, it returns the compliance results to AWS Config (5). AWS Config retains the history of compliance changes evaluated by the rule. If configured, compliance notifications can be sent to an Amazon Simple Notification Service (Amazon SNS) topic. Compliance status is viewable either in the AWS Management Console or through the AWS CLI or AWS SDKs.
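
To make that flow concrete, here’s a compact sketch of the core evaluation loop using the same APIs described above. It assumes a 60-day window, and result_token stands in for the token that AWS Config passes to the Lambda function when it invokes the rule.

from datetime import datetime, timezone, timedelta

import boto3

MAX_DAYS = 60  # the configurable "unused" window

iam = boto3.client('iam')
config = boto3.client('config')

def evaluate_roles(result_token):
    """Builds compliance evaluations for every IAM role and reports them to AWS Config."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=MAX_DAYS)
    evaluations = []

    paginator = iam.get_paginator('get_account_authorization_details')
    for page in paginator.paginate(Filter=['Role']):
        for role in page['RoleDetailList']:
            # Fall back to the creation date for roles that have never been used
            last_activity = role.get('RoleLastUsed', {}).get('LastUsedDate') or role['CreateDate']
            compliance = 'COMPLIANT' if last_activity > cutoff else 'NON_COMPLIANT'
            evaluations.append({
                'ComplianceResourceType': 'AWS::IAM::Role',
                'ComplianceResourceId': role['RoleName'],
                'ComplianceType': compliance,
                'OrderingTimestamp': datetime.now(timezone.utc)
            })

    # AWS Config accepts up to 100 evaluations per call
    for i in range(0, len(evaluations), 100):
        config.put_evaluations(Evaluations=evaluations[i:i + 100], ResultToken=result_token)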

Deploying the solution

The resources for this solution are deployed through AWS CloudFormation. You must prepare the Lambda function’s source code for packaging before AWS CloudFormation can deploy the complete solution into your account.

Step 1: Prepare the Lambda deployment

First, make sure you’re running a *nix prompt (Linux, Mac, or Windows subsystem for Linux). Follow the commands below to create an empty folder named iam-role-last-used where you’ll place your Lambda source code.


mkdir iam-role-last-used
cd iam-role-last-used
touch lambda_function.py

Note that the directory you create and the code it contains will later be compressed into a .zip file by the AWS CLI’s cloudformation package command. This command also uploads the deployment .zip file to your S3 bucket. The cloudformation deploy command will reference this bucket when deploying the solution.

Next, create a Lambda layer with the latest boto3 package. This ensures that your Lambda function is using an up-to-date boto3 SDK and allows you to control the dependencies in your function’s deployment package. You can do this by following steps 1 through 4 in these directions. Be sure to record the Lambda layer ARN that you create because you will use it later.

Finally, open the lambda_function.py file in your favorite editor or integrated development environment (IDE), and place the following code into the lambda_function.py file:


import boto3
from botocore.exceptions import ClientError
from botocore.config import Config
import datetime
import fnmatch
import json
import os
import re
import logging


logger = logging.getLogger()
logging.basicConfig(
    format="[%(asctime)s] %(levelname)s [%(module)s.%(funcName)s:%(lineno)d] %(message)s", datefmt="%H:%M:%S"
)
logger.setLevel(os.getenv('log_level', logging.INFO))

# Configure boto retries
BOTO_CONFIG = Config(retries=dict(max_attempts=5))

# Define the default resource to report to Config Rules
DEFAULT_RESOURCE_TYPE = 'AWS::IAM::Role'

CONFIG_ROLE_TIMEOUT_SECONDS = 60

# Set to True to get the lambda to assume the Role attached on the Config service (useful for cross-account).
ASSUME_ROLE_MODE = False

# Evaluation strings for Config evaluations
COMPLIANT = 'COMPLIANT'
NON_COMPLIANT = 'NON_COMPLIANT'


# This gets the client after assuming the Config service role either in the same AWS account or cross-account.
def get_client(service, execution_role_arn):
    if not ASSUME_ROLE_MODE:
        return boto3.client(service)
    credentials = get_assume_role_credentials(execution_role_arn)
    return boto3.client(service, aws_access_key_id=credentials['AccessKeyId'],
                        aws_secret_access_key=credentials['SecretAccessKey'],
                        aws_session_token=credentials['SessionToken'],
                        config=BOTO_CONFIG
                        )


def get_assume_role_credentials(execution_role_arn):
    sts_client = boto3.client('sts')
    try:
        assume_role_response = sts_client.assume_role(RoleArn=execution_role_arn,
                                                      RoleSessionName="configLambdaExecution",
                                                      DurationSeconds=CONFIG_ROLE_TIMEOUT_SECONDS)
        return assume_role_response['Credentials']
    except ClientError as ex:
        if 'AccessDenied' in ex.response['Error']['Code']:
            ex.response['Error']['Message'] = "AWS Config does not have permission to assume the IAM role."
        else:
            ex.response['Error']['Message'] = "InternalError"
            ex.response['Error']['Code'] = "InternalError"
        raise ex


# Validates the role pathname whitelist passed via AWS Config parameters and returns a list of pipe-separated patterns.
def validate_whitelist(unvalidated_role_pattern_whitelist):
    # Names of users, groups, roles must be alphanumeric, including the following common
    # characters: plus (+), equal (=), comma (,), period (.), at (@), underscore (_), and hyphen (-).

    if not unvalidated_role_pattern_whitelist:
        return None

    regex = re.compile(r'^[-a-zA-Z0-9+=,.@_/|*]+$')
    if not regex.search(unvalidated_role_pattern_whitelist):
        raise ValueError("[Error] Provided whitelist has invalid characters")

    return unvalidated_role_pattern_whitelist.split('|')


# This uses Unix filename pattern matching (as opposed to regular expressions), as documented here:
# https://docs.python.org/3.7/library/fnmatch.html.  Please note that if using a wildcard, e.g. "*", you should use
# it sparingly/appropriately.
# If the rolename matches the pattern, then it is whitelisted
def is_whitelisted_role(role_pathname, pattern_list):
    if not pattern_list:
        return False

    # If role_pathname matches pattern, then return True, else False
    # eg. /service-role/aws-codestar-service-role matches pattern /service-role/*
    # https://docs.python.org/3.7/library/fnmatch.html
    for pattern in pattern_list:
        if fnmatch.fnmatch(role_pathname, pattern):
            # whitelisted
            return True

    # not whitelisted
    return False


# Form an evaluation as a dictionary. Suited to report on scheduled rules.  More info here:
#   https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/config.html#ConfigService.Client.put_evaluations
def build_evaluation(resource_id, compliance_type, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=None):
    evaluation = {}
    if annotation:
        evaluation['Annotation'] = annotation
    evaluation['ComplianceResourceType'] = resource_type
    evaluation['ComplianceResourceId'] = resource_id
    evaluation['ComplianceType'] = compliance_type
    evaluation['OrderingTimestamp'] = notification_creation_time
    return evaluation


# Determine if any roles were used to make an AWS request
def determine_last_used(role_name, role_last_used, max_age_in_days, notification_creation_time):

    last_used_date = role_last_used.get('LastUsedDate', None)
    used_region = role_last_used.get('Region', None)

    if not last_used_date:
        compliance_result = NON_COMPLIANT
        reason = "No record of usage"
        logger.info(f"NON_COMPLIANT: {role_name} has never been used")
        return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)


    days_unused = (datetime.datetime.now() - last_used_date.replace(tzinfo=None)).days

    if days_unused > max_age_in_days:
        compliance_result = NON_COMPLIANT
        reason = f"Was used {days_unused} days ago in {used_region}"
        logger.info(f"NON_COMPLIANT: {role_name} has not been used for {days_unused} days, last use in {used_region}")
        return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)

    compliance_result = COMPLIANT
    reason = f"Was used {days_unused} days ago in {used_region}"
    logger.info(f"COMPLIANT: {role_name} used {days_unused} days ago in {used_region}")
    return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)


# Returns a list of dicts, each of which has the authorization details of a role.  More info here:
#   https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iam.html#IAM.Client.get_account_authorization_details
def get_role_authorization_details(iam_client):

    roles_authorization_details = []
    roles_list = iam_client.get_account_authorization_details(Filter=['Role'])

    while True:
        roles_authorization_details += roles_list['RoleDetailList']
        if 'Marker' in roles_list:
            roles_list = iam_client.get_account_authorization_details(Filter=['Role'], MaxItems=100, Marker=roles_list['Marker'])
        else:
            break

    return roles_authorization_details


# Check the compliance of each role by determining whether the days since it was last used exceed max_days_for_last_used
def evaluate_compliance(event, context):

    # Initialize our AWS clients
    iam_client = get_client('iam', event["executionRoleArn"])
    config_client = get_client('config', event["executionRoleArn"])

    # List of resource evaluations to return back to AWS Config
    evaluations = []

    # List of dicts of each role's authorization details as returned by boto3
    all_roles = get_role_authorization_details(iam_client)

    # Timestamp of when AWS Config triggered this evaluation
    notification_creation_time = str(json.loads(event['invokingEvent'])['notificationCreationTime'])

    # ruleParameters is received from AWS Config's user-defined parameters
    rule_parameters = json.loads(event["ruleParameters"])

    # Maximum allowed days that a role can be unused, or has been last used for an AWS request
    max_days_for_last_used = int(os.environ.get('max_days_for_last_used', '60'))
    if 'max_days_for_last_used' in rule_parameters:
        max_days_for_last_used = int(rule_parameters['max_days_for_last_used'])

    whitelisted_role_pattern_list = []
    if 'role_whitelist' in rule_parameters:
        whitelisted_role_pattern_list = validate_whitelist(rule_parameters['role_whitelist'])

    # Iterate over all our roles.  If the age of a role in days is <= max_days_for_last_used, it is compliant
    for role in all_roles:

        role_name = role['RoleName']
        role_path = role['Path']
        role_creation_date = role['CreateDate']
        role_last_used = role['RoleLastUsed']
        role_age_in_days = (datetime.datetime.now() - role_creation_date.replace(tzinfo=None)).days

        if is_whitelisted_role(role_path + role_name, whitelisted_role_pattern_list):
            compliance_result = COMPLIANT
            reason = "Role is whitelisted"
            evaluations.append(
                build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason))
            logger.info(f"COMPLIANT: {role_name} is whitelisted")
            continue

        if role_age_in_days <= max_days_for_last_used:
            compliance_result = COMPLIANT
            reason = f"Role age is {role_age_in_days} days"
            evaluations.append(
                build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason))
            logger.info(f"COMPLIANT: {role_name} was created {role_age_in_days} days ago, within the {max_days_for_last_used}-day threshold")
            continue

        evaluation_result = determine_last_used(role_name, role_last_used, max_days_for_last_used, notification_creation_time)
        evaluations.append(evaluation_result)

    # Iterate over our evaluations 100 at a time, as put_evaluations only accepts a max of 100 evals.
    evaluations_copy = evaluations[:]
    while evaluations_copy:
        config_client.put_evaluations(Evaluations=evaluations_copy[:100], ResultToken=event['resultToken'])
        del evaluations_copy[:100]

Here’s how the code above works. The AWS Config custom rule invokes the Lambda function, calling the evaluate_compliance() method (a minimal local invocation sketch follows the list below). evaluate_compliance() does the following:

  1. Retrieves information on all roles from IAM using the GetAccountAuthorizationDetails API as mentioned previously. This includes each role’s creation date and role last used timestamp.
  2. Marks each role as compliant if the role name matches one of the patterns in your whitelisted_role_pattern_list. This pattern list is passed to your rule via a user-configurable AWS CloudFormation parameter named RolePatternWhitelist. “Whitelisting roles,” below, provides instructions about how to do this.
  3. Marks each role as compliant if the age of the role in days (role_age_in_days) is less than or equal to the parameter MaxDaysForLastUsed (max_days_for_last_used). This is set via a user-configurable parameter in your CloudFormation stack. You’ll use this parameter to set the time window for how long a role can be inactive.
  4. If neither of the above conditions is met, then determine_last_used() is called, and each role will be marked as non-compliant if days_unused is greater than max_age_in_days.
  5. Finally, evaluate_compliance() calls put_evaluations() against AWS Config to store your evaluations of each role.
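To exercise evaluate_compliance() outside of AWS Config, it can help to see the shape of the event the handler reads. The sketch below builds an illustrative event with the fields referenced in the code (executionRoleArn, invokingEvent, ruleParameters, and resultToken); the values are placeholders, and the final put_evaluations() call only succeeds with a genuine result token supplied by AWS Config.

import json
import datetime

# Illustrative AWS Config event with the fields evaluate_compliance() expects
sample_event = {
    "executionRoleArn": "arn:aws:iam::123456789012:role/config-rule-role",  # placeholder
    "resultToken": "placeholder-token-from-aws-config",
    "invokingEvent": json.dumps({
        "notificationCreationTime": datetime.datetime.utcnow().isoformat()
    }),
    "ruleParameters": json.dumps({
        "max_days_for_last_used": "60",
        "role_whitelist": "/breakglass-role|/security-*"
    })
}

# evaluate_compliance(sample_event, None) would then enumerate your roles, but the
# closing put_evaluations() call is rejected unless AWS Config issued the result token.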

Step 2: Deploy the AWS CloudFormation template

Next, create an AWS CloudFormation template file named  iam-role-last-used.yml. This template uses the AWS Serverless Application Model (AWS SAM), which is an extension of CloudFormation. AWS SAM simplifies the deployment so that you don’t have to manually upload your deployment .zip file to your Amazon S3 bucket. To ensure that your template knows the location of your code .zip file, place the file on the same directory level as the iam-role-last-used directory that you created above. Then copy and paste the code below and save it to the iam-role-last-used.yml file.


AWSTemplateFormatVersion: '2010-09-09'
Description: "Creates an AWS Config rule and Lambda to check all roles' last used compliance"
Transform: 'AWS::Serverless-2016-10-31'
Parameters:

  MaxDaysForLastUsed:
    Description: Checks the number of days allowed for a role to not be used before being non-compliant
    Type: Number
    Default: 60
    MaxValue: 365

  NameOfSolution:
    Type: String
    Default: iam-role-last-used
    Description: The name of the solution - used for naming of created resources

  RolePatternWhitelist:
    Description: Pipe separated whitelist of role pathnames using simple pathname matching
    Type: String
    Default: ''
    AllowedPattern: '[-a-zA-Z0-9+=,.@_/|*]+|^$'

  LambdaLayerArn:
    Type: String
    Description: The ARN for the Lambda Layer you will use.
  
Resources:
  LambdaInvokePermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: CheckRoleLastUsedLambda
    Properties: 
      FunctionName: !GetAtt CheckRoleLastUsedLambda.Arn
      Action: lambda:InvokeFunction
      Principal: config.amazonaws.com
      SourceAccount: !Ref 'AWS::AccountId'

  LambdaExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Sub '${NameOfSolution}-${AWS::Region}'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: /
      Policies:
      - PolicyName: !Sub '${NameOfSolution}'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - config:PutEvaluations
            Resource: '*'
          - Effect: Allow
            Action:
            - iam:GetAccountAuthorizationDetails
            Resource: '*'
          - Effect: Allow
            Action:
            - logs:CreateLogStream
            - logs:PutLogEvents
            Resource:
            - !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:*:log-group:/aws/lambda/${NameOfSolution}:log-stream:*'

  CheckRoleLastUsedLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: "Checks IAM roles' last used info for AWS Config"
      FunctionName: !Sub '${NameOfSolution}'
      Handler: lambda_function.evaluate_compliance
      MemorySize: 256
      Role: !GetAtt LambdaExecutionRole.Arn
      Runtime: python3.7
      Timeout: 300
      CodeUri: ./iam-role-last-used
      Layers:
      - !Ref LambdaLayerArn

  LambdaLogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties: 
      LogGroupName: !Sub '/aws/lambda/${NameOfSolution}'
      RetentionInDays: 30

  ConfigCustomRule:
    Type: 'AWS::Config::ConfigRule'
    DependsOn:
    - LambdaInvokePermission
    - LambdaExecutionRole
    Properties:
      ConfigRuleName: !Sub '${NameOfSolution}'
      Description: Checks the number of days that an IAM role has not been used to make a service request. If the number of days exceeds the specified threshold, it is marked as non-compliant.
      InputParameters: !Sub '{"role_whitelist":"${RolePatternWhitelist}","max_days_for_last_used":"${MaxDaysForLastUsed}"}'
      Source: 
        Owner: CUSTOM_LAMBDA
        SourceDetails: 
        - EventSource: aws.config
          MaximumExecutionFrequency: TwentyFour_Hours
          MessageType: ScheduledNotification
        SourceIdentifier: !GetAtt CheckRoleLastUsedLambda.Arn

For your reference, below is a summary of the template.

  • Parameters (these are user-configurable variables):
    • MaxDaysForLastUsed—maximum number of days that a role can go unused before it becomes non-compliant
    • NameOfSolution—the name of the solution, used for naming of created resources
    • RolePatternWhitelist—a pipe (“|”) separated whitelist of role pathnames using simple pathname matching (see Whitelisting roles below)
    • LambdaLayerArn—the unique ARN for your Lambda layer
  • Resources (these are the AWS resources that will be created within your account):
    • LambdaInvokePermission—allows AWS Config to invoke your Lambda function
    • LambdaExecutionRole—the role and permissions that Lambda will assume to process your roles. The policies assigned to this role allow the function to perform the iam:GetAccountAuthorizationDetails, config:PutEvaluations, logs:CreateLogStream, and logs:PutLogEvents actions. The PutEvaluations action allows the function to send evaluation results back to AWS Config. The CreateLogStream and PutLogEvents actions allow the function to write its execution logs to Amazon CloudWatch Logs.
    • CheckRoleLastUsedLambda—defines your Lambda function and its attributes
    • LambdaLogGroup—logs from Lambda will be written to this CloudWatch Log Group
    • ConfigCustomRule—defines your custom AWS Config rule and its attributes

With the CloudFormation template you created above, use the AWS CLI’s cloudformation package command to zip the deployment package and upload it to the S3 bucket that you specify, as shown below. Make sure to replace <YOUR S3 BUCKET> with your bucket name only. Do not include the s3:// prefix:


aws cloudformation package --region <YOUR REGION> --template-file iam-role-last-used.yml \
--s3-bucket <YOUR S3 BUCKET> \
--output-template-file iam-role-last-used-transformed.yml

This will create the file iam-role-last-used-transformed.yml, which adds a reference to the S3 bucket and the pathname needed by CloudFormation to deploy your Lambda function.

Finally, deploy the solution into your AWS account using the cloudformation deploy command below. You can provide different values for NameOfSolution, MaxDaysForLastUsed, or RolePatternWhitelist by using the --parameter-overrides option; otherwise, the defaults will be used. These are specified under the Parameters section at the top of the AWS CloudFormation template pasted above.


aws cloudformation deploy --region <YOUR REGION> --template-file iam-role-last-used-transformed.yml \
--stack-name iam-role-last-used \
--parameter-overrides NameOfSolution='iam-role-last-used' \
MaxDaysForLastUsed=60 \
RolePatternWhitelist='/breakglass-role|/security-*' \
LambdaLayerArn='<YOUR LAMBDA LAYER ARN>' \
--capabilities CAPABILITY_NAMED_IAM

The deployment is complete after the AWS CLI indicates success. This typically takes only a few minutes:


Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - iam-role-last-used

Step 3: View your findings

Now that your deployment is complete, you can view your compliance findings by going to the AWS Config console.

  1. Select the same region where you deployed the CloudFormation template.
  2. Select Rules in the left pane, which brings up the current list of rules in your account.
  3. Select the iam-role-last-used rule to view the rule’s details, as shown in Figure 2.

When the Overall rule status field indicates a successful evaluation, the compliance evaluation is complete. Results may take a few minutes to appear, so you can periodically refresh your web browser to check for completion.
 

Figure 2: AWS Config custom rule details
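If you'd rather poll for completion than refresh the console, a minimal boto3 sketch might look like this (the rule name matches the stack deployed above; the exact response fields are worth confirming against the AWS Config API reference):

import boto3

config = boto3.client('config')

# Check when the rule was last invoked and when it last recorded results
status = config.describe_config_rule_evaluation_status(
    ConfigRuleNames=['iam-role-last-used'])['ConfigRulesEvaluationStatus'][0]

print('Last invoked:', status.get('LastSuccessfulInvocationTime'))
print('Last results recorded:', status.get('LastSuccessfulEvaluationTime'))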

After the rule completes its evaluations of your roles, you’ll be able to view your compliance results on the same page. In the screenshot below, you can see that there are multiple non-compliant roles. You can switch between viewing compliant and non-compliant resources by selecting the dropdown menu under Compliance status.
 

Figure 3: Viewing the compliance status

For more insight, you can hover over the “i” symbol, which provides additional information about the role’s non-compliant status (see Figure 4).
 

Figure 4: Hover over the information icon

Step 4: Export a report of your compliance

Once a successful evaluation has completed, you may want to export a report of your compliance results. You can use the AWS CLI to script and automatically generate reports for your application, infrastructure, and security teams. They can use these reports to review non-compliant roles and take action if a role is no longer needed. Note that the AWS CLI command below is a single line:

aws configservice get-compliance-details-by-config-rule --config-rule-name iam-role-last-used --output text --query 'EvaluationResults[*].{A:EvaluationResultIdentifier.EvaluationResultQualifier.ResourceId,B:ComplianceType,C:Annotation}'

The output is tab-delimited and will be similar to the lines below. The first column displays the role name. The second column states the compliance status. The last column explains the reason for the compliance status:

AdminRole   COMPLIANT      Was last used in us-west-2 46 days ago
Ec2DevRole  NON_COMPLIANT  No record of usage
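If you'd rather generate the same report from Python, a minimal boto3 sketch might look like the following (the rule name matches the stack deployed above; the tab-delimited formatting is my own choice):

import boto3

config = boto3.client('config')

# Page through the rule's evaluation results and print a tab-delimited report
kwargs = {'ConfigRuleName': 'iam-role-last-used'}
while True:
    response = config.get_compliance_details_by_config_rule(**kwargs)
    for result in response['EvaluationResults']:
        role_name = result['EvaluationResultIdentifier']['EvaluationResultQualifier']['ResourceId']
        annotation = result.get('Annotation', '')
        print(f"{role_name}\t{result['ComplianceType']}\t{annotation}")
    if 'NextToken' in response:
        kwargs['NextToken'] = response['NextToken']
    else:
        break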

Remediation

Now that you have a report of non-compliant roles, you must decide what to do with them. If your teams agree that a role is not necessary, the remediation can be to simply delete the role. If unsure, you can retain the role but deny it from performing any action. You can do this by attaching a new permissions policy that will deny all actions for all resources. Re-enabling the role would be as easy as removing the added policy. Otherwise, if the role is necessary but not frequently used, you can whitelist the role through the method below.
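As one illustration of the quarantine approach described above, the sketch below attaches an inline deny-all policy to a role with boto3. The role and policy names are hypothetical examples; explicit denies override any allows, so the role keeps its configuration but can no longer perform actions.

import json
import boto3

iam = boto3.client('iam')

# Deny every action on every resource; an explicit deny overrides any allow
deny_all_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}]
}

# Attach the policy inline to the unused role (names are hypothetical examples)
iam.put_role_policy(
    RoleName='Ec2DevRole',
    PolicyName='QuarantineDenyAll',
    PolicyDocument=json.dumps(deny_all_policy)
)

# To re-enable the role later, remove the inline policy:
# iam.delete_role_policy(RoleName='Ec2DevRole', PolicyName='QuarantineDenyAll')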

Whitelisting roles

Whitelisted roles will be reported as compliant by the custom rule even if left unused. You might have roles such as a security incident response or a break-glass role that require whitelisting.

The whitelist is supplied via the CloudFormation parameter RolePatternWhitelist and is stored as an AWS Config rule parameter. The syntax uses UNIX filename pattern matching. If you need to specify multiple patterns, you can use the | (pipe) character as a delimiter between each pattern. Each delimited pattern will then be matched against the role name, including the path. For example, if you wish to whitelist the breakglass-role, security-incident-response-role and security-audit-role roles, the whitelist patterns you provide to the AWS CloudFormation template might be:

/breakglass-role|/security-*

Important: Use wildcards (*) thoughtfully, as they will match anything.
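To see how these patterns are applied, here is a small sketch that mirrors the is_whitelisted_role() logic from the Lambda function above; the role pathnames are illustrative.

import fnmatch

# Pipe-separated whitelist as supplied via the RolePatternWhitelist parameter
patterns = '/breakglass-role|/security-*'.split('|')

# Role pathnames are checked as path + role name, as in the Lambda function
for role_pathname in ['/breakglass-role', '/security-incident-response-role', '/service-role/my-app-role']:
    whitelisted = any(fnmatch.fnmatch(role_pathname, p) for p in patterns)
    print(f"{role_pathname}: {'whitelisted' if whitelisted else 'not whitelisted'}")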

Enhancements

In this walkthrough, I’ve kept the architecture and code simple to make the solution easier to follow. You can further customize the solution through the following enhancements:

Conclusion

In this post, I’ve shown you how to use AWS IAM and AWS Config to implement a detective security control that provides visibility into your IAM roles and their last time of use. I’ve also shown how you can view the results in the AWS Management Console and export them using the AWS CLI. Finally, I’ve presented different options for remediation and a means to whitelist roles that are necessary but infrequently used. These techniques can augment your security and compliance program by preventing unintended access through your IAM roles.

Additional resources

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Michael Chan

Michael is a Developer Advocate for AWS Identity. Prior to this, he was a Professional Services Consultant who assisted customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Roland AbiHanna

Roland is a Sr. Solutions Architect with Amazon Web Services. He’s focused on helping enterprise customers realize their business needs through cloud solutions, specializing in DevOps and automation. Prior to AWS, Roland ran DevOps for a variety of start-ups in Europe and the Middle East. Outside of work, Roland enjoys hiking and searching for the perfect blend of hops, barley, and water.

AWS Security Profiles: Sarah Cecchetti, Principal Product Manager, Amazon Cognito

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-sarah-cecchetti-principal-product-manager-amazon-cognito/

Sarah Cecchetti photo

In the weeks leading up to re:Invent 2019, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


What do you do in your current role at AWS?

I’m an identity nerd! I think most login experiences are terrible today, especially passwords. The login experience is very important. It’s usually the first way that consumers interact with companies directly and far too often it’s frustrating and off-putting. My job as principal product manager for Amazon Cognito is to build products that make that experience easy and secure. Cognito is the front door to many of the brands you use on the internet today.

How did you enter the IAM field?

I was a full-stack developer for many years before I was recruited to the University of Washington’s Identity and Access Management (IAM) team. I knew nothing about IAM, but they were happy to train me. Because the team built and maintained a lot of their own tools, “growing up” on it helped me master the subject quickly. The team sent me to some Identity and Access Management conferences, and I loved the community and the people so much that I used my vacation time and my own money to go to more conferences.

As I was establishing myself in the field, I met lots of identity teams who were struggling to find more people to bring on. There are no formal training programs for Identity and Access Management. They asked me to consider consulting in addition to my day job, and I did (and called my company Engage Identity because I have an unhealthy obsession with Captain Picard, who always says Engage!) It did so well that I eventually turned the consulting role into my full-time job.

I later accepted an offer from Ping Identity, a company that makes cloud and on-premises identity software. I continued to go to a lot of conferences but was getting tired of the travel. About this time, I had lunch with Darin McAdams (principal engineer on AWS Identity), who told me about Jim Scharf, the new VP of Identity at AWS, who was making some big team investments. Darin suggested I come talk about an open position, and I was amazed by how smart and hardworking the people I met were, and how quickly they were building things. The level of productivity at AWS is just shocking. I joined AWS this past February.

What’s the most useful piece of career advice you’ve ever received?

I have a quote on my office wall from Albert Einstein that’s provided a lot of inspiration: Try to become not a man of success, but try rather to become a man of value. If you talk to the people who started AWS, you’ll find that they didn’t do it because they thought it would make them rich and famous. They did it because they hoped the service would be valuable to lots of people. When you face decision points in your career, it’s tempting to take the path that looks like it will make you “successful.” The truth about success is unintuitive. In order to achieve success in your career, you actually have to focus on making other people successful—on providing so much value that people come to rely on the work you’ve done. That’s why the Amazon leadership principles focus so much on customer obsession and delivering results. We wouldn’t be where we are today if our leadership principles were “make money” and “get famous.”

The Amazon leadership principle “Are right, a lot” can be a source of anxiety—can you share your take on what it means and why it matters?

A lot of people think that “Are right, a lot” means that AWS hires geniuses, pundits, and people who have very high opinions of themselves—people who think they’re constantly right about everything. That’s the exact opposite of what this leadership principle is about. The description says that leaders should work to disconfirm our own beliefs and seek out the opinions of other people.

Diversity is the driving force behind “Are right, a lot.” If you’re a leader at Amazon, your job is to create a diverse team that will call you out when you’re wrong. Part of being “right, a lot” involves learning how to be wrong. It forces you to start thinking two steps ahead of your team, and then two steps ahead of customers from all over the world, from all sorts of backgrounds. If your team is all the same, and you think about technology systems in the same way, your product will never be good enough to meet the security needs of all of your customers.

Can you talk about some of the recent enhancements to IAM that you’re excited about?

Recently, the AWS Identity team has been doing more work at the multi-account level. Customers can have hundreds or even thousands of AWS accounts, and figuring out how to secure that many accounts is the sort of thing that can keep you up at night. So we’re increasingly focused on building tools that allow you to secure multiple accounts at once.

For example, we now have service control policies that you can set at an organizational level. You can say, I want all of these accounts to have AWS CloudTrail turned on, and I want to make sure none of these accounts can turn it off. If an unauthorized user gains access to an account, the first thing they’ll try to do is turn off logging. When we asked, What’s one thing that can help customers with thousands of accounts sleep better at night? the answer was, Make it so people can’t turn off logging.

You’re doing a Leadership session on AWS Identity at re:Invent with Jim Scharf. What do you plan to cover?

We’ll announce a lot of new releases at the session. We’ve been building new services for our customers, and during the session, we’ll be pairing these releases with themes that we’ve seen in the industry.

For example, one really broad theme is interoperability. Think of the IAM industry as a student getting graded in kindergarten: We’re pretty good at keeping our hands and feet to ourselves. We’re pretty good at responding to questions. But we’re absolutely terrible at playing well with others. As an industry, IAM does not play well with others. When our customers try to integrate AWS with Microsoft, Google, or Apple, it’s a frustrating experience. We know that to make it less frustrating, we have to work with our peers in the industry. We have to say, It’s not important what product customers are using, or what cloud they’re using. What’s important is that they have a great experience. Identity experiences can be especially painful because “identity” isn’t what customers are trying to do. It’s not the end goal. Identity is the process you have to go through to get into the system that actually allows you to do your work. When we get in the way of that, it’s a uniquely terrible customer experience. And so, it’s our job to make these systems work together in a simple way that’s easy for our customers.

During your time as a consultant, you worked with NIST to rewrite identity guidelines for US federal agencies. Can you talk to us about this work, and why it was important?

So, NIST is the National Institute of Standards and Technology. NIST was founded in 1901 with a mission to measure things. For example, how long is a second? How much is a gallon? How heavy is a kilogram? These standards allow for fair competition in a free market. Well, now it’s the twenty-first century, and NIST is answering questions like how secure is a given digital identity system? A few years back, I got to work for NIST to rewrite the digital identity guidelines for the federal government. We actually transformed the way the government measures identity, which was really amazing.

When NIST first created digital identity guidelines, they only provided one measure of security: a system could achieve a “level of assurance” on a scale of 1 to 4, and that rating had to do with identity proofing (how well you know who a person is) and authentication (how securely the person logs in).

What we found during the process of revising these guidelines is that there are a bunch of use cases where the organization shouldn’t know who the person is when they log in—maybe the person is a political dissident, or a spy, or just a “normal” person who wants to protect their own privacy. But those people still need a high level of security. So we needed a way for organizations to verify that this person logging in is the same person who created the account, without needing to see their photo ID or verify their documentation, or even verify their email address.

So we recreated the guidelines to separate those two measures. Instead of having a single “1 to 4” scale of how secure you are, there are now three scales—how well do we know who you are, how secure is your authentication, and then if you’re going cross between two different systems, how secure is that federation?

You co-founded an organization called IDPro. What led you to found it, and what should people know about it?

The idea for IDPro stemmed from all the time I spent at IAM conferences. Industry folks would have fascinating productive conversations at those conferences but there wasn’t really a forum for discussions outside of that. Security has professional organizations, like ISACA. Privacy has a professional organization, IAPP. But there was nothing for Identity.

So I worked with Ian Glazer, one of the heads of identity at Salesforce, and together we founded IDPro. We wanted to create a grassroots movement, so we got people to join first, and later went looking for corporate sponsors. IDPro is a good way for people who are new to Identity and Access Management to learn from people who have been in the field for years. We now also support a big conference each year called Identiverse.

One of the first things we did when we started the organization was to survey identity professionals. Most of the people who took the survey had more than ten years of experience in the field. We asked how long it took them to feel proficient in IAM (because everyone learns it on the job—it’s the only way to learn IAM). The most common response was ten to fifteen years. And the next most common response was I still don’t feel proficient. As a person who loves identity and access management and wants more people to become identity nerds, those responses broke my heart.

The responses reflected a very real problem. There’s just not enough educational material out there about identity. So we looked to other professional organizations, specifically to the field of project management, for inspiration. In 1996, a group of project management professionals built a body of knowledge that outlined methodologies like waterfall and agile, and tools like stand-ups and kanban boards. Most technology professionals know these words now because back in the 90s that industry deliberately built that body of knowledge. So we began gathering a group of volunteers to build a similar body of knowledge for IAM. Some of our corporate sponsors then asked if we could build a certification, too. We’re working on creating both of those things now. It’s the most ambitious project any of us have ever taken on.

Do you think identity products will be able to replace firewall products in the next five years?

In my opinion, not completely. That said, we’re finding that whether you’re inside or outside a firewalled network is a really weak indicator of whether or not you’re an attacker.

We used to think security was all about the network: The network is the castle, and we had to defend the castle. The people inside the castle were “good,” and the people outside the castle were “bad.” But that’s simply not accurate. Security isn’t just about keeping outsiders out. Legitimate users work from all sorts of devices all over the world. But people with malicious intentions can sometimes find their way onto secure networks. For these reasons, many security professionals are coming around to the idea that identity is the new perimeter. We’re realizing that the key to separating the legitimate users from the unauthorized users is very secure authentication. By secure, I mean things like 2-factor authentication. A factor might be something you know, like a password; something you have, like a YubiKey; or something you are, like a fingerprint. Two-factor authentication requires two of those three things.

The other part of identity that’s important to this “secure authentication” story is access management. You should have access to the things you need to do your job—not more, not less. The AWS Identity team is working on intelligence tools that give administrators the ability to see what roles a person used, what resources they’ve accessed, and what type of work they’re doing, so that admins can confidently scope their users’ permissions to the actions they actually perform. This is called “the principle of least privilege.” And it’s hard. People change jobs, they need to do certain tasks only once a year, or they need access to systems that most people in their role wouldn’t need access to. It’s a complicated problem but it’s one that’s important to solve for the future of the industry.

In your opinion, what’s the biggest challenge facing Identity right now?

Recruiting and education. It’s really hard to get people in the door. The field is exciting. It’s incredibly challenging and provides a lot of value. But it’s hard to explain Identity and Access Management to people so that they know exactly what they’re signing up for. It’s a very wide and very deep field. And once people are in the door, we have to help them figure out how the whole thing works—ideally without needing to spend ten to fifteen years on it.

You sing soprano in an award-winning choir. Tell us about that.

I sing in a choir called Opus 7. One of the things that we like to do is sing more obscure pieces, and pieces by living composers. We recently gave a concert where one of our composers was in attendance. It was a piece that he had written several years ago for a teacher at University of Washington who died. The teacher’s widow was also there, which was really amazingly powerful. Then, we also sang a song by one of his composing students who was also in attendance. One of the things you get when you sing a lot of living composers is an opportunity to sing pieces by female composers. So this female composer wrote a beautiful piece that hardly ever gets sung because people sing a lot of Mozart, Beethoven, and old stuff that people have sung before. We’re bored with that, and we want to highlight the amazing pieces that are being written by new composers.

Author

Sarah Cecchetti

Sarah is the Principal Product Manager for Amazon Cognito. She co-founded and serves on the board of directors of IDPro. She is a co-author of NIST Special Publication 800-63C Digital Identity Guidelines, which outlines federated authentication standards for all US federal agencies. She has been named one of the top 100 influencers in identity. She has been quoted as an industry expert in The LA Times, Forbes, and Wired. Sarah holds a Bachelor of Physics and a Master of Science in Information Management from the University of Washington where she was a NASA Space Grant Scholar.

Identify unused IAM roles and remove them confidently with the last used timestamp

Post Syndicated from Mathangi Ramesh original https://aws.amazon.com/blogs/security/identify-unused-iam-roles-remove-confidently-last-used-timestamp/

As you build on AWS, you create AWS Identity and Access Management (IAM) roles to enable teams and applications to use AWS services. As those teams and applications evolve, you might come to rely on only a subset of your original roles to meet your needs. This can leave unused roles in your AWS account. To help you identify these unused roles, IAM now reports the last-used timestamp, which represents when a role was last used to make an AWS request. You or your security team can use this information to identify, analyze, and then confidently remove unused roles. This helps you improve the security posture of your AWS environments. Additionally, by removing unused roles, you can simplify your monitoring and auditing efforts by focusing only on roles that are in use. You can review when a role was last used to access your AWS environment in the IAM console, using the AWS Command Line Interface (AWS CLI), or using the AWS SDK.

In this post, I demonstrate how to identify and remove roles that your team or applications don’t use by viewing the last-used timestamp in the IAM console.  Before I share an example, I’ll describe the existing IAM APIs where we now also report the last-used timestamp:

  • Get-role: Returns role details, including the path, ARN (Amazon Resource Name), and trust policy. You can now use this API to retrieve the last-used timestamp (a minimal boto3 sketch follows this list).
  • Get-account-authorization-details: Retrieves information about all the IAM users, groups, roles, and policies in your AWS account. You can now view the last-used timestamp along with the other role details.
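Here is a minimal boto3 sketch of reading the role-last-used information through get-role. The role name is a hypothetical example; a role with no recorded usage in the tracking period returns an empty RoleLastUsed map.

import boto3

iam = boto3.client('iam')

# Retrieve role details, including the role-last-used information
role = iam.get_role(RoleName='MigrationRole')['Role']
last_used = role.get('RoleLastUsed', {})

if 'LastUsedDate' in last_used:
    print(f"Last used on {last_used['LastUsedDate']} in {last_used.get('Region')}")
else:
    print('No recorded usage within the tracking period')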

How to use the AWS Management Console to view last-used information for roles

Imagine you’re a system administrator for Example Inc. and your development team is working on a new application. To enable them to get started with AWS quickly, you create roles for the team and their application. As the application goes through final review, you learn the team and application now rely on a smaller set of roles to access AWS services. This leaves unused roles in your AWS accounts that you might want to remove. You’re going to check the last time each role made a request to AWS and use this information to determine whether the team is using the role. If they aren’t, you plan to remove it knowing the team doesn’t need it for the application.

To view role-last-used information in the IAM Console, select Roles in the IAM navigation pane, then look for the Last activity column (see Figure 1 below). This displays the number of days that have passed since each role made an AWS service request. AWS records last-used information for the trailing 400 days. This is referred to as the tracking period. You can sort the column to identify the roles your team has not used recently.

In the case of Example Inc., let’s say you want to get rid of any roles that have been inactive for 90 days or more. From the information in Figure 1, you see that your team is using ApplicationEC2Access, TestRole, and CodeDeployRole. You also see they haven’t used AdminAccess, EC2FullAccess, and InfraSetupRole in the last 90 days. You can now delete these roles confidently. (Last activity “None”, as seen for the AdminAccess role, means that the role was not used within the trailing 400-day tracking period to make any service request.)
 

Figure 1: "Last activity" column in IAM console

Figure 1: “Last activity” column in IAM console

While analyzing the last-used timestamp for each role, you notice that the MigrationRole role was last active two months ago. You want to gather more information about the role’s access patterns to determine whether you ought to delete it. To do this, select the name of the role. From the role detail page, navigate to the Access Advisor tab and investigate the list of accessed services and verify what the role was used for. Access advisor provides a report that displays a list of services and timestamps that indicate when the selected IAM principal last accessed each of the services that it has permissions to. Based on this report, you can decide to follow up with the development team to see if they still need this role. Thus, you have reduced the number of roles in your account from 9 to 6, making it easier to monitor active roles and restrict access to your AWS environments.
 

Figure 2: Access Advisor report
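The console walkthrough above uses the access advisor report; if you want to pull the same last-accessed data programmatically, a sketch along these lines may help (the role ARN is a placeholder, and the response fields shown are assumptions worth confirming against the IAM API reference):

import time
import boto3

iam = boto3.client('iam')

# Start an access advisor report for the role (the ARN is a placeholder)
job_id = iam.generate_service_last_accessed_details(
    Arn='arn:aws:iam::123456789012:role/MigrationRole')['JobId']

# Poll until the report is ready
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(2)

# Print when each service the role can access was last used
for service in details.get('ServicesLastAccessed', []):
    print(service['ServiceName'], service.get('LastAuthenticated', 'not accessed'))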

Summary

In this post, I showed you how to use role-last-used information to identify and remove unused roles. By removing unused roles, you can simplify monitoring and improve your security posture. To learn more about deleting roles, visit the deleting roles or instance profiles documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Mathangi Ramesh

Mathangi is the product manager for AWS Identity and Access Management. She enjoys talking to customers and working with data to solve problems. Outside of work, Mathangi is a fitness enthusiast and a Bharatanatyam dancer. She holds an MBA degree from Carnegie Mellon University.

Add defense in depth against open firewalls, reverse proxies, and SSRF vulnerabilities with enhancements to the EC2 Instance Metadata Service

Post Syndicated from Colm MacCarthaigh original https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/

Since it first launched over 10 years ago, the Amazon EC2 Instance Metadata Service (IMDS) has helped customers build secure and scalable applications. The IMDS solved a big security headache for cloud users by providing access to temporary, frequently rotated credentials, removing the need to hardcode or distribute sensitive credentials to instances manually or programmatically. Attached locally to every EC2 instance, the IMDS runs on a special “link-local” IP address, 169.254.169.254, which means only software running on the instance can access it. For applications with access to the IMDS, it makes available metadata about the instance, its network, and its storage. The IMDS also makes AWS credentials available for any IAM role that is attached to the instance.

When you run applications in the cloud, application security is as critical as instance security; if the applications running on an instance have vulnerabilities or misconfigurations, there can be serious consequences. While application security plays an important role in a layered defense, AWS also constantly evaluates where to add layers, even within the instance, to minimize the damage when these situations do occur.

Today, AWS is making v2 of the EC2 Instance Metadata Service (IMDSv2) available. The existing instance metadata service (IMDSv1) is fully secure, and AWS will continue to support it. But IMDSv2 adds new “belt and suspenders” protections for four types of vulnerabilities that could be used to try to access the IMDS. These new protections go well beyond other types of mitigations, while working seamlessly with existing mitigations such as restricting IAM roles and using local firewall rules to restrict access to the IMDS. AWS is also making new versions of the AWS SDKs and CLIs available that support IMDSv2.

What’s new in IMDSv2

With IMDSv2, every request is now protected by session authentication. A session begins and ends a series of requests that software running on an EC2 instance uses to access the locally-stored EC2 instance metadata and credentials. The software starts a session with a simple HTTP PUT request to IMDSv2. IMDSv2 returns a secret token to the software running on the EC2 instance, which will use the token as a password to make requests to IMDSv2 for metadata and credentials. Unlike traditional passwords, you don’t need to worry about getting the token to the software, because the software gets it for itself with the PUT request. The token is never stored by IMDSv2 and can never be retrieved by subsequent calls, so a session and its token are effectively destroyed when the process using the token terminates. There’s no limit on the number of requests within a single session, and there’s no limit on the number of IMDSv2 sessions. Sessions can last up to six hours and, for added security, a session token can only be used directly from the EC2 instance where that session began.

For example, this curl recipe retrieves a session token that’s valid for the full six hours (21600 seconds) and then uses that token to access the EC2 instance’s profile metadata:


TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`

curl http://169.254.169.254/latest/meta-data/profile -H "X-aws-ec2-metadata-token: $TOKEN"

If you need to write code against the IMDSv2 directly, you can get more detail on the new scheme in the EC2 User Guide.
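If you'd rather make the same two calls from Python instead of curl, a minimal sketch using only the standard library might look like this (same endpoint and headers as the curl recipe above; it only works when run on an EC2 instance):

import urllib.request

# Step 1: start a session by requesting a token with an HTTP PUT (valid for six hours)
token_request = urllib.request.Request(
    'http://169.254.169.254/latest/api/token',
    method='PUT',
    headers={'X-aws-ec2-metadata-token-ttl-seconds': '21600'})
token = urllib.request.urlopen(token_request).read().decode()

# Step 2: use the token to read instance metadata (here, the profile document)
metadata_request = urllib.request.Request(
    'http://169.254.169.254/latest/meta-data/profile',
    headers={'X-aws-ec2-metadata-token': token})
print(urllib.request.urlopen(metadata_request).read().decode())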

How these changes add defense in depth

IMDSv2’s changes are easy to use, and you’ll start using it automatically if you’re using the updated AWS SDKs and CLIs. These changes go beyond other types of mitigations to protect against misconfigured-open website application firewalls, misconfigured-open reverse proxies, unpatched SSRF vulnerabilities, and misconfigured-open layer-3 firewalls and network address translation.

Protecting against open Website Application Firewalls

Some Web Application Firewall (WAF) services, such as AWS WAF, can’t be configured to act as open WAFs. However, some third-party WAFs can be misconfigured to allow attackers unauthorized access to the network behind the WAF, including the EC2 IMDS.

Many WAFs are designed to act invisibly, so that they can protect websites and applications without administrators having to change or reconfigure the applications that are behind the WAF. To be transparent, WAFs usually pass on all of the headers that come with a request, and do not add their own headers, such as the standard X-Forwarded-For header that other kinds of proxies add. In other words, applications behind a WAF get requests just as the requester sent them.

The AWS approach is to block open WAFs by using a type of request that open WAFs very rarely support, HTTP PUT requests. Although web services such as Amazon S3 use PUT requests for object storage, they’re an uncommon type of request for websites and browsers to use. Our analysis of third-party WAF products and open WAF misconfigurations found that the vast majority do not permit HTTP PUT requests. We’re using this PUT request to provide a new layer of defense that goes beyond any existing capabilities – we’ve architected the IMDSv2 service to require a PUT request at the beginning of a session, which will prevent open WAFs from being abused to access the IMDS in the vast majority of cases.

Protecting against open reverse proxies

As it happens, it’s also very rare for open reverse proxies to allow PUT requests, but IMDSv2 has another layer of defense against open reverse proxies. Reverse proxies, such as Apache httpd or Squid, can also be misconfigured to allow external requests that reach internal resources, but it’s still normal for these proxies to send an X-Forwarded-For HTTP header. That header itself is used to pass on the IP address of the original caller. IMDSv2 will also not issue session tokens to any caller with an X-Forwarded-For header, which is effective at blocking unauthorized access due to misconfigurations like an open reverse proxy.

Protecting against SSRF vulnerabilities

SSRF vulnerabilities allow attackers to make unauthorized requests from web applications. Since these requests come from the application itself, they can be used to access internal resources that the application has access to but that were not intended to be accessible to outsiders. SSRF vulnerabilities vary in their severity, and some are immune to other types of mitigations. For instance, blocking SSRFs through static headers in instance metadata requests is effective only when the vulnerability merely allows the attacker to control the URL that is being requested; however, AWS analysis found many SSRF vulnerabilities that allow attackers to set arbitrary headers because the SSRF vulnerability impacts the application’s own header processing.

IMDSv2’s combination of beginning a session with a PUT request, and then requiring the secret session token in other requests, is always strictly more effective than requiring only a static header. AWS analysis of real-world vulnerabilities found that this combination protects against the vast majority of SSRF vulnerabilities.

Protecting against open layer 3 firewalls and NATs

Last, there is a final layer of defense in IMDSv2 that is designed to protect EC2 instances that have been misconfigured as open routers, layer 3 firewalls, VPNs, tunnels, or NAT devices. With IMDSv2, the PUT response containing the secret token will, by default, not be able to travel outside the instance. This is accomplished by having the default Time To Live (TTL) on the low-level IP packets containing the secret token set to “1,” much lower than a typical value, such as “64.” Hardware and software that handle packets, including EC2 instances, subtract 1 from each packet’s TTL field whenever they pass it on. If the TTL gets to 0, the packet is discarded, and an error message is sent back to the sender. A packet with a TTL of “64” can therefore make sixty-four “hops” in a network before giving up, while a packet with a TTL of “1” can exist in just one. This feature allows legitimate traffic to get to an intended destination, but is designed to stop packets from endlessly running around in circles if there’s a loop in a network.

With IMDSv2, setting the TTL value to “1” means that requests from the EC2 instance itself will work because they’re returned to the caller (on the instance) before the subtraction occurs. But if the EC2 instance has been misconfigured as an open router, layer 3 firewall, VPN, tunnel, or NAT device, the response containing the token will have its TTL reduced to zero before leaving the instance, and the packet containing the response will be discarded on its way out of the instance, preventing transport to the attacker. The information simply won’t make it further than the EC2 instance itself, which means that an attacker won’t get the response back with the token, and with it the ability to access instance metadata, even if they’ve been successful at getting past all other defenses.

Making the transition

Both IMDSv1 and IMDSv2 will be available and enabled by default, and customers can choose which they will use. The IMDS can now be restricted to v2 only, or IMDS (v1 and v2) can also be disabled entirely. AWS recommends adopting v2 and restricting access to v2 only for added security. IMDSv1 remains available for customers who have tools and scripts using v1, and who are comfortable with the existing security posture of their instances.
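When you're ready to require v2 on a specific instance, a minimal boto3 sketch might look like the following; the instance ID is a placeholder, and it's worth confirming the current parameter names against the EC2 API reference.

import boto3

ec2 = boto3.client('ec2')

# Require session tokens (IMDSv2 only) and keep the metadata endpoint enabled
ec2.modify_instance_metadata_options(
    InstanceId='i-0123456789abcdef0',   # placeholder instance ID
    HttpTokens='required',              # 'optional' allows both v1 and v2
    HttpEndpoint='enabled')             # 'disabled' turns off the IMDS entirely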

A number of tools are available to make transitioning to v2 and disabling v1 seamless. Starting today, a new CloudWatch metric is available that provides visibility into the number of v1 calls that are being made on any given instance. Customers can use this metric to monitor how often v1 is still being accessed as Amazon Machine Images, the AWS SDKs, CLIs, cloud-init, and other software accessing the IMDS is updated, released, and upgraded. When you can see that an instance can be launched, activated, used for service, and the metric is zero, it is safe to require v2 of the IMDS, disabling v1. For more information on transitioning to IMDSv2, see the user guide.
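The post above doesn't name the metric, so treat the identifiers in this sketch as assumptions: in my experience the IMDSv1 call count surfaces as a per-instance metric (referred to here as MetadataNoToken) in the AWS/EC2 namespace, and checking it with boto3 might look like this:

import datetime
import boto3

cloudwatch = boto3.client('cloudwatch')

# Assumption: IMDSv1 calls are reported as the MetadataNoToken metric in AWS/EC2
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='MetadataNoToken',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=7),
    EndTime=datetime.datetime.utcnow(),
    Period=86400,
    Statistics=['Sum'])

# A consistent daily sum of zero suggests nothing on the instance still calls IMDSv1
for point in sorted(response['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Sum'])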

Security can also be further enhanced while this transition is happening. AWS credentials provided by the IMDS now include an ec2:RoleDelivery IAM context key. Credentials provided by the older IMDSv1 have an ec2:RoleDelivery value of “1.0,” and credentials using the new scheme will have an ec2:RoleDelivery value of “2.0.” This context key makes it easy to enforce use of the new scheme on a service-by-service or resource-by-resource basis by using those context keys as conditions in IAM policies, resource policies, or AWS Organizations service control policies. For example, if all of the software accessing an S3 bucket has been upgraded to use IMDSv2, then that S3 bucket can be safely restricted to only allow access to role-account credentials that have the “2.0” value (or greater) for the context key. The effect is that credentials retrieved using IMDSv1 will be prevented from accessing the bucket. AWS CloudTrail is also being updated to record the new ec2:RoleDelivery parameters.
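As one hedged illustration of the bucket-policy idea above, expressed here as a Python dictionary: the bucket name is a placeholder and the exact condition operator you choose may differ, but the ec2:RoleDelivery key and the "2.0" value come from the paragraph above.

import json

# Deny requests whose role credentials were delivered by IMDSv1 (ec2:RoleDelivery below 2.0)
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireIMDSv2Credentials",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"],
        "Condition": {"NumericLessThan": {"ec2:RoleDelivery": "2.0"}}
    }]
}
print(json.dumps(bucket_policy, indent=2))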

Hear about IMDSv2 at re:Invent

Mark Ryland will be talking in more detail about IMDSv2, and the transition to it, at AWS re:Invent in December. We’ll update this post soon with a link to the session in the re:Invent catalog.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.