Tag Archives: Security, Identity & Compliance

How to use AWS WAF to filter incoming traffic from embargoed countries

Post Syndicated from Rajat Ravinder Varuni original https://aws.amazon.com/blogs/security/how-to-use-aws-waf-to-filter-incoming-traffic-from-embargoed-countries/

AWS WAF provides inline inspection of inbound traffic at the application layer, detecting and filtering common web exploits that could affect application availability, compromise security, or consume excessive resources. The inbound traffic is inspected against web access control list (web ACL) rules that you can create manually or programmatically—either through AWS WAF Security Automations or through the AWS Marketplace. AWS WAF functions like a typical web application firewall, but with the added reliability and scalability that comes with being an AWS-managed service. It can detect and filter malicious web requests and scale to handle bursts in traffic.

We have customers in public sector and financial services who use AWS WAF to block requests from certain geographical locations, like embargoed countries, by applying geographic match conditions. By using AWS WAF, our customers can create a customized list to easily manage an automated solution for geographic blocking.

To reduce the operational burden of maintaining an up-to-date list of rules for geographic blocking, this blog post provides an automated solution that applies geography-based IP (GeoIP) restrictions based on a descriptive JSON file that lists all the locations you want to block. When you update this file, the automation applies all rules to the specified AWS WAF web ACL. For countries not covered by the geographic match condition (or if you need to block only a subset of IPs from a country), the JSON file also has a section where you can list IP ranges that should be blocked.
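
To make the file format concrete, here is a minimal sketch of what the embargoed countries JSON file could look like, along with a small Python check you might run before uploading it. The field names (Countries, IPRanges) and the listed values are illustrative assumptions, not the solution’s exact schema; use the sample file that ships with the solution as your starting point.

    # Illustrative only: assumed field names and sample values, not the solution's exact schema.
    import ipaddress
    import json

    embargo_doc = {
        "Countries": ["CU", "IR", "KP", "SD", "SY"],        # two-letter ISO 3166 country codes
        "IPRanges": ["203.0.113.0/24", "198.51.100.7/32"]   # extra CIDR blocks to block
    }

    def validate(doc):
        """Basic sanity checks before uploading the JSON file to Amazon S3."""
        for code in doc["Countries"]:
            if not (len(code) == 2 and code.isalpha() and code.isupper()):
                raise ValueError("Not a two-letter country code: %s" % code)
        for cidr in doc["IPRanges"]:
            ipaddress.ip_network(cidr)   # raises ValueError on a malformed CIDR

    validate(embargo_doc)
    print(json.dumps(embargo_doc, indent=2))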

If you deploy our solution with the default parameters, it builds the following environment:
 

Figure 1: Solution diagram

As the diagram shows, the solution uses these resources:

  • AWS WAF, which functions like a typical web application firewall, but with the added reliability and scalability that comes with being an AWS-managed service.
  • Two AWS Lambda functions — a Custom Resource function and an Embargoed Countries Parser.
    1. The Custom Resource function helps provision the solution when the AWS WAF conditions, rules, and web ACL are created and configured. It’s also triggered when you upload an initial version of the embargoed countries JSON file to your Amazon Simple Storage Service (Amazon S3) bucket.
    2. The Embargoed Countries Parser function is triggered whenever a new JSON file is uploaded to the S3 bucket. When an upload occurs, the function parses the new file and enforces AWS WAF rules that reflect what the file describes (a simplified sketch of this flow follows the list).
  • An Amazon Simple Storage Service (Amazon S3) bucket, where you’ll save the embargoed countries JSON file.
  • An AWS Identity and Access Management (IAM) role that gives the Lambda function access to the following resources:
    1. AWS WAF, to list, create, obtain, and update geographic IP restrictions, conditions, and web ACLs.
    2. Amazon CloudWatch logs, to monitor, store, and access log files generated by AWS Lambda.
    3. Amazon S3, to upload and read the embargoed countries JSON file.
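
The Embargoed Countries Parser is the piece that turns the JSON file into AWS WAF conditions. The snippet below is a simplified sketch of that flow using the AWS WAF Classic APIs, not the solution’s actual Lambda code: the environment variable names and JSON field names are assumptions, and a production version would also remove entries that disappear from the file.

    # Simplified sketch of an S3-triggered parser; not the solution's actual Lambda function.
    import json
    import os
    import boto3

    s3 = boto3.client('s3')
    waf = boto3.client('waf')   # use 'waf-regional' if the web ACL protects an ALB

    def lambda_handler(event, context):
        # The S3 PUT event identifies the embargoed countries file that changed
        record = event['Records'][0]['s3']
        obj = s3.get_object(Bucket=record['bucket']['name'], Key=record['object']['key'])
        doc = json.loads(obj['Body'].read())

        # Insert each listed country into the geo match condition
        waf.update_geo_match_set(
            GeoMatchSetId=os.environ['GEO_MATCH_SET_ID'],      # assumed configuration value
            ChangeToken=waf.get_change_token()['ChangeToken'],
            Updates=[{'Action': 'INSERT',
                      'GeoMatchConstraint': {'Type': 'Country', 'Value': c}}
                     for c in doc['Countries']])

        # Insert any additional CIDR ranges into the IP match condition
        ip_updates = [{'Action': 'INSERT',
                       'IPSetDescriptor': {'Type': 'IPV4', 'Value': cidr}}
                      for cidr in doc.get('IPRanges', [])]
        if ip_updates:
            waf.update_ip_set(
                IPSetId=os.environ['IP_SET_ID'],               # assumed configuration value
                ChangeToken=waf.get_change_token()['ChangeToken'],
                Updates=ip_updates)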

The image below shows a reference architecture where malicious traffic is blocked by AWS WAF rules.
 

Figure 2: AWS WAF integration with Amazon CloudFront / ALB

As a starting point for this walk-through, we created a list of embargoed countries based on information published by the Office of Foreign Assets Control (OFAC) of the US Department of the Treasury.
OFAC sanctions and restrictions vary in scope, and OFAC does not maintain one single list of embargoed countries. OFAC also imposes additional restrictions on doing business with certain individuals and entities that are not covered by the embargoed country sanctions list. For the most up-to-date information about embargoed countries and other OFAC sanctions programs, see the US Department of the Treasury’s Resource Center.

IMPORTANT NOTES:

You’re responsible for updating your list of embargoed countries, based on geographic IP restrictions that you establish and keep up-to-date. Later in the post, we’ll show you how to update and edit your list, but we want to emphasize that ensuring your embargo list is current and comprehensive for your business and compliance needs is a critical part of your responsibility as a customer.

Further, the accuracy of the IP address to country lookup database used by AWS WAF varies by region. Based on recent tests, our overall accuracy for the IP address to country mapping is 99.8%. We recommend that you work with regulatory compliance experts to decide whether your solution meets your compliance needs.

Deploying the CloudFormation stack

To get started, first make sure you have at least one resource that’s associated with your web ACL. This can be either a CloudFront distribution or an Application Load Balancer (ALB). Then, select the Launch Stack button below to launch an AWS CloudFormation stack in your account. It will take approximately 5 minutes for the CloudFormation stack to complete:
 
Select this image to open a link that starts building the CloudFormation stack

The code for this solution is available on GitHub.

Note: The template launches in the US East (N. Virginia) Region. To launch the solution in a different AWS Region, use the Region selector in the console navigation bar.

  1. On the Select Template page, select Next.
  2. On the Specify Details page, give your solution stack a name.
  3. Under Parameters, review the default parameters for the template and modify the values, if you’d like.

    The following screenshot shows these parameters.
     

    Figure 3: Review and modify parameters

    • EndpointType: <Requires input>; default: CloudFront. Choose whether the endpoint that needs to be protected by AWS WAF is associated with CloudFront or an ALB.
    • WebAclId: Insert the web ACL ID, or leave it empty to create a new one.
    • RuleAction: Allowed values: BLOCK, COUNT; default: BLOCK. Select the action that AWS WAF takes when a web request comes from an embargoed country.
    • RulePriorityIp: Default: 100. Specifies the order in which the embargoed IPs rule will be evaluated in the web ACL.
    • RulePriorityGeo: Default: 101. Specifies the order in which the embargoed countries rule will be evaluated in the web ACL.

     

  4. Select Next.
  5. On the Options page, you can specify tags (key-value pairs) for the resources in your stack, if you’d like. Then select Next.
  6. On the Review page, review and confirm the settings. Be sure to select the box acknowledging that the template will create AWS Identity and Access Management (IAM) resources with custom names.
  7. To deploy the stack, select Create. In approximately two minutes, the stack creation should be complete. You can verify this on the Events tab by finding your stack ID and looking for the CREATE_COMPLETE status:

    Upon the completion of the CloudFormation stack, you should see CREATE_COMPLETE as the Status. It should look like this:
     

    Figure 4: Look for “CREATE_COMPLETE” as the “Status”

  8. Return to the AWS Management Console, where you’ll see that an additional rule has been added, as shown in the following diagram:
     
    Figure 5: An additional rule has been added

  9. Choose your newly created rule, then go to the Rules details page. You should now see the JSON file that contains our initial list of embargoed countries to filter traffic from. This is a starting point list: it’s your responsibility as a customer to update the embargoed countries list going forward. To update the list of countries, you can edit the JSON file located in the Amazon S3 bucket using the steps in the next section of this post.
     

    Note: Check to make sure that the web ACL is associated with the endpoint you need to protect, or you run the risk of leaving the endpoint unprotected against inbound traffic from the geographic regions you want to block. More information about how to associate an endpoint with an AWS WAF web ACL can be found here. (A quick programmatic check of the association is sketched after this procedure.)
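
For an ALB-protected endpoint, one quick way to confirm (and, if needed, set) the association is shown below; the load balancer ARN and web ACL ID are placeholders. For CloudFront, the web ACL is instead configured on the distribution itself.

    # Hedged sketch: verify and set the web ACL association for an ALB (placeholder values).
    import boto3

    waf_regional = boto3.client('waf-regional')

    alb_arn = 'arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abcdef1234567890'
    web_acl_id = 'example-web-acl-id'

    # Check what is currently associated with the load balancer
    current = waf_regional.get_web_acl_for_resource(ResourceArn=alb_arn)
    if not current.get('WebACLSummary'):
        waf_regional.associate_web_acl(WebACLId=web_acl_id, ResourceArn=alb_arn)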

Updating the list of embargoed countries

  1. To find the S3 bucket, go to the Resources tab of the completed CloudFormation stack.
  2. Select the Physical ID to see an Amazon S3 bucket with an S3 object called embargoed-countries.json. You’ll be directed to the Amazon S3 bucket.
     
    Figure 6: The “embargoed-countries.json” file

  3. Download the embargoed-countries.json file, edit it, and upload the edited file to the same location (a scripted version of this step is sketched below). Wait a couple of minutes for the changes to propagate to AWS WAF.
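
If you prefer to script the update rather than use the console, a minimal sketch of the download-edit-upload cycle follows. The bucket name is a placeholder for the value from the stack’s Resources tab, and the Countries field matches the illustrative schema used earlier; adjust both to your deployment.

    # Scripted download-edit-upload of the embargoed countries file (placeholder bucket name).
    import json
    import boto3

    s3 = boto3.client('s3')
    bucket = 'your-embargoed-countries-bucket'   # from the CloudFormation Resources tab
    key = 'embargoed-countries.json'

    doc = json.loads(s3.get_object(Bucket=bucket, Key=key)['Body'].read())
    doc['Countries'].append('XX')                # add or remove entries as required
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(doc, indent=2))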

Conclusion

You now have access to a simple solution to block inbound traffic from specific geographic regions. Using this solution, you can use AWS WAF to help protect applications served by CloudFront or an ALB from unwanted or unauthorized traffic.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Rajat Ravinder Varuni

As a security architect with Amazon Web Services, Rajat provides subject matter expertise in the architecture and deployment of solutions that reduce the likelihood of data leakage, web application and denial-of-service attacks, as well as in the design of data encryption methodologies to secure mission-critical data. Find his other contributions to the AWS Security Blog here.

Author

Heitor Vital

Heitor Vital is a Solutions Builder at Amazon Web Services. His team outlines AWS best practices and provides prescriptive architectural guidance, as well as automated solutions that you can deploy in your AWS account in minutes. He contributes to projects such as AWS WAF Security Automation, Data Lake Solution, and Serverless Bot Framework.

New whitepaper: Achieving Operational Resilience in the Financial Sector and Beyond

Post Syndicated from Rahul Prabhakar original https://aws.amazon.com/blogs/security/new-whitepaper-achieving-operational-resilience-in-the-financial-sector-and-beyond/

AWS has released a new whitepaper, Amazon Web Services’ Approach to Operational Resilience in the Financial Sector and Beyond, in which we discuss how AWS and customers build for resiliency on the AWS cloud. We’re constantly amazed at the applications our customers build using AWS services — including what our financial services customers have built, from credit risk simulations to mobile banking applications. Depending on their internal and regulatory requirements, financial services companies may need to meet specific resiliency objectives and withstand low-probability events that could otherwise disrupt their businesses. We know that financial regulators are also interested in understanding how the AWS cloud allows customers to meet those objectives. This new whitepaper addresses these topics.

The paper walks through the AWS global infrastructure and how we build to withstand failures. Reflecting how AWS and customers share responsibility for resilience, the paper also outlines how a financial institution could build a mission-critical application across AWS Regions in a way that improves its resiliency compared to a traditional, on-premises environment.

Security and resiliency remain our highest priority. We encourage you to check out the paper and provide feedback. We’d love to hear from you, so don’t hesitate to get in touch with us by reaching out to your account executive or contacting AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Learn about AWS Services & Solutions – January AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-january-aws-online-tech-talks/

AWS Tech Talks

Happy New Year! Join us this January to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Containers

January 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into AWS Cloud Map: Service Discovery for All Your Cloud Resources – Learn how to increase your application availability with AWS Cloud Map, a new service that lets you discover all your cloud resources.

Data Lakes & Analytics

January 22, 2019 | 1:00 PM – 2:00 PM PT – Increase Your Data Engineering Productivity Using Amazon EMR Notebooks – Learn how to develop analytics and data processing applications faster with Amazon EMR Notebooks.

Enterprise & Hybrid

January 29, 2019 | 1:00 PM – 2:00 PM PT – Build Better Workloads with the AWS Well-Architected Framework and Tool – Learn how to apply architectural best practices to guide your cloud migration.

IoT

January 29, 2019 | 9:00 AM – 10:00 AM PT – How To Visually Develop IoT Applications with AWS IoT Things Graph – See how easy it is to build IoT applications by visually connecting devices & web services.

Mobile

January 21, 2019 | 11:00 AM – 12:00 PM PT – Build Secure, Offline, and Real Time Enabled Mobile Apps Using AWS AppSync and AWS Amplify – Learn how to easily build secure, cloud-connected data-driven mobile apps using AWS Amplify, GraphQL, and mobile-optimized AWS services.

Networking

January 30, 2019 | 9:00 AM – 10:00 AM PT – Improve Your Application’s Availability and Performance with AWS Global Accelerator – Learn how to accelerate your global latency-sensitive applications by routing traffic across AWS Regions.

Robotics

January 29, 2019 | 11:00 AM – 12:00 PM PT – Using AWS RoboMaker Simulation for Real World Applications – Learn how AWS RoboMaker simulation works and how you can get started with your own projects.

Security, Identity & Compliance

January 23, 2019 | 1:00 PM – 2:00 PM PT – Customer Showcase: How Dow Jones Uses AWS to Create a Secure Perimeter Around Its Web Properties – Learn tips and tricks from a real-life example on how to be in control of your cloud security and automate it on AWS.

January 30, 2019 | 11:00 AM – 12:00 PM PT – Introducing AWS Key Management Service Custom Key Store – Learn how you can generate, store, and use your KMS keys in hardware security modules (HSMs) that you control.

Serverless

January 31, 2019 | 9:00 AM – 10:00 AM PT – Nested Applications: Accelerate Serverless Development Using AWS SAM and the AWS Serverless Application Repository – Learn how to compose nested applications using the AWS Serverless Application Model (SAM), SAM CLI, and the AWS Serverless Application Repository.

January 31, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive Into Lambda Layers and the Lambda Runtime API – Learn how to use Lambda Layers to enable re-use and sharing of code, and how you can build and test Layers locally using the AWS Serverless Application Model (SAM).

Storage

January 28, 2019 | 11:00 AM – 12:00 PM PT – The Amazon S3 Storage Classes – Learn about the Amazon S3 Storage Classes and how to use them to optimize your storage resources.

January 30, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Amazon FSx for Windows File Server: Running Windows on AWS – Learn how to deploy Amazon FSx for Windows File Server in some of the most common use cases.

How to centralize and automate IAM policy creation in sandbox, development, and test environments

Post Syndicated from Mahmoud ElZayet original https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-environments/

To keep pace with AWS innovation, many customers allow their application teams to experiment with AWS services in sandbox environments as they move toward production-ready architecture. These teams need timely access to various sets of AWS services and resources, which means they also need a mechanism to help ensure least privilege is granted. In other words, your application team generally shouldn’t have access to administrative resources, such as an AWS Lambda function that takes periodic Amazon Elastic Block Store snapshot backups, or an Amazon CloudWatch Events rule that sends events to a centralized information security account managed by your security team.

In this blog post, I’ll show you how to create a centralized and automated workflow that creates and validates AWS Identity and Access Management (IAM) policies for application teams working in various sandbox, development, and test environments. Your security developers can customize this workflow according to the specific requirements of your security team. They can create logic to limit the allowed permission sets based on account type or owning team. I’ll use AWS CodePipeline to create and manage a workflow containing various stages and spanning multiple AWS accounts that I’ll describe in more detail in the next section.

Solution overview

I’ll start with this scenario: Alice is an administrator for an AWS sandbox account used by her organization’s data scientists to try out AWS analytics services such as Amazon Athena and Amazon EMR. The data scientists assess the suitability of these services for their production use cases by running sample analytics jobs on portions of real data sets after any sensitive information has been removed. The data sets are stored in an existing Amazon Simple Storage Service (Amazon S3) bucket. For every new project, Alice authors a new IAM policy that allows the project team to access their requested Amazon S3 bucket and create their analytics clusters. However, Alice must follow a company guideline that sandbox accounts can only launch specific Amazon Elastic Compute Cloud (Amazon EC2) instance types. She must also restrict access to all administrative AWS Lambda functions and CloudWatch Events rules that the security team uses to monitor sandbox account compliance. Below is the solution that meets these requirements and makes it easier for Alice and other administrators to perform their tasks.
 

Figure 1: Solution architecture

  1. Alice uses the IAM visual editor to author a template that gives the data science team access to launch and manage EMR clusters that analyze S3-based data sets. She then uploads the IAM JSON policy document to an existing S3 bucket, encrypting it with an AWS Key Management Service (AWS KMS) key. The key and the S3 bucket were already created by the security team as part of account baselining, which I will detail later in this post.
  2. AWS CodePipeline automatically fetches the IAM JSON policy document and invokes a sequence of validation checks that use a single and central Lambda function hosted in an AWS account managed by the security team.
  3. If the IAM JSON policy adheres to all account and general security requirements coded by the security developers, the central Lambda function automatically creates the policy in Alice’s account and the pipeline will succeed. The central validation Lambda function will also attach a set of predefined explicit denies to the IAM policy to ensure that it limits undesired user capabilities in the sandbox account. If the IAM JSON policy fails the checks, the pipeline will fail and provide Alice the specific reason for non-compliance. Alice must then modify the policy and resubmit. When the policy has been successfully created, Alice will attach it to the right IAM user, group, or role.

Solution deployment

This solution includes the following three steps:

  1. Deploy the solution prerequisites.
  2. Set up the policy validation Lambda function in the central information security account.
  3. Test the sandbox account pipeline.

Prerequisites

As this solution manages permissions granted to AWS services or IAM entities, I highly recommend that you try the solution first in an isolated test environment to make sure it meets all your security requirements.

  1. You’ll need administrator access in two AWS accounts to set up the solution. The deployment of this solution is typically done by one of your organization’s administrators while setting up new AWS accounts. These are the two account types you’ll need access to:
     

    • A sandbox account. This lets application teams experiment with various AWS architectures. This could be a development or test account, as mentioned earlier.
    • A central information security account. Typically, this is owned by an information security team who monitors and enforces security compliance within a multi-account structure.


Important: Because the Lambda function that you’ll create in the information security account has highly privileged permissions, it’s important to strictly follow best practices for securing the account. You need to limit account access to security team members. Sandbox account administrators should also not give this central Lambda function any IAM permissions in their sandbox account beyond IAM policy creation.

  2. Because you’ll use the AWS Management Console for both AWS accounts, I strongly recommend that you have roles in both AWS accounts and use the console’s Switch Role feature. You can attach an alias to each account and give each a different color code so that you always know which one you’re logged into.
  3. Make sure to use the same AWS Region for all the resources that you create for this solution.

Step 1: Deploy the solution prerequisites

Before building the pipeline across the two AWS accounts, you must first configure the required resources in both accounts, such as IAM roles and encryption keys. This configuration is typically done according to your security team’s guidelines when your organization first sets up the sandbox, development, or test environment.

Important

  • In addition to the initial setup you’ll create in this section, your security team must explicitly deny sandbox, development, or test account administrators from attaching IAM policies that do not meet the allowed security policies for that account type, such as the AdministratorAccess IAM policy. Moreover, your security team must ensure that any current or future users, groups, or roles in the account have no permissions to directly set or update IAM policies, such as CreatePolicy, CreatePolicyVersion, PutRolePolicy, PutUserPolicy, PutGroupPolicy, or UpdateAssumeRolePolicy. You want to ensure that creating permissions can only be done through the automation pipeline, which I’ll show you how to build shortly. (An illustrative deny statement is sketched after this list.)
  • Because the solution I’ll be describing focuses on the creation of least privilege permissions, it’s highly advisable that your security team combines the solution with IAM permission boundaries to make sure that any permissions defined in this solution are scoped by a set of pre-defined permissions for every type of account in the organization. For example, your account administrators might only be allowed to create IAM users or roles with a pre-defined set of permission boundaries that limit the permissions attached to those principals. For more information about permission boundaries, please refer to this AWS Security blog post.
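
One way your security team could express the guardrail above is an explicit deny on direct IAM policy manipulation, attached to sandbox principals or enforced as an organizations-level control. The sketch below is illustrative only (policy name and scope are assumptions), not the solution’s own baseline.

    # Illustrative guardrail: deny direct IAM policy manipulation for sandbox principals.
    import json
    import boto3

    deny_direct_policy_writes = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyDirectIamPolicyChanges",
            "Effect": "Deny",
            "Action": [
                "iam:CreatePolicy",
                "iam:CreatePolicyVersion",
                "iam:PutRolePolicy",
                "iam:PutUserPolicy",
                "iam:PutGroupPolicy",
                "iam:UpdateAssumeRolePolicy"
            ],
            "Resource": "*"
        }]
    }

    iam = boto3.client('iam')
    iam.create_policy(
        PolicyName='deny-direct-iam-policy-changes',   # placeholder name
        PolicyDocument=json.dumps(deny_direct_policy_writes),
        Description='Forces IAM policy creation through the automation pipeline')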

Create the sandbox account prerequisites

Follow the steps below to deploy an AWS CloudFormation template that will create the following resources in the sandbox account:

  • An S3 bucket where your sandbox administrators will upload IAM policies
  • An IAM role that your automated pipeline will use to access the S3 bucket that stores the IAM policies
  • An AWS KMS key that you will use to encrypt the IAM policies in your S3 bucket
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
     
    Figure 2: CloudFormation console with prepopulated URL

  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. The template defines an input parameter called CentralAccount that you can populate with the AWS account ID of your security account. For more information on how to find the account ID of your security account, check here.
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles that your pipeline will use, select the check box that says I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Select the Stack info tab and refresh periodically while watching the Stack Status field value. After your stack reaches the state CREATE_COMPLETE, navigate to the CloudFormation Outputs tab and copy the following output values to the text editor of your choice. You’ll use these values in subsequent CloudFormation stacks.
     
    Figure 3: CloudFormation Outputs tab

Create the information security account prerequisites

Follow the steps below to deploy a CloudFormation template that will create the following resources in your information security account:

  • An IAM role used by your automated pipeline to invoke your central Lambda function and to provide access to the sandbox account KMS key
  • An IAM role used by the central Lambda function to assume a role in the sandbox account and manage IAM policies
  1. While logged in to your security account in your default browser, select this link to launch an AWS stack with the security environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. Populate the following input parameter fields:
    • SandboxAccount: The AWS account ID for the sandbox account.
    • ArtifactBucket: The bucket name that you noted in your text editor from the previous stack run in the sandbox account
    • CMKARN: The Amazon Resource Name (ARN) of the KMS key that you noted in your text editor from the previous stack run in the sandbox account
    • PolicyCheckerFunctionName: The name of the Lambda function to be created later. The default value is PolicyChecker
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles used by your pipeline, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait until your stack reaches the CREATE_COMPLETE state.

Create the sandbox account pipeline

Now, switch back to your sandbox account and deploy the CloudFormation template that will create the following resources in the sandbox account:

  • An AWS CodePipeline automation pipeline that fetches the IAM policy document from S3 and sends it to the security account for centralized validation. If valid, a Lambda function in the information security account will also create the IAM policy in the sandbox account.
  • An S3 bucket policy to allow your central Lambda function to fetch the IAM policy JSON document from your bucket
  • An IAM role that will be assumed by the Lambda function in the central information security account and used to create IAM policies in the sandbox account. Sandbox account administrators can then attach those IAM policies to the required entities, such as an IAM user or role.
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox account pipeline. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Click Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Pipeline, should already be populated.
  3. Populate the following input parameter fields:
    • CentralAccount: The AWS account ID of the information security account, without hyphens.
    • ArtifactBucket: The same bucket name that you noted in your text editor earlier and used in the previous stack in the information security account.
    • CMKARN: The ARN of the KMS key that you noted in your text editor earlier and used in the previous stack in the information security account.
    • PolicyCheckerFunctionName: Again, the name of the Lambda function to be created later. It must be the same value you provided to the information security account template.
  4. Select Next, and then select Next again.
  5. To have the stack create the required IAM roles, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait until your stack reaches the CREATE_COMPLETE state.

Step 2: Set up the policy validation Lambda function in the central information security account

In the central information security account, create the Lambda function to validate the IAM policies created in the sandbox environment.

  1. In the AWS Lambda console, select Create Function and then select Author from scratch. Provide values for the following fields:
    • Name. This must be the same function name defined as input parameter PolicyCheckerFunctionName to CloudFormation in step 1, when you set up the information security account prerequisites. If you did not change the default value in step 1, the default is still PolicyChecker.
    • Runtime. Python 2.7.
    • Role. To set the role, select Choose an existing role, and then select the role named policy-checker-lambda-role. This is the role you created in step 1, when you set up the information security account prerequisites.

    Choose Create Function, scroll down to Function Code, and then paste the following code into the editor (replacing the existing code):

    
    #  Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #  Licensed under the Apache License, Version 2.0 (the "License"). You may not
    #  use this file except in compliance with
    #  the License. A copy of the License is located at
    #      http://aws.amazon.com/apache2.0/
    #  or in the "license" file accompanying this file. This file is distributed
    #  on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
    #  either express or implied. See the License for the
    #  specific language governing permissions and
    #  limitations under the License.
    from __future__ import print_function
    import json
    import boto3
    import zipfile
    import tempfile
    import os
    
    print('Loading function')
    PERMISSIVE_ERROR_MSG = """Policy creation request rejected: * permissions not
                             allowed in both actions and resources"""
    GENERAL_ERROR_MSG = """An error has occurred while validating policy.
                            Please contact admin"""
    
    
    def get_template(event, s3, artifact, file_in_zip):
        bucket = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['bucketName']
        key = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['objectKey']
    
        # Download the pipeline artifact and read the requested file out of the zip
        with tempfile.NamedTemporaryFile() as tmp_file:
            s3.download_file(bucket, key, tmp_file.name)
            with zipfile.ZipFile(tmp_file.name, 'r') as zip:
                return zip.read(file_in_zip)
    
    
    def get_sts_session(event, account, rolename):
        sts = boto3.client("sts")
        RoleArn = str("arn:aws:iam::" + account + ":role/" + rolename)
        response = sts.assume_role(
            RoleArn=RoleArn,
            RoleSessionName='SecurityManageAccountPermissions',
            DurationSeconds=900)
        sts_session = boto3.Session(
            aws_access_key_id=response['Credentials']['AccessKeyId'],
            aws_secret_access_key=response['Credentials']['SecretAccessKey'],
            aws_session_token=response['Credentials']['SessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        return (sts_session)
    
    
    def ManagePolicy(event, context):
        # Set boto session to get pipeline artifact from sandbox/dev/test account
        artifact_session = boto3.Session(
            aws_access_key_id=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['accessKeyId'],
            aws_secret_access_key=event['CodePipeline.job']['data']
                                       ['artifactCredentials']['secretAccessKey'],
            aws_session_token=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['sessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        # Fetch pipeline artifact from S3
        s3 = artifact_session.client('s3')
        permission_doc = get_template(event, s3, '', 'policy.json')
        metadata_doc = json.loads(get_template(event, s3, '', 'metadata.json'))
        permission_doc_json = json.loads(permission_doc)
        # Assume the central account role in sandbox/dev/test account
        global STS_SESSION
        STS_SESSION = ''  
        STS_SESSION = get_sts_session(
            event, event['CodePipeline.job']['accountId'], 'central-account-role')
        iam = STS_SESSION.client('iam')
        codepipeline = STS_SESSION.client('codepipeline')
        policy_arn = 'arn:aws:iam::' + event['CodePipeline.job']['accountId'] + ':policy/' + metadata_doc['PolicyName']
    
        try:
            # 1.Sample code - Validate policy sent from sandbox/dev/test account:
            # look for * actions and * resources
            for statement in permission_doc_json['Statement']:
                if statement['Action'] == '*' and statement['Resource'] == '*':
                    return codepipeline.put_job_failure_result(
                                        jobId=event['CodePipeline.job']['id'],
                                        failureDetails={
                                            'type': 'JobFailed',
                                            'message': PERMISSIVE_ERROR_MSG})
            # 2.Sample code - Attach any required denies from central
            # pre-defined policy
            iam_local = boto3.client('iam')
            account_id = context.invoked_function_arn.split(":")[4]
            local_policy_arn = 'arn:aws:iam::' + account_id + ':policy/central-deny-policy-sandbox'
            policy_response = iam_local.get_policy(PolicyArn=local_policy_arn)
            policy_version_id = policy_response['Policy']['DefaultVersionId']
            policy_version_doc = iam_local.get_policy_version(
                PolicyArn=local_policy_arn,
                VersionId=policy_version_id)
            for statement in policy_version_doc['PolicyVersion']['Document']['Statement']:
                permission_doc_json['Statement'].append(
                   statement
                )
            # 3. If validated successfully, create policy in
            # sandbox/dev/test account
            iam.create_policy(
                PolicyName=metadata_doc['PolicyName'],
                PolicyDocument=json.dumps(permission_doc_json),
                Description=metadata_doc['PolicyDescription'])
    
            # successful creation, put result back to
            # sandbox/dev/test account pipeline
            codepipeline.put_job_success_result(
                jobId=event['CodePipeline.job']['id'])
        except Exception as e:
            print('Error: ' + str(e))
            codepipeline.put_job_failure_result(
                jobId=event['CodePipeline.job']['id'],
                failureDetails={'type': 'JobFailed', 'message': GENERAL_ERROR_MSG})
    
    def lambda_handler(event, context):
        print(event)
        ManagePolicy(event, context)
    

    This sample code shows how the Lambda function checks the IAM JSON policy submitted by Alice for policies that are too permissive because they allow all IAM actions on all account resources. The sample code also shows an IAM Deny action that prevents the launch of Amazon EC2 instances that are not part of the T2 EC2 instance family. An explicit deny here ensures that only T2 instances can be launched. Your security developers should author code similar to this sample code, in order to meet the security policies of every account type and control the IAM policies created in various sandbox, development, and test environments.

  2. Before saving your new Lambda function code, scroll further down to the Basic Settings section and increase the function timeout to 10 seconds.
  3. Select Save.

Step 3: Test the sandbox account pipeline

Now it’s time to deploy the solution in your sandbox account.

  1. Create the following files and compress them into an archive named policy.zip (this is the name expected by the pipeline you created). A scripted way to build and upload the archive is sketched after this procedure.
    • metadata.json: This file contains metadata like the name and description of the IAM policy to be created.
      
                      {
                      "PolicyDescription": "ec2 start permission policy",
                      "PolicyName": "Ec2RunTeamA"
                      }
                      

    • policy.json: This file contains the JSON body of the IAM policy to be created.
      
                      {
                      "Version": "2012-10-17",
                      "Statement": [
                              {
                              "Sid": "EC2Run",
                              "Effect": "Allow",
                              "Action": "ec2:RunInstances",
                              "Resource": "*"
                              }
                      ]
                      }
                      

  2. To upload your policy.zip file to the bucket you created earlier, go to the Amazon S3 console in the sandbox account and, in the search box at the top of the page, search for the bucket you noted in your text editor earlier as ArtifactBucket.
  3. When you locate your bucket, select the bucket name, and then select Upload. The upload dialog will appear.
  4. Select Add Files and navigate to the folder with the policy.zip file. Select the file, select Open, select Next, and then select Next again.
     
    Figure 4: S3 upload dialog

  5. Select the AWS KMS master-key radio button, and then select the KMS key that has the alias codepipeline-policy-crossaccounts.
     
    Figure 5: Selecting the KMS key

  6. Select Next, and then select Upload.
  7. Go to the AWS CodePipeline console, select your sandbox pipeline, and wait for the pipeline to start running. It can take up to a minute for it to start.
     
    Figure 6: AWS CodePipeline console

  8. Wait for your pipeline to complete. There should be no validation errors for the IAM policy you just uploaded and your IAM policy should be successfully created. To view the newly created IAM policy, open the AWS IAM console.
  9. Select Policies on the left and search for the policy with the name defined in the metadata.json file.
     
    Figure 7: Viewing your new policy

  10. Select the policy name. Note the IAM deny that was automatically added to your defined policy.
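
If you prefer to script the packaging and upload instead of using the console (steps 1 through 6 above), the sketch below builds policy.zip in memory and uploads it with the cross-account KMS key. The bucket name and key ARN are placeholders for the values you noted from the prerequisite stack outputs.

    # Hedged sketch: build policy.zip and upload it to the artifact bucket with SSE-KMS.
    import io
    import json
    import zipfile
    import boto3

    metadata = {"PolicyDescription": "ec2 start permission policy",
                "PolicyName": "Ec2RunTeamA"}
    policy = {"Version": "2012-10-17",
              "Statement": [{"Sid": "EC2Run",
                             "Effect": "Allow",
                             "Action": "ec2:RunInstances",
                             "Resource": "*"}]}

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
        zf.writestr('metadata.json', json.dumps(metadata))
        zf.writestr('policy.json', json.dumps(policy))

    boto3.client('s3').put_object(
        Bucket='your-artifact-bucket',            # placeholder: ArtifactBucket output value
        Key='policy.zip',                         # the object name the pipeline expects
        Body=buf.getvalue(),
        ServerSideEncryption='aws:kms',
        SSEKMSKeyId='arn:aws:kms:us-east-1:111122223333:key/your-key-id')  # placeholder CMK ARN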

If you’d like to test the pipeline further, you can modify the policy to permit all actions on all resources. When policy.zip is uploaded again, the pipeline should return the following error:


Policy creation request rejected: * permissions not allowed in both actions and resources

If you encounter any errors as you modify your Lambda function code, you can always go back to the Lambda function logs in the central information security account. For more information on how to access Lambda function logs, please refer to the documentation.

The same logic used here can be extended to other sandbox, development, or test environments. However, for the central information security account, the existing roles will need to be updated to trust and have access to the resources in the newly added sandbox, development, or test account.

Summary

In this blog post, I showed you how to centralize the validation and creation of IAM policies across various AWS accounts. This allows your security developers to start coding your security best practices, permitting automatic creation and validation of IAM policies across your various sandbox, development, and test accounts. Account administrators can then attach those validated IAM policies to the required IAM users, groups, or roles. This process strikes a balance between agility and control. It empowers your account administrators to create compliant and least-privilege IAM policies, while also allowing your application teams to keep experimenting and innovating quickly. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Mahmoud ElZayet

Mahmoud is a Global Accounts Solutions Architect at AWS. He works with large enterprise customers providing guidance and technical assistance for building cloud solutions. Mahmoud is passionate about DevOps and Cloud Compliance topics. Outside of work, he enjoys exploring new places with his wife and two kids.

Top 11 posts in 2018

Post Syndicated from Tom Olsen original https://aws.amazon.com/blogs/security/top-11-posts-in-2018/

We covered a lot of ground in 2018: from GDPR to re:Inforce and numerous feature announcements, Amazon GuardDuty deep-dives to SOC reports, automated reasoning explanations, and a series of interviews with AWS thought leaders.

We’ve got big plans for 2019, but there’s room for more: please let us know what you want to read about in the Comments section below.

The top 11 posts from 2018 based on page views

  1. Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article
  2. All AWS Services GDPR ready
  3. Use YubiKey security key to sign into AWS Management Console with YubiKey for multi-factor authentication
  4. AWS Federated Authentication with Active Directory Federation Services (AD FS)
  5. AWS GDPR Data Processing Addendum – Now Part of Service Terms
  6. Announcing the First AWS Security Conference: AWS re:Inforce 2019
  7. Easier way to control access to AWS regions using IAM policies
  8. How to Use Bucket Policies and Apply Defense-in-Depth to Help Secure Your Amazon S3 Data
  9. Preparing for AWS Certificate Manager (ACM) Support of Certificate Transparency
  10. How to Create an AWS IAM Policy to Grant AWS Lambda Access to an Amazon DynamoDB Table
  11. How to retrieve short-term credentials for CLI use with AWS Single Sign-on

If you’re new to AWS and are just discovering the Security Blog, we’ve also compiled a list of older posts that customers continue to find useful.

The top 10 posts of all time based on page views

  1. Where’s My Secret Access Key?
  2. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. Securely Connect to Linux Instances Running in a Private Amazon VPC
  5. Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article
  6. Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket
  7. How to Connect Your On-Premises Active Directory to AWS Using AD Connector
  8. A New and Standardized Way to Manage Credentials in the AWS SDKs
  9. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  10. How to Control Access to Your Amazon Elasticsearch Service Domain

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

New SOC 2 Report Available: Privacy

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-soc-2-report-available-privacy/

Maintaining your trust is an ongoing commitment of ours, and your voice drives our growing portfolio of compliance reports, attestations, and certifications. As a result of your feedback and deep interest in privacy and data security, we are happy to announce the publication of our new SOC 2 Type I Privacy report.

Keeping you informed of the privacy and data security policies, practices, and technologies we’ve put in place is important to us. The SOC 2 Privacy Type I report is complementary to that effort. The SOC 2 Privacy Trust Principle, developed by the American Institute of CPAs (AICPA), establishes the criteria for evaluating controls related to how personal information is collected, used, retained, disclosed, and disposed of to meet the entity’s objectives. The AWS SOC 2 Privacy Type I report provides you with a third-party attestation of our systems and the suitability of the design of our privacy controls, as stated in our Privacy Notice.

The scope of the privacy report includes systems AWS uses to collect personal information and all 72 services and locations in scope for the latest AWS SOC reports. You can download the new SOC 2 Type I Privacy report now through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profile (and re:Invent 2018 wrap-up): Eric Docktor, VP of AWS Cryptography

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profile-and-reinvent-2018-wrap-up-eric-docktor-vp-of-aws-cryptography/

Eric Docktor

We sat down with Eric Docktor to learn more about his 19-year career at Amazon, what’s new with cryptography, and to get his take on this year’s re:Invent conference. (Need a re:Invent recap? Check out this post by AWS CISO Steve Schmidt.)


How long have you been at AWS, and what do you do in your current role?

I’ve been at Amazon for over nineteen years, but I joined AWS in April 2015. I’m the VP of AWS Cryptography, and I lead a set of teams that develops services related to encryption and cryptography. We own three services and a tool kit: AWS Key Management Service (AWS KMS), AWS CloudHSM, AWS Certificate Manager, plus the AWS Encryption SDK that we produce for our customers.

Our mission is to help people get encryption right. Encryption algorithms themselves are open source, and generally pretty well understood. But just implementing encryption isn’t enough to meet security standards. For instance, it’s great to encrypt data before you write it to disk, but where are you going to store the encryption key? In the real world, developers join and leave teams all the time, and new applications will need access to your data—so how do you make a key available to those who really need it, without worrying about someone walking away with it?

We build tools that help our customers navigate this process, whether we’re helping them secure the encryption keys that they use in the algorithms or the certificates that they use in asymmetric cryptography.

What did AWS Cryptography launch at re:Invent?

We’re really excited about the launch of KMS custom key store. We’ve received very positive feedback about how KMS makes it easy for people to control access to encryption keys. KMS lets you set up IAM policies that give developers or applications the ability to use a key to encrypt or decrypt, and you can also write policies which specify that a particular application—like an Amazon EMR job running in a given account—is allowed to use the encryption key to decrypt data. This makes it really easy to encrypt data without worrying about writing massive decrypt jobs if you want to perform analytics later.

But, some customers have told us that for regulatory or compliance reasons, they need encryption keys stored in single-tenant hardware security modules (HSMs) that they manage. This is where the new KMS custom key store feature comes in. Custom key store combines the ease of using KMS with the ability to run your own CloudHSM cluster to store your keys. You can create a CloudHSM cluster and link it to KMS. After setting that up, any time you want to generate a new master key, you can choose to have it generated and stored in your CloudHSM cluster instead of using a KMS multi-tenant HSM. The keys are stored in an HSM under your control, and they never leave that HSM. You can reference the key by its Amazon Resource Name (ARN), which allows it to be shared with users and applications, but KMS will handle the integration with your CloudHSM cluster so that all crypto operations stay in your single-tenant HSM.
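
For readers who want to see what that linking flow looks like in code, here is a rough boto3 sketch of the steps Eric describes. The cluster ID, trust anchor certificate, key store name, and password are placeholders from your own CloudHSM setup, and connecting is asynchronous, so in practice you would wait until the key store reports CONNECTED before creating keys.

    # Rough sketch of the KMS custom key store flow; placeholder values throughout.
    import boto3

    kms = boto3.client('kms')

    # Link an existing CloudHSM cluster to KMS as a custom key store
    store = kms.create_custom_key_store(
        CustomKeyStoreName='example-key-store',
        CloudHsmClusterId='cluster-1234567890a',
        TrustAnchorCertificate=open('customerCA.crt').read(),
        KeyStorePassword='kmsuser-password')

    # Connecting is asynchronous; poll describe_custom_key_stores until CONNECTED
    kms.connect_custom_key_store(CustomKeyStoreId=store['CustomKeyStoreId'])

    # Once connected, new master keys are generated and stored in your own HSM cluster
    key = kms.create_key(
        Origin='AWS_CLOUDHSM',
        CustomKeyStoreId=store['CustomKeyStoreId'],
        Description='CMK backed by my CloudHSM cluster')
    print(key['KeyMetadata']['Arn'])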

You can read our blog post about custom key store for more details.

If both AWS KMS and AWS CloudHSM allow customers to store encryption keys, what’s the difference between the services?

Well, at a high level, sure, both services offer customers a high level of security when it comes to storing encryption keys in FIPS 140-2 validated hardware security modules. But there are some important differences, so we offer both services to allow customers to select the right tool for their workloads.

AWS KMS is a multi-tenant, managed service that allows you to use and manage encryption keys. It is integrated with over 50 AWS services, so you can use familiar APIs and IAM policies to manage your encryption keys, and you can allow them to be used in applications and by members of your organization. AWS CloudHSM provides a dedicated, FIPS 140-2 Level 3 HSM under your exclusive control, directly in your Amazon Virtual Private Cloud (VPC). You control the HSM, but it’s up to you to build the availability and durability you get out of the box with KMS. You also have to manage permissions for users and applications.

Other than helping customers store encryption keys, what else does the AWS Cryptography team do?

You can use CloudHSM for all sorts of cryptographic operations, not just key management. But we definitely do more than KMS and CloudHSM!

AWS Certificate Manager (ACM) is another offering from the cryptography team that’s popular with customers, who use it to generate and renew TLS certificates. Once you’ve got your certificate and you’ve told us where you want it deployed, we take care of renewing it and binding the new certificate for you. Earlier this year, we extended ACM to support private certificates as well, with the launch of ACM Private Certificate Authority.
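
As a small illustration of the request-and-renew flow described above, here is a minimal boto3 sketch for requesting a public certificate with DNS validation; the domain names are placeholders. Once the certificate is issued and deployed on a supported AWS service, ACM handles renewal.

    # Minimal sketch with placeholder domain names.
    import boto3

    acm = boto3.client('acm')
    response = acm.request_certificate(
        DomainName='www.example.com',
        ValidationMethod='DNS',
        SubjectAlternativeNames=['example.com'])
    print(response['CertificateArn'])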

We also helped the AWS IoT team launch support for cryptographically signing software updates sent to IoT devices. For IoT devices, and for software installation in general, it’s a best practice to only accept software updates from known publishers, and to validate that the new software has been correctly signed by the publisher before installing. We think all IoT devices should require software updates to be signed, so we’ve made this really easy for AWS IoT customers to implement.

What’s the most challenging part of your job?

We’ve built a suite of tools to help customers manage encryption, and we’re thrilled to see so many customers using services like AWS KMS to secure their data. But when I sit down with customers, especially large customers looking seriously at moving from on-premises systems to AWS, I often learn that they have years and years of investment into their on-prem security systems. Migrating to the cloud isn’t easy. It forces them to think differently about their security models. Helping customers think this through and map a strategy can be challenging, but it leads to innovation—for our customers, and for us. For instance, the idea for KMS custom key store actually came out of a conversation with a customer!

What’s your favorite part of your job?

Ironically, I think it’s the same thing! Working with customers on how they can securely migrate and manage their data in AWS can be challenging, but it’s really rewarding once the customer starts building momentum. One of my favorite moments of my AWS career was when Goldman Sachs went on stage at re:Invent last year and talked about how they use KMS to secure their data.

Five years from now, what changes do you think we’ll see within the field of encryption?

The cryptography community is in the early stages of developing a new cryptographic algorithm that will underpin encryption for data moving across the internet. The current standard is RSA, and it’s widely used. That little padlock you see in your web browser telling you that your connection is secure uses the RSA algorithm to set up an encrypted connection between the website and your browser. But, like all good things, RSA’s time may be coming to an end—the quantum computer could be its undoing. It’s not yet certain that quantum computers will ever achieve the scale and performance necessary for practical applications, but if one did, it could be used to attack the RSA algorithm. So cryptographers are preparing for this. Last year, the National Institute of Standards and Technology (NIST) put out a call for algorithms that might be able to replace RSA, and got 68 responses. NIST is working through those ideas now and will likely select a smaller number of algorithms for further study. AWS participated in two of those submissions and we’re keeping a close eye on NIST’s process. New cryptographic algorithms take years of testing and vetting before they make it into any standards, but we want to be ready, and we want to be on the forefront. Internally, we’re already considering what it would look like to make this change. We believe it’s our job to look around corners and prepare for changes like this, so our customers don’t have to.

What’s the most common misconception you encounter about encryption?

Encryption technology itself is decades-old and fairly well understood. That’s both the beauty and the curse of encryption standards: By the time anything becomes a standard, there are years and years of research and proof points into the stability and the security of the algorithm. But just because you have a really good encryption algorithm that takes an encryption key and a piece of data you want to secure and spits out an impenetrable cipher text, it doesn’t mean that you’re done. What did you do with the encryption key? Did you check it into source code? Did you write it on a piece of paper and leave it in the conference room? It’s these practices around the encryption that can be difficult to navigate.

Security-conscious customers know they need to encrypt sensitive data before writing it to disk. But, if you want your application to run smoothly, sometimes you need that data in clear text. Maybe you need the data in a cache. But who has access to the cache? And what logging might have accidentally leaked that information while the application was running and interacting with the cache?

Or take TLS certificates. Each TLS certificate has a public piece—the certificate—and a private piece—a private key. If an adversary got ahold of the private key, they could use it to impersonate your website or your API. So, how do you secure that key after you’ve procured the certificate?

It’s practices like this that some customers still struggle with. You have to think about all the places that your sensitive data is moving, and about real-world realities, like the fact that the data has to be unencrypted somewhere. That’s where AWS can help with the tooling.

Which re:Invent session videos would you recommend for someone interested in learning more about encryption?

Ken Beer’s encryption talk is a very popular session that I recommend to people year after year. If you want to learn more about KMS custom key store, you should also check out the video from the LaunchPad event, where we talked with Box about how they’re using custom key store.

People do a lot of networking during re:Invent. Any tips for maintaining those connections after everyone’s gone home?

Some of the people that I meet at re:Invent I get to see again every year. With these customers, I tend to stay in touch through email, and through Executive Briefing Center sessions. That contact is important since it lets us bounce ideas off each other and we use that feedback to refine AWS offerings. One conference I went to also created a Slack channel for attendees—and all the attendees are still on it. It’s quiet most of the time, but people have a way to re-engage with each other and ask a question, and it’ll be just like we’re all together again.

If you had to pick any other job, what would you want to do with your life?

If I could do anything, I’d be a backcountry ski guide. Now, I’m not a good enough skier to actually have this job! But I like being outside, in the mountains. If there was a way to make a living out of that, I would!


Eric Docktor

Eric joined Amazon in 1999 and has worked in a variety of Amazon’s businesses, including being part of the teams that launched Amazon Marketplace, Amazon Prime, the first Kindle, and Fire Phone. Eric has also worked in Supply Chain planning systems and in Ordering. Since 2015, Eric has led the AWS Cryptography team that builds tools to make it easy for AWS customers to encrypt their data in AWS. Prior to Amazon, Eric was a journalist and worked for newspapers including the Oakland Tribune and the Doylestown (PA) Intelligencer.

New podcast: VP of Security answers your compliance and data privacy questions

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/new-podcast-vp-of-security-answers-your-compliance-and-data-privacy-questions/

Does AWS comply with X program? How about GDPR? What about after Brexit? And what happens with machine learning data?

In the latest AWS Security & Compliance Podcast, we sit down with VP of Security Chad Woolf, who answers your compliance and data privacy questions, including one of the questions we hear most frequently from customers around the world: how many compliance programs does AWS have, attest to, and audit against?

Chad also shares what it was like to work at AWS in the early days. When he joined, AWS was housed on just a handful of floors, in a single building. Over the course of nearly nine years with the company, he has witnessed tremendous growth of the business and industry.

Listen to the podcast to hear about company history and get answers to your tough questions. If you have a compliance or data privacy question, you can submit it through our contact us form.

Want more AWS news? Follow us on Twitter.

Creating an opportunistic IPSec mesh between EC2 instances

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/creating-an-opportunistic-ipsec-mesh-between-ec2-instances/

IPSec diagram

IPSec (IP Security) is a protocol for in-transit data protection between hosts. Configuration of site-to-site IPSec between multiple hosts can be an error-prone and intensive task. If you need to protect N EC2 instances, then you need a full mesh of N*(N-1)/2 IPSec tunnels (for example, 20 instances require 190 tunnels). You must manually propagate every IP change to all instances, distribute credentials and configuration changes, and integrate monitoring and metrics into the operation. The effort to keep the full-mesh parameters in sync is enormous.

Full mesh IPSec, known as any-to-any, builds an underlying network layer that protects application communication. Common use cases are:

  • You’re migrating legacy applications to AWS, and they don’t support encryption. Examples of protocols without encryption are File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) or Lightweight Directory Access Protocol (LDAP).
  • You’re offloading protection to IPSec to take advantage of fast Linux kernel encryption and automated certificate management, the use case we focus on in this solution.
  • You want to segregate duties between your application development and infrastructure security teams.
  • You want to protect container or application communication that leaves an EC2 instance.

In this post, I’ll show you how to build an opportunistic IPSec mesh that sets up dynamic IPSec tunnels between your Amazon Elastic Compute Cloud (EC2) instances. The solution is based on Libreswan, an open-source project that implements opportunistic IPSec encryption (IKEv2 and IPSec) at scale.

Solution benefits and deliverable

The solution delivers the following benefits (versus manual site-to-site IPSec setup):

  • Automatic configuration of opportunistic IPSec upon EC2 launch.
  • Generation of instance certificates and weekly re-enrollment.
  • IPSec Monitoring metrics in Amazon CloudWatch for each EC2 instance.
  • Alarms for failures via CloudWatch and Amazon Simple Notification Service (Amazon SNS).
  • An initial generation of a CA root key if needed, including IAM Policies and two customer master keys (CMKs) that will protect the CA key and instance key.

Out of scope

This solution does not deliver IPSec protection between EC2 instances and hosts that are on-premises, or between EC2 instances and managed AWS components, like Elastic Load Balancing, Amazon Relational Database Service, or Amazon Kinesis. Your EC2 instances must have general IP connectivity to each other, permitted by your network ACLs and security groups. This solution does not deliver extra connectivity the way VPC peering or Transit VPC can.

Prerequisites

You’ll need the following resources to deploy the solution:

  • A trusted Unix/Linux/MacOS machine with AWS SDK for Python and OpenSSL
  • AWS admin rights in your AWS account (including API access)
  • AWS Systems Manager on EC2
  • Linux RedHat, Amazon Linux 2, or CentOS installed on the EC2 instances you want to configure
  • Internet access on the EC2 instances for downloading Linux packages and reaching AWS Systems Manager endpoint
  • The AWS services used by the solution, which are AWS Lambda, AWS Key Management Service (AWS KMS), AWS Identity and Access Management (IAM), AWS Systems Manager, Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), and Amazon SNS

Solution and performance costs

My solution does not incur charges beyond standard AWS service usage, since it uses well-established open-source software. The AWS services involved are as follows:

  • AWS Lambda is used to issue the certificates. Per EC2 instance, per week, I estimate two 30-second Lambda invocations with 256 MB of allocated memory. For 100 EC2 instances, the cost will be several cents (a rough worked estimate follows this list). See AWS Lambda Pricing for details.
  • Certificates have no charge, since they’re issued by the Lambda function.
  • CloudWatch Events and Amazon S3 Storage usage are within the free tier policy.
  • AWS Systems Manager has no additional charge.
  • Amazon EC2 is a standard AWS service on which you deploy your workload. There are no extra charges for IPSec encryption.
  • EC2 CPU performance decrease due to encryption is negligible since we use hardware encryption support of the Linux kernel. The IKE negotiation that is done by the OS in your CPU may add minimal CPU overhead depending on the number of EC2 instances involved.
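
As a rough, illustrative estimate only (assuming Lambda's published price of about $0.0000166667 per GB-second at the time of writing): 100 instances × 2 invocations × 30 seconds × 0.25 GB ≈ 1,500 GB-seconds per week, or roughly 6,500 GB-seconds per month, which comes to about $0.11 per month before the Lambda free tier is applied.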

Installation (one-time setup)

To get started, on a trusted Unix/Linux/MacOS machine that has admin access to your AWS account and AWS SDK for Python already installed, complete the following steps:

  1. Download the installation package from https://github.com/aws-quickstart/quickstart-ec2-ipsec-mesh.
  2. Edit the following files in the package to match your network setup:
    • config/private should contain all networks with mandatory IPSec protection, such as EC2s that should only be communicated with via IPSec. All of these hosts must have IPSec installed.
    • config/clear should contain any networks that do not need IPSec protection. For example, these might include Route 53 (DNS), Elastic Load Balancing, or Amazon Relational Database Service (Amazon RDS).
    • config/clear-or-private should contain networks with optional IPSec protection. These networks will start clear and attempt to add IPSec.
    • config/private-or-clear should also contain networks with optional IPSec protection. However, these networks will start with IPSec and fail back to clear.
  3. Execute ./aws_setup.py and carefully set and verify the parameters. Use -h to view help. If you don’t provide customized options, default values will be generated. The parameters are:
    • Region to install the solution (default: your AWS Command Line Interface region)
    • Buckets for configuration, sources, published host certificates and CA storage. (Default: random values that follow the pattern ipsec-{hostcerts|cacrypto|sources}-{stackname} will be generated.) If the buckets do not exist, they will be automatically created.
    • Reuse of an existing CA? (default: no)
    • Leave encrypted backup copy of the CA key? The password will be printed to stdout (default: no)
    • Cloud formation stackname (default: ipsec-{random string}).
    • Restrict provisioning to certain VPC (default: any)

     
    Here is an example output:

    
                ./aws_setup.py  -r ca-central-1 -p ipsec-host-v -c ipsec-crypto-v -s ipsec-source-v
                Provisioning IPsec-Mesh version 0.1
                
                Use --help for more options
                
                Arguments:
                ----------------------------
                Region:                       ca-central-1
                Vpc ID:                       any
                Hostcerts bucket:             ipsec-host-v
                CA crypto bucket:             ipsec-crypto-v
                Conf and sources bucket:      ipsec-source-v
                CA use existing:              no
                Leave CA key in local folder: no
                AWS stackname:                ipsec-efxqqfwy
                ---------------------------- 
                Do you want to proceed ? [yes|no]: yes
                The bucket ipsec-source-v already exists
                File config/clear uploaded in bucket ipsec-source-v
                File config/private uploaded in bucket ipsec-source-v
                File config/clear-or-private uploaded in bucket ipsec-source-v
                File config/private-or-clear uploaded in bucket ipsec-source-v
                File config/oe-cert.conf uploaded in bucket ipsec-source-v
                File sources/enroll_cert_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/generate_certifcate_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/ipsec_setup_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/cron.txt uploaded in bucket ipsec-source-v
                File sources/cronIPSecStats.sh uploaded in bucket ipsec-source-v
                File sources/ipsecSetup.yaml uploaded in bucket ipsec-source-v
                File sources/setup_ipsec.sh uploaded in bucket ipsec-source-v
                File README.md uploaded in bucket ipsec-source-v
                File aws_setup.py uploaded in bucket ipsec-source-v
                The bucket ipsec-host-v already exists
                Stack ipsec-efxqqfwy creation started. Waiting to finish (ca 3-5 min)
                Created CA CMK key arn:aws:kms:ca-central-1:123456789012:key/abcdefgh-1234-1234-1234-abcdefgh123456
                Certificate generation lambda arn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy
                Generating RSA private key, 4096 bit long modulus
                .............................++
                .................................................................................................................................................................................................................................................................................................................................................................................++
                e is 65537 (0x10001)
                Certificate and key generated. Subject CN=ipsec.ca-central-1 Valid 10 years
                The bucket ipsec-crypto-v already exists
                Encrypted CA key uploaded in bucket ipsec-crypto-v
                CA cert uploaded in bucket ipsec-crypto-v
                CA cert and key remove from local folder
                Lambda functionarn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy updated
                Resource policy for CA CMK hardened - removed action kms:encrypt
                
                done :-)
            

Launching the EC2 Instance

Now that you’ve installed the solution, you can start launching EC2 instances. From the EC2 Launch Wizard, execute the following steps. The instructions assume that you’re using RedHat, Amazon Linux 2, or CentOS.

Note: Steps or details that I don’t explicitly mention can be set to default (or according to your needs).

  1. Select the IAM Role already configured by the solution with the pattern Ec2IPsec-{stackname}
     
    Figure 1: Select the IAM Role

    Figure 1: Select the IAM Role

  2. (You can skip this step if you are using Amazon Linux 2.) Under Advanced Details, select User data as text and activate the AWS Systems Manager Agent (SSM Agent) by providing the following string (for RedHat and CentOS 64 Bits only):
    
        #!/bin/bash
        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        sudo systemctl start amazon-ssm-agent
        

     

    Figure 2: Select User data as text and activate the AWS Systems Manager Agent

    Figure 2: Select User data as text and activate the AWS Systems Manager Agent

  3. Set the tag name to IPSec with the value todo. This is the identifier that triggers the installation and management of IPsec on the instance.
     
    Figure 3: Set the tag name to "IPSec" with the value "todo"

    Figure 3: Set the tag name to “IPSec” with the value “todo”

  4. On the Configuration page for the security group, allow ESP (Protocol 50) and IKE (UDP 500) for your network, such as 172.31.0.0/16. You need to enter these values as shown in the following screen (a programmatic sketch of the same rules follows the figure):
     
    Figure 4: Enter values on the "Configuration" page

    Figure 4: Enter values on the “Configuration” page
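
If you prefer to add these rules from code instead of the console, here is a minimal sketch with the AWS SDK for Python (boto3); the security group ID and CIDR range are placeholders for your own values.

        import boto3

        ec2 = boto3.client("ec2")

        # Allow IKE (UDP 500) and ESP (IP protocol 50) from your VPC range.
        ec2.authorize_security_group_ingress(
            GroupId="sg-0123456789abcdef0",      # placeholder security group ID
            IpPermissions=[
                {"IpProtocol": "udp", "FromPort": 500, "ToPort": 500,
                 "IpRanges": [{"CidrIp": "172.31.0.0/16"}]},
                # For protocol numbers other than tcp/udp/icmp, ports are not used.
                {"IpProtocol": "50",
                 "IpRanges": [{"CidrIp": "172.31.0.0/16"}]},
            ],
        )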

After 1-2 minutes, the value of the IPSec instance tag will change to enabled, meaning the instance is successfully set up.
 

Figure 5: Look for the "enabled" value for the IPSec key

Figure 5: Look for the “enabled” value for the IPSec key

So what’s happening in the background?

 

Figure 6: Architectural diagram

Figure 6: Architectural diagram

As illustrated in the solution architecture diagram, the following steps are executed automatically in the background by the solution:

  1. An EC2 launch triggers a CloudWatch event, which launches an IPSecSetup Lambda function.
  2. The IPSecSetup Lambda function checks whether the EC2 instance has the tag IPSec:todo (a simplified sketch of this check follows the list). If the tag is present, the Lambda function issues a certificate by calling the GenerateCertificate Lambda function.
  3. The GenerateCertificate Lambda function downloads the encrypted CA certificate and key.
  4. The GenerateCertificate Lambda function decrypts the CA key with a customer master key (CMK).
  5. The GenerateCertificate Lambda function issues a host certificate to the EC2 instance. It encrypts the host certificate and key with a KMS generated random secret in PKCS12 structure. The secret is envelope-encrypted with a dedicated CMK.
  6. The GenerateCertificate Lambda function publishes the issued certificates to your dedicated bucket for documentation.
  7. The IPSec Lambda function calls and runs the installation via SSM.
  8. The installation downloads the configuration and installs python, aws-sdk, libreswan, and curl if needed.
  9. The EC2 instance decrypts the host key with the dedicated CMK and installs it in the IPSec database.
  10. A weekly scheduled event triggers reenrollment of the certificates via the Reenrollcertificates Lambda function.
  11. The Reenrollcertificates Lambda function triggers the IPSecSetup Lambda (call event type: execution). The IPSecSetup Lambda will renew the certificate only, leaving the rest of the configuration untouched.
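
To make steps 1 and 2 concrete, here is a simplified sketch of what an IPSecSetup-style handler could do with the AWS SDK for Python when the launch event arrives. This is illustrative only, not the solution's actual code; the GenerateCertificate function name follows the pattern shown in the installation output, and error handling is omitted.

        import json
        import boto3

        ec2 = boto3.client("ec2")
        lambda_client = boto3.client("lambda")

        def handler(event, context):
            instance_id = event["detail"]["instance-id"]

            # Step 2: act only on instances tagged IPSec:todo.
            tags = ec2.describe_tags(
                Filters=[{"Name": "resource-id", "Values": [instance_id]}]
            )["Tags"]
            if not any(t["Key"] == "IPSec" and t["Value"] == "todo" for t in tags):
                return "nothing to do"

            # Steps 3-6: delegate certificate issuance to the GenerateCertificate Lambda.
            lambda_client.invoke(
                FunctionName="GenerateCertificate-ipsec-efxqqfwy",  # placeholder name
                InvocationType="RequestResponse",
                Payload=json.dumps({"instance-id": instance_id}),
            )
            return "certificate requested"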

Testing the connection on the EC2 instance

You can log in to the instance and ping one of the hosts in your network. This will trigger the IPSec connection and you should see successful answers.


        $ ping 172.31.1.26
        
        PING 172.31.1.26 (172.31.1.26) 56(84) bytes of data.
        64 bytes from 172.31.1.26: icmp_seq=2 ttl=255 time=0.722 ms
        64 bytes from 172.31.1.26: icmp_seq=3 ttl=255 time=0.483 ms
        

To see a list of IPSec tunnels you can execute the following:


        sudo ipsec whack --trafficstatus
        

Here is an example of the execution:
 

Figure 7: Example execution

Figure 7: Example execution

Changing your configuration or installing it on already running instances

All configuration exists in the source bucket (default: ipsec-source prefix), in files that follow the libreswan standard. If you need to change the configuration, follow these instructions:

  1. Review and update the following files:
    1. oe-cert.conf, which is the configuration for libreswan
    2. clear, private, private-or-clear and clear-or-private, which should contain your network ranges.
  2. Change the tag for the IPSec instance to
    IPSec:todo.
  3. Stop and Start the instance (don’t restart). This will retrigger the setup of the instance.
     
    Figure 8: Stop and start the instance

    Figure 8: Stop and start the instance

    1. As an alternative to step 3, if you prefer not to stop and start the instance, you can invoke the IPSecSetup Lambda function via Test Event with a test JSON event in the following format (a scripted sketch of this invocation follows the figure below):
      
                      { "detail" :  
                          { "instance-id": "YOUR_INSTANCE_ID" }
                      }
              

      A sample of test event creation in the Lambda Design window is shown below:
       

      Figure 9: Sample test event creation

      Figure 9: Sample test event creation
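
If you would rather script this invocation than click through the console, a minimal sketch with the AWS SDK for Python (boto3) could look like the following; the function name and instance ID are placeholders for the values in your account.

        import json
        import boto3

        lambda_client = boto3.client("lambda")

        response = lambda_client.invoke(
            FunctionName="IPSecSetup-ipsec-efxqqfwy",   # placeholder function name
            InvocationType="RequestResponse",
            Payload=json.dumps({"detail": {"instance-id": "i-0123456789abcdef0"}}),
        )
        print(response["StatusCode"], response["Payload"].read().decode())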

Monitoring and alarms

The solution delivers IPSec/IKE metrics and raises SNS alarms in case of errors. To monitor your IPSec environment, you can use Amazon CloudWatch. You can see metrics for active IPSec sessions, IKE/ESP errors, and connection shunts.
 

Figure 10: View metrics for active IPSec sessions, IKE/ESP errors, and connection shunts

Figure 10: View metrics for active IPSec sessions, IKE/ESP errors, and connection shunts

There are two SNS topics and alarms configured for IPSec setup failure or certificate reenrollment failure. You will see an alarm and an SNS message. It’s very important that your administrator subscribes to notifications so that you can react quickly. If you receive an alarm, please use the information in the “Troubleshooting” section of this post, below.
 

Figure 11: Alarms

Figure 11: Alarms

Troubleshooting

Below, I’ve listed some common errors and how to troubleshoot them:
 

The IPSec Tag doesn’t change to IPSec:enabled upon EC2 launch.

  1. Wait 2 minutes after the EC2 instance launches, so that it becomes reachable for AWS SSM.
  2. Check that the EC2 Instance has the right role assigned for the SSM Agent. The role is provisioned by the solution named Ec2IPsec-{stackname}.
  3. Check that the SSM Agent is reachable via a NAT gateway, an Internet gateway, or a private SSM endpoint.
  4. For CentOS and RedHat, check that you’ve installed the SSM Agent. See “Launching the EC2 instance.”
  5. Check the output of the SSM Agent command execution in the EC2 service.
  6. Check the IPSecSetup Lambda logs in CloudWatch for details.

The IPSec connection is lost after a few hours and can only be established from one host (in one direction).

  1. Check that your Security Groups allow ESP Protocol and UDP 500. Security Groups are stateful. They may only allow a single direction for IPSec establishment.
  2. Check that your network ACL allows UDP 500 and ESP Protocol.

The SNS Alarm on IPSec reenrollment is triggered, but everything seems to work fine.

  1. Certificates are valid for 30 days and rotated every week. If the rotation fails, you have three weeks to fix the problem.
  2. Check that the EC2 instances are reachable over AWS SSM. If reachable, trigger the certificate rotation Lambda again.
  3. See the IPSecSetup Lambda logs in CloudWatch for details.

DNS Route 53, RDS, and other managed services are not reachable.

  1. DNS, RDS and other managed services do not support IPSec. You need to exclude them from encryption by listing them in the config/clear list. For more details see step 2 of Installation (one-time setup) in this blog.

Here are some additional general IPSec commands for troubleshooting:

Stopping IPSec can be done by executing the following unix command:


        sudo ipsec stop 
        

If you want to stop IPSec on all instances, you can execute this command via AWS Systems Manager on all instances with the tag IPSec:enabled. Stopping encryption means all traffic will be sent unencrypted.
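
A hedged sketch of that fleet-wide stop with the AWS SDK for Python (boto3) could look like this; AWS-RunShellScript is the standard Systems Manager document for shell commands, and the tag filter matches the IPSec:enabled tag described above.

        import boto3

        ssm = boto3.client("ssm")

        # Run "sudo ipsec stop" on every instance tagged IPSec:enabled.
        ssm.send_command(
            Targets=[{"Key": "tag:IPSec", "Values": ["enabled"]}],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": ["sudo ipsec stop"]},
            Comment="Stop IPSec on all mesh members",
        )

Keep in mind that, as noted above, stopping encryption means the traffic is sent unencrypted.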

If you want to have a fail-open case, meaning on IKE(IPSec) failure send the data unencrypted, then configure your network in config/private-or-clear as described in step 2 of Installation (one-time setup).

Debugging IPSec issues can be done using Libreswan commands. For example:


        sudo ipsec status 
        
        sudo ipsec whack --debug
        
        sudo ipsec barf 
        

Security

The CA key is encrypted using an Advanced Encryption Standard (AES) 256 CBC 128-byte secret and stored in a bucket with server-side encryption (SSE). The secret is envelope-encrypted with a CMK in AWS KMS. Only the certificate-issuing Lambda function can decrypt the secret, as enforced by the KMS resource policy. The encrypted secret for the CA key is set in an encrypted environment variable of the certificate-issuing Lambda function.

The IPSec host private key is generated by the certificate-issuing Lambda function. The private key and certificate are encrypted with AES 256 CBC (PKCS12) and protected with a 128-byte secret generated by KMS. The secret is envelope-encrypted with a user CMK. Only the EC2 instances with attached IPSec IAM policy can decrypt the secret and private key.

The issuing of the certificate is a full synchronous call: One request and one corresponding response without any polling or similar sync/callbacks. The host private key is not stored in a database or an S3 bucket.

The issued certificates are valid for 30 days and are stored for auditing purposes in a certificates bucket without a private key.

Alternate subject names and multiple interfaces or secondary IPs

The certificate subject name and AltSubjectName attribute contains the private Domain Name System (DNS) of the EC2 and all private IPs assigned to the instance (interfaces, primary, and secondary IPs).

The provided default libreswan configuration covers a single interface. You can adjust the configuration according to libreswan documentation for multiple interfaces, for example, to cover Amazon Elastic Container Service for Kubernetes (Amazon EKS).

Conclusion

With the solution in this blog post, you can automate the process of building an IPSec encryption layer for your EC2 instances to protect your workloads. You don’t need to worry about configuring certificates, monitoring, and alerting. The solution uses a combination of AWS KMS, IAM, AWS Lambda, CloudWatch, and the libreswan implementation. If you need libreswan support, use the mailing list or GitHub. The AWS forums can give you more information on KMS and IAM. If you require a special enterprise enhancement, contact AWS Professional Services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is a senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU-Darmstadt and an M.S. in electrical engineering from Bochum University in Germany.

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than we ever have before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. Get started with AWS Security Hub with just a few clicks in the Management Console and once enabled, Security Hub will begin aggregating and prioritizing findings. You can enable Security Hub on a single account with one click in the AWS Security Hub console or a single API call.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview, here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation, here.
  • Next up, IoT Greengrass enables enhanced security through hardware root of trusted private key storage on hardware secure elements including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware root of trust level-security to existing AWS IoT Greengrass security features that include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified of when registration opens.

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud Security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, WAF, S3, AWS CloudTrail, and AWS Config to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

Automate analyzing your permissions using IAM access advisor APIs

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/automate-analyzing-permissions-using-iam-access-advisor/

As an administrator that grants access to AWS, you might want to enable your developers to get started with AWS quickly by granting them broad access. However, as your developers gain experience and your applications stabilize, you want to limit permissions to only what they need. To do this, access advisor will determine the permissions your developers have used by analyzing the last timestamp when an IAM entity (for example, a user, role, or group) accessed an AWS service. This information helps you audit service access, remove unnecessary permissions, and set appropriate permissions across different environments. For example, you can grant broad access to services in development accounts and then reduce permissions for access to specific services in production accounts. Finally, as you manage more IAM entities and AWS accounts, you need a way to scale these processes through automation. To help you achieve this automation, you can now use IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.

In this post, I first provide the details of the access advisor APIs. Next, I walk through an example to demonstrate how you can use the AWS CLI to create a report of the last-accessed timestamps for the services used by the roles in your account. In this post, I assume that you’re familiar with access advisor and how to Remove Unnecessary Permissions in Your IAM Policies by Using Service Last Accessed Data from the IAM console. Before I share an example, I’ll describe the new IAM access advisor APIs:

  • generate-service-last-accessed-details: Generates the service last accessed data for an IAM resource (user, role, group, or policy). You need to call this API first to start a job that generates the service last accessed data for the IAM resource. This API returns a JobId that you will use for the other APIs, such as get-service-last-accessed-details, to determine the status of the job completion.
  • get-service-last-accessed-details: Use this to retrieve the service last accessed data for an IAM resource based on the JobID you pass in.
  • get-service-last-accessed-details-with-entities: Use this to retrieve the service last accessed data for a specific AWS service. The API provides you with a list of all the IAM entities who have access to the service and includes the last accessed date for each IAM entity.
  • list-policies-granting-service-access: Use this to retrieve all the IAM policies that grant permissions to the services accessed for an IAM entity. This helps you identify the policies you need to modify to remove any unused permissions.

Now that you understand the different IAM access advisor APIs, I’ll walk through an example to demonstrate how to use them to set permissions based on service last accessed information.

Example use case: Setting permissions for IAM roles

Assume Arnav Desai is a security administrator for Example Corp. He works with several development teams and monitors their access across multiple accounts. To get his development teams up and running quickly, he initially created multiple roles with broad permissions that are based on job function in the development accounts. Now, his developers are ready to deploy workloads to production accounts. The developers need access to configure AWS; however, Arnav only wants to grant them access to what they need. To determine these permissions, he uses access advisor APIs to automate a process that helps him understand the services developers accessed in the last six months. Using this information, he authors policies to grant access to specific services in production. I’ll now show you an example to achieve this in one account using AWS CLI commands.

First, Arnav uses the list-roles command to list the IAM roles in his development account. For this example, there are two roles in his development account: DBAdminRole and NetworkAdminRole.

For each role, he uses the generate-service-last-accessed-details command to generate the service last accessed data for the role. Here’s an example of the command that he uses:


aws iam generate-service-last-accessed-details --arn arn:aws:iam::123456789012:role/DBAdminRole

The command above provides Arnav with a JobId for each role signaling that the job has started generating the service last accessed details. Arnav waits for the job to complete successfully to retrieve the access advisor information. In the meantime, he can call the get-service-last-accessed-details command to view the JobStatus of the job. Once the jobs for both roles are COMPLETED, Arnav can view the service last accessed report for both the roles, as shown below.
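
If Arnav wants to script this instead of polling by hand, a minimal sketch using the AWS SDK for Python (boto3) might look like the following; the role ARN reuses the example account above.

import time
import boto3

iam = boto3.client("iam")

role_arn = "arn:aws:iam::123456789012:role/DBAdminRole"
job_id = iam.generate_service_last_accessed_details(Arn=role_arn)["JobId"]

# Poll until the report generation job leaves the IN_PROGRESS state.
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Print only the services that were actually accessed.
for svc in details["ServicesLastAccessed"]:
    if "LastAuthenticated" in svc:
        print(svc["ServiceNamespace"], svc["LastAuthenticated"])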

DBAdminRole


"ServicesLastAccessed": [
        {
            "LastAuthenticated": "2018-11-01T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ DBAdminRole",
            "ServiceName": "Amazon DynamoDB",
            "ServiceNamespace": "dynamodb",
            "TotalAuthenticatedEntities": 1
        },
        {
            "LastAuthenticated": "2018-08-25T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ DBAdminRole",
            "ServiceName": "Amazon S3",
            "ServiceNamespace": "s3",
            "TotalAuthenticatedEntities": 1
        },
	.
	.
	.
    ]

Note: I’ve truncated the output because the DBAdminRole doesn’t access other services.

NetworkAdminRole


"ServicesLastAccessed": [
        {
            "LastAuthenticated": "2018-11-21T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ NetworkAdminRole",
            "ServiceName": "Amazon EC2",
            "ServiceNamespace": "ec2",
            "TotalAuthenticatedEntities": 1
        },
	.
	.
	.
    ]

Note: I’ve truncated the output because the NetworkAdminRole doesn’t access other services.

Based on the output above, you can see that the two roles in development accessed Amazon DynamoDB, Amazon S3, and Amazon EC2 in the last six months. Using this information, Arnav can author a policy to grant access to these specific services for the production accounts.
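
As an illustrative sketch only (not the exact policy Arnav would ship), he could then create a production policy scoped to those three services and attach it to the production roles; the action wildcards and policy name below are placeholders you would narrow to specific actions and resources.

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:*", "s3:*", "ec2:*"],  # narrow to the actions actually needed
        "Resource": "*"                             # and to specific resources
    }]
}

iam.create_policy(
    PolicyName="ProdAccessAdvisorBaseline",         # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)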

Conclusion

In this post, I reviewed the IAM access advisor APIs and showed how you can use them to determine service last accessed information programmatically. You can use this information to audit access, remove unused permissions, or grant appropriate permissions across your accounts.

If you have comments about retrieving Access Advisor service last accessed information programmatically, submit them in the Comments section below. If you have issues using access advisor commands, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

Learn about New AWS re:Invent Launches – December AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-new-aws-reinvent-launches-december-aws-online-tech-talks/

AWS Tech Talks

Join us in the next couple weeks to learn about some of the new service and feature launches from re:Invent 2018. Learn about features and benefits, watch live demos and ask questions! We’ll have AWS experts online to answer any questions you may have. Register today!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Compute

December 19, 2018 | 01:00 PM – 02:00 PM PT – Developing Deep Learning Models for Computer Vision with Amazon EC2 P3 Instances – Learn about the different steps required to build, train, and deploy a machine learning model for computer vision.

Containers

December 11, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS App Mesh – Learn about using AWS App Mesh to monitor and control microservices on AWS.

Data Lakes & Analytics

December 10, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS Lake Formation – Build a Secure Data Lake in Days – AWS Lake Formation (coming soon) will make it easy to set up a secure data lake in days. With AWS Lake Formation, you will be able to ingest, catalog, clean, transform, and secure your data, and make it available for analysis and machine learning.

December 12, 2018 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Managed Streaming for Kafka (MSK) – Learn about features and benefits, use cases and how to get started with Amazon MSK.

Databases

December 10, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon RDS on VMware – Learn how Amazon RDS on VMware can be used to automate on-premises database administration, enable hybrid cloud backups and read scaling for on-premises databases, and simplify database migration to AWS.

December 13, 2018 | 09:00 AM – 10:00 AM PT – Serverless Databases with Amazon Aurora and Amazon DynamoDB – Learn about the new serverless features and benefits in Amazon Aurora and DynamoDB, use cases and how to get started.

Enterprise & Hybrid

December 19, 2018 | 11:00 AM – 12:00 PM PT – How to Use “Minimum Viable Refactoring” to Achieve Post-Migration Operational Excellence – Learn how to improve the security and compliance of your applications in two weeks with “minimum viable refactoring”.

IoT

December 17, 2018 | 11:00 AM – 12:00 PM PT – Introduction to New AWS IoT Services – Dive deep into the AWS IoT service announcements from re:Invent 2018, including AWS IoT Things Graph, AWS IoT Events, and AWS IoT SiteWise.

Machine Learning

December 10, 2018 | 09:00 AM – 10:00 AM PT – Introducing Amazon SageMaker Ground Truth – Learn how to build highly accurate training datasets with machine learning and reduce data labeling costs by up to 70%.

December 11, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS DeepRacer – AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, 3D racing simulator, and a global racing league.

December 12, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Forecast and Amazon Personalize – Learn about Amazon Forecast and Amazon Personalize – what are the key features and benefits of these managed ML services, common use cases and how you can get started.

December 13, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Textract: Now in Preview – Learn how Amazon Textract, now in preview, enables companies to easily extract text and data from virtually any document.

Networking

December 17, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS Transit Gateway – Learn how AWS Transit Gateway significantly simplifies management and reduces operational costs with a hub and spoke architecture.

Robotics

December 18, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS RoboMaker, a New Cloud Robotics Service – Learn about AWS RoboMaker, a service that makes it easy to develop, test, and deploy intelligent robotics applications at scale.

Security, Identity & Compliance

December 17, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS Security Hub – Learn about AWS Security Hub, and how it gives you a comprehensive view of high-priority security alerts and your compliance status across AWS accounts.

Serverless

December 11, 2018 | 11:00 AM – 12:00 PM PT – What’s New with Serverless at AWS – In this tech talk, we’ll catch you up on our ever-growing collection of natively supported languages, console updates, and re:Invent launches.

December 13, 2018 | 11:00 AM – 12:00 PM PT – Building Real Time Applications using WebSocket APIs Supported by Amazon API Gateway – Learn how to build, deploy and manage APIs with API Gateway.

Storage

December 12, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Windows File Server – Learn about Amazon FSx for Windows File Server, a new fully managed native Windows file system that makes it easy to move Windows-based applications that require file storage to AWS.

December 14, 2018 | 01:00 PM – 02:00 PM PT – What’s New with AWS Storage – A Recap of re:Invent 2018 Announcements – Learn about the key AWS storage announcements that occurred prior to and at re:Invent 2018. With 15+ new service, feature, and device launches in object, file, block, and data transfer storage services, you will be able to start designing the foundation of your cloud IT environment for any application and easily migrate data to AWS.

December 18, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Lustre – Learn about Amazon FSx for Lustre, a fully managed file system for compute-intensive workloads. Process files from S3 or data stores, with throughput up to hundreds of GBps and sub-millisecond latencies.

December 18, 2018 | 01:00 PM – 02:00 PM PT – Introduction to New AWS Services for Data Transfer – Learn about new AWS data transfer services, and which might best fit your requirements for data migration or ongoing hybrid workloads.

2018 ISO certificates are here, with a 70% increase of in scope services

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/2018-iso-certificates-are-here-with-a-70-increase-of-in-scope-services/

In just the last year, we’ve increased the number of ISO services in scope by 70%. That makes 114 services in total that have been validated against ISO 9001, 27001, 27017, and 27018.

The following services are new to our ISO program:

  • Amazon AppStream 2.0
  • Amazon Athena
  • Amazon Chime
  • Amazon CloudWatch Events
  • Amazon CloudWatch
  • Amazon Comprehend
  • Amazon Elastic Container Service for Kubernetes
  • Amazon Elasticsearch Service
  • Amazon FreeRTOS
  • Amazon FSx*
  • Amazon GuardDuty
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Video Streams
  • Amazon MQ
  • Amazon Neptune
  • Amazon Pinpoint
  • Amazon Polly
  • Amazon Rekognition
  • Amazon Transcribe
  • Amazon Translate
  • AWS Amplify*
  • AWS AppSync
  • AWS Artifact
  • AWS Certificate Manager
  • AWS CodeStar
  • AWS DataSync*
  • AWS Device Farm
  • AWS Elemental MediaConnect*
  • AWS Elemental MediaConvert
  • AWS Elemental MediaLive
  • AWS Firewall Manager
  • AWS Global Accelerator*
  • AWS Glue
  • AWS IoT Greengrass
  • AWS IoT 1-Click
  • AWS IoT Analytics
  • AWS License Manager*
  • AWS OpsWorks CM [includes Chef Automate, Puppet Enterprise]
  • AWS Organizations
  • AWS RoboMaker*
  • AWS Secrets Manager
  • AWS Server Migration Service
  • AWS Serverless Application Repository
  • AWS Service Catalog
  • AWS Single Sign-On
  • AWS Transfer for SFTP*
  • AWS Trusted Advisor
  • Amazon Route 53 Resolver*

*New Service

The latest certificates for ISO 9001, 27001, 27017, and 27018 are now available, giving you insight into our information security management system from third-party auditors. They contain the full list of AWS locations in scope and reference the ISO Certified webpage, which includes all services in scope. For convenience, you can also download the certs in the console via AWS Artifact, as well.

We’re clearly accelerating the pace at which we add services in scope, but our ultimate goal is to eliminate your wait for compliant services altogether. To that end, in this latest audit cycle, 9 of the 51 services added launched as generally available with the certifications at re:Invent 2018.

Want more AWS Security news? Follow us on Twitter.

New PCI DSS report now available, 31 services added to scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-pci-dss-report-now-available-31-services-added-to-scope/

In just the last 6 months, we’ve increased the number of Payment Card Industry Data Security Standard (PCI DSS) certified services by 50%. We were evaluated by third-party auditors from Coalfire and the latest report is now available on AWS Artifact.

I would like to especially call out the six new services (marked with asterisks) that just launched generally available at re:Invent with PCI certification. We’re increasing the rate we add existing services in scope and are also launching new services PCI certified, enabling you to use them for regulated workloads sooner. The goal is for all of our services to have compliance certifications so you never have to wait to verify their security and compliance posture. Additional work to that end is already underway, and we’ll be updating you about our progress at every significant milestone.

With the addition of the following 31 services, you can now select from a total of 93 PCI-compliant services. To see the full list, go to our Services in Scope by Compliance Program page.

  • Amazon Athena
  • Amazon Comprehend
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Amazon Elasticsearch Service
  • Amazon FreeRTOS
  • Amazon FSx*
  • Amazon GuardDuty
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Video Streams
  • Amazon MQ
  • Amazon Neptune
  • Amazon Rekognition
  • Amazon Transcribe
  • Amazon Translate
  • AWS AppSync
  • AWS Certificate Manager (ACM)
  • AWS DataSync*
  • AWS Elemental MediaConnect*
  • AWS Global Accelerator*
  • AWS Glue
  • AWS Greengrass
  • AWS IoT Core {includes Device Management}
  • AWS OpsWorks for Chef Automate {includes Puppet Enterprise}
  • AWS RoboMaker*
  • AWS Secrets Manager
  • AWS Serverless Application Repository
  • AWS Server Migration Service (SMS)
  • AWS Step Functions
  • AWS Transfer for SFTP*
  • VM Import/Export

*New Service

If you want to know more about our compliance programs or provide feedback, please contact us. Your feedback helps us prioritize our decisions and innovate our programs.

Want more AWS Security news? Follow us on Twitter.

Scaling a governance, risk, and compliance program for the cloud, emerging technologies, and innovation

Post Syndicated from Michael South original https://aws.amazon.com/blogs/security/scaling-a-governance-risk-and-compliance-program-for-the-cloud/

Governance, risk, and compliance (GRC) programs are sometimes looked upon as the bureaucracy getting in the way of exciting cybersecurity work. But a good GRC program establishes the foundation for meeting security and compliance objectives. It is the proactive approach to cybersecurity that, if done well, minimizes reactive incident response.

Of the three components of cybersecurity—people, processes, and technology—technology is viewed as the “easy button” because, in relative terms, it’s simpler than drafting a policy with the right balance of flexibility and specificity or managing countless organizational principles and human behavior. Still, as much as we promote technology and automation at AWS, we also understand that automating a bad process with the latest technology doesn’t make the process or outcome better. Cybersecurity must incorporate all three aspects with a programmatic approach that scales. To reach that goal, an effective GRC program is essential because it ensures a holistic view has been taken while tackling the daunting mission of cybersecurity.

Although governance, risk, and compliance are oftentimes viewed as separate functions, there is a symbiotic relationship between them. Governance establishes the strategy and guardrails for meeting specific requirements that align and support the business. Risk management connects specific controls to governance and assessed risks, and provides business leaders with the information they need to prioritize resources and make risk-informed decisions. Compliance is the adherence and monitoring of controls to specific governance requirements. It is also the “ante” to play the game in certain industries and, with continuous monitoring, closes the feedback loop regarding effective governance. Security architecture, engineering, and operations are built upon the GRC foundation.

Without a GRC program, people tend to solely focus on technology and stove-pipe processes. For example, say a security operations employee is faced with four events to research and mitigate. In the absence of a GRC program, the staffer would have no context about the business risk or compliance impact of the events, which could lead them to prioritize the least important issue.

GRC relationship model

GRC has a symbiotic relationship

The breadth and depth of a GRC program varies with each organization. Regardless of its simplicity or complexity, there are opportunities to transform or scale that program for the adoption of cloud services, emerging technologies, and other future innovations.

Below is a checklist of best practices to help you on your journey. The key takeaways of the checklist are: base governance on objectives and capabilities, include risk context in decision-making, and automate monitoring and response.

Governance

Identify compliance requirements

__ Identify required compliance frameworks (such as HIPAA or PCI) and contract/agreement obligations.

__ Identify restrictions/limitations to cloud or emerging technologies.

__ Identify required or chosen standards to implement (for example NIST, ISO, COBIT, CSA, CIS, etc.).

Conduct program assessment

__ Conduct a program assessment based on industry processes such as the NIST Cyber Security Framework (CSF) or ISO/IEC TR 27103:2018 to understand the capability and maturity of your current profile.

__ Determine desired end-state capability and maturity, also known as target profile.

__ Document and prioritize gaps (people, process, and technologies) for resource allocation.

__ Build a Cloud Center of Excellence (CCoE) team.

__ Draft and publish a cloud strategy that includes procurement, DevSecOps, management, and security.

__ Define and assign functions, roles, and responsibilities.

Update and publish policies, processes, and procedures

__ Update policies based on objectives and desired capabilities that align to your business.

__ Update processes for modern organizational and management techniques, such as DevSecOps and Agile, and specify how to upgrade older technologies.

__ Update procedures to integrate cloud services and other emerging technologies.

__ Establish technical governance standards to be used in selecting controls and monitoring compliance.

Risk management

Conduct a risk assessment*

__ Conduct or update an organizational risk assessment (such as market, financial, reputation, or legal).

__ Conduct or update a risk assessment for each business line (such as mission, market, products/services, or financial).

__ Conduct or update a risk assessment for each asset type.

* The use of pre-established threat models can simplify the risk assessment process, both initial and updates.

Draft risk plans

__ Implement plans to mitigate, avoid, transfer, or accept risk at each tier, business line, and asset (for example, a business continuity plan, a continuity of operations plan, a systems security plan).

__ Implement plans for specific risk areas (such as supply chain risk, insider threat).

Authorize systems

__ Use NIST Risk Management Framework (RMF) or other process to authorize and track systems.

__ Use NIST Special Publication 800-53, ISO/IEC 27002:2013, or another control set to select, implement, and assess controls based on risk.

__ Implement continuous monitoring of controls and risk, employing automation to the greatest extent possible.

Incorporate risk information into decisions

__ Link system risk to business and organizational risk.

__ Automate translation of continuous system risk monitoring and status to business and organizational risk.

__ Incorporate “What’s the risk?” (financial, cyber, legal, reputation) into leadership decision-making.

Compliance

Monitor compliance with policy, standards, and security controls

__ Automate technical control monitoring and reporting (at advanced maturity, this can extend to AI/ML); see the sketch after this list.

__ Implement manual monitoring of non-technical controls (for example, periodic review of visitor logs).

__ Link compliance monitoring with security information and event management (SIEM) and other tools.
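
To illustrate the “automate technical control monitoring and reporting” item above, here is a minimal sketch (using Python and boto3) that lists AWS Config rules currently evaluated as noncompliant and publishes a summary to an Amazon SNS topic feeding a compliance or SIEM pipeline. It assumes AWS Config is already recording and evaluating rules in the account; the SNS topic ARN is a placeholder.

    # Minimal sketch: report AWS Config rules that are currently NON_COMPLIANT.
    # The SNS topic ARN below is a placeholder; replace it with your own.
    import boto3

    config = boto3.client("config")
    sns = boto3.client("sns")

    TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:compliance-alerts"  # placeholder

    def noncompliant_rules():
        """Return the names of Config rules currently evaluated as NON_COMPLIANT."""
        rules, token = [], None
        while True:
            kwargs = {"ComplianceTypes": ["NON_COMPLIANT"]}
            if token:
                kwargs["NextToken"] = token
            page = config.describe_compliance_by_config_rule(**kwargs)
            rules += [r["ConfigRuleName"] for r in page.get("ComplianceByConfigRules", [])]
            token = page.get("NextToken")
            if not token:
                return rules

    if __name__ == "__main__":
        failing = noncompliant_rules()
        if failing:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="Noncompliant AWS Config rules",
                Message="\n".join(failing),
            )
        print(failing)

Scheduling a script like this (for example, as a Lambda function on a timer) helps close the feedback loop between control monitoring and the reporting steps later in this checklist.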

Continually self-assess

__ Automate application security testing and vulnerability scans.

__ Conduct periodic self-assessments, ranging from a sampling of controls to entire functional areas, and include penetration tests.

__ Be overly critical of assumptions, perspectives, and artifacts.

Respond to events and changes to risk

__ Integrate security operations with the compliance team for response management.

__ Establish standard operating procedures to respond to unintentional changes in controls.

__ Mitigate impact and reset affected control(s); automate as much as possible.

Communicate events and changes to risk

__ Establish a reporting tree and thresholds for each type of incident.

__ Include general counsel in reporting.

__ Ensure applicable regulatory authorities are notified when required.

__ Automate where appropriate.

Want more AWS Security news? Follow us on Twitter.

Michael South

Michael joined AWS in 2017 as the Americas Regional Leader for public sector security and compliance business development. He supports customers who want to achieve business objectives and improve their security and compliance in the cloud. His customers span the public sector, including federal governments, militaries, state/provincial governments, academic institutions, and non-profits from North to South America. Prior to AWS, Michael was the Deputy Chief Information Security Officer for the city of Washington, DC and the U.S. Navy’s Chief Information Officer for Japan.

Are KMS custom key stores right for you?

Post Syndicated from Richard Moulds original https://aws.amazon.com/blogs/security/are-kms-custom-key-stores-right-for-you/

You can use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. The KMS custom key store integrates KMS with AWS CloudHSM to help satisfy compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs) while providing the AWS service integrations of KMS. However, the additional control comes with increased cost and potential impact on performance and availability. This post will help you decide if this feature is the best approach for you.

KMS is a fully managed service that generates encryption keys and helps you manage their use across more than 45 AWS services. It also supports the AWS Encryption SDK and other client-side encryption tools, and you can integrate it into your own applications. KMS is designed to meet the requirements of the vast majority of AWS customers. However, there are situations where customers need to manage their keys in single-tenant HSMs that they exclusively control. Previously, KMS did not meet these requirements since it offered only the ability to store keys in shared HSMs that are managed by KMS.

AWS CloudHSM is a service that’s primarily intended to support customer-managed applications that are specifically designed to use HSMs. It provides direct control over HSM resources, but the service isn’t, by itself, widely integrated with other AWS managed services. Before custom key store, this meant that if you required direct control of your HSMs but still wanted to use and store regulated data in AWS managed services, you had to choose between changing those requirements, not using a given AWS service, or building your own solution. KMS custom key store gives you another option.

How does a custom key store work?

With custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys rather than the default KMS key store. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Your KMS customer master keys (CMKs) never leave the CloudHSM instances, and all KMS operations that use those keys are only performed in your HSMs. In all other respects, the master keys stored in your custom key store are used in a way that is consistent with other KMS CMKs.

This diagram illustrates the primary components of the service and shows how a cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store.
 

Figure 1: A cluster of two CloudHSM instances is connected to the KMS front-end hosts to create a customer-controlled key store

Because you control your CloudHSM cluster, you can take direct action to manage certain aspects of the lifecycle of your keys, independently of KMS. Specifically, you can verify that KMS correctly created keys in your HSMs and you can delete key material and restore keys from backup at any time. You can also choose to connect and disconnect the CloudHSM cluster from KMS, effectively isolating your keys from KMS. However, with more control comes more responsibility. It’s important that you understand the availability and durability impact of using this feature, and I discuss the issues in the next section.

Decision criteria

KMS customers who plan to use a custom key store tell us they expect to use the feature selectively, deciding on a key-by-key basis where to store them. To help you decide if and how you might use the new feature, here are some important issues to consider.

Here are some reasons you might want to store a key in a custom key store:

  • You have keys that are required to be protected in a single-tenant HSM or in an HSM over which you have direct control.
  • You have keys that are explicitly required to be stored in an HSM validated at FIPS 140-2 level 3 overall (the HSMs used in the default KMS key store are validated to level 2 overall, with level 3 in several categories, including physical security).
  • You have keys that are required to be auditable independently of KMS.

And here are some considerations that might influence your decision to use a custom key store:

  • Cost — Each custom key store requires that your CloudHSM cluster contains at least two HSMs. CloudHSM charges vary by region, but you should expect costs of at least $1,000 per month, per HSM, if each device is permanently provisioned. This cost occurs regardless of whether you make any requests of the KMS API directly or indirectly through an AWS service.
  • Performance — The number of HSMs determines the rate at which keys can be used. It’s important that you understand the intended usage patterns for your keys and ensure that you have provisioned your HSM resources appropriately.
  • Availability — The number of HSMs and the use of availability zones (AZs) impacts the availability of your cluster and, therefore, your keys. You must understand and assess the risk that configuration errors could result in a custom key store being disconnected, or in key material being deleted and becoming unrecoverable.
  • Operations — By using the custom key store feature, you will perform certain tasks that are normally handled by KMS. You will need to set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which you should have the appropriate resources and organizational controls in place to perform.

Getting started

Here’s a basic rundown of the steps that you’ll take to create your first key in a custom key store within a given region; a sketch of the corresponding API calls follows the list.

  1. Create your CloudHSM cluster, initialize it, and add HSMs to the cluster. If you already have a CloudHSM cluster, you can use it as a custom key store in addition to your existing applications.
  2. Create a CloudHSM user so that KMS can access your cluster to create and use keys.
  3. Create a custom key store entry in KMS, give it a name, define which CloudHSM cluster you want it to use, and give KMS the credentials to access your cluster.
  4. Instruct KMS to make a connection to your cluster and log in.
  5. Create a CMK in KMS in the usual way except now select CloudHSM as the source of your key material. You’ll define administrators, users, and policies for the key as you would for any other CMK.
  6. Use the key via the existing KMS APIs, AWS CLI, or the AWS Encryption SDK. Requests to use the key don’t need to be context-aware of whether the key is stored in a custom key store or the default KMS key store.
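
To make these steps concrete, here is a minimal sketch of steps 3 through 6 using the AWS SDK for Python (boto3). It assumes steps 1 and 2 are already complete (an initialized CloudHSM cluster and a kmsuser crypto user); the cluster ID, certificate file, password, and other identifiers are placeholders, and error handling is omitted.

    # Minimal sketch of steps 3 through 6. All identifiers below are placeholders.
    import time
    import boto3

    kms = boto3.client("kms")

    # Step 3: register the CloudHSM cluster as a custom key store.
    with open("customerCA.crt") as f:  # trust anchor certificate from cluster initialization
        trust_anchor = f.read()

    store = kms.create_custom_key_store(
        CustomKeyStoreName="example-key-store",
        CloudHsmClusterId="cluster-1234567890abcdef0",
        TrustAnchorCertificate=trust_anchor,
        KeyStorePassword="kmsuser-password",  # password of the kmsuser crypto user
    )
    store_id = store["CustomKeyStoreId"]

    # Step 4: instruct KMS to connect to the cluster. The connection is asynchronous
    # and can take several minutes, so poll until it reports CONNECTED.
    kms.connect_custom_key_store(CustomKeyStoreId=store_id)
    while True:
        state = kms.describe_custom_key_stores(CustomKeyStoreId=store_id)[
            "CustomKeyStores"
        ][0]["ConnectionState"]
        if state == "CONNECTED":
            break
        time.sleep(30)

    # Step 5: create a CMK whose key material is generated and kept in your cluster.
    key = kms.create_key(
        CustomKeyStoreId=store_id,
        Origin="AWS_CLOUDHSM",
        Description="Example CMK backed by a custom key store",
    )
    key_id = key["KeyMetadata"]["KeyId"]

    # Step 6: use the key exactly as you would any other CMK.
    ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"hello world")["CiphertextBlob"]
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]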

Summary

Some customers need specific controls in place before they can use KMS to manage encryption keys in AWS. The new KMS custom key store feature is intended to satisfy that requirement. You can now apply the controls provided by CloudHSM to keys managed in KMS, without changing access control policies or service integration.

However, by using the new feature, you take responsibility for certain operational aspects that would otherwise be handled by KMS. It’s important that you have the appropriate controls in place and understand the performance and availability requirements of each key that you create in a custom key store.

If you’ve been prevented from migrating sensitive data to AWS because of specific key management requirements that are currently not met by KMS, consider using the new KMS custom key store feature.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Author

Richard Moulds

Richard is a Principal Product Manager at AWS. He’s a member of the KMS team and is primarily focused on helping to define the product roadmap and satisfy customer requirements. Richard has more than 15 years of experience in helping customers build encryption and key management systems to protect their data. His attraction to cryptography stems from the challenge of taking such a complex subject and translating it into simple solutions that customers should be able to take for granted, on a grand scale. When he’s not thinking ahead, he’s focused on the past, restoring classic cars; the more rust the better.

Announcing the First AWS Security Conference: AWS re:Inforce 2019

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/announcing-the-first-aws-security-conference-aws-reinforce-2019/

On the eve of re:Invent 2018, I’m pleased to announce that AWS is launching our first conference dedicated to cloud security: AWS re:Inforce. The event will offer a deep dive into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools. Security is the top priority at AWS, and AWS re:Inforce is emblematic of our commitment to giving customers direct access to the latest security research and trends from subject matter experts, along with the opportunity to participate in hands-on exercises with our services.

The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099.

Over the course of this two-day conference we will offer multiple content tracks designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, risk and compliance officers, and more. Our technical track will offer detailed tactical education to take your security posture from reactive to proactive. We’ll also be offering a business enablement track tailored to assisting with your strategic migration decisions. You’ll find content delivered in a number of formats to meet a diversity of learning preferences, including keynotes, breakout sessions, Q&As, hands-on workshops, simulations, training and certification, as well as our interactive Security Jam. We anticipate 100+ sessions ranging in levels of ability from beginner to expert.

AWS re:Inforce will also feature our AWS Security Competency Partners, each of whom has demonstrated success in building products and solutions on AWS to support customers in multiple domains. With hundreds of industry-leading products, these partners will give you an opportunity to learn how to enhance the security of both on-premises and cloud environments.

Finally, you’ll find sessions built around the Security Pillar of the AWS Well-Architected Framework and the Security Perspective of our Cloud Adoption Framework (CAF). These will include Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, Security Services, and Incident Response. Our automated reasoning and cryptography researchers and scientists will also be available, as well as our partners in the academic community, who will discuss Provable Security and additional emerging security trends.

If you’d like to sign up to be notified of when registration opens, please visit:

https://reinforce.awsevents.com

Additional information and registration details will be shared in early 2019. We look forward to seeing you all there!

– Steve Schmidt, Chief Information Security Officer
Follow Steve on Twitter.

AWS Security Profiles: Quint Van Deman, Principal Business Development Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-quint-van-deman-principal-business-development-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I joined AWS in August 2014. I spent my first two and a half years in the Professional Services group, where I ran around the world to help some of our largest customers sort through their security and identity implementations. For the last two years, I’ve parlayed that experience into my current role of Business Development Manager for the Identity and Directory Services group. I help the product development team build services and features that address the needs I’ve seen in our customer base. We’re working on a next generation of features that we think will radically simplify the way customers implement and manage identities and permissions within the cloud environment. The other key element of my job is to find and disseminate the most innovative solutions I’m seeing today across the broadest possible set of AWS customers to help them be more successful faster.

How do you explain your job to non-tech friends?

I keep one foot in the AWS service team organizations, where they build features, and one foot in day-to-day customer engagement to understand the real-world experiences of people using AWS. I learn about the similarities and differences between how these two groups operate, and then I help service teams understand these similarities and differences, as well.

You’re a “bar raiser” for the Security Blog. What does that role entail?

The notion of being a bar raiser has a lot of different facets at Amazon. The general concept is that, as we go about certain activities — whether hiring new employees or preparing blog posts — we send things past an outside party with no team biases. As a bar raiser for the Security Blog, I don’t have a lot of incentive to get posts out because of a deadline. My role is to make sure that nothing is published until it successfully addresses a customer need. At Amazon, we put the best customer experience first. As a bar raiser, I work to hold that line, even though it might not be the fastest approach, or the path of least resistance.

What’s the most challenging part of your job?

Ruthless prioritization. One of our leadership principles at Amazon is frugality. Sometimes, that means staying in cheap hotel rooms, but more often it means frugality of resources. In my case, I’ve been given the awesome charter to serve as the Business Development Manager for our suite of Identity and Directory Services. I’m something of a one-man army the world over. But that means a lot of things come past my desk, and I have to prioritize ruthlessly to ensure I’m focusing on the things that will be most impactful for our customers.

What’s your favorite part of your job?

A lot of our customers are doing an awesome job being bar raisers themselves. They’re pushing the envelope in terms of identity-focused solutions in their own AWS environments. One fulfilling part of my work is getting to collaborate with those customers who are on the leading edge: Their AWS field teams will get ahold of me, and then I get to do two really fun things. First, I get to dive in and help these customers succeed at whatever they’re trying to do. Second, I get to learn from them. I get to examine the really amazing ideas they’ve come up with and see if we might be able to generalize their solutions and roll them out to the delight of many more AWS customers that might not have teams mature enough to build them on their own. While my title is Business Development Manager, I’m a technologist through and through. Getting to dive into these thorny technical situations and see them resolve into really great solutions is extremely rewarding.

How did you choose your particular topics for re:Invent 2018?

Over the last year, I’ve talked with lots of customers and AWS field teams. My Mastering Identity at Every Layer of the Cake session was born out of the fact that I noticed a lot of folks doing a lot of work to get identity for AWS right, but other layers of identity that are just as important weren’t getting as much attention. I made it my mission to provide a more holistic understanding of what identity in the cloud means to these customers, and over time I developed ways of articulating the topic which really seemed to resonate. My session is about sharing this understanding more broadly. It’s a 400-level talk, since I want to really dive deep with my audience. I have five embedded demos, all of which are going to show how to combine multiple features, sprinkle in a bit of code, and apply them to near universally applicable customer use cases.

Why use the metaphor of a layer cake?

I’ve found that analogies and metaphors are very effective ways of grounding someone’s mental imagery when you’re trying to relay a complex topic. Last year, my metaphor was bridges. This year, I decided to go with cake: It’s actually very descriptive of the way that our customers need to think about Identity in AWS since there are multiple layers. (Also, who doesn’t like cake? It’s delicious.)

What are you hoping that your audience will take away from the session?

Customers are spending a lot of time getting identity right at the AWS layer. And that’s a ground-level, must-do task. I’m going to put a few new patterns in the audience’s hands to do this more effectively. But as a whole, we aren’t consistently putting as much effort into the infrastructure and application layers. That’s what I’m really hoping to expose people to. We have a wealth of features that really raise the bar in terms of cloud security and identity — from how users authenticate to operating systems or databases, to how they authenticate to the applications and APIs that they put on AWS. I want to expose these capabilities to folks and paint a vivid image for them of the really powerful things that they can do today that they couldn’t have done before.

What do you want your audience to do differently after attending your session?

During the session, I’ll be taking a handful of features that are really interesting in their own right, and combining them in a way that I hope will absolutely delight my audience. For example, I’ll show how you can take AWS CloudFormation macros and AWS Identity and Access Management, layer a little bit of customization on top, and come up with something far more magical than either of the two individually. It’s an advanced use case that, with very little effort, can disproportionately improve your security posture while letting your organization move faster. That’s just one example though, and the session is going to be loaded with them, including a grand finale. I’ve already started the work to open source a lot of what I’m going to show, but even where I can’t open source, I want to paint a very clear, prescriptive blueprint for how to get there. My goal is that my audience goes back to work on Monday and, within a couple of hours, they’ve measurably moved the security bar for their organization.

Any tips for first-time conference attendees?

Be deliberate about going outside of your comfort zone. If you’re not working in Security, come to one of our sessions. If you do work in Security, go to some other tracks, like Dev-Ops or Analytics, to get that cross-pollination of ideas. One of the most amazing things about AWS is how it helps dramatically lower the barrier to entry for unfamiliar technology domains and tools. A developer can ship more secure code faster by being invested in security, and a security expert can disproportionally scale their impact by applying the tools of developers or data scientists. Re:Invent is an amazing place to start exploring that diversity, and if you do, I suspect you’ll find ways to immediately make yourself better at your day job.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

Complexity and human understanding have always been at odds. I see initiatives across AWS that have all kinds of awesome innovation and computer science behind them. In the coming years, I think these will mature to the point that they will be able to offload much of the natural complexity that comes with securing large-scale environments with extremely fine-grained permissions. Folks will be able to provide very simple statements or rules of how they want their environment to be, and we should be able to manage the complexity for them, and present them with a nice, clean picture they can easily understand.

What does cloud security mean to you, personally?

I see possibilities today that were herculean tasks before. For example, making sure APIs can properly authenticate and authorize each other used to be an extremely elaborate process at scale. It became such an impossible mess that only the largest of organizations with the best skills, the best technology, and the best automation were really able to achieve it. Everyone else just had to punt or put a band-aid on the problem. But in the world of the cloud, all it takes is attaching an AWS IAM role on one side, and a fairly small resource-based policy to an Amazon API Gateway API on the other. Examples like this show how we’re making security that would once have been extremely difficult for most customers to afford or implement simple to configure, get right, and deploy ubiquitously, and that’s really powerful. It’s what keeps me passionate about my work.
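
As a hypothetical sketch of the pattern Quint describes (not his exact demo), the following attaches a small resource-based policy to an Amazon API Gateway REST API so that only one specific IAM role can invoke it; the API’s methods are assumed to use AWS_IAM authorization, and the account ID, role name, API ID, and region are placeholders.

    # Illustrative sketch only: restrict an API Gateway REST API so that only the
    # named IAM role may invoke it. All identifiers below are placeholders.
    import json
    import boto3

    apigw = boto3.client("apigateway")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/OrderServiceRole"},
                "Action": "execute-api:Invoke",
                "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/*",
            }
        ],
    }

    # Attach the resource policy to the API. Resource policy changes generally
    # require redeploying the API before they take effect.
    apigw.update_rest_api(
        restApiId="a1b2c3d4e5",
        patchOperations=[
            {"op": "replace", "path": "/policy", "value": json.dumps(policy)}
        ],
    )

The calling service then just assumes the role and signs its requests with SigV4; there are no long-lived credentials to distribute or rotate.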

If you had to pick any other job, what would you want to do with your life?

I’ve got all kinds of whacky hobbies. I kiteboard, I surf, work on massive renovation projects at home, hike and camp in the backcountry, and fly small airplanes. It’s an overwhelming set of hobbies that didn’t align with my professional aptitude. But if the world were my oyster and I had to do something else, I would want to combine those hobbies into one single career that’s never before been seen.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Quint Van Deman

Quint is the global business development manager for AWS Identity and Directory services. In this role, he leads the incubation, scaling, and evolution of new and existing identity-based services, as well as field enablement and strategic customer advisement for the same. Before joining the BD team, Quint was an early member of the AWS Professional Services team, where he was a Senior Consultant leading cloud transformation teams at several prominent enterprise customers, and a company-wide subject matter expert on IAM and Identity federation.

AWS Security Profiles: Henrik Johansson, Principal, Office of the CISO

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-henrik-johansson-principal-office-of-the-ciso/

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

As a Principal for the Office of the CISO, I not only get to spend time directly with our customers and their executives and operational teams, but I also get to work with our own service teams and other parts of our organization. Additionally, a big part of this role involves spending time with the industry as a whole, in both small and large settings, and trying to raise the bar of the overall industry together with a number of other teams within AWS.

How do you explain your job to non-tech friends?

Whether or not someone understands what the cloud is, I try to focus on the core part of the role: I help people and organizations understand AWS Security and what it means to operate securely on the cloud. And I focus on helping the industry achieve these same goals.

What’s your favorite part of your job?

Helping customers and their executive leadership understand the benefits of cloud security and how they can improve their overall security posture by using cloud features. Getting to show them how we can help drive roadmaps and new features and functions that they can use to secure their workloads (based on their valuable feedback) is very rewarding.

Tell us about the open source communities you support. Why are they important to AWS?

The open source community is important to me for a couple of reasons. First, it helps enable and inspire innovation by inviting the community at large to expand on the various use cases our services provide. I also really appreciate how customers enable other customers by not only sharing their own innovations but also inviting others to contribute and further improve their solutions. I have a couple of open source repositories that I maintain, where I put various security automation tools that I’ve built to show various innovative ways that customers can use our services to strengthen their security posture. Even if you don’t use open source in your company, you can still look at the vast number of projects out there, both from customers and from AWS, and learn from them.

What does cloud security mean to you, personally?

For me, it represents the possibility of creating efficient, secure solutions. I’ve been working in various security roles for almost twenty-five years, and the ability we have to protect data and our infrastructure has never been stronger. We have an incredible opportunity to solve challenges that would have been insurmountable before, and this leads to one thing: trust. It allows us to earn trust from customers, trust from users, and trust from the industry. It also enables our customers to earn trust from their users.

In your opinion, what’s the biggest challenge facing cloud security right now?

The opportunities far outweigh the challenges, honestly. The different methods that customers and users have to gain visibility into what they’re actually running is mind-blowing. That visibility is a combination of knowing what you have, knowing what you run, and knowing all the ins and outs of it. I still hear people talking about that server in the corner under someone’s desk that no one else knows about. That simply doesn’t exist in the cloud, where everything is an API call away. If anything, the challenge lies in finding people who want to continue driving the innovation and solving the hard cases with all the technology that’s at our fingertips.

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

One shift we’re already seeing is that compliance is becoming a natural part of the security and innovation conversation. Previously, “compliance” meant that maybe you had a specific workload that needed to be PCI-compliant, or you were under HIPAA requirements. Nowadays, compliance is a more natural part of what we do. Privacy is everywhere. It has to be everywhere, based on requirements like GDPR, but we’re seeing a lot of these “have to be” requirements turning into “want to be” requirements — we’re not distinguishing between the users that are required to be protected and the “regular” users. More and more, we’re seeing that privacy is always going to have a seat at the table, which is something we’ve always wanted.

At re:Invent 2018, you’re presenting two sessions together with Andrew Krug. How did you choose your topics?

They’re a combination of what I’m passionate about and what I see our customers need. This is the third year I’ve presented my Five New Security Automations Using AWS Security Services & Open Source session. Previously, I’ve also built boot camps and talks around secure automation, DevSecOps, and container security. But we have a big need for open source security talks that demonstrate how people can actually use open source to integrate with our services — not just as a standalone piece, but actually using open source as inspiration for what they can build on their own. That’s not to say that AWS services aren’t extremely important. They’re the driving force here. But the open source piece allows people to adapt solutions to their specific needs, further driving the use cases together with the various AWS security services.

What are you hoping that your audience will take away from your sessions?

I want my audience to walk away feeling that they learned something new, and that they can build something that they didn’t know how to before. They don’t have to take and use the specific open source tools we put out there, but I want them to see our examples as a way to learn how our services work. It doesn’t matter if you just download a sample script or if you run a full project, or a full framework, but it’s important to learn what’s possible with services beyond what you see in the console or in the documentation.

Any tips for first-time conference attendees?

Plan ahead, but be open to ad-hoc changes. And most importantly, wear sneakers or comfortable walking shoes. Your feet will appreciate it.

If you had to pick any other job, what would you want to do with your life?

If I picked another role at Amazon, it would definitely be a position around innovation, thinking big, and building stuff. Even if it was a job somewhere else, I’d still want it to involve building, whether woodshop projects or a robot. Innovation and building are my passions.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Henrik Johansson

Henrik is a Principal in the Office of the CISO at AWS Security. With over 22 years of experience in IT, with a focus on security and compliance, he works to establish and drive CISO-level relationships as a trusted cloud security advisor, and he has a passion for developing services and features for security and compliance at scale.

AWS Security Profiles: Sam Koppes, Senior Product Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-sam-koppes-senior-product-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for a year, and I’m a Senior Product Manager for the AWS CloudTrail team. I’m responsible for product roadmap decisions, customer outreach, and for planning our engineering work.

How do you explain your job to non-tech friends?

I work on a technical product, and for any tech product, responsibility is split in half: We have product managers and engineering managers. Product managers are responsible for what the product does. They’re responsible for figuring out how it behaves, what needs it addresses, and why customers would want it. Engineering managers are responsible for figuring out how to make it. When you look to build a product, there’s always the how and the what. I’m responsible for the what.

What are you currently working on that you’re excited about?

The scale challenges that we’re facing today are extremely interesting. We’re allowing customers to build things at an absolutely unheard-of scale, and bringing security into that mix is a challenge. But it’s also one of the great opportunities for AWS — we can bring a lot of value to customers by making security as turnkey as possible so that it just comes with the additional scale and additional service areas. I want people to sleep easy at night knowing that we’ve got their backs.

What’s your favorite part of your job?

When I deliver a product, I love sending out the What’s New announcement. During our launch calls, I love collecting social media feedback to measure the impact of our products. But really, the best part is the post-launch investigation that we do, which allows us to understand whether we hit the mark or not. My team usually does a really good job of making sure that we deliver the kinds of features that our customers need, so seeing the impact we’ve had is very gratifying. It’s a privilege to get to hear about the ways we’re changing people’s lives with the new features we’re building.

How did you choose your particular topic for re:Invent this year?

My session is called Augmenting Security Posture and Improving Operational Health with AWS CloudTrail. As a service, CloudTrail has been around a while. But I’ve found that customers face knowledge gaps in terms of what to do with it. There are a lot of people out there with an impressive depth of experience, but they sometimes lack an additional breadth that would be helpful. We also have a number of new customers who want more guidance. So I’m using the session to do a reboot: I’ll start from the beginning and go through what the service is and all the things it does for you, and then I’ll highlight some of the benefits of CloudTrail that might be a little less obvious. I built the session based on discussions with customers, who frequently tell me they start using the service — and only belatedly realize that they can do much more with it beyond, say, using it as a compliance tool. When you start using CloudTrail, you start amassing a huge pile of information that can be quite valuable. So I’ll spend some time showing customers how they can use this information to enhance their security posture, to increase their operational health, and to simplify their operational troubleshooting.
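
As one small, hedged example of the operational troubleshooting the session covers, the sketch below (Python and boto3) uses the CloudTrail LookupEvents API to answer a common question: who changed a security group in the last 24 hours? The event name is only an example; any management event in the event history can be queried the same way.

    # Minimal sketch: find who called a given write API in the last 24 hours using
    # the CloudTrail event history. The event name below is only an example.
    import json
    from datetime import datetime, timedelta

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    def recent_callers(event_name, hours=24):
        """Return (event time, user name, source IP) for recent matching events."""
        start = datetime.utcnow() - timedelta(hours=hours)
        results, token = [], None
        while True:
            kwargs = {
                "LookupAttributes": [
                    {"AttributeKey": "EventName", "AttributeValue": event_name}
                ],
                "StartTime": start,
            }
            if token:
                kwargs["NextToken"] = token
            page = cloudtrail.lookup_events(**kwargs)
            for event in page["Events"]:
                detail = json.loads(event["CloudTrailEvent"])
                results.append(
                    (event["EventTime"], event.get("Username"), detail.get("sourceIPAddress"))
                )
            token = page.get("NextToken")
            if not token:
                return results

    for when, who, source_ip in recent_callers("AuthorizeSecurityGroupIngress"):
        print(when, who, source_ip)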

What are you hoping that your audience will take away from it?

I want people to walk away with two fistfuls of ideas for cool things they can do with CloudTrail. There are some new features we’re going to talk about, so even if you’re a power user, my hope is that you’ll return to work with three or four features you have a burning desire to try out.

What does cloud security mean to you, personally?

I’m very aware of the magnitude of the threats that exist today. It’s an evolving landscape. We have a lot of powerful tools and really smart people who are fighting this battle, but we have to think of it as an ongoing war. To me, the promise you should get from any provider is that of a safe haven — an eye in the storm, if you will — where you have relative calm in the midst of the chaos going on in the industry. Problems will constantly evolve. New penetration techniques will appear. But if we’re really delivering on our promise of security, our customers should feel good about the fact that they have a secure place that allows them to go about their business without spending much mental capacity worrying about it all. People should absolutely remain vigilant and focused, but they don’t have to spend all of their time and energy trying to stay abreast of what’s going on in the security landscape.

What’s the most common misperception you encounter about cloud security and compliance?

Many people think that security is a magic wand: You wave it, and it leads to a binary state of secure or not secure. And that’s just not true. A better way to think of security is as a chain that’s only as strong as its weakest link. You might find yourself in a situation where lots of people have worked very hard to build a very secure environment — but then one person comes in and builds on top of it without thinking about security, and the whole thing blows wide open. All it takes is one little hole somewhere. People need to understand that everyone has to participate in security.

In your opinion, what’s the biggest challenge that people face as they move to the cloud?

At AWS, we follow this thing called the Shared Responsibility Model: AWS is responsible for securing everything from the virtualization layer down, and customers are responsible for building secure applications. One of the biggest challenges that people face lies in understanding what it means to be secure while doing application development. Companies like AWS have invested hugely in understanding different attack vectors and learning how to lock down our systems when it comes to the foundational service we offer. But when customers build on a platform that is fundamentally very secure, we still need to make sure that we’re educating them about the kinds of things that they need to do, or not do, to ensure that they stay within this secure footprint.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

I think we’ll see a tremendous amount of growth in the application of machine learning and artificial intelligence. Historically, we’ve approached security in a very binary way: rules-based security systems in which things are either okay or not okay. And we’ve built complex systems that define “okay” based on a number of criteria. But we’ve always lacked the ability to apply a pseudo-human level of intelligence to threat detection and remediation, and today, we’re seeing that start to change. I think we’re in the early stages of a world where machine learning and artificial intelligence become a foundational, indispensable part of an effective security perimeter. Right now, we’re in a world where we can build strong defenses against known threats, and we can build effective hedging strategies to intercept things we consider risky. Beyond that, we have no real way of dynamically detecting and adapting to threat vectors as they evolve — but that’s what we’ll start to see as machine learning and artificial intelligence enter the picture.

If you had to pick any other job, what would you want to do with your life?

I have a heavy engineering background, so I could see myself becoming a very vocal and customer-obsessed engineering manager. For a more drastic career change, I’d write novels—an ability that I’ve been developing in my free time.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Sam Koppes

Sam is a Senior Product Manager at Amazon Web Services. He currently works on AWS CloudTrail and has worked on AWS CloudFormation, as well. He has extensive experience in both the product management and engineering disciplines, and is passionate about making complex technical offerings easy to understand for customers.