Tag Archives: Security, Identity & Compliance

Monitoring AWS Certificate Manager Private CA with AWS Security Hub

Post Syndicated from Anthony Pasquariello original https://aws.amazon.com/blogs/security/monitoring-aws-certificate-manager-private-ca-with-aws-security-hub/

Certificates are a vital part of any security infrastructure because they allow a company’s internal or external facing products, like websites and devices, to be trusted. To deploy certificates successfully and at scale, you need to set up a certificate authority hierarchy that provisions and issues certificates. You also need to monitor this hierarchy closely, looking for any activity that occurs within your infrastructure, such as creating or deleting a root certificate authority (CA). You can achieve this using AWS Certificate Manager (ACM) Private Certificate Authority (CA) with AWS Security Hub.

AWS Certificate Manager (ACM) Private CA is a managed private certificate authority service that extends ACM certificates to private certificates. With private certificates, you can authenticate resources inside an organization. Private certificates allow entities like users, web servers, VPN users, internal API endpoints, and IoT devices to prove their identity and establish encrypted communications channels. With ACM Private CA, you can create complete CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating your own certificate authority.

AWS Security Hub provides a comprehensive view of your security state within AWS and your compliance with security industry standards and best practices. Security Hub centralizes and prioritizes security and compliance findings from across AWS accounts, services, and supported third-party partners to help you analyze your security trends and identify the highest priority security issues.

In this example, we show how to monitor your root CA and generate a security finding in Security Hub if your root is used to issue a certificate. Following best practices, the root CA should be used rarely and only to issue certificates under controlled circumstances, such as during a ceremony to create a subordinate CA. Issuing a certificate from the root at any other time is a red flag that should be investigated by your security team. This will show up as a finding in Security Hub indicated by ‘ACM Private CA Certificate Issuance.’

Example scenario

For highly privileged actions within an IT infrastructure, it’s crucial that you use the principle of least privilege when allowing employee access. To ensure least privilege is followed, you should track highly sensitive actions using monitoring and alerting solutions. Highly sensitive actions should only be performed by authorized personnel. In this post, you’ll learn how to monitor activity that occurs within ACM Private CA, such as creating or deleting a root CA, using AWS Security Hub. In this example scenario, we cover a highly sensitive action within an organization building a private certificate authority hierarchy using ACM Private CA:

Creation of a subordinate CA that is signed by the root CA:

Creating a CA certificate is a privileged action. Only authorized personnel within the CA Hierarchy Management team should create CA certificates. Certificate authorities can sign private certificates that allow entities to prove their identity and establish encrypted communications channels.

Architecture overview

This solution requires some background information about the example scenario. In the example, the organization has the following CA hierarchy: root CA → subordinate CA → end entity certificates. To learn how to build your own private certificate infrastructure, see this post.

Figure 1: An example of a certificate authority hierarchy

There is one root CA and one subordinate CA. The subordinate CA issues end entity certificates (private certificates) to internal applications.

To use the test solution, you will first deploy a CloudFormation template that sets up an Amazon CloudWatch Events rule and a Lambda function. Then, you will assume the persona of a security or certificate administrator within the example organization who has the ability to create certificate authorities within ACM Private CA.

Figure 2: Architecture diagram of the solution

The architecture diagram in Figure 2 outlines the entire example solution. At a high level, this architecture enables customers to monitor activity within ACM Private CA in Security Hub. The components are explained as follows:

  1. Administrators within your organization have the ability to create certificate authorities and provision private certificates.
  2. Amazon CloudWatch Events tracks API calls using ACM Private CA as a source.
  3. Each CloudWatch Event triggers a corresponding AWS Lambda function that is also deployed by the CloudFormation template. The Lambda function reads the event details and formats them into an AWS Security Finding Format (ASFF).
  4. Findings are generated in AWS Security Hub by the Lambda function for your security team to monitor and act on.

This post assumes you have administrative access to the resources used, such as ACM Private CA, Security Hub, CloudFormation, and Amazon Simple Storage Service (Amazon S3). We also cover how to remediate through practicing the principle of least privilege, and what that looks like within the example scenario.

Deploy the example solution

First, make sure that AWS Security Hub is turned on, as it isn’t on by default. If you haven’t used the service yet, go to the Security Hub landing page within the AWS Management Console, select Go to Security Hub, and then select Enable Security Hub. See documentation for more ways to enable Security Hub.
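If you prefer to script this prerequisite, Security Hub can also be enabled with a single API call. The following boto3 snippet is an illustrative sketch (it assumes your AWS credentials and Region are already configured) and is not part of the deployed solution:

# Enable Security Hub in the current account and Region.
# The call raises ResourceConflictException if Security Hub is already enabled.
import boto3

securityhub = boto3.client("securityhub")
securityhub.enable_security_hub()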

Next, launch the CloudFormation template. Here’s how:

  1. Log in to the AWS Management Console and select AWS Region us-east-1 (N. Virginia) for this example deployment.
  2. Make sure you have the necessary privileges to create resources, as described in the “Architecture overview” section.
  3. Set up the sample deployment by selecting Launch Stack below.

The example solution must be launched in an AWS Region where ACM Private CA and Security Hub are enabled. The Launch Stack button will default to us-east-1. If you want to launch in another region, download the CloudFormation template from the GitHub repository found at the end of the blog.

Select this image to open a link that starts building the CloudFormation stack

Now that you’ve deployed the CloudFormation stack, we’ll help you understand how we’ve utilized AWS Security Finding Format (ASFF) in the Lambda functions.

How to create findings using AWS Security Finding Format (ASFF)

Security Hub consumes, aggregates, organizes, and prioritizes findings from AWS security services and from third-party product integrations. Security Hub receives these findings using a standard findings format called the AWS Security Finding Format (ASFF), thus eliminating the need for time-consuming data conversion efforts. Then it correlates ingested findings across products to prioritize the most important ones.

Below you can find an example input that shows how to use ASFF to populate findings in AWS Security Hub for the creation of a CA certificate. We placed this information in the Lambda function Certificate Authority Creation that was deployed in the CloudFormation stack.


{
 "SchemaVersion": "2018-10-08",
 "Id": region + "/" + accountNum + "/" + caCertARN,
 "ProductArn": "arn:aws:securityhub:" + region + ":" + accountNum + ":product/" + accountNum + "/default",
 "GeneratorId": caCertARN,
 "AwsAccountId": accountNum,
 "Types": [
     "Unusual Behaviors"
 ],
 "CreatedAt": date,
 "UpdatedAt": date,
 "Severity": {
     "Normalized": 60
 },
 "Title": "Private CA Certificate Creation",
 "Description": "A Private CA certificate was issued in AWS Certificate Manager Private CA",
 "Remediation": {
     "Recommendation": {
         "Text": "Verify this CA certificate creation was taken by a privileged user",
         "Url": "https://docs.aws.amazon.com/acm-pca/latest/userguide/ca-best-practices.html#minimize-root-use"
     }
 },
 "ProductFields": {
     "ProductName": "ACM PCA"
 },
 "Resources": [
     {
         "Details": {
             "Other": {
                "CAArn": CaArn,
                "CertARN": caCertARN
             }
         },
         "Type": "Other",
         "Id": caCertARN,
         "Region": region,
         "Partition": "aws"
    }
 ],
 "RecordState": "ACTIVE"
}

Below, we summarize some important fields within the finding generated by ASFF. We set these fields within the Lambda function in the CloudFormation template you deployed for the example scenario, and so you don’t have to do this yourself.

ProductARN

Findings from AWS services that are not yet integrated with Security Hub are treated similarly to third-party findings. Therefore, the company-id must be the account ID, and the product-id must be the reserved word “default”, as shown below.


"ProductArn": "arn:aws:securityhub:" + region + ":" + accountNum + ":product/" + accountNum + "/default",

Severity

Assigning the correct severity is important to ensure useful findings. This example scenario sets the severity within the ASFF generated, as shown above. For future findings, you can determine the appropriate severity by comparing the score to the labels listed in Table 1.

Table 1: Severity labels in AWS Security Finding Format

Severity Label    Severity Score Range
Informational     0
Low               1–39
Medium            40–69
High              70–89
Critical          90–100
  • Informational: No issue was found.
  • Low: Findings with issues that could result in future compromises, such as vulnerabilities, configuration weaknesses, or exposed passwords.
  • Medium: Findings with issues that indicate an active compromise, but no indication that an adversary has completed their objectives. Examples include malware activity, hacking activity, or unusual behavior detection.
  • Critical: Findings associated with an adversary completing their objectives. Examples include data loss or compromise, or a denial of service.
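If you generate findings programmatically, it can be convenient to translate a normalized score into its label. The following Python helper is hypothetical (it is not part of the deployed solution) and simply applies the ranges from Table 1:

# Map a normalized severity score (0-100) to its ASFF severity label.
def severity_label(score: int) -> str:
    if score == 0:
        return "INFORMATIONAL"
    if score <= 39:
        return "LOW"
    if score <= 69:
        return "MEDIUM"
    if score <= 89:
        return "HIGH"
    return "CRITICAL"

print(severity_label(60))  # MEDIUM -- the score used in this example finding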

Remediation

This provides the remediation options for a finding. In our example, we link you to least privilege documentation to learn how to fix the overly permissive resource.

Types

This field lists one or more finding types, in the format namespace/category/classifier, that classify the finding. Finding types should match the Types taxonomy for ASFF.

To learn more about ASFF parameters, see ASFF syntax documentation.

Trigger a Security Hub finding

Figure 1 above shows the CA hierarchy you are building in this post. The CloudFormation template you deployed created the root CA. The following steps will walk you through signing the root CA and creating a CA certificate for the subordinate CA. The architecture we deployed will notify the security team of these actions via Security Hub.

First, we will activate the root CA and install the root CA certificate. This step signs the root CA Certificate Signing Request (CSR) with the root CA’s private key.

  1. Navigate to the ACM Private CA service. Under the Private certificate authority section, select Private CAs.
  2. Under Actions, select Install CA certificate.
  3. Set the validity period and signature algorithm for the root CA certificate. In this case, leave the default values for both fields as shown in Figure 3, and then select Next.

    Figure 3: Specify the root CA certificate parameters

  4. Under Review, generate, and install root CA certificate, select Confirm and install. This creates the root CA certificate.
  5. You should now see the root CA within the console with a status of Active, as shown in Figure 4 below.

    Figure 4: The root CA is now active

Now we will create a subordinate CA and install a CA certificate onto it.

  1. Select the Create CA button.
  2. Under Select the certificate authority (CA) type, select Subordinate CA, and then select Next.
  3. Configure the root CA parameters by entering the following values (or any values that make sense for the CA hierarchy you’re trying to build) in the fields shown in Figure 5, and then select Next.

    Figure 5: Configure the certificate authority

  4. Under Configure the certificate authority (CA) key algorithm, select RSA 2048, and then select Next.
  5. Check Enable CRL distribution, and then, under Create a new S3 bucket, select No. Under S3 bucket name, enter acm-private-ca-crl-bucket-<account-number>, and then select Next.
  6. Under Configure CA permissions, select Authorize ACM to use this CA for renewals, and then select Next.
  7. To create the subordinate CA, review and accept the conditions described at the bottom of the page, select the check box if you agree to the conditions, and then select Confirm and create.

Now, you need to activate the subordinate CA and install the subordinate certificate authority certificate. This step allows you to sign the subordinate CA Certificate Signing Request (CSR) with the root CA’s private key.

  1. Select Get started to begin the process, as shown in Figure 6.

    Figure 6: Begin installing the root certificate authority certificate

  2. Under Install subordinate CA certificate, select ACM Private CA, and then select Next. This starts the process of signing the subordinate CA cert with the root CA that was created earlier.
  3. Set the parent private CA with the root CA that was created, the validity period, the signature algorithm, and the path length for the subordinate CA certificate. In this case, leave the default values for validity period, signature algorithm, and path length fields as shown in Figure 7, and then select Next.

    Figure 7: Specify the subordinate CA certificate parameters

  4. Under Review, select Generate. This creates the subordinate CA certificate.
  5. You should now see the subordinate CA within the console with a status of Active.

How to view Security Hub findings

Now that you have created a root CA and a subordinate CA under the root, you can review findings from the perspective of your security team who is notified of the findings within Security Hub. In the example scenario, creating the CA certificates triggers a CloudWatch Events rule generated from the CloudFormation template.

This event rule uses the native ACM Private CA CloudWatch Events integration and matches ACM Private CA Certificate Issuance events for the root CA ARN. The CloudWatch event pattern is shown below.


{
  "detail-type": [
    "ACM Private CA Certificate Issuance"
  ],
  "resources": [
    "arn:aws:acm-pca:us-east-2:xxxxxxxxxxx:certificate-authority/xxxxxx-xxxx-xxx-xxxx-xxxxxxxx"
  ],
  "source": [
    "aws.acm-pca"
  ]
}
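The CloudFormation template creates this rule for you. For reference, an equivalent rule could be created programmatically; the following boto3 sketch is illustrative only, the rule name and Lambda ARN are placeholders, and the Lambda function would also need a resource-based permission that allows events.amazonaws.com to invoke it:

# Illustrative sketch -- the deployed CloudFormation template already creates this rule.
import json
import boto3

events = boto3.client("events")

event_pattern = {
    "source": ["aws.acm-pca"],
    "detail-type": ["ACM Private CA Certificate Issuance"],
    "resources": ["arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE"],
}

events.put_rule(
    Name="acm-pca-root-issuance",          # placeholder rule name
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)
events.put_targets(
    Rule="acm-pca-root-issuance",
    Targets=[{
        "Id": "ca-finding-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:CAFindingFunction",  # placeholder
    }],
)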

When the event of creating a CA certificate from the root CA occurs, it triggers the Lambda function with the finding in ASFF to generate that finding in Security Hub.
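A simplified, hypothetical sketch of that Lambda logic in Python is shown below (field handling is abridged and the names are illustrative; the actual function ships with the CloudFormation template in the GitHub repository):

# Read the CloudWatch event, build an ASFF finding, and import it into Security Hub.
import datetime
import boto3

securityhub = boto3.client("securityhub")


def lambda_handler(event, context):
    # The full CloudWatch event includes account, region, and resources fields.
    region = event["region"]
    account = event["account"]
    cert_arn = event["resources"][0]  # ARN from the event; the real function also extracts the CA ARN
    now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")

    finding = {
        "SchemaVersion": "2018-10-08",
        "Id": f"{region}/{account}/{cert_arn}",
        "ProductArn": f"arn:aws:securityhub:{region}:{account}:product/{account}/default",
        "GeneratorId": cert_arn,
        "AwsAccountId": account,
        "Types": ["Unusual Behaviors"],
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Normalized": 60},
        "Title": "Private CA Certificate Creation",
        "Description": "A Private CA certificate was issued in AWS Certificate Manager Private CA",
        "Resources": [{"Type": "Other", "Id": cert_arn, "Region": region, "Partition": "aws"}],
        "RecordState": "ACTIVE",
    }

    return securityhub.batch_import_findings(Findings=[finding])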

To assess a finding in Security Hub

  1. Navigate to Security Hub. On the left side of the Security Hub page, select Findings to view the finding generated from the Lambda function. Filter by Title EQUALS Private CA Certificate Creation, as shown in Figure 8.

    Figure 8: Filter the findings in Security Hub

  2. Select the finding’s title (CA Certificate Creation) to open it. You will see the details generated from this finding:
    
    Severity: Medium
    Company: Personal
    Title: CA Certificate Creation
    Remediation: Verify this certificate was taken by a privileged user
    	

    The finding has a Medium severity level because we set its normalized severity score to 60. This could indicate an active compromise, but there is no indication that an adversary has completed their objectives. In the hypothetical example covered earlier, a user has provisioned a CA certificate from the root CA, which should only be provisioned under controlled circumstances, such as during a ceremony to create a subordinate CA. Issuing a certificate from the root at any other time is a red flag that should be investigated by the security team. The remediation attribute in the finding shown here links to security best practices for ACM Private CA.

    Figure 9: Remediation tab in Security Hub Finding

  3. To see more details about the finding, in the upper right corner of the console, under CA Certificate Creation, select the Finding ID link, as shown in Figure 10.

    Figure 10: Select the Finding ID link to learn more about the finding

  4. The Finding JSON box will appear. Scroll down to Resources > Details > Other, as shown in Figure 11. The CAArn is the Root Certificate Authority that provisioned the certificate. The CertARN is the certificate that it provisioned.

    Figure 11: Details about the finding

Cleanup

To avoid costs associated with the test CA hierarchy created and other test resources generated from the CloudFormation template, ensure that you clean up your test environment. Take the following steps:

  1. Disable and delete the CA hierarchy you created (including root and subordinate CAs, as well as the additional subordinate CAs created).
  2. Delete the CloudFormation stack.

Any new account to ACM Private CA can try the service for 30 days with no charge for operation of the first private CA created in the account. You pay for the certificates you issue during the trial period.

Next steps

In this post, you learned how to create a pipeline from an ACM Private CA action to Security Hub findings. There are many other ACM Private CA API calls that you can monitor and send to Security Hub in the same way.

To generate an ASFF object for one of these API calls, follow the steps from the ASFF section above. For more details, see the documentation. For the latest updates and changes to the CloudFormation template and resources within this post, please check the GitHub repository.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Anthony Pasquariello

Anthony is an Enterprise Solutions Architect based in New York City. He provides technical consultation to customers during their cloud journey, especially around security best practices. He has an MS and BS in electrical & computer engineering from Boston University. In his free time, he enjoys ramen, writing non-fiction, and philosophy.

Author

Christine Samson

Christine is an AWS Solutions Architect based in New York City. She provides customers with technical guidance for emerging technologies within the cloud, such as IoT, Serverless, and Security. She has a BS in Computer Science with a certificate in Engineering Leadership from the University of Colorado Boulder. She enjoys exploring new places to eat, playing the piano, and playing sports such as basketball and volleyball.

Author

Ram Ramani

Ram is a Security Specialist Solutions Architect at AWS focusing on data protection. Ram works with customers across all verticals to help with security controls and best practices on how customers can best protect their data that they store on AWS. In his free time, Ram likes playing table tennis and teaching coding skills to his kids.

Code signing using AWS Certificate Manager Private CA and AWS Key Management Service asymmetric keys

Post Syndicated from Ram Ramani original https://aws.amazon.com/blogs/security/code-signing-aws-certificate-manager-private-ca-aws-key-management-service-asymmetric-keys/

In this post, we show you how to combine the asymmetric signing feature of the AWS Key Management Service (AWS KMS) and code-signing certificates from the AWS Certificate Manager (ACM) Private Certificate Authority (PCA) service to digitally sign any binary data blob and then verify its identity and integrity. AWS KMS makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and with your applications running on AWS. ACM PCA provides you a highly available private certificate authority (CA) service without the upfront investment and ongoing maintenance costs of operating your own private CA. CA administrators can use ACM PCA to create a complete CA hierarchy, including online root and subordinate CAs, with no need for external CAs. Using ACM PCA, you can provision, rotate, and revoke certificates that are trusted within your organization.

Traditionally, a person’s signature helps to validate that the person signed an agreement and agreed to the terms. Signatures are a big part of our lives, from our driver’s licenses to our home mortgage documents. When a signature is requested, the person or entity requesting the signature needs to verify the validity of the signature and the integrity of the message being signed.

As the internet and cryptography research evolved, technologists found ways to carry the usefulness of signatures from the analog world to the digital world. In the digital world, public and private key cryptography and X.509 certificates can help with digital signing, verifying message integrity, and verifying signature authenticity. In simple terms, an entity—which could be a person, an organization, a device, or a server—can digitally sign a piece of data, and another entity can validate the authenticity of the signature and validate the integrity of the signed data. The data that’s being signed could be a document, a software package, or any other binary data blob.

To learn more about AWS KMS asymmetric keys and ACM PCA, see Digital signing with the new asymmetric keys feature of AWS KMS and How to host and manage an entire private certificate infrastructure in AWS.

We provide Java code snippets for each part of the process in the following steps. In addition, the complete Java code and the Maven build configuration file pom.xml are available for download from this GitHub project. The steps below illustrate the different parts of the process and their associated Java code snippets. However, you need to use the GitHub project to build and run the Java code successfully.

Let’s take a look at the steps.

1. Create an asymmetric key pair

For digital signing, you need a code-signing certificate and an asymmetric key pair. In this step, you create an asymmetric key pair using AWS KMS. The below code snippet in the main method within the file Runner.java is used to create the asymmetric key pair within KMS in your AWS account. An asymmetric KMS key with the alias CodeSigningCMK is created.


AsymmetricCMK codeSigningCMK = AsymmetricCMK.builder()
                .withAlias(CMK_ALIAS)
                .getOrCreate();
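AsymmetricCMK is a helper class from the GitHub project. To see roughly what it wraps, the following boto3 sketch (an assumption based on the description above, not the project's actual code) creates an RSA signing key and an alias directly:

# Create an RSA 2048 asymmetric KMS key for signing and give it an alias.
# Note: older boto3 versions name the first parameter CustomerMasterKeySpec instead of KeySpec.
import boto3

kms = boto3.client("kms")

key = kms.create_key(
    KeySpec="RSA_2048",
    KeyUsage="SIGN_VERIFY",
    Description="Code-signing key",
)
kms.create_alias(
    AliasName="alias/CodeSigningCMK",
    TargetKeyId=key["KeyMetadata"]["KeyId"],
)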

2. Create a code-signing certificate

To create a code-signing certificate, you need a private CA hierarchy, which you create within the ACM PCA service. This example uses a simple CA hierarchy of one root CA and one subordinate CA under the root, because the best practice is not to use the root CA directly to sign code-signing certificates. The certificate authorities are needed to create the code-signing certificate. The common name for the root CA certificate is root CA, and the common name for the subordinate CA certificate is subordinate CA. The following code snippet in the main method within the file Runner.java is used to create the private CA hierarchy.


PrivateCA rootPrivateCA = PrivateCA.builder()
                .withCommonName(ROOT_COMMON_NAME)
                .withType(CertificateAuthorityType.ROOT)
                .getOrCreate();

PrivateCA subordinatePrivateCA = PrivateCA.builder()
        .withIssuer(rootPrivateCA)
        .withCommonName(SUBORDINATE_COMMON_NAME)
        .withType(CertificateAuthorityType.SUBORDINATE)
        .getOrCreate();

3. Create a certificate signing request

In this step, you create a certificate signing request (CSR) for the code-signing certificate. The following code snippet in the main method within the file Runner.java is used to create the CSR. The END_ENTITY_COMMON_NAME refers to the common name parameter of the code signing certificate.


String codeSigningCSR = codeSigningCMK.generateCSR(END_ENTITY_COMMON_NAME);

4. Sign the CSR

In this step, the code-signing CSR is signed by the subordinate CA that was generated in step 2 to create the code-signing certificate.


GetCertificateResult codeSigningCertificate = subordinatePrivateCA.issueCodeSigningCertificate(codeSigningCSR);

Note: The code-signing certificate that’s generated contains the public key of the asymmetric key pair generated in step 1.

5. Create the custom signed object

The data to be signed is a simple string: “the data I want signed”. Its binary representation is hashed and digitally signed by the asymmetric KMS private key created in step 1, and a custom signed object that contains the signature and the code-signing certificate is created.

The below code snippet in the main method within the file Runner.java is used to create the custom signed object.


CustomCodeSigningObject customCodeSigningObject = CustomCodeSigningObject.builder()
                .withAsymmetricCMK(codeSigningCMK)
                .withDataBlob(TBS_DATA.getBytes(StandardCharsets.UTF_8))
                .withCertificate(codeSigningCertificate.getCertificate())
                .build();
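Under the hood, the signature itself comes from the AWS KMS Sign API. The following boto3 sketch is illustrative; the Java project handles the hashing choices and the construction and encoding of the signed object around this call:

# Sign a small payload with the asymmetric KMS key created in step 1.
import boto3

kms = boto3.client("kms")

data = "the data I want signed".encode("utf-8")

response = kms.sign(
    KeyId="alias/CodeSigningCMK",
    Message=data,                 # KMS hashes RAW messages itself (RAW messages must be <= 4096 bytes)
    MessageType="RAW",
    SigningAlgorithm="RSASSA_PKCS1_V1_5_SHA_256",
)
signature = response["Signature"]  # bytes to embed in the custom signed object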

6. Verify the signature

The custom signed object is verified for integrity, and the root CA certificate is used to verify the chain of trust to confirm non-repudiation of the identity that produced the digital signature.

The below code snippet in the main method within the file Runner.java is used for signature verification:


String rootCACertificate = rootPrivateCA.getCertificate();
String customCodeSigningObjectCertificateChain = codeSigningCertificate.getCertificate() + "\n" + codeSigningCertificate.getCertificateChain();

CustomCodeSigningObject.getInstance(customCodeSigningObject.toString())
        .validate(rootCACertificate, customCodeSigningObjectCertificateChain);

During this signature validation process, the validation method shown in the code above retrieves the public key portion of the AWS KMS asymmetric key pair generated in step 1 from the code-signing certificate. This process has the advantage that credentials to access AWS KMS aren’t needed during signature validation. Any entity that has the root CA certificate loaded in its trust store can verify the signature without needing access to the AWS KMS verify API.
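To make that point concrete, the following Python sketch (using the third-party cryptography package rather than the project's Java code) verifies a signature with nothing more than the signer's certificate. It assumes an RSA key and the RSASSA_PKCS1_V1_5_SHA_256 algorithm used above; validating the certificate chain up to the root CA is a separate step, which the project's validate method performs.

# Offline signature verification -- no AWS credentials or KMS access required.
# Requires the cryptography package (version 3.1 or later).
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def verify_signature(cert_pem: bytes, data: bytes, signature: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(cert_pem)
    public_key = cert.public_key()
    try:
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False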

Note: The implementation outlined in this post is an example. It doesn’t use a certificate trust store that’s either part of a browser or part of a file system within the resident operating system of a device or a server. The trust store is placed in an instance of a Java class object for the purpose of this post. If you are planning to use this code-signing example in a production system, you must change the implementation to use a trust store on the host. To do so, you can build and distribute a secure trust store that includes the root CA certificate.

Conclusion

In this post, we showed you how a binary data blob can be digitally signed using ACM PCA and AWS KMS and how the signature can be verified using only the root CA certificate. No secret information or credentials are required to verify the signature. You can use this method to build a custom code-signing solution to address your particular use cases. The GitHub repository provides the Java code and the maven pom.xml that you can use to build and try it yourself. The README.md file in the GitHub repository shows the instructions to execute the code.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ram Ramani

Ram is a Security Solutions Architect at AWS focusing on data protection. Ram works with customers across different industry verticals to provide them with solutions that help with protecting data at rest and in transit. In prior roles, Ram built ML algorithms for video quality optimization and worked on identity and access management solutions for financial services organizations.

Author

Kyle Schultheiss

Kyle is a Senior Software Engineer on the AWS Cryptography team. He has been working on the ACM Private Certificate Authority service since its inception in 2018. In prior roles, he contributed to other AWS services such as Amazon Virtual Private Cloud, Amazon EC2, and Amazon Route 53.

How to build a CI/CD pipeline for container vulnerability scanning with Trivy and AWS Security Hub

Post Syndicated from Amrish Thakkar original https://aws.amazon.com/blogs/security/how-to-build-ci-cd-pipeline-container-vulnerability-scanning-trivy-and-aws-security-hub/

In this post, I’ll show you how to build a continuous integration and continuous delivery (CI/CD) pipeline using AWS Developer Tools, as well as Aqua Security‘s open source container vulnerability scanner, Trivy. You’ll build two Docker images, one with vulnerabilities and one without, to learn the capabilities of Trivy and how to send all vulnerability information to AWS Security Hub.

If you’re building modern applications, you might be using containers, or have experimented with them. A container is a standard way to package your application’s code, configurations, and dependencies into a single object. In contrast to virtual machines (VMs), containers virtualize the operating system rather than the server. Thus, the images are orders of magnitude smaller, and they start up much more quickly.

Like VMs, containers need to be scanned for vulnerabilities and patched as appropriate. For VMs running on Amazon Elastic Compute Cloud (Amazon EC2), you can use Amazon Inspector, a managed vulnerability assessment service, and then patch your EC2 instances as needed. For containers, vulnerability management is a little different. Instead of patching, you destroy and redeploy the container.

Many container deployments use Docker. Docker uses Dockerfiles to define the commands you use to build the Docker image that forms the basis of your container. Instead of patching in place, you rewrite your Dockerfile to point to more up-to-date base images, dependencies, or both and to rebuild the Docker image. Trivy lets you know which dependencies in the Docker image are vulnerable, and which version of those dependencies are no longer vulnerable, allowing you to quickly understand what to patch to get back to a secure state.

Solution architecture

 

Figure 1: Solution architecture

Here’s how the solution works, as shown in Figure 1:

  1. Developers push Dockerfiles and other code to AWS CodeCommit.
  2. AWS CodePipeline automatically starts an AWS CodeBuild build that uses a build specification file to install Trivy, build a Docker image, and scan it during runtime.
  3. AWS CodeBuild pushes the build logs in near real-time to an Amazon CloudWatch Logs group.
  4. Trivy scans for all vulnerabilities and sends them to AWS Security Hub, regardless of severity.
  5. If no critical vulnerabilities are found, the Docker images are deemed to have passed the scan and are pushed to Amazon Elastic Container Registry (ECR), so that they can be deployed.

Note: CodePipeline supports different sources, such as Amazon Simple Storage Service (Amazon S3) or GitHub. If you’re comfortable with those services, feel free to substitute them for this walkthrough of the solution.

To quickly deploy the solution, you’ll use an AWS CloudFormation template to deploy all needed services.

Prerequisites

  1. You must have Security Hub enabled in the AWS Region where you deploy this solution. In the AWS Management Console, go to AWS Security Hub, and select Enable Security Hub.
  2. You must have Aqua Security integration enabled in Security Hub in the Region where you deploy this solution. To do so, go to the AWS Security Hub console and, on the left, select Integrations, search for Aqua Security, and then select Accept Findings.

Setting up

For this stage, you’ll deploy the CloudFormation template and do preliminary setup of the CodeCommit repository.

  1. Download the CloudFormation template from GitHub and create a CloudFormation stack. For more information on how to create a CloudFormation stack, see Getting Started with AWS CloudFormation.
  2. After the CloudFormation stack completes, go to the CloudFormation console and select the Resources tab to see the resources created, as shown in Figure 2.

 

Figure 2: CloudFormation output

Setting up the CodeCommit repository

CodeCommit repositories need at least one file to initialize their master branch. Without a file, you can’t use a CodeCommit repository as a source for CodePipeline. To create a sample file, do the following.

  1. Go to the CodeCommit console and, on the left, select Repositories, and then select your CodeCommit repository.
  2. Scroll to the bottom of the page, select the Add File dropdown, and then select Create file.
  3. In the Create a file screen, enter readme into the text body, name the file readme.md, enter your name as Author name and your Email address, and then select Commit changes, as shown in Figure 3.

    Figure 3: Creating a file in CodeCommit

Simulate a vulnerable image build

For this stage, you’ll create the necessary files and add them to your CodeCommit repository to start an automated container vulnerability scan.

    1. Download the buildspec.yml file from the GitHub repository.

      Note: In the buildspec.yml code, the values prepended with $ will be populated by the CodeBuild environment variables created by the CloudFormation template. Also, the command trivy -f json -o results.json --exit-code 1 will fail your build by forcing Trivy to return an exit code of 1 upon finding a critical vulnerability. You can add additional severity levels here to force Trivy to fail your builds and ensure vulnerabilities of lower severity are not published to Amazon ECR.

    2. Download the Python code file sechub_parser.py from the GitHub repository. This script parses vulnerability details from the JSON file that Trivy generates, maps the information to the AWS Security Finding Format (ASFF), and then imports it to Security Hub. (A simplified sketch of this mapping appears at the end of this section.)
    3. Next, download the Dockerfile from the GitHub repository. The code clones a GitHub repository maintained by the Trivy team that has purposely vulnerable packages that generate critical vulnerabilities.
    4. Go back to your CodeCommit repository, select the Add file dropdown menu, and then select Upload file.
    5. In the Upload file screen, select Choose file, select the build specification you just downloaded (buildspec.yml), complete the Commit changes to master section by adding the Author name and Email address, and select Commit changes, as shown in Figure 4.

 

Figure 4: Uploading a file to CodeCommit

 

  • To upload your Dockerfile and sechub_parser.py script to CodeCommit, repeat steps 4 and 5 for each of these files.
  • Your pipeline will automatically start in response to every new commit to your repository. To check the status, go back to the pipeline status view of your CodePipeline pipeline.
  • When CodeBuild starts, select Details in the Build stage of the CodePipeline, under BuildAction, to go to the Build section on the CodeBuild console. To see a stream of logs as your build progresses, select Tail logs, as shown in Figure 5.

    Figure 5: CodeBuild Tailed Logs

  • After Trivy has finished scanning your image, CodeBuild will fail due to the critical vulnerabilities found, as shown in Figure 6.

    Note: The command specified in the post-build stage will run even if the CodeBuild build fails. This is by design and allows the sechub_parser.py script to run and send findings to Security Hub.

     

    Figure 6: CodeBuild logs failure

 

You’ll now go to Security Hub to further analyze the findings and create saved searches for future use.
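Before moving on, here is a simplified Python sketch of the kind of mapping a script like sechub_parser.py performs. It is not the actual script from the repository, and the field names assume Trivy's JSON report format at the time of writing (a top-level array of scan targets, each with a Vulnerabilities list); adjust them for your Trivy version.

# Map Trivy JSON results into ASFF findings and import them into Security Hub.
import datetime
import json
import boto3

securityhub = boto3.client("securityhub")


def import_trivy_findings(results_path, account_id, region, image_name):
    now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    # Partner product ARN used by the Aqua Security integration (verify the format for your Region).
    product_arn = f"arn:aws:securityhub:{region}::product/aquasecurity/aquasecurity"
    findings = []
    with open(results_path) as f:
        report = json.load(f)
    for target in report:
        for vuln in target.get("Vulnerabilities") or []:
            severity = vuln.get("Severity", "INFORMATIONAL")
            if severity == "UNKNOWN":
                severity = "INFORMATIONAL"   # ASFF has no UNKNOWN label
            findings.append({
                "SchemaVersion": "2018-10-08",
                "Id": f"{image_name}/{vuln['VulnerabilityID']}",
                "ProductArn": product_arn,
                "GeneratorId": vuln["VulnerabilityID"],
                "AwsAccountId": account_id,
                "Types": ["Software and Configuration Checks/Vulnerabilities/CVE"],
                "CreatedAt": now,
                "UpdatedAt": now,
                "Severity": {"Label": severity},
                "Title": vuln.get("Title", vuln["VulnerabilityID"]),
                "Description": vuln.get("Description", "")[:1024],
                "Resources": [{"Type": "Container", "Id": image_name, "Region": region}],
                "RecordState": "ACTIVE",
            })
    # BatchImportFindings accepts at most 100 findings per call.
    for i in range(0, len(findings), 100):
        securityhub.batch_import_findings(Findings=findings[i:i + 100])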

Analyze container vulnerabilities in Security Hub

For this stage, you’ll analyze your container vulnerabilities in Security Hub and use the findings view to locate information within the ASFF.

  1. Go to the Security Hub console and select Integrations in the left-hand navigation pane.
  2. Scroll down to the Aqua Security integration card and select See findings, as shown in Figure 7. This filters to only Aqua Security product findings in the Findings view.

    Figure 7: Aqua Security integration card

  3. You should now see critical vulnerabilities from your previous scan in the Findings view, as shown in Figure 8. To see more details of a finding, select the Title of any of the vulnerabilities, and you will see the details in the right side of the Findings view.
    Figure 8: Security Hub Findings pane

    Note: Within the Findings view, you can apply quick filters by checking the box next to certain fields, although you won’t do that for the solution in this post.

  4. To open a new tab to a website about the Common Vulnerabilities and Exposures (CVE) for the finding, select the hyperlink within the Remediation section, as shown in Figure 9.
    Figure 9: Remediation information

    Note: The fields shown in Figure 9 are dynamically populated by Trivy in its native output, and the CVE website can differ greatly from vulnerability to vulnerability.

  5. To see the JSON of the full ASFF, at the top right of the Findings view, select the hyperlink for Finding ID.
  6. To find information mapped from Trivy, such as the CVE title and what the patched version of the vulnerable package is, scroll down to the Other section, as shown in Figure 10.

    Figure 10: ASFF, other

This was a brief demonstration of exploring findings with Security Hub. You can use custom actions to define response and remediation actions, such as sending these findings to a ticketing system or aggregating them in a security information event management (SIEM) tool.

Push a non-vulnerable Dockerfile

Now that you’ve seen Trivy perform correctly with a vulnerable image, you’ll fix the vulnerabilities. For this stage, you’ll modify the Dockerfile to remove any vulnerable dependencies.

  1. Open a text editor, paste in the code shown below, and save it as Dockerfile. You can overwrite your previous example if desired.
    
    FROM alpine:3.7
    RUN apk add --no-cache mysql-client
    ENTRYPOINT ["mysql"]
    	

  2. Upload the new Dockerfile to CodeCommit, as shown earlier in this post.

Clean up

To avoid incurring additional charges from running these services, disable Security Hub and delete the CloudFormation stack after you’ve finished evaluating this solution. This will delete all resources created during this post. Deleting the CloudFormation stack will not remove the findings in Security Hub. If you don’t disable Security Hub, you can archive those findings and wait 90 days for them to be fully removed from your Security Hub instance.

Conclusion

In this post, you learned how to create a CI/CD Pipeline with CodeCommit, CodeBuild, and CodePipeline for building and scanning Docker images for critical vulnerabilities. You also used Security Hub to aggregate scan findings and take action on your container vulnerabilities.

You now have the skills to create similar CI/CD pipelines with AWS Developer Tools and to perform vulnerability scans as part of your container build process. Using these reports, you can efficiently identify CVEs and work with your developers to use up-to-date libraries and binaries for Docker images. To further build upon this solution, you can change the Trivy commands in your build specification to fail the builds on different severity levels of found vulnerabilities. As a final suggestion, you can use ECR as a source for a downstream CI/CD pipeline responsible for deploying your newly-scanned images to Amazon Elastic Container Service (Amazon ECS).

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Amrish Thakkar

Amrish is a Senior Solutions Architect at AWS. He holds the AWS Certified Solutions Architect Professional and AWS Certified DevOps Engineer Professional certifications. Amrish is passionate about DevOps, Microservices, Containerization, and Application Security, and devotes personal time into research and advocacy on these topics. Outside of work, he enjoys spending time with family and watching LOTR trilogy frequently.

Spring 2020 PCI DSS report now available with 124 services in scope

Post Syndicated from Nivetha Chandran original https://aws.amazon.com/blogs/security/spring-2020-pci-dss-report-available-124-services-in-scope/

Amazon Web Services (AWS) continues to expand the scope of our PCI compliance program to support our customers’ most important workloads. We are pleased to announce that six services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) compliance program. These services were validated by Coalfire, our independent Qualified Security Assessor (QSA).

The Spring 2020 PCI DSS attestation of compliance covers 124 services that you can use to securely architect your Cardholder Data Environment (CDE) in AWS. You can see the full list of services, including the six newly added services, on the AWS Services in Scope by Compliance Program page.

The compliance reports, including the Spring 2020 PCI DSS report, are available on demand through AWS Artifact. The PCI DSS package available in AWS Artifact includes the DSS v. 3.2.1 Attestation of Compliance (AOC) and Shared Responsibility Guide.

You can learn more about our PCI program and other compliance and security programs on the AWS Compliance Programs page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nivetha Chandran

Nivetha is a Security Assurance Manager at Amazon Web Services on the Global Audits team, managing the PCI compliance program. Nivetha holds a Master’s degree in Information Management from the University of Washington.

What is a cyber range and how do you build one on AWS?

Post Syndicated from John Formento, Jr. original https://aws.amazon.com/blogs/security/what-is-cyber-range-how-do-you-build-one-aws/

In this post, we provide advice on how you can build a modern cyber range using AWS services.

Conducting security incident simulations is a valuable exercise for organizations. As described in the AWS Security Incident Response Guide, security incident response simulations (SIRS) are useful tools to improve how an organization handles security events. These simulations can be tabletop sessions, individualized labs, or full team exercises conducted using a cyber range.

A cyber range is an isolated virtual environment used by security engineers, researchers, and enthusiasts to practice their craft and experiment with new techniques. Traditionally, these ranges were developed on premises, but on-prem ranges can be expensive to build and maintain (and do not reflect the new realities of cloud architectures).

In this post, we share concepts for building a cyber range in AWS. First, we cover the networking components of a cyber range, then how to control access to the cyber range. We also explain how to make the exercise realistic and how to integrate your own tools into a cyber range on AWS. As we go through each component, we relate them back to AWS services.

Using AWS to build a cyber range provides flexibility, since you only pay for services while they are in use. Ranges can also be templated to a certain degree, which makes creation and teardown easier. This allows you to iterate on your design and build a cyber range that is as close to your production environment as possible.

Networking

Designing the network architecture is a critical component of building a cyber range. The cyber range must be an isolated environment so you have full control over it and keep the live environment safe. The purpose of the cyber range is to be able to play with various types of malware and malicious code, so keeping it separate from live environments is necessary. That being said, the range should simulate closely or even replicate real-world environments that include applications on the public internet, as well as internal systems and defenses.

There are several services you can use to create an isolated cyber range in AWS:

  • Amazon Virtual Private Cloud (Amazon VPC), which lets you provision a logically isolated section of AWS. This is where you can launch AWS resources in a virtual network that you define.
    • Traffic mirroring is an Amazon VPC feature that you can use to copy network traffic from an elastic network interface of Amazon Elastic Compute Cloud (Amazon EC2) instances.
  • AWS Transit Gateway, which is a service that enables you to connect your Amazon VPCs and your on-premises networks to a single gateway.
  • Interface VPC endpoints, which enable you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink. You’re able to do this without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Amazon VPC provides the fundamental building blocks to create the isolated software defined network for your cyber range. With a VPC, you have fine grained control over IP CIDR ranges, subnets, and routing. When replicating real-world environments, you want the ability to establish communications between multiple VPCs. By using AWS Transit Gateway, you can add or subtract an environment from your cyber range and route traffic between VPCs. Since the cyber range is isolated from the public internet, you need a way to connect to the various AWS services without leaving the cyber range’s isolated network. VPC endpoints can be created for the AWS services so they can function without an internet connection.
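To make these building blocks concrete, the following boto3 sketch creates an isolated VPC with no internet gateway and adds an interface endpoint for Systems Manager. The CIDR blocks, Region, and service name are placeholder values:

# Create an isolated VPC (no internet or NAT gateway is attached) with one subnet
# and an interface endpoint so instances can reach Systems Manager privately.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.10.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.10.1.0/24")["Subnet"]

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId=vpc["VpcId"],
    ServiceName="com.amazonaws.us-east-1.ssm",   # placeholder Region in the service name
    SubnetIds=[subnet["SubnetId"]],
    PrivateDnsEnabled=True,
)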

A network TAP (test access point) is an external monitoring device that mirrors traffic passing between network nodes. A network TAP can be hardware or virtual; it sends traffic to security and monitoring tools. Though it's unconventional in a typical architecture, routing all traffic through an EC2 Nitro instance enables you to use traffic mirroring to provide the network TAP functionality.

Accessing the system

Due to the isolated nature of a cyber range, administrators and participants cannot rely on typical tools to connect to resources, such as the secure shell (SSH) protocol and Microsoft remote desktop protocol (RDP). However, there are several AWS services that help achieve this goal, depending on the type of access role.

There are typically two types of access roles for a cyber range: an administrator and a participant. The administrator – sometimes called the Black Team – is responsible for designing, building, and maintaining the environment. The participant – often called the Red Team (attacking), Blue Team (defending), or Purple Team (both) – then plays within the simulated environment.

For the administrator role, the following AWS services would be useful:

  • Amazon EC2, which provides scalable computing capacity on AWS.
  • AWS Systems Manager, which gives you visibility into and control of your infrastructure on AWS.

The administrator can use Systems Manager to establish SSH or RDP sessions, or to run commands manually or on a schedule. Files can be transferred in and out of the environment using a combination of S3 and VPC endpoints. Amazon EC2 can host all the web applications and security tools used by the participants, which are managed by the administrators with Systems Manager.
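For example, an administrator could use Systems Manager Run Command to push commands to range instances without opening any inbound ports. The following boto3 sketch is illustrative; the instance ID and commands are placeholders:

# Run shell commands on a managed instance inside the cyber range via Systems Manager.
import boto3

ssm = boto3.client("ssm")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],          # placeholder instance ID
    DocumentName="AWS-RunShellScript",            # AWS managed document for shell commands
    Parameters={"commands": ["yum -y update", "systemctl restart suricata"]},
)
command_id = response["Command"]["CommandId"]     # use with get_command_invocation to check status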

For the participant role, the most useful AWS service is Amazon WorkSpaces, which is a managed, secure cloud desktop service.

The participants are provided with virtual desktops that are contained in the isolated cyber range. Here, they either initiate an attack on resources in the environment or defend the attack. With Amazon WorkSpaces, participants can use the same operating system environment they are accustomed to in the real-world, while still being fully controlled and isolated within the cyber range.

Realism

A cyber range must be realistic enough to provide a satisfactory experience for its participants. This realism must factor in tactics, techniques, procedures, communications, toolsets, and more. Constructing a cyber range on AWS gives the builder full control of the environment. It also means the environment is repeatable and auditable.

It is important to replicate the toolsets that are used in real life. By creating re-usable “Golden” Amazon Machine Images (AMIs) that contain tools and configurations that would typically be installed on machines, a builder can easily slot the appropriate systems into an environment. You can also use AWS PrivateLink – a feature of Amazon VPC – to establish connections from your isolated environment to customer-defined outside tools.

To emulate the tactics of an adversary, you can replicate a functional copy of the internet that is scaled down. Using private hosted zones in Amazon Route 53, you can use a “.” record to control name resolution for the entire “internet”. Alternatively, you can use a Route 53 resolver to forward all requests for a specific domain to your name servers. With these strategies, you can create adversaries that act from known malicious domains to test your defense capabilities. You can also use Amazon VPC route tables to send all traffic to a centralized web server under your control. This web server can respond as a variety of websites to emulate the internet.

Logging and monitoring

It is important for responders to exercise using the same tools and techniques that they would use in the real world. AWS VPC traffic mirroring is an effective way to utilize many IDS products. Our documentation provides guidance for using popular open source tools such as Zeek and Suricata. Additionally, responders can leverage any network monitoring tools that support VXLAN.

You can interact with the tools outside of the VPC by using AWS PrivateLink. PrivateLink provides secure connectivity to resources outside of a network, without the need for firewall rules or route tables. PrivateLink also enables integration with many AWS Partner offerings.

You can use Amazon CloudWatch Logs to aggregate operating system logs, application logs, and logs from AWS resources. You can also easily share CloudWatch Logs with a dedicated security logging account.

In addition, if there are third party tools that you currently use, you can leverage the AWS Marketplace to easily procure the specific tools and install within your AWS account.

Summary

In this post, I covered what a cyber range is, the value of using one, and how to think about creating one using AWS services. AWS provides a great platform to build a cost effective cyber range that can be used to bolster your security practices.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

John Formento, Jr.

John is a Solutions Architect at Amazon Web Services. He helps large enterprises achieve their goals by architecting secure and scalable solutions on the AWS Cloud. John holds 5 AWS certifications including AWS Certified Solutions Architect – Professional and DevOps Engineer – Professional.

Author

Adam Cerini

Adam is a Senior Solutions Architect with Amazon Web Services. He focuses on helping Public Sector customers architect scalable, secure, and cost effective systems. Adam holds 5 AWS certifications including AWS Certified Solutions Architect – Professional and AWS Certified Security – Specialist.

Accreditation models for secure cloud adoption

Post Syndicated from Jana Kay original https://aws.amazon.com/blogs/security/accreditation-models-for-secure-cloud-adoption/

Today, as part of its Secure Cloud Adoption series, AWS released new strategic outlook recommendations to support decision makers in any sector considering or planning for secure cloud adoption. “Accreditation Models for Secure Cloud Adoption” provides best practices with respect to cloud accreditation to help organizations capitalize on the security benefits of commercial cloud computing, while maximizing efficiency, scalability, and cost reduction.

Many organizations are looking to modernize their IT investments and transition quickly to the cloud. However, determining how to accredit cloud services can be a challenge. If the organizational model is too laborious or is seen as an obstacle to cloud adoption and cloud-first policies, this can delay the transition to cloud. Understanding the best practices of early cloud adopters and the organizational models that support their accreditation programs helps leaders make well-informed decisions.

“Accreditation Models for Secure Cloud Adoption” provides an overview of three organizational models for cloud accreditation: decentralized, centralized, and hybrid. It differentiates them based on who determines and approves risk decisions. Regardless of the organizational model used, four recommended best practices help cloud adopters balance speed, efficiency, and cost of adoption with security.

Ultimately, cloud adoption depends on a multitude of factors unique to each situation. Organizations should have a thorough understanding of the shared responsibility between the cloud service provider and the consumer to create a more secure, robust, and transparent environment. Examining a range of options and understanding how each can facilitate successful cloud adoption empowers organizations to make the best choice.

If you have questions or want to learn more, contact your account executive or AWS Support.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jana Kay

Since 2018, Jana Kay has been a cloud security strategist with the AWS Security Growth Strategies team. She develops innovative ways to help AWS customers achieve their objectives, such as security table top exercises and other strategic initiatives. Previously, she was a cyber, counter-terrorism, and Middle East expert for 16 years in the Pentagon’s Office of the Secretary of Defense.

Introducing the serverless LAMP stack – part 2 relational databases

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-serverless-lamp-stack-part-2-relational-databases/

In this post, you learn how to use an Amazon Aurora MySQL relational database in your serverless applications. I show how to pool and share connections to the database with Amazon RDS Proxy, and how to choose configurations. The code examples in this post are written in PHP and can be found in this GitHub repository. The concepts can be applied to any AWS Lambda supported runtime.

The serverless LAMP stack

This serverless LAMP stack architecture was first discussed in this post. It uses a PHP Lambda function (or multiple functions) to read from and write to an Amazon Aurora MySQL database.

Amazon Aurora provides high performance and availability for MySQL and PostgreSQL databases. The underlying storage scales automatically to meet demand, up to 64 tebibytes (TiB). An Amazon Aurora DB instance is created inside a virtual private cloud (VPC) to prevent public access. To connect to the Aurora database instance from a Lambda function, that Lambda function must be configured to access the same VPC.

Database memory exhaustion can occur when connecting directly to an RDS database. This is caused by a surge in database connections or by a large number of connections opening and closing at a high rate. This can lead to slower queries and limited application scalability. Amazon RDS Proxy is implemented to solve this problem. RDS Proxy is a fully managed database proxy feature for Amazon RDS. It establishes a database connection pool that sits between your application and your relational database and reuses connections in this pool. This protects the database against oversubscription, without the memory and CPU overhead of opening a new database connection each time. Credentials for the database connection are securely stored in AWS Secrets Manager. They are accessed via an AWS Identity and Access Management (IAM) role. This enforces strong authentication requirements for database applications without a costly migration effort for the DB instances themselves.

The following steps show how to connect to an Amazon Aurora MySQL database running inside a VPC. The connection is made from a Lambda function running PHP. The Lambda function connects to the database via RDS Proxy. The database credentials that RDS Proxy uses are held in Secrets Manager and accessed via IAM authentication.

RDS Proxy with IAM Authentication

RDS Proxy with IAM authentication

Getting started

RDS Proxy is currently in preview and not recommended for production workloads. For a full list of available Regions, refer to the RDS Proxy pricing page.

Creating an Amazon RDS Aurora MySQL database

Before creating an Aurora DB cluster, you must meet the prerequisites, such as creating a VPC and an RDS DB subnet group. For more information on how to set this up, see DB cluster prerequisites.

  1. Call the create-db-cluster AWS CLI command to create the Aurora MySQL DB cluster.
    aws rds create-db-cluster \
    --db-cluster-identifier sample-cluster \
    --engine aurora-mysql \
    --engine-version 5.7.12 \
    --master-username admin \
    --master-user-password secret99 \
    --db-subnet-group-name default-vpc-6cc1cf0a \
    --vpc-security-group-ids sg-d7cf52a3 \
    --enable-iam-database-authentication true
  2. Add a new DB instance to the cluster.
    aws rds create-db-instance \
        --db-instance-class db.r5.large \
        --db-instance-identifier sample-instance \
        --engine aurora-mysql  \
        --db-cluster-identifier sample-cluster
  3. Store the database credentials as a secret in AWS Secrets Manager.
    aws secretsmanager create-secret \
    --name MyTestDatabaseSecret \
    --description "My test database secret created with the CLI" \
    --secret-string '{"username":"admin","password":"secret99","engine":"mysql","host":"<REPLACE-WITH-YOUR-DB-WRITER-ENDPOINT>","port":"3306","dbClusterIdentifier":"<REPLACE-WITH-YOUR-DB-CLUSTER-NAME>"}'

    Make a note of the resulting ARN for later use.

    {
        "VersionId": "eb518920-4970-419f-b1c2-1c0b52062117", 
        "Name": "MySampleDatabaseSecret", 
        "ARN": "arn:aws:secretsmanager:eu-west-1:1234567890:secret:MySampleDatabaseSecret-JgEWv1"
    }

    This secret is used by RDS Proxy to maintain a connection pool to the database. To access the secret, the RDS Proxy service requires permissions to be explicitly granted.

  4. Create an IAM policy that provides secretsmanager permissions to the secret.
    aws iam create-policy \
    --policy-name my-rds-proxy-sample-policy \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetResourcePolicy",
            "secretsmanager:GetSecretValue",
            "secretsmanager:DescribeSecret",
            "secretsmanager:ListSecretVersionIds"
          ],
          "Resource": [
            "<the-arn-of-the-secret>”
          ]
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetRandomPassword",
            "secretsmanager:ListSecrets"
          ],
          "Resource": "*"
        }
      ]
    }'
    

    Make a note of the resulting policy ARN, which you need to attach to a new role.

    {
        "Policy": {
            "PolicyName": "my-rds-proxy-sample-policy", 
            "PermissionsBoundaryUsageCount": 0, 
            "CreateDate": "2020-06-04T12:21:25Z", 
            "AttachmentCount": 0, 
            "IsAttachable": true, 
            "PolicyId": "ANPA6JE2MLNK3Z4EFQ5KL", 
            "DefaultVersionId": "v1", 
            "Path": "/", 
            "Arn": "arn:aws:iam::1234567890112:policy/my-rds-proxy-sample-policy", 
            "UpdateDate": "2020-06-04T12:21:25Z"
         }
    }
    
  5. Create an IAM Role that has a trust relationship with the RDS Proxy service. This allows the RDS Proxy service to assume this role to retrieve the database credentials.

    aws iam create-role --role-name my-rds-proxy-sample-role --assume-role-policy-document '{
     "Version": "2012-10-17",
     "Statement": [
      {
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
        "Service": "rds.amazonaws.com"
       },
       "Action": "sts:AssumeRole"
      }
     ]
    }'
    
  6. Attach the new policy to the role:
    aws iam attach-role-policy \
    --role-name my-rds-proxy-sample-role \
    --policy-arn arn:aws:iam::123456789:policy/my-rds-proxy-sample-policy
    

Create an RDS Proxy

  1. Use the AWS CLI to create a new RDS Proxy. Replace the --role-arn and SecretArn values with those created in the previous steps.
    aws rds create-db-proxy \
    --db-proxy-name sample-db-proxy \
    --engine-family MYSQL \
    --auth '{
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:eu-west-1:123456789:secret:exampleAuroraRDSsecret1-DyCOcC",
             "IAMAuth": "REQUIRED"
          }' \
    --role-arn arn:aws:iam::123456789:role/my-rds-proxy-sample-role \
    --vpc-subnet-ids  subnet-c07efb9a subnet-2bc08b63 subnet-a9007bcf
    

    To enforce IAM authentication for users of the RDS Proxy, the IAMAuth value is set to REQUIRED. This is a more secure alternative to embedding database credentials in the application code base.

    The Aurora DB cluster and its associated instances are referred to as the targets of that proxy.

  2. Add the database cluster to the proxy with the register-db-proxy-targets command.
    aws rds register-db-proxy-targets \
    --db-proxy-name sample-db-proxy \
    --db-cluster-identifiers sample-cluster
    

Deploying a PHP Lambda function with VPC configuration

This GitHub repository contains a Lambda function with a PHP runtime provided by a Lambda layer. The function uses the MySQLi PHP extension to connect to the RDS Proxy. The extension was installed and compiled along with a PHP executable as part of the custom runtime build.

The PHP executable is packaged together with a Lambda bootstrap file to create a PHP custom runtime. More information on building your own custom runtime for PHP can be found in this post.

Deploy the application stack using the AWS Serverless Application Model (AWS SAM) CLI:

sam deploy -g

When prompted, enter the SecurityGroupIds and the SubnetIds for your Aurora DB cluster.

The SAM template attaches the SecurityGroupIds and SubnetIds parameters to the Lambda function using the VpcConfig sub-resource.
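The same VPC configuration can also be applied to a deployed function directly with the AWS CLI. The following is a hedged sketch; the subnet and security group IDs are placeholders, not values from this walkthrough:

aws lambda update-function-configuration \
--function-name PHPHelloFunction \
--vpc-config SubnetIds=subnet-0abc1234,subnet-0def5678,SecurityGroupIds=sg-0123456789abcdef0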

Lambda creates an elastic network interface for each combination of security group and subnet in the function’s VPC configuration. The function can only access resources (and the internet) through that VPC.

Adding RDS Proxy to a Lambda Function

  1. Go to the Lambda console.
  2. Choose the PHPHelloFunction that you just deployed.
  3. Choose Add database proxy at the bottom of the page.
  4. Choose existing database proxy then choose sample-db-proxy.
  5. Choose Add.

Using the RDS Proxy from within the Lambda function

The Lambda function imports three libraries from the AWS PHP SDK. These are used to generate a password token from the database credentials stored in Secrets Manager.

The AWS PHP SDK libraries are provided by the PHP-example-vendor layer. Using Lambda layers in this way creates a mechanism for incorporating additional libraries and dependencies as the application evolves.

The function’s handler, named index, is the entry point of the function code. First, getenv() is called to retrieve the environment variables set by the SAM application’s deployment. These are saved as local variables and are available for the duration of the Lambda function’s execution.

The AuthTokenGenerator class generates an RDS auth token for use with IAM authentication. This is initialized by passing in the credential provider to the SDK client constructor. The createToken() method is then invoked, with the Proxy endpoint, port number, Region, and database user name provided as method parameters. The resultant temporary token is then used to connect to the proxy.
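For reference, a comparable temporary token can be generated from the AWS CLI. This is a hedged sketch in which the proxy endpoint, port, Region, and user name are placeholder values:

aws rds generate-db-auth-token \
--hostname sample-db-proxy.proxy-abcdefghijkl.eu-west-1.rds.amazonaws.com \
--port 3306 \
--username admin \
--region eu-west-1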

The PHP mysqli class represents a connection between PHP and a MySQL database. The real_connect() method is used to open a connection to the database via RDS Proxy. Instead of providing the database host endpoint as the first parameter, the proxy endpoint is given. The database user name, temporary token, database name, and port number are also provided. The constant MYSQLI_CLIENT_SSL is set to ensure that the connection uses SSL encryption.

Once a connection has been established, the connection object can be used. In this example, a SHOW TABLES query is executed. The connection is then closed, and the result is encoded to JSON and returned from the Lambda function.


RDS Proxy monitoring and performance tuning

RDS Proxy allows you to monitor and adjust connection limits and timeout intervals without changing application code.

Limit the timeout wait period that is most suitable for your application with the connection borrow timeout option. This specifies how long to wait for a connection to become available in the connection pool before returning a timeout error.

Adjust the idle connection timeout interval to help your applications handle stale resources. This can save your application from mistakenly leaving open connections that hold important database resources.
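Both settings can be adjusted with the AWS CLI. The following is a hedged sketch with illustrative timeout values: the connection borrow timeout is part of the target group’s connection pool configuration, while the idle client timeout is set on the proxy itself.

aws rds modify-db-proxy-target-group \
--db-proxy-name sample-db-proxy \
--target-group-name default \
--connection-pool-config '{"ConnectionBorrowTimeout": 60}'

aws rds modify-db-proxy \
--db-proxy-name sample-db-proxy \
--idle-client-timeout 900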

Multiple applications using a single database can each use an RDS Proxy to divide the connection quotas across each application. Set the maximum proxy connections as a percentage of the max_connections configuration (for MySQL).

The following example shows how to change the MaxConnectionsPercent setting for a proxy target group.

aws rds modify-db-proxy-target-group \
--db-proxy-name sample-db-proxy \
--target-group-name default \
--connection-pool-config '{"MaxConnectionsPercent": 75 }'

Response:

{
    "TargetGroups": [
        {
            "DBProxyName": "sample-db-proxy",
            "TargetGroupName": "default",
            "TargetGroupArn": "arn:aws:rds:eu-west-1:####:target-group:prx-tg-03d7fe854604e0ed1",
            "IsDefault": true,
            "Status": "available",
            "ConnectionPoolConfig": {
            "MaxConnectionsPercent": 75,
            "MaxIdleConnectionsPercent": 50,
            "ConnectionBorrowTimeout": 120,
            "SessionPinningFilters": []
        	},            
"CreatedDate": "2020-06-04T16:14:35.858000+00:00",
            "UpdatedDate": "2020-06-09T09:08:50.889000+00:00"
        }
    ]
}

When RDS Proxy detects a session state change that isn’t appropriate for reuse, it keeps the session on the same connection until the session ends. This behavior is called pinning. Performance tuning for RDS Proxy involves maximizing connection reuse by minimizing pinning.

The Amazon CloudWatch metric DatabaseConnectionsCurrentlySessionPinned can be monitored to see how frequently pinning occurs in your application.
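As a hedged sketch, the metric could be retrieved with the AWS CLI; the namespace and ProxyName dimension shown here are assumptions to verify against the RDS Proxy monitoring documentation, and the time window is illustrative:

aws cloudwatch get-metric-statistics \
--namespace AWS/RDS \
--metric-name DatabaseConnectionsCurrentlySessionPinned \
--dimensions Name=ProxyName,Value=sample-db-proxy \
--start-time 2020-06-09T00:00:00Z \
--end-time 2020-06-09T23:59:59Z \
--period 300 \
--statistics Average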

Amazon CloudWatch collects and processes raw data from RDS Proxy into readable, near real-time metrics. Use these metrics to observe the number of connections and the memory associated with connection management. This can help identify if a database instance or cluster would benefit from using RDS Proxy. For example, if it is handling many short-lived connections, or opening and closing connections at a high rate.

Conclusion

In this post, you learn how to create and configure an RDS Proxy to manage connections from a PHP Lambda function to an Aurora MySQL database. You see how to enforce strong authentication requirements by using Secrets Manager and IAM authentication. You deploy a Lambda function that uses Lambda layers to store the AWS PHP SDK as a dependency.

You can create secure, scalable, and performant serverless applications with relational databases. Do this by placing the RDS Proxy service between your database and your Lambda functions. You can also migrate your existing MySQL database to an Aurora DB cluster without altering the database. Using RDS Proxy and Lambda, you can build serverless PHP applications faster, with less code.

Find more PHP examples with the Serverless LAMP stack.

The importance of encryption and how AWS can help

Post Syndicated from Ken Beer original https://aws.amazon.com/blogs/security/importance-of-encryption-and-how-aws-can-help/

Encryption is a critical component of a defense-in-depth strategy, which is a security approach that layers a series of defensive mechanisms so that if one mechanism fails, at least one more is still operating. As more organizations look to operate faster and at scale, they need ways to meet critical compliance requirements and improve data security. Encryption, when used correctly, can provide an additional layer of protection above basic access control.

How and why does encryption work?

Encryption works by using an algorithm with a key to convert data into unreadable data (ciphertext) that can only become readable again with the right key. For example, a simple phrase like “Hello World!” may look like “1c28df2b595b4e30b7b07500963dc7c” when encrypted. There are several different types of encryption algorithms, all using different types of keys. A strong encryption algorithm relies on mathematical properties to produce ciphertext that can’t be decrypted using any practically available amount of computing power without also having the necessary key. Therefore, protecting and managing the keys becomes a critical part of any encryption solution.
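As a hedged illustration of this idea using AWS KMS (the key alias and file names are placeholders, and base64 flags may differ by platform), a plaintext file can be converted into ciphertext that only the same key can make readable again:

# Encrypt the contents of hello.txt under a KMS key and save the raw ciphertext
aws kms encrypt \
--key-id alias/example-key \
--plaintext fileb://hello.txt \
--query CiphertextBlob \
--output text | base64 --decode > hello.encrypted

# Decrypt the ciphertext; only callers allowed to use the key can do this
aws kms decrypt \
--ciphertext-blob fileb://hello.encrypted \
--query Plaintext \
--output text | base64 --decode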

Encryption as part of your security strategy

An effective security strategy begins with stringent access control and continuous work to define the least privilege necessary for persons or systems accessing data. AWS requires that you manage your own access control policies, and also supports defense in depth to achieve the best possible data protection.

Encryption is a critical component of a defense-in-depth strategy because it can mitigate weaknesses in your primary access control mechanism. What if an access control mechanism fails and allows access to the raw data on disk or traveling along a network link? If the data is encrypted using a strong key, as long as the decryption key is not on the same system as your data, it is computationally infeasible for an attacker to decrypt your data. To show how infeasible it is, let’s consider the Advanced Encryption Standard (AES) with 256-bit keys (AES-256). It’s the strongest industry-adopted and government-approved algorithm for encrypting data. AES-256 is the technology we use to encrypt data in AWS, including Amazon Simple Storage Service (S3) server-side encryption. It would take at least a trillion years to break using current computing technology. Current research suggests that even the future availability of quantum-based computing won’t sufficiently reduce the time it would take to break AES encryption.

But what if you mistakenly create overly permissive access policies on your data? A well-designed encryption and key management system can also prevent this from becoming an issue, because it separates access to the decryption key from access to your data.

Requirements for an encryption solution

To get the most from an encryption solution, you need to think about two things:

  1. Protecting keys at rest: Are the systems using encryption keys secured so the keys can never be used outside the system? In addition, do these systems implement encryption algorithms correctly to produce strong ciphertexts that cannot be decrypted without access to the right keys?
  2. Independent key management: Is the authorization to use encryption independent from how access to the underlying data is controlled?

There are third-party solutions that you can bring to AWS to meet these requirements. However, these systems can be difficult and expensive to operate at scale. AWS offers a range of options to simplify encryption and key management.

Protecting keys at rest

When you use third-party key management solutions, it can be difficult to gauge the risk of your plaintext keys leaking and being used outside the solution. The keys have to be stored somewhere, and you can’t always know or audit all the ways those storage systems are secured from unauthorized access. The combination of technical complexity and the necessity of making the encryption usable without degrading performance or availability means that choosing and operating a key management solution can present difficult tradeoffs. The best practice to maximize key security is using a hardware security module (HSM). This is a specialized computing device that has several security controls built into it to prevent encryption keys from leaving the device in a way that could allow an adversary to access and use those keys.

One such control in modern HSMs is tamper response, in which the device detects physical or logical attempts to access plaintext keys without authorization, and destroys the keys before the attack succeeds. Because you can’t install and operate your own hardware in AWS datacenters, AWS offers two services using HSMs with tamper response to protect customers’ keys: AWS Key Management Service (KMS), which manages a fleet of HSMs on the customer’s behalf, and AWS CloudHSM, which gives customers the ability to manage their own HSMs. Each service can create keys on your behalf, or you can import keys from your on-premises systems to be used by each service.

The keys in AWS KMS or AWS CloudHSM can be used to encrypt data directly, or to protect other keys that are distributed to applications that directly encrypt data. The technique of encrypting encryption keys is called envelope encryption, and it enables encryption and decryption to happen on the computer where the plaintext customer data exists, rather than sending the data to the HSM each time. For very large data sets (e.g., a database), it’s not practical to move gigabytes of data between the data set and the HSM for every read/write operation. Instead, envelope encryption allows a data encryption key to be distributed to the application when it’s needed. The “master” keys in the HSM are used to encrypt a copy of the data key so the application can store the encrypted key alongside the data encrypted under that key. Once the application encrypts the data, the plaintext copy of data key can be deleted from its memory. The only way for the data to be decrypted is if the encrypted data key, which is only a few hundred bytes in size, is sent back to the HSM and decrypted.
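A hedged sketch of this flow with the AWS CLI (the key alias and file name are placeholders): a single call returns both a plaintext data key, used locally and then discarded, and an encrypted copy of that key to store alongside the data.

# Request a 256-bit data key protected by the KMS key; the response contains
# Plaintext (use it for local encryption, then delete it) and CiphertextBlob (store it with the data)
aws kms generate-data-key \
--key-id alias/example-key \
--key-spec AES_256

# Later, send only the small encrypted data key back to KMS to recover the plaintext key;
# encrypted-data-key.bin is the stored CiphertextBlob, base64-decoded to binary
aws kms decrypt \
--ciphertext-blob fileb://encrypted-data-key.bin \
--query Plaintext \
--output text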

The process of envelope encryption is used in all AWS services in which data is encrypted on a customer’s behalf (which is known as server-side encryption) to minimize performance degradation. If you want to encrypt data in your own applications (client-side encryption), you’re encouraged to use envelope encryption with AWS KMS or AWS CloudHSM. Both services offer client libraries and SDKs to add encryption functionality to their application code and use the cryptographic functionality of each service. The AWS Encryption SDK is an example of a tool that can be used anywhere, not just in applications running in AWS.

Because implementing encryption algorithms and HSMs is critical to get right, all vendors of HSMs should have their products validated by a trusted third party. HSMs in both AWS KMS and AWS CloudHSM are validated under the National Institute of Standards and Technology’s FIPS 140-2 program, the standard for evaluating cryptographic modules. This validates the secure design and implementation of cryptographic modules, including functions related to ports and interfaces, authentication mechanisms, physical security and tamper response, operational environments, cryptographic key management, and electromagnetic interference/electromagnetic compatibility (EMI/EMC). Encryption using a FIPS 140-2 validated cryptographic module is often a requirement for other security-related compliance schemes like FedRAMP and HIPAA-HITECH in the U.S., or the international payment card industry standard (PCI-DSS).

Independent key management

While AWS KMS and AWS CloudHSM can protect plaintext master keys on your behalf, you are still responsible for managing access controls to determine who can cause which encryption keys to be used under which conditions. One advantage of using AWS KMS is that the policy language you use to define access controls on keys is the same one you use to define access to all other AWS resources. Note that the language is the same, not the actual authorization controls. You need a mechanism for managing access to keys that is different from the one you use for managing access to your data. AWS KMS provides that mechanism by allowing you to assign one set of administrators who can only manage keys and a different set of administrators who can only manage access to the underlying encrypted data. Configuring your key management process in this way helps provide separation of duties you need to avoid accidentally escalating privilege to decrypt data to unauthorized users. For even further separation of control, AWS CloudHSM offers an independent policy mechanism to define access to keys.
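As a hedged fragment of what this separation can look like in a KMS key policy (the account ID, key ID, and role names are placeholders, and a complete key policy needs additional statements so that the calling principal retains administrative access; otherwise the lockout safety check rejects the request), one statement grants key administration and a separate statement grants key use:

aws kms put-key-policy \
--key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
--policy-name default \
--policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowKeyAdministration",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/KeyAdminRole" },
      "Action": [
        "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*", "kms:Update*",
        "kms:Revoke*", "kms:Disable*", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowKeyUse",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/DataAccessRole" },
      "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*" ],
      "Resource": "*"
    }
  ]
}'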

Even with the ability to separate key management from data management, you can still verify that you have configured access to encryption keys correctly. AWS KMS is integrated with AWS CloudTrail so you can audit who used which keys, for which resources, and when. This provides granular vision into your encryption management processes, which is typically much more in-depth than on-premises audit mechanisms. Audit events from AWS CloudHSM can be sent to Amazon CloudWatch, the AWS service for monitoring and alarming third-party solutions you operate in AWS.
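For example, recent AWS KMS API activity recorded by CloudTrail can be listed from the CLI (a hedged sketch; adjust the result count and add time filters as needed):

aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventSource,AttributeValue=kms.amazonaws.com \
--max-results 20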

Encrypting data at rest and in motion

All AWS services that handle customer data encrypt data in motion and provide options to encrypt data at rest. All AWS services that offer encryption at rest using AWS KMS or AWS CloudHSM use AES-256. None of these services store plaintext encryption keys at rest — that’s a function that only AWS KMS and AWS CloudHSM may perform using their FIPS 140-2 validated HSMs. This architecture helps minimize the unauthorized use of keys.

When encrypting data in motion, AWS services use the Transport Layer Security (TLS) protocol to provide encryption between your application and the AWS service. Most commercial solutions use an open source project called OpenSSL for their TLS needs. OpenSSL has roughly 500,000 lines of code with at least 70,000 of those implementing TLS. The code base is large, complex, and difficult to audit. Moreover, when OpenSSL has bugs, the global developer community is challenged to not only fix and test the changes, but also to ensure that the resulting fixes themselves do not introduce new flaws.

AWS’s response to challenges with the TLS implementation in OpenSSL was to develop our own implementation of TLS, known as s2n, or signal to noise. We released s2n in June 2015, which we designed to be small and fast. The goal of s2n is to provide you with network encryption that is easier to understand and that is fully auditable. We released and licensed it under the Apache 2.0 license and hosted it on GitHub.

We also designed s2n to be analyzed using automated reasoning to test for safety and correctness using mathematical logic. Through this process, known as formal methods, we verify the correctness of the s2n code base every time we change the code. We also automated these mathematical proofs, which we regularly re-run to ensure the desired security properties are unchanged with new releases of the code. Automated mathematical proofs of correctness are an emerging trend in the security industry, and AWS uses this approach for a wide variety of our mission-critical software.

Implementing TLS requires using encryption keys and digital certificates that assert the ownership of those keys. AWS Certificate Manager and AWS Private Certificate Authority are two services that can simplify the issuance and rotation of digital certificates across your infrastructure that needs to offer TLS endpoints. Both services use a combination of AWS KMS and AWS CloudHSM to generate and/or protect the keys used in the digital certificates they issue.

Summary

At AWS, security is our top priority and we aim to make it as easy as possible for you to use encryption to protect your data above and beyond basic access control. By building and supporting encryption tools that work both on and off the cloud, we help you secure your data and ensure compliance across your entire environment. We put security at the center of everything we do to make sure that you can protect your data using best-of-breed security technology in a cost-effective way.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS KMS forum or the AWS CloudHSM forum, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ken Beer

Ken is the General Manager of the AWS Key Management Service. Ken has worked in identity and access management, encryption, and key management for over 7 years at AWS. Before joining AWS, Ken was in charge of the network security business at Trend Micro. Before Trend Micro, he was at Tumbleweed Communications. Ken has spoken on a variety of security topics at events such as the RSA Conference, the DoD PKI User’s Forum, and AWS re:Invent.

Tighten S3 permissions for your IAM users and roles using access history of S3 actions

Post Syndicated from Mathangi Ramesh original https://aws.amazon.com/blogs/security/tighten-s3-permissions-iam-users-and-roles-using-access-history-s3-actions/

Customers tell us that when their teams and projects are just getting started, administrators may grant broad access to inspire innovation and agility. Over time administrators need to restrict access to only the permissions required and achieve least privilege. Some customers have told us they need information to help them determine the permissions an application really needs, and which permissions they can remove without impacting applications. To help with this, AWS Identity and Access Management (IAM) reports the last time users and roles used each service, so you can know whether you can restrict access. This helps you to refine permissions to specific services, but we learned that customers also need to set more granular permissions to meet their security requirements.

We are happy to announce that we now include action-level last accessed information for Amazon Simple Storage Service (Amazon S3). This means you can tighten permissions to only the specific S3 actions that your application requires. The action-level last accessed information is available for S3 management actions. As you try it out, let us know how you’re using action-level information and what additional information would be valuable as we consider supporting more services.

The following is an example snapshot of S3 action last accessed information.
 

Figure 1: S3 action last accessed information snapshot

Figure 1: S3 action last accessed information snapshot

You can use the new action last accessed information for Amazon S3 in conjunction with other features that help you to analyze access and tighten S3 permissions. AWS IAM Access Analyzer generates findings when your resource policies allow access to your resources from outside your account or organization. Specifically for Amazon S3, when an S3 bucket policy changes, Access Analyzer alerts you if the bucket is accessible by users from outside the account, which helps you to protect your data from unintended access. You can use action last accessed information for your user or role, in combination with Access Analyzer findings, to improve the security posture of your S3 permissions. You can review the action last accessed information in the IAM console, or programmatically using the AWS Command Line Interface (AWS CLI) or a programmatic client.

Example use case for reviewing action last accessed details

Now I’ll walk you through an example to demonstrate how you identify unused S3 actions and reduce permissions for your IAM principals. In this example a system administrator, Martha Rivera, is responsible for managing access for her IAM principals. She periodically reviews permissions to ensure that teams follow security best practices. Specifically, she ensures that the team has only the minimum S3 permissions required to work on their application and achieve their use cases. To do this, Martha reviews the last accessed timestamp for each supported S3 action that the roles in her account have access to. Martha then uses this information to identify the S3 actions that are not used, and she restricts access to those actions by updating the policies.

To view action last accessed information in the AWS Management Console

  1. Open the IAM Console.
  2. In the navigation pane, select Roles, then choose the role that you want to analyze (for example, PaymentAppTestRole).
  3. Select the Access Advisor tab. This tab displays all the AWS services to which the role has permissions, as shown in Figure 2.
     
    Figure 2: List of AWS services to which the role has permissions

    Figure 2: List of AWS services to which the role has permissions

  4. On the Access Advisor tab, select Amazon S3 to view all the supported actions to which the role has permissions, when each action was last used by the role, and the AWS Region in which it was used, as shown in Figure 3.
     
    Figure 3: List of S3 actions with access data

    Figure 3: List of S3 actions with access data

In this example, Martha notices that PaymentAppTestRole has read and write S3 permissions. From the information in Figure 3, she sees that the role is using read actions for GetBucketLogging, GetBucketPolicy, and GetBucketTagging. She also sees that the role hasn’t used write permissions for CreateAccessPoint, CreateBucket, PutBucketPolicy, and others in the last 30 days. Based on this information, Martha updates the policies to remove write permissions. To learn more about updating permissions, see Modifying a Role in the AWS IAM User Guide.
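As a hedged sketch of the resulting change (the inline policy name is hypothetical and the action list comes from the example above), the tightened policy could keep only the read actions that were actually used:

aws iam put-role-policy \
--role-name PaymentAppTestRole \
--policy-name S3ReadOnlyUsedActions \
--policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLogging",
        "s3:GetBucketPolicy",
        "s3:GetBucketTagging"
      ],
      "Resource": "*"
    }
  ]
}'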

At launch, you can review 50 days of access data; that is, any use of S3 actions in the preceding 50 days will show up as a last accessed timestamp. As this tracking period continues to increase, you can start making permissions decisions that apply to use cases with longer period requirements (for example, when 60 or 90 days of data is available).

Martha sees that the GetAccessPoint action shows Not accessed in the tracking period, which means that the action was not used since IAM started tracking access for the service, action, and AWS Region. Based on this information, Martha confidently removes this permission to further reduce permissions for the role.

Additionally, Martha notices that an action she expected does not show up in the list in Figure 3. This can happen for one of two reasons: either PaymentAppTestRole does not have permissions to the action, or IAM doesn’t yet track access for the action. In such a situation, do not update permissions for those actions based on action last accessed information. To learn more, see Refining Permissions Using Last Accessed Data in the AWS IAM User Guide.

To view action last accessed information programmatically

The action last accessed data is available through updates to the following existing APIs. These APIs now generate action last accessed details, in addition to service last accessed details:

  • generate-service-last-accessed-details: Call this API to generate the service and action last accessed data for a user or role. You call this API first to start a job that generates the action last accessed data for a user or role. This API returns a JobID that you will then use with get-service-last-accessed-details to determine the status of the job completion.
  • get-service-last-accessed-details: Call this API to retrieve the service and action last accessed data for a user or role based on the JobID you pass in. This API is paginated at the service level.

To learn more, see GenerateServiceLastAccessedDetails in the AWS IAM User Guide.
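A hedged example of chaining the two calls from the AWS CLI (the role ARN is a placeholder, and the ACTION_LEVEL granularity flag should be confirmed against the current CLI reference):

aws iam generate-service-last-accessed-details \
--arn arn:aws:iam::111122223333:role/PaymentAppTestRole \
--granularity ACTION_LEVEL

aws iam get-service-last-accessed-details \
--job-id <JobId-returned-by-the-previous-command>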

Conclusion

By using action last accessed information for S3, you can review access for supported S3 actions, remove unused actions, and restrict access to S3 to achieve least privilege. To learn more about how to use action last accessed information, see Refining Permissions Using Last Accessed Data in the AWS IAM User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Mathangi Ramesh

Mathangi Ramesh

Mathangi is the product manager for AWS Identity and Access Management. She enjoys talking to customers and working with data to solve problems. Outside of work, Mathangi is a fitness enthusiast and a Bharatanatyam dancer. She holds an MBA degree from Carnegie Mellon University.

AWS achieves its first PCI 3DS attestation

Post Syndicated from Nivetha Chandran original https://aws.amazon.com/blogs/security/aws-achieves-first-pci-3ds-attestation/

We are pleased to announce that Amazon Web Services (AWS) has achieved its first PCI 3-D Secure (3DS) certification. Financial institutions and payment providers are implementing EMV® 3-D Secure services to support application-based authentication, integration with digital wallets, and browser-based e-commerce transactions. Although AWS doesn’t perform 3DS functions directly, the AWS PCI 3DS attestation of compliance enables customers to attain their own PCI 3DS compliance for their services running on AWS.

All AWS Regions in scope for PCI DSS were included in the 3DS attestation. AWS was assessed by Coalfire, an independent Qualified Security Assessor (QSA).

AWS compliance reports, including this latest PCI 3DS attestation, are available on demand through AWS Artifact. The 3DS package available in AWS Artifact includes the 3DS Attestation of Compliance (AOC) and Shared Responsibility Guide.

To learn more about our PCI program and other compliance and security programs, please visit the AWS Compliance Programs page.

We value your feedback and questions. Feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nivetha Chandran

Nivetha is a Security Assurance Manager at Amazon Web Services on the Global Audits team, managing the PCI compliance program. Nivetha holds a Master’s degree in Information Management from the University of Washington.

How to perform automated incident response in a multi-account environment

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/how-to-perform-automated-incident-response-multi-account-environment/

How quickly you respond to security incidents is key to minimizing their impacts. Automating incident response helps you scale your capabilities, rapidly reduce the scope of compromised resources, and reduce repetitive work by security teams. But when you use automation, you also must manage exceptions to standard response procedures.

In this post, I provide a pattern and ready-made templates for a scalable multi-account setup of an automated incident response process with minimal code base, using native AWS tools. I also explain how to set up exception handling for approved deviations based on resource tags.

Because security response is a broad topic, I provide an overview of some alternative approaches to consider. Incident response is one part of an overarching governance, risk, and compliance (GRC) program that every organization should implement. For more information, see Scaling a governance, risk, and compliance program for the cloud.

Important: Use caution when introducing automation. Carefully test each of the automated responses in a non-production environment, as you should not use untested automated incident response for business-critical applications.

Solution benefits and deliverables

In the solution described below, you use AWS Systems Manager automation to execute most of the incident response steps. In general, you can either write your own or use pre-written runbooks. AWS maintains ready-made operational runbooks (automation documents) so you don’t need to maintain your own code base for them. The AWS runbooks cover many predefined use cases, such as enabling Amazon Simple Storage Service (S3) bucket encryption, opening a Jira ticket, or terminating an Amazon Elastic Compute Cloud (EC2) instance. Every execution for these is well documented and repeatable in AWS Systems Manager. For a few cases where there are no ready-made automation documents, I provide three additional AWS Lambda functions with the required response actions in the templates. The Lambda functions require minimal code, with no external dependencies outside AWS native tools.

You use a central security account to execute the incident response actions in the automation runbooks. You don’t need to do anything in the service accounts where you monitor and respond to incidents. In this post, you learn about and receive patterns for:

  • Responding to an incident based on Amazon GuardDuty alerts or AWS Config findings.
  • Deploying templates for a multi-account setup with a central security account and multiple service accounts. All account resources must be in the same AWS Region and part of the same AWS organization.
  • AWS Systems Manager automation to execute many existing AWS managed automation runbooks (you need access to the AWS Management Console to see all documents at this link). Systems Manager automation is available in these AWS Regions.
  • Prewritten Lambda functions to:
    • Confine permissive (open) security groups to a more-constrained CIDR (classless interdomain routing), such as the VPC (virtual private cloud) range for that security group. This prevents knowable network configuration errors, such as too-open security groups. (See the CLI sketch after this list.)
    • Isolate a potentially compromised EC2 instance by attaching it to a single, empty security group and removing its existing security groups.
    • Block an AWS Identity and Access Management (IAM) principal (an IAM user or role) by attaching a deny all policy to that principal.
    • Send notifications to the Amazon Simple Notification Service (Amazon SNS) for alerting in addition to (and in concert with) automated response actions.
  • Exception handling when actions should not be executed. I suggest decentralized exception handling based on AWS resource tags.
  • Integrating AWS Security Hub with custom actions for GuardDuty findings. This can be used for manual triggering of remediations that should not receive an automatic response.
  • Custom AWS Config rule for detecting permissive (open) security groups on any port.
  • The mapping of security findings to response actions, defined in the CloudWatch Events rule patterns, is extendable. I provide an example of how to extend it to new findings and responses.
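The following hedged sketch shows the manual CLI equivalents of two of the actions above (all instance, security group, and CIDR values are placeholders); the Lambda functions in the templates automate equivalent changes.

# Isolate a potentially compromised instance by replacing its security groups with a single empty group
aws ec2 modify-instance-attribute \
--instance-id i-0abcd1234example \
--groups sg-0emptygroupexample

# Confine an open ingress rule to the VPC range
aws ec2 revoke-security-group-ingress \
--group-id sg-0opengroupexample \
--protocol tcp --port 443 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
--group-id sg-0opengroupexample \
--protocol tcp --port 443 --cidr 172.31.0.0/16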

Out-of-scope for this solution

This post only shows you how to get started with security response automation for accounts that are part of the same AWS organization and Region. For more information on developing a comprehensive program for incident response, see How to prepare for & respond to security incidents in your AWS environment.

Understanding the difference between this solution and AWS Config remediation

The AWS Config remediation feature provides remediation of non-compliant resources using AWS Systems Manager within a single AWS account. The solution you learn in this post includes both GuardDuty alerts and AWS Config, a multi-account approach managed from a central security account, rules exception handling based on resource tags, and additional response actions using AWS Lambda functions.

Choosing the right approach for incident response

There are many technical patterns and solutions for responding to security incidents. When considering the right one for your organization, you must consider how much flexibility for response actions you need, what resources are in scope, and the skill level of your security engineers. Also consider dependencies on external code and applications, software licensing costs, and your organization’s experience level with AWS.

If you want the maximum flexibility and extensibility beyond AWS Systems Manager used in this post, I recommend AWS Step Functions as described in the session How to prepare for & respond to security incidents in your AWS environment and in DIY guide to runbooks, incident reports, and incident response. You can create workflows for your runbooks and have full control of all possible actions.

Architecture

 

Figure 1: Architecture diagram

Figure 1: Architecture diagram

The architecture works as follows:

In the service account (the environment for which we want to monitor and respond to incidents):

  1. GuardDuty findings are forwarded to CloudWatch Events.
  2. Changes in the AWS Config compliance status are forwarded to CloudWatch Events.
  3. CloudWatch Events for GuardDuty and AWS Config are forwarded to the central security account, via a CloudWatch Events bus.

In the central security account:

  1. Each event from the service account is mapped to one or more response actions using CloudWatch Events rules. Every rule is an incident response action that is executed on one or more security findings, as defined in the event pattern. If there is an exception rule, the response actions are not executed.

    The list of possible actions that could be taken include:

    1. Trigger a Systems Manager automation document, invoked by the Lambda function StratSsmAutomation within the security account.
    2. Isolate an EC2 instance by attaching an empty security group to it and removing any prior security groups, invoked by the Lambda function IsolateEc2. This assumes the incident response role in the target service account.
    3. Block the IAM principal by attaching a deny all policy, invoked by the Lambda function BlockPrincipal, by assuming the incident response role in the target service account.
    4. Confine security group to safe CIDR, invoked by the Lambda function ConfineSecurityGroup, by assuming the incident response role in the target service account.
    5. Send the finding to an SNS topic for processing outside this solution; for example, by manual evaluation or simply for information.
  2. Invoke AWS Systems Manager within the security account, with a target of the service account and the same AWS Region.
  3. The Systems Manager automation document is executed from the security account against the resources in the service account; for example, EC2, S3, or IAM resources.
  4. The response actions are executed directly from the security account to the service account. This is done by assuming an IAM role, for example, to isolate a potentially compromised EC2 instance.
  5. Manually trigger security responses using Security Hub custom actions. This can be suitable for manual handling of complex findings that need investigation before action.

Response actions decision

Your organization wants to articulate information security policy decisions and then create a list of corresponding automated security responses. Factors to consider include the classification of the resources and the technical complexity of the automation required. Start with common, less complex cases to get immediate gains, and increase complexity as you gain experience. Many stakeholders in your organization, such as business, IT operations, information security, and risk and compliance, should be involved in deciding which incident response actions should be automated. You need executive support for the political will to execute policies.

Creating exceptions to automated responses

There may be cases in which it’s unwise to take an automated response. For example, you might not want an automated response to incidents involving a core production database server that is critical to business operations. Instead, you’d want to use human judgment calls before responding. Or perhaps you know there are alarms that you don’t need for certain resources, like alerting for an open security group when you intentionally use it as a public web server. To address these exceptions, there is a carve-out. If the AWS resource has a tag with the name SecurityException, a response action isn’t executed. The tag name is defined during installation.

Table 1 provides an example of the responses implemented in this solution template. You might decide on different actions and different priorities for your organization. A current list of GuardDuty findings can be found at Active Finding Types and for AWS Config at AWS Config Managed Rules.

Table 1: Example mapping of security findings to response actions

  1. Source: GuardDuty. Findings: Backdoor:EC2/Spambot, Backdoor:EC2/C&CActivity.B!DNS, Backdoor:EC2/DenialOfService.Tcp, Backdoor:EC2/DenialOfService.Udp, Backdoor:EC2/DenialOfService.Dns, Backdoor:EC2/DenialOfService.UdpOnTcpPorts, Backdoor:EC2/DenialOfService.UnusualProtocol, Trojan:EC2/BlackholeTraffic, Trojan:EC2/DropPoint, Trojan:EC2/BlackholeTraffic!DNS, Trojan:EC2/DriveBySourceTraffic!DNS, Trojan:EC2/DropPoint!DNS, Trojan:EC2/DGADomainRequest.B, Trojan:EC2/DGADomainRequest.C!DNS, Trojan:EC2/DNSDataExfiltration, Trojan:EC2/PhishingDomainRequest!DNS (see Backdoor Finding Types and Trojan Finding Types). Response: Isolate the EC2 instance with an empty security group, archive the GuardDuty finding, and send an SNS notification.
  2. Source: GuardDuty. Findings: UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration, UnauthorizedAccess:IAMUser/TorIPCaller, UnauthorizedAccess:IAMUser/MaliciousIPCaller.Custom, UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B, UnauthorizedAccess:IAMUser/MaliciousIPCaller (see Unauthorized Finding Types). Response: Block the IAM principal by attaching a deny all policy, archive the GuardDuty finding, and send an SNS notification.
  3. Source: AWS Config. Finding: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED (see the documentation for s3-bucket-server-side-encryption-enabled). Response: Enable server-side encryption with Amazon S3-Managed keys (SSE-S3) with the SSM document AWS-EnableS3BucketEncryption, and send an SNS notification.
  4. Source: AWS Config. Finding: S3_BUCKET_PUBLIC_READ_PROHIBITED (see the documentation for s3-bucket-public-read-prohibited). Response: Disable S3 PublicRead and PublicWrite with the SSM document AWS-DisableS3BucketPublicReadWrite, and send an SNS notification.
  5. Source: AWS Config. Finding: S3_BUCKET_PUBLIC_WRITE_PROHIBITED (see the documentation for s3-bucket-public-write-prohibited). Response: Disable S3 PublicRead and PublicWrite with the SSM document AWS-DisableS3BucketPublicReadWrite, and send an SNS notification.
  6. Source: AWS Config. Finding: SECURITY_GROUP_OPEN_PROHIBITED (see the template, custom configuration). Response: Confine the security group to the safe CIDR 172.31.0.0/16, and send an SNS notification.
  7. Source: AWS Config. Finding: ENCRYPTED_VOLUMES (see the documentation for encrypted-volumes). Response: Send an SNS notification.
  8. Source: AWS Config. Finding: RDS_STORAGE_ENCRYPTED (see the documentation for rds-storage-encrypted). Response: Send an SNS notification.

Installation

  1. In the security account, launch the template by selecting Launch Stack.
     
    Select this image to open a link that starts building the CloudFormation stack
    Additionally, you can find the latest code on GitHub, where you can also contribute to the sample code.
  2. Provide the following parameters for the security account (see Figure 2):
    • S3 bucket with sources: This bucket contains all sources, such as the Lambda function and templates. If you’re not customizing the sources, you can leave the default text.
    • Prefix for S3 bucket with sources: Prefix for all objects. If you’re not customizing the sources, you can leave the default.
    • Security IR-Role names: This is the role assumed for the response actions by the Lambda functions in the security account. The role is created by the stack launched in the service account.
    • Security exception tag: This defines the tag name for security exceptions. Resources marked with this tag are not automatically changed in response to a security finding. For example, you could add an exception tag for a valid open security group for a public website.
    • Organization ID: This is your AWS organization ID used to authorize forwarding of CloudWatch Events to the security account. Your accounts must be members of the same AWS organization.
    • Allowed network range IPv4: This CIDRv4 range is used to confine all open security groups that are not tagged for exception.
    • Allowed network range IPv6: This CIDRv6 range is used to confine all open security groups that are not tagged for exception.
    • Isolate EC2 findings: This is a list of all GuardDuty findings that should lead to an EC2 instance being isolated. Comma delimited.
    • Block principal finding: This is a list of all GuardDuty findings that should lead to blocking this role or user by attaching a deny all policy. Comma delimited.
    Figure 2: Stack launch in security account

    Figure 2: Stack launch in security account

  3. In each service account, launch the template by selecting Launch Stack.
     
    Select this image to open a link that starts building the CloudFormation stack

    Additionally, you can find the latest code on GitHub, where you can also contribute to the sample code.

  4. Provide the following parameters for each service account:
    • S3 bucket with sources: This bucket contains all sources, such as Lambda functions and templates. If you’re not customizing the sources, you can leave the default text.
    • Prefix for S3 bucket with sources: Prefix for all objects. If you’re not customizing the sources, you can leave the default text.
    • IR-Security role: This is the role that is created and used by the security account to execute response actions.
    • Security account ID: The CloudWatch Events are forwarded to this central security account.
    • Enable AWS Config: Define whether you want this stack to enable AWS Config. If you have already enabled AWS Config, then leave this value false.
    • Create SNS topic: Provide the name of an SNS topic only if you enable AWS Config and want to stream the configuration change to SNS (optional). Otherwise, leave this field blank.
    • SNS topic name: This is the name of the SNS topic to be created only if enabling AWS Config. The default text is Config.
    • Create S3 bucket for AWS Config: If you enable AWS Config, the template creates an S3 bucket for AWS Config.
    • Bucket name for AWS Config: The name of the S3 bucket created for AWS Config. The default text is config-bucket-{AccountId}.
    • Enable GuardDuty: If you have not already enabled GuardDuty in the service account, then you can do it here.

Testing

After you have deployed both stacks, you can test your environment by following these example steps in one of your service accounts. Before you test, you can subscribe to the SNS topic with prefix Security_Alerts_[Your_Stack] to be notified of a security event.

  1. Create a security group with an ingress rule open to 0.0.0.0/0, without adding an exception tag. After several minutes, the security group will be confined to the safe CIDR that you defined in your stack.
  2. Create an S3 bucket without enabling encryption. After several minutes, the default encryption AES-256 will be set on the bucket.
  3. For GuardDuty blocking of IAM principal, you can define a list of malicious IPs under the Threat List in the GuardDuty panel. Create a test role or user. When you execute an API call from this IP with the created test role or user, a GuardDuty finding is generated that triggers blocking of the IAM role or user.
  4. You can deploy the Amazon GuardDuty tester and generate findings such as Trojan:EC2/DNSDataExfiltration or CryptoCurrency:EC2/BitcoinTool.B!DNS. The GuardDuty findings trigger isolation of an EC2 instance by removing all current security groups and attaching an empty one. This new empty group can then be configured for forensic access later.

Exceptions for response actions

If the resource has the tag name SecurityException, a response action is not executed. The tag name is a parameter of the CloudFormation stack in the security account and can be customized at installation. The value of the tag is not validated, but it is good practice for the value to refer to an approval document such as a Jira issue. In this way, you can build an auditing chain of the approval. For example:


Tag name:  SecurityException
Tag value: Jira-1234
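
For example, the exception tag could be attached to a security group with the AWS CLI (a hedged sketch; the group ID is a placeholder):

aws ec2 create-tags \
--resources sg-0123456789abcdef0 \
--tags Key=SecurityException,Value=Jira-1234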

Make sure that the security tag can only be set or modified by an appropriate role. You can do this in two ways. The first way is to attach a deny statement to all policies that do not have privileges to assign this tag. An example policy statement to deny setting, removing, and editing of this tag for IAM, EC2, and S3 services is shown below. This policy does not prohibit working with the resources, such as starting or stopping an EC2 instance with this tag. See Controlling access based on tag keys for more information.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3-Sec-Tag",
            "Effect": "Deny",
            "Action": [
                "s3:PutBucketTagging",
                "s3:PutObjectTagging"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringLikeIfExists": {
                    "aws:TagKeys": "TAG-NAME-SecurityException"
                }
            }
        },
        {
            "Sid": "EC2-VPC-Sec-Tag",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringLikeIfExists": {
                    "aws:TagKeys": "TAG-NAME-SecurityException"
                }
            }
        }
    ]
}

In the above policy, you must modify the TAG-NAME-SecurityException to match your own tag name.

The second way to restrict the attachment of this tag is to use Tag policies to manage tags across multiple AWS accounts.

Centralized versus decentralized exception management

Attaching security tags is a decentralized approach in which you don’t need a centralized database to record remediation exceptions. A centralized exception database requires that you know each individual resource ID to set exceptions, and this is not always possible. Amazon EC2 Auto Scaling is a good example where you might not know the EC2 instance ID in advance. On an up-scaling event, the instance ID can’t be known in advance and preapproved. Furthermore, a central database must be kept in sync with the lifecycle of the resources, like with an instance down-scaling event. Hence, using tags on resources is a decentralized approach. If needed, the automatic scaling launch configuration can propagate the security exception tag; see Tag your auto scaled EC2 instances.
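A hedged sketch of propagating the exception tag through an Auto Scaling group (the group name is a placeholder):

aws autoscaling create-or-update-tags \
--tags ResourceId=my-app-asg,ResourceType=auto-scaling-group,Key=SecurityException,Value=Jira-1234,PropagateAtLaunch=true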

You can manage the IAM policies and roles for tagging either centrally or within an AWS CodePipeline. In this way, you can implement a centralized enforcement point for security exceptions but decentralize storing of the resource tags to the resources themselves.

Using AWS resource groups, you can always find all resources that have the security exception tag for inventory and auditing purposes. For more information, see Build queries and groups in AWS resource groups.
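For instance, all resources carrying the exception tag can be listed with the Resource Groups Tagging API (a hedged sketch; adjust the tag key if you customized it at installation):

aws resourcegroupstaggingapi get-resources \
--tag-filters Key=SecurityException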

Monitoring for security alerts

You can subscribe to the SNS topic with prefix Security_Alerts_[Your_Stack] to be notified of a security event or an execution that has failed.

Every execution is triggered from CloudWatch Events, and each of these events generates CloudWatch metrics. Under CloudWatch rules, you can see the rules for event forwarding from service accounts, or for triggering the responses in the central security account. For each CloudWatch rule, you can view the metrics by checking the box next to the rule, as shown in Figure 3, and then selecting Show metrics for the rule.
 

Figure 3: Metrics for Forwarding GuardDuty Events in CloudWatch

Creating new rules and customization

The mapping of findings to responses is defined in the CloudWatch Events rules in the security account. If you define new responses, you only update the central security account. You don’t adjust the settings of service accounts because they only forward events to the security account.

Here’s an example: Your team decides that a new SSM action – Terminate EC2 instance – should be triggered on the GuardDuty finding Backdoor:EC2/C&CActivity.B!DNS. In the security account, go to CloudWatch in the AWS Management Console. Create a new rule as described in the following steps (and shown in Figure 4):
 

Figure 4: CloudWatch rule creation

  1. Go to CloudWatch Events and create a new rule. In the Event Pattern, under Build custom event pattern, define the following JSON:
    
    {
      "detail-type": [
        "GuardDuty Finding"
      ],
      "source": [
        "aws.guardduty"
      ],
      "detail": {
        "type": [
          "Backdoor:EC2/C&CActivity.B!DNS"
        ]
      }
    }
    

  2. Set the target of the rule to the Lambda function that executes SSM automation, StratSsmAutomation. Use Input transformer to customize the parameters passed to the Lambda function when it’s invoked.

    For the Input paths map:

    
    {
       "resourceId":"$.detail.resource.instanceDetails.instanceId",
       "account":"$.account"
    } 
    

    The expression extracts the instanceId from the event as resourceId, and the Account ID as account.

    For the field Input template, enter the following:

    
    {
       "account":<account>,
       "resourseType":"ec2:instance",
       "resourceId":<resourceId>,
       "AutomationDocumentName":"AWS-TerminateEC2Instance",
       "AutomationParameters":{
          "InstanceId":[
          <resourceId>   
          ]
       }
    }
    

You pass the name of the SSM automation document that you want to execute as an input parameter. In this case, the document is AWS-TerminateEC2Instance and the document input parameters are a JSON structure named AutomationParameters.

You have now created a new response action that is ready to test.
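
If you prefer to create the same rule with code rather than the console, a minimal boto3 sketch could look like the following. The rule name and Lambda function ARN are placeholder assumptions; the event pattern and input transformer mirror the values shown above.

import json
import boto3

events = boto3.client("events")

# Event pattern matching the GuardDuty finding type, as defined above.
event_pattern = {
    "detail-type": ["GuardDuty Finding"],
    "source": ["aws.guardduty"],
    "detail": {"type": ["Backdoor:EC2/C&CActivity.B!DNS"]},
}

events.put_rule(
    Name="guardduty-terminate-ec2",
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

# Input template copied from the console example above (passed as a literal
# string so that <account> and <resourceId> remain unquoted placeholders).
input_template = (
    '{"account":<account>,"resourseType":"ec2:instance","resourceId":<resourceId>,'
    '"AutomationDocumentName":"AWS-TerminateEC2Instance",'
    '"AutomationParameters":{"InstanceId":[<resourceId>]}}'
)

# Target the SSM execution Lambda function (placeholder ARN). You may also
# need to grant the rule permission to invoke the function, for example with
# the Lambda add_permission API.
events.put_targets(
    Rule="guardduty-terminate-ec2",
    Targets=[
        {
            "Id": "StratSsmAutomation",
            "Arn": "arn:aws:lambda:eu-west-1:111122223333:function:StratSsmAutomation",
            "InputTransformer": {
                "InputPathsMap": {
                    "resourceId": "$.detail.resource.instanceDetails.instanceId",
                    "account": "$.account",
                },
                "InputTemplate": input_template,
            },
        }
    ],
)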

Troubleshooting

The automation execution that takes place in the security account is documented in AWS Systems Manager under Automation Execution. You can also refer to the AWS Systems Manager automation documentation.

If Lambda function execution fails, a CloudWatch alarm is triggered and a notification is sent to the SNS topic with the prefix Security_Alerts_[Your_Stack]. Lambda function execution in the security account is documented in the CloudWatch Logs log group for the Lambda function. You can use the logs to understand whether the Lambda function executed successfully.

Security Hub integration

AWS Security Hub aggregates findings from GuardDuty and other providers. To get Security Hub to aggregate findings, you must activate Security Hub before deploying this solution. Then, invite your environment accounts as members of the security account, which acts as the master account. For more information, see Master and member accounts in AWS Security Hub.
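
As a rough sketch, the member invitation can also be scripted from the security (master) account; the account ID and email address are placeholders.

import boto3

# Illustrative sketch, run in the security (master) account: add a service
# account as a Security Hub member and send the invitation. The account ID
# and email address are placeholders.
securityhub = boto3.client("securityhub")

securityhub.create_members(
    AccountDetails=[
        {"AccountId": "222233334444", "Email": "service-account@example.com"}
    ]
)
securityhub.invite_members(AccountIds=["222233334444"])

The invited account must then accept the invitation before its findings are aggregated in the security account.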

The integration with Security Hub is for manual triggering and is not part of the automated response itself. This can be useful if you have findings that require security investigation before triggering the action.

From the Security Hub console, select the finding and use the Actions drop-down menu in the upper right corner to trigger a response, as shown in Figure 5. A CloudWatch event invokes the automated response, such as a Lambda function to isolate an EC2 instance. The Lambda function fetches the GuardDuty finding from the service account and executes one of the following: Isolate EC2 Instance, Block Principal, or Send SNS Notification.

Only one resource can be selected for execution. If you select multiple resources, the response is not executed, and an error message is written to the Lambda function’s CloudWatch Logs.
 

Figure 5: Security Hub integration

Cost of the solution

The cost of the solution depends on the events generated in your AWS account and your chosen AWS Region. Costs might be as little as a few U.S. dollars per month, per account. You can model your costs by reviewing GuardDuty pricing, AWS Systems Manager automation pricing, and AWS Config pricing for the six AWS Config rules. The costs for AWS Lambda and Amazon CloudWatch are expected to be minimal and might be included in your free tier. If you optionally use Security Hub, see AWS Security Hub pricing.

Summary

In this post, you learned how to deploy an automated incident response framework using AWS native features. You can easily extend this framework to meet your future needs. If you would like to extend it further, contact AWS Professional Services or an AWS Partner. If you have technical questions, please use the Amazon GuardDuty or AWS Config forums. Remember, this solution is only an introduction to automated security response and is not a comprehensive solution.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is a senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU-Darmstadt and an M.S. in electrical engineering from Bochum University in Germany.

AWS Shield Threat Landscape report is now available

Post Syndicated from Mario Pinho original https://aws.amazon.com/blogs/security/aws-shield-threat-landscape-report-now-available/

AWS Shield is a managed threat protection service that safeguards applications running on AWS against exploitation of application vulnerabilities, bad bots, and Distributed Denial of Service (DDoS) attacks. The AWS Shield Threat Landscape Report (TLR) provides you with a summary of threats detected by AWS Shield. This report is curated by the AWS Threat Response Team (TRT), who continually monitors and assesses the threat landscape to build protections on behalf of AWS customers. This includes rules and mitigations for services like AWS Managed Rules for AWS WAF and AWS Shield Advanced. You can use this information to expand your knowledge of external threats and improve the security of your applications running on AWS.

Here are some of our findings from the most recent report, which covers Q1 2020:

Volumetric Threat Analysis

AWS Shield detects network and web application-layer volumetric events that may indicate a DDoS attack, web content scraping, account takeover bots, or other unauthorized, non-human traffic. In Q1 2020, we observed significant increases in the frequency and volume of network volumetric threats, including a CLDAP reflection attack with a peak volume of 2.3 Tbps.

You can find a summary of the volumetric events detected in Q1 2020, compared to the same quarter in 2019, in the following table:

Metric | Same Quarter, Prior Year (Q1 2019) | Most Recent Quarter (Q1 2020) | Change
Total number of events | 253,231 | 310,954 | +23%
Largest bit rate (Tbps) | 0.8 | 2.3 | +188%
Largest packet rate (Mpps) | 260.1 | 293.1 | +13%
Largest request rate (rps) | 1,000,414 | 694,201 | -31%
Days of elevated threat* | 1 | 3 | +200%

* Days of elevated threat indicates the number of days during which the volume or frequency of events was unusually high.

Malware Threat Analysis

AWS operates a threat intelligence platform that monitors Internet traffic and evaluates potentially suspicious interactions. We observed significant increases in both the total number of events and the number of unique suspects, relative to the prior quarter. The most common interactions observed in Q1 2020 were Remote Code Execution (RCE) attempts on Apache Hadoop YARN applications, where the suspect attempts to exploit the API of a Hadoop cluster’s resource management system and execute code without authorization. In March 2020, these interactions accounted for 31% of all events detected by the threat intelligence platform.

You can find a summary of the events detected by the threat intelligence platform in Q1 2020, compared to the prior quarter, in the following table:

Metric | Prior Quarter (Q4 2019) | Most Recent Quarter (Q1 2020) | Change
Total number of events (billion) | 0.7 | 1.1 | +57%
Unique suspects (million) | 1.2 | 1.6 | +33%

 

For more information about the threats detected by AWS Shield in Q1 2020 and steps that you can take to protect your applications running on AWS, download the AWS Shield Threat Landscape Report.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Shield forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mario Pinho

Mario Pinho is a Security Engineer at AWS. He has a background in network engineering and consulting, and feels at his best when breaking apart complex topics and processes into their simpler components. In his free time he pretends to be an artist by playing piano and doing landscape photography.

Single Sign-On between Okta Universal Directory and AWS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/single-sign-on-between-okta-universal-directory-and-aws/

Enterprises adopting the AWS Cloud want to manage identities effectively. Having one central place to manage identities makes it easier to enforce policies, manage access permissions, and reduce overhead by removing the need to duplicate users and user permissions across multiple identity silos. Having a unique identity also simplifies access for all of us, the users. We all have access to multiple systems, and we all have trouble remembering multiple distinct passwords. Being able to connect to multiple systems using a single combination of user name and password is a daily security and productivity gain. Being able to link an identity from one system with an identity managed on another trusted system is known as “Identity Federation,” of which single sign-on is a subset. Identity Federation is made possible thanks to industry standards such as Security Assertion Markup Language (SAML), OAuth, OpenID, and others.

Recently, we announced a new evolution of AWS Single Sign-On, allowing you to link AWS identities with Azure Active Directory identities. We did not stop there. Today, we are announcing the integration of AWS Single Sign-On with Okta Universal Directory.

Let me show you the experience for System Administrators, then I will demonstrate the single sign-on experience for the users.

First, let’s imagine that I am an administrator for an enterprise that already uses Okta Universal Directory to manage its workforce identities. Now I want to give my users simple, easy-to-use access to our AWS environments, using their existing identities. Like most enterprises, I manage multiple AWS accounts. I want more than just a single sign-on solution; I want to manage access to my AWS accounts centrally. I do not want to duplicate my Okta groups and user memberships by hand, nor maintain multiple identity systems (Okta Universal Directory and one for each AWS account I manage). I want to enable automatic user synchronization between Okta and AWS. My users will sign in to the AWS environments using the experience they are already familiar with in Okta.

Connecting Okta as an identity source for AWS Single Sign-On
The first step is to add AWS Single Sign-On as an “application” Okta users can connect to. I navigate to the Okta administration console, log in with my Okta administrator credentials, and then navigate to the Applications tab.

Okta admin console

I click the green Add Application button and search for the AWS SSO application. I click Add.

Okta add application

I enter a name for the app (you can choose whatever name you like) and click Done.

On the next screen, I configure the mutual agreement between AWS Single Sign-On and Okta. I first download the SAML metadata file generated by Okta by clicking the blue Identity Provider Metadata link. I keep this file; I’ll need it later to configure the AWS side of the single sign-on.

Okta Identity Provider metadata

Now that I have the metadata file, I open the AWS Management Console in a new tab. I keep the Okta tab open because the procedure there isn’t finished yet. I navigate to AWS Single Sign-On and click Enable AWS SSO.

I click Settings in the navigation panel. I first set the Identity source by clicking the Change link and selecting External identity provider from the list of options. Then I browse to and select the XML file I downloaded from Okta in the Identity provider metadata section.

SSO configure metadata

I click Next: Review, enter CONFIRM in the provided field, and finally click Change identity source to complete the AWS Single Sign-On side of the process. I take note of the two values AWS SSO ACS URL and AWS SSO Issuer URL as I must enter these in the Okta console.

AWS SSO Save URLs

I return to the tab I left open to my Okta console, and copy the values for AWS SSO ACS URL and AWS SSO Issuer URL.

OKTA ACS URLs

I click Save to complete the configuration.

Configuring Automatic Provisioning
Now that Okta is configured so that my users can connect using AWS Single Sign-On, I’m going to enable automatic provisioning of user accounts. As new accounts are added to Okta and assigned to the AWS SSO application, a corresponding AWS Single Sign-On user is created automatically. As an administrator, I do not need to do any work to configure a corresponding account in AWS to map to the Okta user.

From the AWS Single Sign-On Console, I navigate to Settings and then click the Enable identity synchronization link. This opens a dialog containing the values for the SCIM endpoint and an OAuth bearer access token (hidden by default). I need both of these values to use in the Okta application settings.

AWS SSO SCIM

I switch back to the tab open on the Okta console and click the Provisioning tab under the AWS SSO application. I select Enable API Integration. Then I paste the values into Base URL (the SCIM endpoint value copied from the AWS Single Sign-On console) and API Token (the Access token value copied from the AWS Single Sign-On console).

Okta API Integration

I click Test API Credentials to verify everything works as expected. Then I click To App to enable user creation, update, and deactivation.

Okta Provisioning To App

With provisioning enabled, my final task is to assign the users and groups that I want to synchronize from Okta to AWS Single Sign-On. I click the Assignments tab and add Okta users and groups. I click Assign, and I select the Okta users and groups I want to have access to AWS.

OKTA Assignments

These users are synchronized to AWS Single Sign-On, and the users now see the AWS Single Sign-On application appear in their Okta portal.

Okta Portal User View

To verify user synchronization is working, I switch back to the AWS Single Sign-On console and select the Users tab. The users I assigned in the Okta console are present.

AWS SSO User View

I Configured Single Sign-On, Now What?
Okta is now my single source of truth for my user identities and their assignment into groups, and periodic synchronization automatically creates corresponding identities in AWS Single Sign-On. My users sign in to their AWS accounts and applications with their Okta credentials and experience, and don’t have to remember an additional user name or password. However, as things stand, my users only have access to sign in. To control what they can access once signed in to AWS, I must set up permissions in AWS Single Sign-On.

Back in the AWS SSO console, I click AWS Accounts in the left tab bar and select the account from my AWS Organization that I am giving access to. For enterprises with multiple accounts for multiple applications or environments, this gives you the granularity to grant access to a subset of your AWS accounts.

AWS SSO Select AWS Account

I click Assign users to assign SSO users or groups to a set of IAM permissions. For this example, I assign just one user, the one with the @example.com email address.

Assign SSO Users

I click Next: Permission sets and Create new permission set to create a set of IAM policies describing the permissions I am granting to these Okta users. For this example, I grant read-only permission on all AWS services.

SSO Permission set

And voila, I am ready to test this setup.

SSO User Experience for the console
Now that I have shown you the steps system administrators take to configure the integration, let me show you the user experience.

As an AWS account user, I can sign in on Okta and get access to the AWS Management Console. I can start either from the AWS Single Sign-On user portal (the URL is on the AWS Single Sign-On settings page) or from the Okta user portal page and select the AWS SSO app.

I choose to start from the AWS SSO User Portal. I am redirected to the Okta login page. I enter my Okta credentials and I land on the AWS Account and Role selection page. I click on AWS Account, select the account I want to log into, and click Management console. After a few additional redirections, I land on the AWS Console page.

SSO User experience

SSO User Experience for the CLI
System administrators, DevOps engineers, developers, and your automation scripts don’t use the AWS console. They use the AWS Command Line Interface (CLI) instead. To configure SSO for the command line, I open a terminal and type aws configure sso. I enter the AWS SSO User Portal URL and the Region.

$aws configure sso
SSO start URL [None]: https://d-0123456789.awsapps.com/start
SSO Region [None]: eu-west-1
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:

https://device.sso.eu-west-1.amazonaws.com/

Then enter the code:

AAAA-BBBB

At this stage, my default browser pops up and I enter my Okta credentials on the Okta login page. I confirm I want to enable SSO for the CLI.

SSO for the CLI

I close the browser when I receive this message:

AWS SSO CLI Close Browser Message

The CLI automatically resumes the configuration. I enter the default Region, the default output format, and the name of the CLI profile I want to use.

The only AWS account available to you is: 012345678901
Using the account ID 012345678901
The only role available to you is: ViewOnlyAccess
Using the role name "ViewOnlyAccess"
CLI default client Region [eu-west-1]:
CLI default output format [None]:
CLI profile name [okta]:

To use this profile, specify the profile name using --profile, as shown:

aws s3 ls --profile okta

I am now ready to use the CLI with SSO. In my terminal, I type:

aws --profile okta s3 ls
2020-05-04 23:14:49 do-not-delete-gatedgarden-audit-012345678901
2015-09-24 16:46:30 elasticbeanstalk-eu-west-1-012345678901
2015-06-11 08:23:17 elasticbeanstalk-us-west-2-012345678901

If the machine on which you want to configure CLI SSO has no graphical user interface, you can configure SSO in headless mode, using the URL and the code provided by the CLI (https://device.sso.eu-west-1.amazonaws.com/ and AAAA-BBBB in the example above).
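
The same named profile also works from code. Here is a minimal boto3 sketch, assuming an SDK version recent enough to support AWS SSO profiles and the profile name used above:

import boto3

# Illustrative sketch: reuse the 'okta' profile created by 'aws configure sso'.
# Requires an SDK version with AWS SSO support and a valid SSO session
# (for example, after running 'aws sso login --profile okta').
session = boto3.Session(profile_name="okta")
s3 = session.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])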

In this post, I showed how you can take advantage of the new AWS Single Sign-On capabilities to link Okta identities to AWS accounts for user single sign-on. I also make use of the automatic provisioning support to reduce complexity when managing and using identities. Administrators can now use a single source of truth for managing their users, and users no longer need to manage an additional identity and password to sign into their AWS accounts and applications.

AWS Single Sign-On with Okta is free to use, and is available in all Regions where AWS Single Sign-On is available. The full list is here.

To see all this in motion, you can check out the following demo video for more details on getting started.

— seb

How to create SAML providers with AWS CloudFormation

Post Syndicated from Rich McDonough original https://aws.amazon.com/blogs/security/how-to-create-saml-providers-with-aws-cloudformation/

As organizations grow, they often experience an inflection point where it becomes impractical to manually manage separate user accounts in disparate systems. Managing multiple AWS accounts is no exception. Many large organizations have dozens or even hundreds of AWS accounts spread across multiple business units.

AWS provides many solutions that can orchestrate a person’s identity across multiple accounts. AWS Identity and Access Management (IAM), SAML, and OpenID can all help. The solution I describe in this post uses these features and services and can scale to thousands of AWS accounts. It provides a repeatable and automated means for deploying a unified identity management structure across all of your AWS environments. It does this while extending existing identity management into AWS without needing to change your current sources of user information.

About multi-account management

This approach uses AWS CloudFormation StackSets to deploy an identity provider and AWS IAM roles into multiple accounts. Roles may be tailored for your business needs and mapped to administrators, power users, or highly specialized roles that perform domain-specific tasks within your environment. StackSets are a highly scalable tool for multi-account orchestration and enforcement of policies across an organization.

Before deploying this solution across your organization, I recommend that you deploy the stack below into a single AWS account for testing purposes. Then, extend the solution using StackSets across your organization once tested. For more information about AWS CloudFormation Stacks and StackSets, see our documentation.

About SAML and SAML providers

Security Assertion Markup Language 2.0 (SAML) is an open standard for exchanging identity and security information with applications and other external systems. For example, if your organization uses Microsoft Active Directory, you can federate your user accounts to other organizations by way of SAML assertions. Actions that users take in the external organization can then be mapped to the identity of the original user and executed with a role that has predetermined privileges.

If you’re new to SAML, don’t worry! Not only is it ubiquitous, it’s also well documented. Many websites use it to drive their security and infrastructure, and you may have already used it without knowing it. For more information about SAML and AWS, I suggest you start with Enabling SAML for Your AWS Resources.

Properly configured SAML federation provides core security requirements for authentication and authorization, while maintaining a single source of truth for who your users are. It also enables important constraints such as multi-factor authentication and centralized access management. Within AWS IAM, you set up a SAML trust by configuring your identity provider with information about AWS and the AWS IAM roles that you want your federated users to use. SAML is secured using public key authentication.

This solution is a companion to the blog post AWS Federated Authentication with Active Directory Federation Services (AD FS), but it can be applied to any SAML provider.

Solution overview

To orchestrate SAML providers across multiple accounts, you will need the following:

  • A basic understanding of federated users.
  • An identity provider that supports SAML, such as Active Directory Federation Services (AD FS), or Shibboleth.
  • An Amazon Simple Storage Service (Amazon S3) bucket to host the provider’s federation metadata file.
  • An AWS account that can create StackSets in other accounts. Usually, this is an AWS Organizations master account, with StackSets provisioned into member accounts. See the AWS Organizations User Guide for more details.

Note: You can adapt this solution to retrieve federation metadata from an HTTPS endpoint, such as those published by publicly hosted identity providers, but this solution does not currently offer that.

At this time, CloudFormation does not directly support provisioning a SAML provider. However, you can accomplish this by using a custom resource Lambda function. The sample code provided in this post takes this one step further and deploys your federation metadata from a secure Amazon S3 bucket, as some organizations have requirements about how this metadata is stored.
 

Figure 1: Architecture diagram

Here’s what the solution architecture looks like, as shown in Figure 1:

  1. An AWS CloudFormation template is created within an AWS account. Most commonly this is your master account within AWS Organizations, but it can be a standalone account as well.
  2. The AWS CloudFormation template is deployed to other AWS accounts within your organization using AWS CloudFormation StackSets.
  3. The AWS CloudFormation StackSet creates the following resources:
    1. A new read-only role within AWS IAM that uses the new identity provider as the principal in the IAM role’s trust policy. This is a sample that I use to demonstrate how you can deploy IAM Roles that are bound to a single, dedicated identity provider.
    2. An execution role for a custom resource AWS Lambda function.
    3. The AWS Lambda function, which creates the identity provider within the destination account.
  4. The new AWS Lambda function is executed, pulling your federation metadata from the master account’s S3 Bucket and creating the new Identity Provider.

Step 1: Upload your federation metadata to a restricted S3 bucket

To ensure that your SAML provider’s federation metadata is exposed only to accounts within your AWS organization, you can upload the metadata to an Amazon S3 bucket and then restrict access to accounts within your organization. The bucket policy follows this template:


{
    "Version": "2012-10-17",
    "Id": "Policy1545178796950",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<replace with bucket name>/*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalOrgID": "<replace with organization ID>"
                }
            }
        }
    ]
}

Once this Amazon S3 bucket policy is in place, upload your federation metadata document to the bucket and note the object URL.

If your organization can share your federation metadata publicly, then there’s no need for this bucket policy. Simply upload your file to the S3 bucket, and make the bucket publicly accessible.
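
If you want to script the upload itself, a minimal boto3 sketch could look like this; the bucket name and file name are placeholders.

import boto3

# Illustrative sketch: upload the federation metadata document to the
# restricted bucket. The bucket name and file name are placeholders.
s3 = boto3.client("s3")

s3.upload_file(
    Filename="federation-metadata.xml",
    Bucket="example-federation-metadata-bucket",
    Key="federation-metadata.xml",
)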

Step 2: Launch the CloudFormation template

To create the SAML provider within AWS IAM, this solution uses a custom resource Lambda function, as CloudFormation does not currently offer the ability to create the configuration directly. In this case, I show you how to use the AWS API to create a SAML provider in each of your member accounts, pulling the federation metadata from an S3 bucket owned by the organization master account.

The actual CloudFormation template follows. A sample read-only role has been included, but I encourage you to replace this with IAM roles that make sense for your environment. Save this template to your local workstation, or to your code repository for tracking infrastructure changes (if you need a source code management system, we recommend AWS CodeCommit).


---
AWSTemplateFormatVersion: 2010-09-09
Description: Create SAML provider

Parameters:
  FederationName:
    Type: String
    Description: Name of SAML provider being created in IAM
  FederationBucket:
    Type: String
    Description: Bucket containing federation metadata
  FederationFile:
    Type: String
    Description: Name of file containing the federation metadata

Resources:
  FederatedReadOnlyRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: Federated-ReadOnly
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action: sts:AssumeRoleWithSAML
            Principal:
              Federated: !Sub arn:aws:iam::${AWS::AccountId}:saml-provider/${FederationName}
            Condition:
              StringEquals:
                SAML:aud: https://signin.aws.amazon.com/saml
      Path: '/'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/ReadOnlyAccess

  SAMLProviderCustomResourceLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: 'arn:aws:logs:*:*:log-group:/aws/lambda/*-SAMLProviderCustomResourceLambda-*:*'
              - Effect: Allow
                Action:
                  - iam:CreateSAMLProvider
                  - iam:DeleteSAMLProvider
                Resource: !Sub arn:aws:iam::${AWS::AccountId}:saml-provider/${FederationName}
              - Effect: Allow
                Action:
                  - iam:ListSAMLProviders
                Resource: '*'
              - Effect: Allow
                Action:
                  - s3:GetObject
                Resource: !Sub 'arn:aws:s3:::${FederationBucket}/*'

  CustomResource:
    Type: Custom::CustomResource
    DependsOn:
      - SAMLProviderCustomResourceLambda
      - SAMLProviderCustomResourceLambdaExecutionRole
    Properties:
      ServiceToken: !GetAtt SAMLProviderCustomResourceLambda.Arn

  SAMLProviderCustomResourceLambda:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: index.lambda_handler
      Role: !GetAtt SAMLProviderCustomResourceLambdaExecutionRole.Arn
      Runtime: python3.7
      Timeout: 300
      Environment:
        Variables:
          FEDERATION_NAME: !Ref FederationName
          FEDERATION_BUCKET: !Ref FederationBucket
          FEDERATION_FILE: !Ref FederationFile
      Code:
        ZipFile: |
          import boto3, json, os, urllib.request, ssl, time, traceback


          BUCKET = os.getenv('FEDERATION_BUCKET')
          FILE = os.getenv('FEDERATION_FILE')
          NAME = os.getenv('FEDERATION_NAME')


          class SAMLProvider(object):
              def __init__(self):
                  self.iam_client = boto3.client('iam')
                  self.existing_providers = []
                  self._list_saml_providers()
                  self.s3 = boto3.resource('s3')

              def get_federation_metadata(self):
                  try:
                      self.s3.Bucket(BUCKET).download_file(FILE, '/tmp/' + FILE)
                      handle = open('/tmp/' + FILE)
                      data = handle.read()
                      handle.close()
                      os.remove('/tmp/' + FILE)
                      return data
                  except:
                      traceback.print_exc()
                      raise

              def _list_saml_providers(self):
                  providers = []
                  response = self.iam_client.list_saml_providers()
                  for provider in response['SAMLProviderList']:
                      self.existing_providers.append(provider['Arn'])

              def add_saml_provider(self, name):
                  for arn in self.existing_providers:
                      if arn.split('/')[1] == name:
                          print(name + ' already exists as a provider')
                          return False
                  response = self.iam_client.create_saml_provider(SAMLMetadataDocument=self.get_federation_metadata(), Name=name)
                  print('Create response: ' + str(response))
                  return True

              def delete_saml_provider(self, name):
                  for arn in self.existing_providers:
                      if arn.split('/')[1] == name:
                          response = self.iam_client.delete_saml_provider(SAMLProviderArn=arn)
                          print('Delete response: ' + str(response))

          def send_response(event, context, response_status, response_data):
              response_body = json.dumps({
                  'Status': response_status,
                  'Reason': 'See the details in CloudWatch Log Stream: ' + context.log_stream_name,
                  'PhysicalResourceId': context.log_stream_name,
                  'StackId': event['StackId'],
                  'RequestId': event['RequestId'],
                  'LogicalResourceId': event['LogicalResourceId'],
                  'Data': response_data
              })
              print('ResponseURL: %s', event['ResponseURL'])
              print('ResponseBody: %s', response_body)
              try:
                  opener = urllib.request.build_opener(urllib.request.HTTPHandler)
                  request = urllib.request.Request(event['ResponseURL'], data=response_body.encode())
                  request.add_header('Content-Type', '')
                  request.add_header('Content-Length', len(response_body))
                  request.get_method = lambda: 'PUT'
                  response = opener.open(request)
                  print("Status code: %s", response.getcode())
                  print("Status message: %s", response.msg)
              except:
                  traceback.print_exc()


          def lambda_handler(event, context):
              print(event)
              print(context)
              saml = SAMLProvider()
              try:
                  if event['RequestType'] == 'Create':
                      saml.add_saml_provider(NAME)
                      send_response(event, context, 'SUCCESS', {"Message": "Resource creation successful!"})
                  if event['RequestType'] == 'Update':
                      saml.delete_saml_provider(NAME)
                      time.sleep(10)
                      saml.add_saml_provider(NAME)
                      send_response(event, context, 'SUCCESS', {"Message": "Resource update successful!"})
                  if event['RequestType'] == 'Delete':
                      saml.delete_saml_provider(NAME)
                      send_response(event, context, 'SUCCESS', {"Message": "Resource deletion successful!"})
              except:
                  send_response(event, context, "FAILED", {"Message": "Exception during processing"})
                  traceback.print_exc()

As you review this template, note that a Python Lambda function is embedded in the template body itself. This is a useful way to package code along with your CloudFormation templates. The limit for the length of code inline within a CloudFormation template is fixed, as detailed in the AWS CloudFormation user guide. Code that exceeds this length must be saved to a separate location within an Amazon S3 bucket and referenced from your template. However, the preceding snippet is short enough to be included all in one CloudFormation template.

Step 3: Create the StackSet

With your SAML provider metadata securely uploaded to your Amazon S3 bucket, it’s time to create the StackSet. You can do this in the console as described in the following steps, or script it (see the sketch after these steps).

  1. First, navigate to the CloudFormation console and select StackSets, then Create StackSet.
     
    Figure 2: Creating a new StackSet

    Figure 2: Creating a new StackSet

  2. Select Template is ready, then Upload a template file. Select Choose file to choose the location of the CloudFormation template, then select Next.
     
    Figure 3: Specifying the template details

    Figure 3: Specifying the template details

  3. Enter a value for StackSet name. Then provide the following configuration parameters and select Next:
    1. FederationName: The name of the SAML provider. This is the name of the provider as you see it in IAM once provisioned, and it appears in the ARN of the SAML provider.
    2. FederationFile: The file name within S3.
    3. FederationBucket: The name of the S3 bucket.

     

    Figure 4: Specifying the StackSet parameters

    Figure 4: Specifying the StackSet parameters

  4. In the Account numbers field, enter the list of accounts into which you wish to provision this stack. Select a Region from the drop-down menu, then select any concurrent deployment options that you need. For most customers, the concurrent and failure tolerance options generally remain unmodified.

    Important: When you enter your list of accounts, you must use the 12-digit account number for each destination, without spaces, and separated by commas.

    Because IAM is a global service, there is no need to deploy this to multiple regions. The template must be deployed once per AWS account, although it is managed by a CloudFormation Stack that exists in a single Region.
     

    Figure 5: Selecting your deployment options

    Figure 5: Selecting your deployment options

  5. Depending on your organization’s security posture, you may have chosen to use self-managed permissions. If that is the case, then you must specify an AWS IAM role that is permitted to execute StackSet deployments, as well as an execution role in the destination accounts. For more info, visit the StackSets’ Grant Self-Managed Permissions documentation page.
  6. Review your selections, accept the IAM role creation acknowledgment, and click Submit to create your stacks. Once completed, ensure that the new identity provider is present in each account by visiting the AWS IAM console.
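
As an alternative to the console steps above, the following boto3 sketch creates the StackSet from the saved template and deploys it to two member accounts. The template file name, parameter values, account IDs, and Region are placeholder assumptions.

import boto3

# Illustrative sketch of the StackSet deployment performed in the console
# steps above. The template file, parameter values, account IDs, and Region
# are placeholders.
cfn = boto3.client("cloudformation")

with open("saml-provider.yaml") as template:
    template_body = template.read()

cfn.create_stack_set(
    StackSetName="saml-provider",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Parameters=[
        {"ParameterKey": "FederationName", "ParameterValue": "example-idp"},
        {"ParameterKey": "FederationBucket", "ParameterValue": "example-federation-metadata-bucket"},
        {"ParameterKey": "FederationFile", "ParameterValue": "federation-metadata.xml"},
    ],
)

# Because IAM is a global service, a single Region per account is enough.
cfn.create_stack_instances(
    StackSetName="saml-provider",
    Accounts=["222233334444", "333344445555"],
    Regions=["us-east-1"],
)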

Summary

Creating a SAML provider across multiple AWS accounts enables enterprises to extend their existing authentication infrastructure into the cloud.

You can extend the solution provided here to deploy different federation metadata to AWS accounts based on specific criteria—for example, if your development and staging accounts have a separate identity provider. You can also add simple logic to the Lambda function that I’ve included in the CloudFormation template. Additionally, you might need to create any number of predefined roles within your AWS organization—using the StackSets approach enables a zero-touch means for creating these roles from a central location.

Organizations looking for even stronger controls over multiple accounts should implement this solution with AWS Organizations. Combined with federated access and centrally managed IAM roles, Organizations provides Service Control Policies and a common framework for deploying CloudFormation Stacks across multiple business units.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Rich McDonough

Rich McDonough is a Solutions Architect for Amazon Web Services based in Toronto. His primary focus is on Management and Governance, helping customers scale their use of AWS safely and securely. Before joining AWS two years ago, he specialized in helping migrate customers into the cloud. Rich loves helping customers learn about AWS CloudFormation, AWS Config, and AWS Control Tower.

AWS Artifact service launches new user interface

Post Syndicated from Dhiraj Mehta original https://aws.amazon.com/blogs/security/aws-artifact-service-launches-new-user-interface/

The AWS Artifact service introduces a new user interface (UI) that provides a more intuitive experience for searching and saving AWS compliance reports and accepting agreements. The new UI includes an AWS Artifact home page with information and videos on how to use the AWS Artifact service for your compliance needs. Additionally, the Reports and Agreements consoles now provide a keyword search capability, allowing you to search for the exact artifact you need rather than scrolling through the entire page. The new UI is supported on smartphones, tablets, laptops, and widescreen monitors, and resizes the on-screen content dynamically.

Check out the new AWS Artifact UI and connect with us on our new AWS Artifact Forum for any questions, feedback or suggestions for new features. To learn more about AWS Artifact, please visit our product page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dhiraj Mehta

Dhiraj Mehta is a Senior Technical Program Manager at AWS and product owner of AWS Artifact. He has extensive experience in security, risk, and compliance domains. He received his MBA from University of California, Irvine, and completed his undergrad in Computer Science from Kurukshetra University in India. Outside of work, Dhiraj likes to travel, cook, and play ping-pong.

Spring 2020 SOC 2 Type I Privacy report now available

Post Syndicated from Ninad Naik original https://aws.amazon.com/blogs/security/spring-2020-soc-2-type-i-privacy-report-now-available/

We continue to be customer focused in addressing privacy requirements, enabling you to be confident in how your content is protected while using Amazon Web Services. Our latest SOC 2 Type I Privacy report is now available to demonstrate our privacy compliance commitments to you.

Our spring 2020 SOC 2 Type I Privacy report provides you with a third-party attestation of our systems (services) and the suitability of the design of our privacy controls. The SOC 2 Privacy Trust Principle, developed by the American Institute of CPAs (AICPA), establishes the criteria for evaluating controls related to how personal information is collected, used, retained, disclosed, and disposed of to meet the entity’s objectives. Additional information supporting our SOC 2 Type I report can be found in our Privacy Notice and Customer Agreement documentation.

The scope of our privacy report includes information about how we handle the content that you upload to AWS and how it is protected in all of the services and locations that are in scope for the latest AWS SOC reports. You can download the latest SOC 2 Type I Privacy report through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ninad Naik

Ninad is a Security Assurance Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Ninad holds a Master’s degree in Information Systems from Syracuse University, NY and a Bachelor of Engineering degree in Information Technology from Mumbai University, India. Ninad has 10 years of experience in security assurance and holds ITIL, CISA, CGEIT, and CISM certifications.

Spring 2020 SOC reports now available with 122 services in scope

Post Syndicated from Ashutosh Sawant original https://aws.amazon.com/blogs/security/spring-2020-soc-reports-now-available-122-services-in-scope/

At AWS, our customers’ security is of the highest importance and we continue to provide transparency into our security posture.

We’re proud to deliver the System and Organizational Controls (SOC) 1, 2, and 3 reports to our AWS customers. The SOC program continues to enable our global customer base to maintain confidence in our secured control environments, with a focus on information security, confidentiality, and availability. For the spring 2020 SOC reports, covering the period 10/1/2019 to 3/31/2020, we are excited to announce six new services in scope, for a total of 122 services in scope. Additionally, we have updated how the scope of AWS locations is represented in our SOC reports, to provide better clarity to our customers.

These SOC reports are now available through AWS Artifact in the AWS Management Console. The SOC 3 report can also be downloaded online as a PDF.

Here are the 6 new services in scope (followed by their SDK names):

  • Amazon Chime (chime)
  • AWS Data Exchange (dataexchange)
  • AWS Elemental MediaLive (medialive)
  • AWS Elemental MediaConvert (mediaconvert)
  • AWS Personal Health Dashboard (health)
  • Amazon Textract (textract)

As always, AWS strives to bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS representatives to let us know what additional services you would like to see in scope across any of our compliance programs.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ashutosh Sawant

Ashutosh is a Security Assurance Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Prior to joining AWS, Ashutosh spent over 7 years at Ernst & Young as a Manager in the Risk Advisory Practice. Ashutosh holds a Master’s degree in Information Systems from Northeastern University, Boston and a Bachelor’s degree in Information Technology from Gujarat University, India.

New – Enhanced Amazon Macie Now Available with Substantially Reduced Pricing

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-enhanced-amazon-macie-now-available/

Amazon Macie is a fully managed service that helps you discover and protect your sensitive data, using machine learning to automatically spot and classify data for you.

Over time, Macie customers told us what they like, and what they didn’t. The service team has worked hard to address this feedback, and today I am very happy to share that we are making available a new, enhanced version of Amazon Macie!

This new version has simplified the pricing plan: you are now charged based on the number of Amazon Simple Storage Service (S3) buckets that are evaluated, and the amount of data processed for sensitive data discovery jobs. The new tiered pricing plan has reduced the price by 80%. With higher volumes, you can reduce your costs by more than 90%.

At the same time, we have introduced many new features:

  • Expanded sensitive data discovery, including updated machine learning models for personally identifiable information (PII) detection, and customer-defined sensitive data types using regular expressions.
  • Multi-account support with AWS Organizations.
  • Full API coverage for programmatic use of the service with AWS SDKs and AWS Command Line Interface (CLI).
  • Expanded regional availability to 17 Regions.
  • A new, simplified free tier and free trial to help you get started and understand your costs.
  • A completely redesigned console and user experience.

Macie is now tightly integrated with S3 in the backend, providing more advantages:

  • Enabling S3 data events in AWS CloudTrail is no longer a requirement, further reducing overall costs.
  • There is now continual evaluation of all buckets, issuing security findings for any public buckets, unencrypted buckets, and buckets shared with (or replicated to) an AWS account outside of your Organization.

The anomaly detection features monitoring S3 data access activity previously available in Macie are now in private beta as part of Amazon GuardDuty, and have been enhanced to include deeper capabilities to protect your data in S3.

Enabling Amazon Macie
In the Macie console, I choose Enable Macie. If you use AWS Organizations, you can delegate an AWS account to administer Macie for your Organization.

After it has been enabled, Amazon Macie automatically provides a summary of my S3 buckets in the region, and continually evaluates those buckets to generate actionable security findings for any unencrypted or publicly accessible data, including buckets shared with AWS accounts outside of my Organization.

Below the summary, I see the top findings by type and by S3 bucket. Overall, this page provides a great overview of the status of my S3 buckets.

In the Findings section I have the full list of findings, and I can select them to archive, unarchive, or export them. I can also select one of the findings to see the full information collected by Macie.

Findings can be viewed in the web console and are sent to Amazon CloudWatch Events for easy integration with existing workflow or event management systems, or to be used in combination with AWS Step Functions to take automated remediation actions. This can help meet regulations such as Payment Card Industry Data Security Standard (PCI-DSS), Health Insurance Portability and Accountability Act (HIPAA), General Data Privacy Regulation (GDPR), and California Consumer Protection Act (CCPA).

In the S3 Buckets section, I can search and filter on buckets of interest to create sensitive data discovery jobs across one or multiple buckets to discover sensitive data in objects, and to check encryption status and public accessibility at object level. Jobs can be executed once, or scheduled daily, weekly, or monthly.

For jobs, Amazon Macie automatically tracks changes to the buckets and only evaluates new or modified objects over time. In the additional settings, I can include or exclude objects based on tags, size, file extensions, or last modified date.
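
Sensitive data discovery jobs can also be created programmatically. Here is a minimal boto3 sketch for a one-time job on a single bucket; the job name, account ID, and bucket name are placeholders.

import uuid
import boto3

# Illustrative sketch: create a one-time sensitive data discovery job for a
# single bucket. The job name, account ID, and bucket name are placeholders.
macie = boto3.client("macie2")

macie.create_classification_job(
    clientToken=str(uuid.uuid4()),
    jobType="ONE_TIME",
    name="discover-sensitive-data-example",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["example-bucket"]}
        ]
    },
)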

To monitor my costs, and the use of the free trial, I look at the Usage section of the console.

Creating Custom Data Identifiers
Amazon Macie supports natively the most common sensitive data types, including personally identifying information (PII) and credential data. You can extend that list with custom data identifiers to discover proprietary or unique sensitive data for your business.

For example, companies often have a specific syntax for their employee IDs. A possible syntax is a capital letter that defines whether this is a full-time or part-time employee, followed by a dash, and then eight digits. Possible values in this case are F-12345678 or P-87654321.

To create this custom data identifier, I enter a regular expression (regex) to describe the pattern to match:

[A-Z]-\d{8}

To avoid false positives, I require that the employee keyword is found near the identifier (by default, less than 50 characters apart). I use the Evaluate box to test that this configuration works with sample text, then I select Submit.
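
The same custom data identifier can be created through the Amazon Macie API. A minimal boto3 sketch, with an illustrative name and description:

import boto3

# Illustrative sketch: create the employee ID custom data identifier
# described above, requiring the 'employee' keyword within 50 characters
# of the match. The name and description are placeholders.
macie = boto3.client("macie2")

macie.create_custom_data_identifier(
    name="employee-id",
    description="Employee IDs such as F-12345678 or P-87654321",
    regex=r"[A-Z]-\d{8}",
    keywords=["employee"],
    maximumMatchDistance=50,
)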

Available Now
For Amazon Macie regional availability, please see the AWS Region Table. You can find more information on how to use the new enhanced Macie in the documentation.

This release of Amazon Macie remains optimized for S3. However, anything you can get into S3, permanently or temporarily, in an object format supported by Macie, can be scanned for sensitive data. This allows you to expand the coverage to data residing outside of S3 by pulling data out of custom applications, databases, and third-party services, temporarily placing it in S3, and using Amazon Macie to identify sensitive data.

For example, we’ve made this even easier with RDS and Aurora now supporting snapshots to S3 in Apache Parquet, which is a format Macie supports. Similarly, in DynamoDB, you can use AWS Glue to export tables to S3 which can then be scanned by Macie. With the new API and SDKs coverage, you can use the new enhanced Amazon Macie as a building block in an automated process exporting data to S3 to discover and protect your sensitive data across multiple sources.

Danilo

AWS achieves Spain’s ENS High certification across 105 services

Post Syndicated from Borja Larrumbide original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-certification-across-105-services/

AWS achieved Spain’s Esquema Nacional de Seguridad (ENS) High certification across 105 services in all AWS Regions. To successfully achieve the ENS High certification, BDO España conducted an independent audit and attested that AWS meets confidentiality, integrity, and availability standards. This provides assurance to Spain’s public sector organizations wanting to build secure applications and services on AWS.

Spain’s National Security Framework is regulated under Royal Decree 3/2010, and was developed through close collaboration between Entidad Nacional de Acreditación (ENAC), the Ministry of Finance and Public Administration, and the National Cryptologic Centre (CCN), as well as other administrative bodies.

The following AWS services are ENS High certified across all AWS Regions:

  • Amazon API Gateway
  • Amazon AppStream 2.0
  • Amazon Athena
  • Amazon Chime
  • Amazon Cloud Directory
  • Amazon CloudFront
  • Amazon CloudWatch
  • Amazon CloudWatch Events
  • Amazon CloudWatch Logs
  • Amazon Cognito
  • Amazon Comprehend
  • Amazon Connect
  • Amazon DocumentDB (with MongoDB compatibility)
  • Amazon DynamoDB
  • Amazon Elastic Block Store (Amazon EBS)
  • Amazon Elastic Compute Cloud (Amazon EC2)
  • Amazon Elastic Container Registry (Amazon ECR)
  • Amazon Elastic Container Service (Amazon ECS)
  • Amazon Elastic File System (Amazon EFS)
  • Amazon Elastic Kubernetes Service (Amazon EKS)
  • Amazon ElastiCache
  • Amazon Elasticsearch Service (Amazon ES)
  • Amazon EMR
  • Amazon FSx
  • Amazon GuardDuty
  • Amazon Inspector
  • Amazon Kinesis Data Analytics
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Data Streams
  • Amazon Kinesis Video Streams
  • Amazon MQ
  • Amazon Neptune
  • Amazon Pinpoint
  • Amazon Polly
  • Amazon Redshift
  • Amazon Rekognition
  • Amazon Relational Database Service (Amazon RDS)
  • Amazon Route 53
  • Amazon Route 53 Resolver
  • Amazon Simple Storage Service Glacier
  • Amazon SageMaker
  • Amazon Simple Notification Service (Amazon SNS)
  • Amazon Simple Queue Service (Amazon SQS)
  • Amazon Simple Storage Service (Amazon S3)
  • Amazon Simple Workflow Service (Amazon SWF)
  • Amazon Transcribe
  • Amazon Translate
  • Amazon Virtual Private Cloud (Amazon VPC)
  • Amazon WorkSpaces
  • AWS Amplify
  • AWS AppSync
  • AWS Artifact
  • AWS Auto Scaling
  • AWS Backup
  • AWS Batch
  • AWS Certificate Manager (ACM)
  • AWS CloudFormation
  • AWS CloudHSM
  • AWS CloudTrail
  • AWS CodeBuild
  • AWS CodeCommit
  • AWS CodeDeploy
  • AWS CodePipeline
  • AWS CodeStar
  • AWS Config
  • AWS Database Migration Service (AWS DMS)
  • AWS DataSync
  • AWS Direct Connect
  • AWS Directory Service
  • AWS Elastic Beanstalk
  • AWS Elemental MediaConnect
  • AWS Elemental MediaConvert
  • AWS Elemental MediaLive
  • AWS Firewall Manager
  • AWS Global Accelerator
  • AWS Glue
  • AWS Identity and Access Management (IAM)
  • AWS IoT
  • AWS Key Management Service (AWS KMS)
  • AWS Lambda
  • AWS License Manager
  • AWS Managed Services
  • AWS OpsWorks for CM
  • AWS OpsWorks Stacks
  • AWS Organizations
  • AWS Outposts
  • AWS Secrets Manager
  • AWS Security Hub
  • AWS Server Migration Service (AWS SMS)
  • AWS Serverless Application Repository
  • AWS Service Catalog
  • AWS Shield
  • AWS Single Sign-On
  • AWS Snowball
  • AWS Snowball Edge
  • AWS Snowmobile
  • AWS Storage Gateway
  • AWS Systems Manager
  • AWS Transfer for SFTP
  • AWS Transit Gateway
  • AWS Trusted Advisor
  • AWS WAF
  • AWS X-Ray
  • Elastic Load Balancing
  • VM Import/Export

For more information about ENS High, see the AWS Compliance page Esquema Nacional de Seguridad High. To view which services are covered, see the ENS High tab on the AWS Services in Scope by Compliance Program page. You may download the Esquema Nacional de Seguridad (ENS) Certificate from AWS Artifact in the AWS Console or from the Compliance page Esquema Nacional de Seguridad High.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Borja Larrumbide

Borja is a Security Assurance Manager for AWS in Spain and Portugal. He received a bachelor’s degree in Computer Science from Boston University (USA). Since then, he has worked in companies such as Microsoft and BBVA, where he has performed in different roles and sectors. Borja is a seasoned security assurance practitioner with many years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

Easily control the naming of individual IAM role sessions

Post Syndicated from Derrick Oigiagbe original https://aws.amazon.com/blogs/security/easily-control-naming-individual-iam-role-sessions/

AWS Identity and Access Management (IAM) now has a new sts:RoleSessionName condition element for the AWS Security Token Service (AWS STS) that makes it easy for AWS account administrators to control the naming of individual IAM role sessions. IAM roles help you grant access to AWS services and resources by using dynamically generated short-term credentials. Each instantiation of an IAM role, and the associated set of short-term credentials, is known as an IAM role session. Each IAM role session is uniquely identified by a role session name. You can now use the new condition to control how IAM principals and applications name their role sessions when they assume an IAM role, and then rely on the role session name to easily track their actions when viewing AWS CloudTrail logs.

How do you name a role session?

There are different ways to name a role session, and it depends on the method used to assume the IAM role. In some cases, AWS sets the role session name on your behalf. For example, for Amazon Elastic Compute Cloud (Amazon EC2) instance profiles, AWS sets the role session name to the instance profile ID. When you use the AssumeRoleWithSAML API to assume an IAM role, AWS sets the role session name value to the attribute provided by the identity provider, which your administrator defined. In other cases, you provide the role session name when assuming the IAM role. For example, when assuming an IAM role with APIs such as AssumeRole or AssumeRoleWithWebIdentity, the role session name is a required input parameter that you set when making the API request.
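
To make the second case concrete, here is a minimal Python (boto3) sketch showing where the role session name is set when you call these APIs directly. The role ARN, session name, and token below are placeholders, not values from this post.

# Minimal sketch: RoleSessionName is a required input parameter for these STS calls.
# The role ARN, session name, and token are placeholders.
import boto3

sts = boto3.client("sts")

# When you call AssumeRole directly, you choose the role session name.
sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/example-role",
    RoleSessionName="example-session",
)

# The same is true when you call AssumeRoleWithWebIdentity.
sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/example-role",
    RoleSessionName="example-session",
    WebIdentityToken="<token-from-your-identity-provider>",
)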

What is a condition element?

A condition is an optional IAM policy element. You can use a condition to specify the circumstances under which the IAM policy grants or denies permissions. A condition includes a condition key, operator, and value for the condition.

There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, specific EC2 actions support the ec2:InstanceType condition. All AWS service actions support global conditions.

Now that I’ve explained the meaning of a role session name and the meaning of a condition element in an IAM policy, let me introduce the new condition, sts:RoleSessionName.

sts:RoleSessionName condition

sts:RoleSessionName is a service-specific condition that you use with the AssumeRole API action in an IAM policy to control what is set as the role session name. You can use any string operator, such as StringLike, with this condition.

Condition key: sts:RoleSessionName
Description: Uniquely identifies a session when IAM principals, federated identities, and applications assume the same IAM role.
Operator(s): All string operators
Value: String of upper-case and lower-case alphanumeric characters with no spaces. It can include underscores or any of the following characters: =,.@-. IAM policy element variables can be set as values.
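
If you generate role session names programmatically, you can pre-validate candidate names before calling AssumeRole. The following Python sketch uses a pattern based on my reading of the value constraints above; confirm it against the current STS API reference before relying on it.

import re

# Assumed pattern for a role session name: 2-64 characters drawn from
# letters, digits, underscores, and = , . @ - (based on the constraints above;
# confirm against the STS AssumeRole API reference).
SESSION_NAME_RE = re.compile(r"[\w=,.@-]{2,64}")

def is_valid_role_session_name(name: str) -> bool:
    """Return True if the candidate looks like an acceptable role session name."""
    return SESSION_NAME_RE.fullmatch(name) is not None

assert is_valid_role_session_name("john_s")
assert not is_valid_role_session_name("john s")  # spaces are not allowed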

In this post, I will walk you through two examples of how to use the sts:RoleSessionName condition. In the first example, you will learn how to require IAM users to set their aws:username as their role session name when they assume an IAM role in your AWS account. In the second example, you will learn how to require IAM principals to choose from a pre-selected set of role session names when they assume an IAM role in your AWS account.

The examples shared in this post describe a scenario in which you have pricing data that is stored in an Amazon DynamoDB database in your AWS account, and you want to share the pricing data with members from your marketing department, who are in a different AWS account. In addition, you want to use your AWS CloudTrail logs to track the activities of members from the marketing department whenever they access the pricing data. This post will show you how to achieve this by doing the following:

  1. Dedicate an IAM role in your AWS account for the marketing department.
  2. Define the role trust policy for the IAM role, to specify who can assume the IAM role.
  3. Use the new sts:RoleSessionName condition in the role trust policy to define the allowed role session name values for the dedicated IAM role.

When members from the marketing department attempt to assume the IAM role in your AWS account, AWS will verify that their role session name complies with the IAM role trust policy before authorizing the assume-role action. The new sts:RoleSessionName condition gives you control of the role session name. With this control, when you view the AWS CloudTrail logs, you can rely on the role session name for any of the following information:

  • To identify the IAM principal or application that assumed an IAM role.
  • To understand why the IAM principal or application assumed an IAM role.
  • To track the actions performed by the IAM principal or application with the assumed IAM role.

Example 1 – Require IAM users to set their aws:username as their role session name when they assume an IAM role in your AWS account

When an IAM user assumes an IAM role in your AWS account, you can require them to set their aws:username as the role session name. With this requirement, you can rely on the role session name to identify the IAM user who performed an action with the IAM role.

This example continues the scenario of sharing pricing data with members of the marketing department within your organization, who are in a different AWS account. John is a member of the marketing department; he is an IAM user in the marketing AWS account, and his aws:username is john_s. For John to access the pricing data in your AWS account, you first create a dedicated IAM role for the marketing department, called marketing. John will assume the marketing IAM role to access the pricing information in your AWS account.

Next, you establish a two-way trust between the marketing AWS account and your AWS account. The administrator of the marketing AWS account will need to grant John sts:AssumeRole permission with an IAM policy, so that John can assume the marketing IAM role in your AWS account. The following is a sample policy to grant John assume-role permission. Be sure to replace <AccountNumber> with your account number.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<AccountNumber>:role/marketing"
        }
    ]
}
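
If the administrator of the marketing AWS account prefers to script this step, a boto3 sketch along the following lines could attach the permission as an inline policy on John’s IAM user. The user name and policy name are illustrative, and <AccountNumber> is your account number, as above.

import json
import boto3

iam = boto3.client("iam")  # credentials for the marketing AWS account

assume_marketing_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<AccountNumber>:role/marketing",  # replace <AccountNumber>
        }
    ],
}

# Attach the assume-role permission as an inline policy on John's IAM user.
iam.put_user_policy(
    UserName="john_s",
    PolicyName="AllowAssumeMarketingRole",  # illustrative policy name
    PolicyDocument=json.dumps(assume_marketing_role_policy),
)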

You then create a role trust policy for the marketing IAM role, which permits members of the marketing department to assume the IAM role. The following is a sample policy to create a role trust policy for the marketing IAM role. Be sure to replace <AccountNumber> with your account number.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AccountNumber>:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "sts:RoleSessionName": "${aws:username}"
                }
            }
        }
    ]
}
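
If you create the marketing IAM role programmatically, the trust policy above can be passed to CreateRole. The following boto3 sketch assumes you have replaced <AccountNumber> as in the policy above.

import json
import boto3

iam = boto3.client("iam")  # credentials for the AWS account that owns the pricing data

# The role trust policy shown above, expressed as a Python dict.
marketing_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::<AccountNumber>:root"},  # replace <AccountNumber>
            "Action": "sts:AssumeRole",
            "Condition": {"StringLike": {"sts:RoleSessionName": "${aws:username}"}},
        }
    ],
}

iam.create_role(
    RoleName="marketing",
    AssumeRolePolicyDocument=json.dumps(marketing_trust_policy),
    Description="Marketing department role; session name must match aws:username",
)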

In the role trust policy above, you use the sts:RoleSessionName condition to ensure that members of the marketing department set their aws:username as their role session name when they assume the marketing IAM role. If John attempts to assume the marketing IAM role and does not set his role session name to john_s, then AWS will not authorize the request. When John sets the role session name to his aws:username, AWS will permit him to assume the marketing IAM role. The following is a sample CLI command to assume an IAM role. Replace <AccountNumber> with your account number.


aws sts assume-role --role-arn arn:aws:iam::<AccountNumber>:role/marketing --role-session-name john_s

In the AWS CLI command above, John assumes the marketing IAM role and sets the role session name to john_s. John then calls the get-caller-identity API to verify that he assumed the marketing IAM role. The following is confirmation that John successfully assumed the marketing IAM role.


{
    "UserId": " AIDACKCEVSQ6C2EXAMPLE:john_s",
    "Account": "<AccountNumber>",
    "Arn": "arn:aws:sts::<AccountNumber>:assumed-role/marketing/john_s"
}
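
A boto3 equivalent of the two CLI calls above, which also shows how the temporary credentials from the marketing session are used, might look like the following sketch. Replace <AccountNumber> with your account number.

import boto3

sts = boto3.client("sts")  # John's IAM user credentials in the marketing AWS account

# Assume the marketing role, setting the required role session name (John's aws:username).
response = sts.assume_role(
    RoleArn="arn:aws:iam::<AccountNumber>:role/marketing",  # replace <AccountNumber>
    RoleSessionName="john_s",
)
creds = response["Credentials"]

# Build a new STS client from the temporary credentials and verify the session identity.
marketing_sts = boto3.client(
    "sts",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(marketing_sts.get_caller_identity()["Arn"])
# Expected: arn:aws:sts::<AccountNumber>:assumed-role/marketing/john_s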

AWS CloudTrail captures any action that John performs with the marketing IAM role, and you can easily identify John’s sessions in your AWS CloudTrail logs by searching for any Amazon Resource Name (ARN) with John’s aws:username (which is john_s) as the role session name. The following is an example of the AWS CloudTrail event details that show the role session name, where <AccountNumber> represents your account number.



"assumedRoleUser": {
            "assumedRoleId": "AIDACKCEVSQ6C2EXAMPLE:john_s",
            "arn": "arn:aws:sts::<AccountNumber>:assumed-role/marketing/john_s"
        }
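
If you prefer to query these sessions programmatically instead of searching the CloudTrail console, a sketch like the following can help. It assumes that the Username lookup attribute resolves to the role session name for assumed-role events; verify that assumption against your own trail data.

from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up events recorded in the last day for the role session name john_s.
# Assumption: for assumed-role activity, the Username lookup attribute
# surfaces the role session name; confirm this against your own events.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "john_s"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"])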

Example 2 – Require IAM principals to choose from a pre-selected set of role session names when they assume an IAM role in your AWS account

You can also define the acceptable role session names that an IAM principal or application can use when they assume an IAM role in your AWS account. With this requirement, you ensure that IAM principals and applications that assume IAM roles in your AWS account use a pre-approved role session name that you can easily understand.

Expanding on the previous example, in the following scenario you have a new AWS account with an Amazon DynamoDB database that stores competitive analysis data. You do not want members of the marketing department to have direct access to this new AWS account. You will achieve this by requiring your marketing partners to first assume the marketing IAM role in your other AWS account (the one with the pricing information) and, from that account, assume the Analyst IAM role in the new AWS account to access the competitive analysis data. You also want your marketing partners to select from a pre-defined set of role session names ("marketing-campaign", "product-development", and "other") that identify their reason for accessing the competitive analysis data.

First, you establish a two-way trust. You grant the marketing IAM role sts:AssumeRole permission with an IAM policy. The following is a sample policy to grant the marketing IAM role assume-role permission. Be sure to replace <AccountNumber> with your account number.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<AccountNumber>:role/Analyst"
        }
    ]
}
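
As with the user policy in the first example, this permission can be attached in code. A brief boto3 sketch follows; the policy name is illustrative, and <AccountNumber> is the account that owns the Analyst role.

import json
import boto3

iam = boto3.client("iam")  # credentials for the account that owns the marketing role

assume_analyst_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<AccountNumber>:role/Analyst",  # replace <AccountNumber>
        }
    ],
}

# Attach the assume-role permission as an inline policy on the marketing role.
iam.put_role_policy(
    RoleName="marketing",
    PolicyName="AllowAssumeAnalystRole",  # illustrative policy name
    PolicyDocument=json.dumps(assume_analyst_role_policy),
)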

Next, you create a role trust policy for the Analyst IAM role. In the role trust policy, you set the marketing IAM role as the Principal, to restrict who can access the Analyst IAM role. Then you use the sts:RoleSessionName condition to define the acceptable role session names: marketing-campaign, product-development and other. The following is a role trust policy to limit the list of acceptable role session names. Replace <AccountNumber> with your account number.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AccountNumber>:role/marketing"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "sts:RoleSessionName": [
                        "marketing-campaign",
                        "product-development",
                        "other"
                    ]
                }
            }
        }
    ]
}
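
Any session name outside this list is rejected at assume-role time. The following boto3 sketch shows what that failure looks like; the session name used here is deliberately not in the approved list.

import boto3
from botocore.exceptions import ClientError

sts = boto3.client("sts")  # called with the marketing role's temporary credentials

try:
    sts.assume_role(
        RoleArn="arn:aws:iam::<AccountNumber>:role/Analyst",  # replace <AccountNumber>
        RoleSessionName="weekend-project",  # not one of the approved session names
    )
except ClientError as error:
    # Expect an access-denied error, because the trust policy's
    # sts:RoleSessionName condition is not satisfied.
    print(error.response["Error"]["Code"])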

If John from the marketing department wants to access the competitive analysis data, and he has already assumed the marketing IAM role as shown in example 1, he can use that role to assume the Analyst IAM role in the new AWS account. For AWS to authorize the assume-role request, he must set the role session name to one of the pre-defined values when he assumes the Analyst IAM role. The following is a sample CLI command to assume the Analyst IAM role. Replace <AccountNumber> with your account number.


aws sts assume-role --role-arn arn:aws:iam::<AccountNumber>:role/Analyst --role-session-name marketing-campaign

In the CLI command above, John assumes the Analyst IAM role by using the marketing IAM role, and he sets the role session name to marketing-campaign, which is an allowed role session name. John then calls the get-caller-identity API to verify that he successfully assumed the Analyst IAM role. The following output confirms that the marketing IAM role successfully assumed the Analyst IAM role with the role session name marketing-campaign.


{
    "UserId": " AIDACKCEVSQ6C2EXAMPLE:marketing-campaign",
    "Account": "<AccountNumber>",
    "Arn": "arn:aws:sts::<AccountNumber>:assumed-role/Analyst/marketing-campaign"
}
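
The same role chaining can be done in boto3. The sketch below starts from the temporary credentials of the marketing session (obtained as in example 1) and uses them for the second AssumeRole call with an approved session name.

import boto3

# Temporary credentials from the earlier marketing-role session (see example 1).
marketing_sts = boto3.client(
    "sts",
    aws_access_key_id="<marketing-session-access-key>",
    aws_secret_access_key="<marketing-session-secret-key>",
    aws_session_token="<marketing-session-token>",
)

# Chain into the Analyst role, using one of the approved session names.
analyst_session = marketing_sts.assume_role(
    RoleArn="arn:aws:iam::<AccountNumber>:role/Analyst",  # replace <AccountNumber>
    RoleSessionName="marketing-campaign",
)
print(analyst_session["AssumedRoleUser"]["Arn"])
# Expected: arn:aws:sts::<AccountNumber>:assumed-role/Analyst/marketing-campaign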

AWS CloudTrail captures any action performed with the Analyst IAM role. By viewing the role session names in your AWS CloudTrail logs, you can easily identify the reasons why your marketing partners accessed the competitive analysis data.


"assumedRoleUser": {
            "assumedRoleId": "AIDACKCEVSQ6C2EXAMPLE:marketing-campaign",
            "arn": "arn:aws:sts::<AccountNumber>:assumed-role/Analyst/marketing-campaign"
        }

Conclusion

In this post, I showed how AWS account administrators can use the sts:RoleSessionName condition to control how IAM principals name their role sessions when they assume an IAM role. This control gives you increased confidence to rely on the role session name, when viewing AWS CloudTrail logs, to identify who performed an action with an IAM role, or to get additional context about why an IAM principal assumed an IAM role.

For more information about the sts:RoleSessionName condition, and for policy examples, see Available Keys for AWS STS in the AWS IAM User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Identity and Access Management forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Derrick Oigiagbe

Derrick is a Senior Product Manager for the Identity and Access Management service at AWS. Prior to his career at Amazon, he received his MBA from Carnegie Mellon University’s Tepper School of Business. Derrick spent his early career as a technology consultant for Summa Technologies (recently acquired by CGI). In his spare time, Derrick enjoys playing soccer and traveling.