Tag Archives: AWS Security Hub

Monitoring AWS Certificate Manager Private CA with AWS Security Hub

Post Syndicated from Anthony Pasquariello original https://aws.amazon.com/blogs/security/monitoring-aws-certificate-manager-private-ca-with-aws-security-hub/

Certificates are a vital part of any security infrastructure because they allow a company’s internal or external facing products, like websites and devices, to be trusted. To deploy certificates successfully and at scale, you need to set up a certificate authority hierarchy that provisions and issues certificates. You also need to monitor this hierarchy closely, looking for any activity that occurs within your infrastructure, such as creating or deleting a root certificate authority (CA). You can achieve this using AWS Certificate Manager (ACM) Private Certificate Authority (CA) with AWS Security Hub.

AWS Certificate Manager (ACM) Private CA is a managed private certificate authority service that extends ACM certificates to private certificates. With private certificates, you can authenticate resources inside an organization. Private certificates allow entities like users, web servers, VPN users, internal API endpoints, and IoT devices to prove their identity and establish encrypted communications channels. With ACM Private CA, you can create complete CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating your own certificate authority.

AWS Security Hub provides a comprehensive view of your security state within AWS and your compliance with security industry standards and best practices. Security Hub centralizes and prioritizes security and compliance findings from across AWS accounts, services, and supported third-party partners to help you analyze your security trends and identify the highest priority security issues.

In this example, we show how to monitor your root CA and generate a security finding in Security Hub if your root is used to issue a certificate. Following best practices, the root CA should be used rarely and only to issue certificates under controlled circumstances, such as during a ceremony to create a subordinate CA. Issuing a certificate from the root at any other time is a red flag that should be investigated by your security team. This will show up as a finding in Security Hub indicated by ‘ACM Private CA Certificate Issuance.’

Example scenario

For highly privileged actions within an IT infrastructure, it’s crucial that you use the principle of least privilege when allowing employee access. To ensure least privilege is followed, you should track highly sensitive actions using monitoring and alerting solutions. Highly sensitive actions should only be performed by authorized personnel. In this post, you’ll learn how to monitor activity that occurs within ACM Private CA, such as creating or deleting a root CA, using AWS Security Hub. In this example scenario, we cover a highly sensitive action within an organization building a private certificate authority hierarchy using ACM Private CA:

Creation of a subordinate CA that is signed by the root CA:

Creating a CA certificate is a privileged action. Only authorized personnel within the CA Hierarchy Management team should create CA certificates. Certificate authorities can sign private certificates that allow entities to prove their identity and establish encrypted communications channels.

Architecture overview

This solution requires some background information about the example scenario. In the example, the organization has the following CA hierarchy: root CA → subordinate CA → end entity certificates. To learn how to build your own private certificate infrastructure, see this post.

Figure 1: An example of a certificate authority hierarchy

There is one root CA and one subordinate CA. The subordinate CA issues end entity certificates (private certificates) to internal applications.

To use the test solution, you will first deploy a CloudFormation template that sets up an Amazon CloudWatch Events rule and a Lambda function. Then, you will assume the persona of a security or certificate administrator within the example organization who has the ability to create certificate authorities within ACM Private CA.

Figure 2: Architecture diagram of the solution

The architecture diagram in Figure 2 outlines the entire example solution. At a high level, this architecture enables you to monitor activity within ACM Private CA in Security Hub. The components are explained as follows:

  1. Administrators within your organization have the ability to create certificate authorities and provision private certificates.
  2. Amazon CloudWatch Events tracks API calls using ACM Private CA as a source.
  3. Each CloudWatch Event triggers a corresponding AWS Lambda function that is also deployed by the CloudFormation template. The Lambda function reads the event details and formats them into a finding in the AWS Security Finding Format (ASFF).
  4. Findings are generated in AWS Security Hub by the Lambda function for your security team to monitor and act on.

This post assumes you have administrative access to the resources used, such as ACM Private CA, Security Hub, CloudFormation, and Amazon Simple Storage Service (Amazon S3). We also cover how to remediate by practicing the principle of least privilege, and what that looks like within the example scenario.

Deploy the example solution

First, make sure that AWS Security Hub is turned on, as it isn’t on by default. If you haven’t used the service yet, go to the Security Hub landing page within the AWS Management Console, select Go to Security Hub, and then select Enable Security Hub. See documentation for more ways to enable Security Hub.
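
If you prefer to script this step, Security Hub can also be enabled programmatically. Here is a minimal sketch using boto3, assuming credentials for the target account and Region; it is an illustration, not a required part of this walkthrough:

import boto3

securityhub = boto3.client("securityhub")

# Turns on Security Hub in the client's current Region
securityhub.enable_security_hub()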

Next, launch the CloudFormation template. Here’s how:

  1. Log in to the AWS Management Console and select AWS Region us-east-1 (N. Virginia) for this example deployment.
  2. Make sure you have the necessary privileges to create resources, as described in the “Architecture overview” section.
  3. Set up the sample deployment by selecting Launch Stack below.

The example solution must be launched in an AWS Region where ACM Private CA and Security Hub are enabled. The Launch Stack button defaults to us-east-1. If you want to launch in another Region, download the CloudFormation template from the GitHub repository linked at the end of this post.

Select this image to open a link that starts building the CloudFormation stack

Now that you’ve deployed the CloudFormation stack, we’ll help you understand how we’ve utilized AWS Security Finding Format (ASFF) in the Lambda functions.

How to create findings using AWS Security Finding Format (ASFF)

Security Hub consumes, aggregates, organizes, and prioritizes findings from AWS security services and from third-party product integrations. Security Hub receives these findings using a standard findings format called the AWS Security Finding Format (ASFF), thus eliminating the need for time-consuming data conversion efforts. Then it correlates ingested findings across products to prioritize the most important ones.

Below you can find an example input that shows how to use ASFF to populate findings in AWS Security Hub for the creation of a CA certificate. We placed this information in the Lambda function Certificate Authority Creation that was deployed in the CloudFormation stack.


{
 "SchemaVersion": "2018-10-08",
 "Id": region + "/" + accountNum + "/" + caCertARN,
 "ProductArn": "arn:aws:securityhub:" + region + ":" + accountNum + ":product/" + accountNum + "/default",
 "GeneratorId": caCertARN,
 "AwsAccountId": accountNum,
 "Types": [
     "Unusual Behaviors"
 ],
 "CreatedAt": date,
 "UpdatedAt": date,
 "Severity": {
     "Normalized": 60
 },
 "Title": "Private CA Certificate Creation",
 "Description": "A Private CA certificate was issued in AWS Certificate Manager Private CA",
 "Remediation": {
     "Recommendation": {
         "Text": "Verify this CA certificate creation was taken by a privileged user",
         "Url": "https://docs.aws.amazon.com/acm-pca/latest/userguide/ca-best-practices.html#minimize-root-use"
     }
 },
 "ProductFields": {
     "ProductName": "ACM PCA"
 },
 "Resources": [
     {
         "Details": {
             "Other": {
                "CAArn": CaArn,
                "CertARN": caCertARN
             }
         },
         "Type": "Other",
         "Id": caCertARN,
         "Region": region,
         "Partition": "aws"
    }
 ],
 "RecordState": "ACTIVE"
}
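
Once the ASFF object is assembled, the Lambda function imports it into Security Hub with the BatchImportFindings API. The following is a minimal sketch of that call with boto3; the function name is illustrative, and it assumes the dictionary above is passed in as finding:

import boto3

securityhub = boto3.client("securityhub")

def import_finding(finding):
    # BatchImportFindings accepts up to 100 findings per call
    response = securityhub.batch_import_findings(Findings=[finding])
    if response["FailedCount"] > 0:
        # Surface any import problems in the Lambda logs
        print("Failed to import finding:", response["FailedFindings"])
    return response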

Below, we summarize some important fields within the finding generated by ASFF. We set these fields within the Lambda function in the CloudFormation template you deployed for the example scenario, so you don't have to set them yourself.

ProductARN

AWS services that are not yet integrated with Security Hub are treated similarly to third-party findings. Therefore, the company-id must be the account ID, and the product-id must be the reserved word "default", as shown below.


"ProductArn": "arn:aws:securityhub:" + region + ":" + accountNum + ":product/" + accountNum + "/default",

Severity

Assigning the correct severity is important for ensuring useful findings. This example scenario sets a normalized severity of 60 within the generated ASFF, as shown above. For future findings, you can determine the appropriate severity by comparing the score to the labels listed in Table 1; a small helper that performs this mapping follows the list below.

Table 1: Severity labels in AWS Security Finding Format

Severity label     Severity score range
Informational      0
Low                1–39
Medium             40–69
High               70–89
Critical           90–100
  • Informational: No issue was found.
  • Low: Findings with issues that could result in future compromises, such as vulnerabilities, configuration weaknesses, or exposed passwords.
  • Medium: Findings with issues that indicate an active compromise, but no indication that an adversary has completed their objectives. Examples include malware activity, hacking activity, or unusual behavior detection.
  • Critical: Findings associated with an adversary completing their objectives. Examples include data loss or compromise, or a denial of service.
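
To make the score-to-label mapping concrete, here is a small helper sketch; it is illustrative only and not part of the deployed solution:

def severity_label(score: int) -> str:
    # Maps a normalized severity score (0-100) to the labels in Table 1
    if score == 0:
        return "INFORMATIONAL"
    if score <= 39:
        return "LOW"
    if score <= 69:
        return "MEDIUM"
    if score <= 89:
        return "HIGH"
    return "CRITICAL"

# The finding above sets "Normalized": 60, which maps to MEDIUM
assert severity_label(60) == "MEDIUM"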

Remediation

This provides the remediation options for a finding. In our example, we link to the ACM Private CA best practices documentation on minimizing use of the root CA, which explains how to fix the underlying issue.

Types

These indicate one or more finding types in the format of namespace/category/classifier that classify a finding. Finding types should match against the Types Taxonomy for ASFF.

To learn more about ASFF parameters, see ASFF syntax documentation.

Trigger a Security Hub finding

Figure 1 above shows the CA hierarchy you are building in this post. The CloudFormation template you deployed created the root CA. The following steps will walk you through signing the root CA and creating a CA certificate for the subordinate CA. The architecture we deployed will notify the security team of these actions via Security Hub.

First, we will activate the root CA and install the root CA certificate. This step signs the root CA Certificate Signing Request (CSR) with the root CA’s private key.

  1. Navigate to the ACM Private CA service. Under the Private certificate authority section, select Private CAs.
  2. Under Actions, select Install CA certificate.
  3. Set the validity period and signature algorithm for the root CA certificate. In this case, leave the default values for both fields as shown in Figure 3, and then select Next.

    Figure 3: Specify the root CA certificate parameters

  4. Under Review, generate, and install root CA certificate, select Confirm and install. This creates the root CA certificate.
  5. You should now see the root CA within the console with a status of Active, as shown in Figure 4 below.

    Figure 4: The root CA is now active

Now we will create a subordinate CA and install a CA certificate onto it.

  1. Select the Create CA button.
  2. Under Select the certificate authority (CA) type, select Subordinate CA, and then select Next.
  3. Configure the subordinate CA parameters by entering the following values (or any values that make sense for the CA hierarchy you’re trying to build) in the fields shown in Figure 5, and then select Next.

    Figure 5: Configure the certificate authority

  4. Under Configure the certificate authority (CA) key algorithm, select RSA 2048, and then select Next.
  5. Check Enable CRL distribution, and then, under Create a new S3 bucket, select No. Under S3 bucket name, enter acm-private-ca-crl-bucket-<account-number>, and then select Next.
  6. Under Configure CA permissions, select Authorize ACM to use this CA for renewals, and then select Next.
  7. To create the subordinate CA, review and accept the conditions described at the bottom of the page, select the check box if you agree to the conditions, and then select Confirm and create.

Now, you need to activate the subordinate CA and install the subordinate certificate authority certificate. This step allows you to sign the subordinate CA Certificate Signing Request (CSR) with the root CA’s private key.

  1. Select Get started to begin the process, as shown in Figure 6.

    Figure 6: Begin installing the root certificate authority certificate

  2. Under Install subordinate CA certificate, select ACM Private CA, and then select Next. This starts the process of signing the subordinate CA cert with the root CA that was created earlier.
  3. Set the parent private CA with the root CA that was created, the validity period, the signature algorithm, and the path length for the subordinate CA certificate. In this case, leave the default values for validity period, signature algorithm, and path length fields as shown in Figure 7, and then select Next.

    Figure 7: Specify the subordinate CA certificate parameters

  4. Under Review, select Generate. This creates the subordinate CA certificate.
  5. You should now see the subordinate CA within the console with a status of Active.

How to view Security Hub findings

Now that you have created a root CA and a subordinate CA under the root, you can review the findings from the perspective of your security team, who is notified of the findings within Security Hub. In the example scenario, creating the CA certificates triggers a CloudWatch Events rule created by the CloudFormation template.

This event rule uses the native ACM Private CA CloudWatch Events integration and matches ACM Private CA Certificate Issuance events for the root CA's ARN. The event pattern is shown below.


{
  "detail-type": [
    "ACM Private CA Certificate Issuance"
  ],
  "resources": [
    "arn:aws:acm-pca:us-east-2:xxxxxxxxxxx:certificate-authority/xxxxxx-xxxx-xxx-xxxx-xxxxxxxx"
  ],
  "source": [
    "aws.acm-pca"
  ]
}

When a CA certificate is issued from the root CA, the resulting event triggers the Lambda function, which formats the details into ASFF and generates the finding in Security Hub.
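
A handler along the following lines could receive that event and generate the finding. This is a hedged sketch rather than the deployed function: make_asff_finding is a hypothetical helper standing in for the ASFF construction shown earlier, and it assumes the event's resources list carries the CA ARN and, where present, the issued certificate ARN:

import boto3

securityhub = boto3.client("securityhub")

def handler(event, context):
    # The issued certificate ARN, if listed, contains "/certificate/";
    # the CA ARN does not
    ca_arn = next((r for r in event["resources"] if "/certificate/" not in r), None)
    cert_arn = next((r for r in event["resources"] if "/certificate/" in r), ca_arn)
    # Hypothetical helper wrapping the ASFF dictionary shown earlier
    finding = make_asff_finding(event["region"], event["account"], ca_arn, cert_arn)
    securityhub.batch_import_findings(Findings=[finding])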

To assess a finding in Security Hub

  1. Navigate to Security Hub. On the left side of the Security Hub page, select Findings to view the finding generated from the Lambda function. Filter by Title EQUALS Private CA Certificate Creation, as shown in Figure 8.

    Figure 8: Filter the findings in Security Hub

  2. Select the finding’s title (CA Certificate Creation) to open it. You will see the details generated from this finding:
    
    Severity: Medium
    Company: Personal
    Title: CA Certificate Creation
    Remediation: Verify this certificate was taken by a privileged user
    	

    The finding has a Medium severity level because we set the normalized severity to 60. This could indicate an active compromise, but there is no indication that an adversary has completed their objectives. In the hypothetical example covered earlier, a user has provisioned a CA certificate from the root CA, which should only be provisioned under controlled circumstances, such as during a ceremony to create a subordinate CA. Issuing a certificate from the root at any other time is a red flag that should be investigated by the security team. The remediation attribute in the finding shown here links to security best practices for ACM Private CA.

    Figure 9: Remediation tab in Security Hub Finding

  3. To see more details about the finding, in the upper right corner of the console, under CA Certificate Creation, select the Finding ID link, as shown in Figure 10.

    Figure 10: Select the Finding ID link to learn more about the finding

  4. The Finding JSON box will appear. Scroll down to Resources > Details > Other, as shown in Figure 11. The CAArn is the Root Certificate Authority that provisioned the certificate. The CertARN is the certificate that it provisioned.

    Figure 11: Details about the finding

Cleanup

To avoid costs associated with the test CA hierarchy created and other test resources generated from the CloudFormation template, ensure that you clean up your test environment. Take the following steps:

  1. Disable and delete the CA hierarchy you created (including root and subordinate CAs, as well as the additional subordinate CAs created).
  2. Delete the CloudFormation template.

Any account new to ACM Private CA can try the service for 30 days at no charge for operation of the first private CA created in the account. You pay for the certificates you issue during the trial period.

Next steps

In this post, you learned how to create a pipeline from an ACM Private CA action to Security Hub findings. There are many other API calls that you can monitor and send to Security Hub in the same way.

To generate an ASFF object for one of these API calls, follow the steps from the ASFF section above. For more details, see the documentation. For the latest updates and changes to the CloudFormation template and resources within this post, please check the GitHub repository.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Anthony Pasquariello

Anthony is an Enterprise Solutions Architect based in New York City. He provides technical consultation to customers during their cloud journey, especially around security best practices. He has an MS and BS in electrical & computer engineering from Boston University. In his free time, he enjoys ramen, writing non-fiction, and philosophy.

Author

Christine Samson

Christine is an AWS Solutions Architect based in New York City. She provides customers with technical guidance for emerging technologies within the cloud, such as IoT, Serverless, and Security. She has a BS in Computer Science with a certificate in Engineering Leadership from the University of Colorado Boulder. She enjoys exploring new places to eat, playing the piano, and playing sports such as basketball and volleyball.

Author

Ram Ramani

Ram is a Security Specialist Solutions Architect at AWS focusing on data protection. Ram works with customers across all verticals to help with security controls and best practices on how customers can best protect their data that they store on AWS. In his free time, Ram likes playing table tennis and teaching coding skills to his kids.

How to build a CI/CD pipeline for container vulnerability scanning with Trivy and AWS Security Hub

Post Syndicated from Amrish Thakkar original https://aws.amazon.com/blogs/security/how-to-build-ci-cd-pipeline-container-vulnerability-scanning-trivy-and-aws-security-hub/

In this post, I’ll show you how to build a continuous integration and continuous delivery (CI/CD) pipeline using AWS Developer Tools, as well as Aqua Security‘s open source container vulnerability scanner, Trivy. You’ll build two Docker images, one with vulnerabilities and one without, to learn the capabilities of Trivy and how to send all vulnerability information to AWS Security Hub.

If you’re building modern applications, you might be using containers, or have experimented with them. A container is a standard way to package your application’s code, configurations, and dependencies into a single object. In contrast to virtual machines (VMs), containers virtualize the operating system rather than the server. Thus, the images are orders of magnitude smaller, and they start up much more quickly.

Like VMs, containers need to be scanned for vulnerabilities and patched as appropriate. For VMs running on Amazon Elastic Compute Cloud (Amazon EC2), you can use Amazon Inspector, a managed vulnerability assessment service, and then patch your EC2 instances as needed. For containers, vulnerability management is a little different. Instead of patching, you destroy and redeploy the container.

Many container deployments use Docker. Docker uses Dockerfiles to define the commands you use to build the Docker image that forms the basis of your container. Instead of patching in place, you rewrite your Dockerfile to point to more up-to-date base images, dependencies, or both, and then rebuild the Docker image. Trivy lets you know which dependencies in the Docker image are vulnerable, and which versions of those dependencies are no longer vulnerable, allowing you to quickly understand what to patch to get back to a secure state.

Solution architecture

 

Figure 1: Solution architecture

Here’s how the solution works, as shown in Figure 1:

  1. Developers push Dockerfiles and other code to AWS CodeCommit.
  2. AWS CodePipeline automatically starts an AWS CodeBuild build that uses a build specification file to install Trivy, build a Docker image, and scan it during runtime.
  3. AWS CodeBuild pushes the build logs in near real-time to an Amazon CloudWatch Logs group.
  4. Trivy scans for all vulnerabilities and sends them to AWS Security Hub, regardless of severity.
  5. If no critical vulnerabilities are found, the Docker images are deemed to have passed the scan and are pushed to Amazon Elastic Container Registry (ECR), so that they can be deployed.

Note: CodePipeline supports different sources, such as Amazon Simple Storage Service (Amazon S3) or GitHub. If you’re comfortable with those services, feel free to substitute them for this walkthrough of the solution.

To quickly deploy the solution, you’ll use an AWS CloudFormation template to deploy all needed services.

Prerequisites

  1. You must have Security Hub enabled in the AWS Region where you deploy this solution. In the AWS Management Console, go to AWS Security Hub, and select Enable Security Hub.
  2. You must have Aqua Security integration enabled in Security Hub in the Region where you deploy this solution. To do so, go to the AWS Security Hub console and, on the left, select Integrations, search for Aqua Security, and then select Accept Findings.

Setting up

For this stage, you’ll deploy the CloudFormation template and do preliminary setup of the CodeCommit repository.

  1. Download the CloudFormation template from GitHub and create a CloudFormation stack. For more information on how to create a CloudFormation stack, see Getting Started with AWS CloudFormation.
  2. After the CloudFormation stack completes, go to the CloudFormation console and select the Resources tab to see the resources created, as shown in Figure 2.

 

Figure 2: CloudFormation output

Setting up the CodeCommit repository

CodeCommit repositories need at least one file to initialize their master branch. Without a file, you can’t use a CodeCommit repository as a source for CodePipeline. To create a sample file, do the following.

  1. Go to the CodeCommit console and, on the left, select Repositories, and then select your CodeCommit repository.
  2. Scroll to the bottom of the page, select the Add File dropdown, and then select Create file.
  3. In the Create a file screen, enter readme into the text body, name the file readme.md, enter your name as Author name and your Email address, and then select Commit changes, as shown in Figure 3.

    Figure 3: Creating a file in CodeCommit

Simulate a vulnerable image build

For this stage, you’ll create the necessary files and add them to your CodeCommit repository to start an automated container vulnerability scan.

    1. Download the buildspec.yml file from the GitHub repository.

      Note: In the buildspec.yml code, the values prefixed with $ will be populated by the CodeBuild environment variables you created earlier. Also, the command trivy -f json -o results.json --exit-code 1 will fail your build by forcing Trivy to return an exit code of 1 upon finding a critical vulnerability. You can add additional severity levels here to force Trivy to fail your builds and ensure vulnerabilities of lower severity are not published to Amazon ECR.

    2. Download the Python code file sechub_parser.py from the GitHub repository. This script parses vulnerability details from the JSON file that Trivy generates, maps the information to the AWS Security Finding Format (ASFF), and then imports it to Security Hub. A simplified sketch of this mapping appears after Figure 4 below.
    3. Next, download the Dockerfile from the GitHub repository. The code clones a GitHub repository maintained by the Trivy team that has purposely vulnerable packages that generate critical vulnerabilities.
    4. Go back to your CodeCommit repository, select the Add file dropdown menu, and then select Upload file.
    5. In the Upload file screen, select Choose file, select the build specification you just downloaded (buildspec.yml), complete the Commit changes to master section by adding the Author name and Email address, and select Commit changes, as shown in Figure 4.

 

Figure 4: Uploading a file to CodeCommit
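
    A simplified sketch of the kind of mapping sechub_parser.py performs is shown below. The Trivy JSON field names (the older layout: a list of targets, each with a Vulnerabilities array), the image name parameter, and the Aqua Security product ARN are assumptions, not code lifted from the repository's script:

    import datetime
    import json

    import boto3

    def trivy_report_to_asff(report_path, account_id, region, image_name):
        now = datetime.datetime.utcnow().isoformat() + "Z"
        # Product ARN for the Aqua Security integration accepted in Security Hub
        product_arn = f"arn:aws:securityhub:{region}::product/aquasecurity/aquasecurity"
        with open(report_path) as f:
            report = json.load(f)
        findings = []
        for target in report or []:
            for vuln in target.get("Vulnerabilities") or []:
                cve = vuln["VulnerabilityID"]
                severity = vuln.get("Severity", "UNKNOWN")
                findings.append({
                    "SchemaVersion": "2018-10-08",
                    "Id": f"{image_name}/{cve}",
                    "ProductArn": product_arn,
                    "GeneratorId": cve,
                    "AwsAccountId": account_id,
                    "Types": ["Software and Configuration Checks/Vulnerabilities/CVE"],
                    "CreatedAt": now,
                    "UpdatedAt": now,
                    # ASFF has no UNKNOWN label, so fold it into INFORMATIONAL
                    "Severity": {"Label": "INFORMATIONAL" if severity == "UNKNOWN" else severity},
                    "Title": f"{cve} in {vuln.get('PkgName')}",
                    "Description": (vuln.get("Description") or "")[:1024],
                    "Resources": [{"Type": "Container", "Id": image_name, "Region": region}],
                    "RecordState": "ACTIVE",
                })
        if findings:
            # BatchImportFindings caps each call at 100 findings
            boto3.client("securityhub").batch_import_findings(Findings=findings[:100])
        return findings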

 

  • To upload your Dockerfile and sechub_parser.py script to CodeCommit, repeat steps 4 and 5 for each of these files.
  • Your pipeline will automatically start in response to every new commit to your repository. To check the status, go back to the pipeline status view of your CodePipeline pipeline.
  • When CodeBuild starts, select Details in the Build stage of the CodePipeline, under BuildAction, to go to the Build section on the CodeBuild console. To see a stream of logs as your build progresses, select Tail logs, as shown in Figure 5.

    Figure 5: CodeBuild Tailed Logs

  • After Trivy has finished scanning your image, CodeBuild will fail due to the critical vulnerabilities found, as shown in Figure 6.

    Note: The command specified in the post-build stage will run even if the CodeBuild build fails. This is by design and allows the sechub_parser.py script to run and send findings to Security Hub.

     

    Figure 6: CodeBuild logs failure

 

You’ll now go to Security Hub to further analyze the findings and create saved searches for future use.

Analyze container vulnerabilities in Security Hub

For this stage, you’ll analyze your container vulnerabilities in Security Hub and use the findings view to locate information within the ASFF.

  1. Go to the Security Hub console and select Integrations in the left-hand navigation pane.
  2. Scroll down to the Aqua Security integration card and select See findings, as shown in Figure 7. This filters to only Aqua Security product findings in the Findings view.

    Figure 7: Aqua Security integration card

  3. You should now see critical vulnerabilities from your previous scan in the Findings view, as shown in Figure 8. To see more details of a finding, select the Title of any of the vulnerabilities, and you will see the details in the right side of the Findings view.
    Figure 8: Security Hub Findings pane

    Note: Within the Findings view, you can apply quick filters by checking the box next to certain fields, although you won’t do that for the solution in this post.

  4. To open a new tab to a website about the Common Vulnerabilities and Exposures (CVE) for the finding, select the hyperlink within the Remediation section, as shown in Figure 9.
    Figure 9: Remediation information

    Note: The fields shown in Figure 9 are dynamically populated by Trivy in its native output, and the CVE website can differ greatly from vulnerability to vulnerability.

  5. To see the JSON of the full ASFF, at the top right of the Findings view, select the hyperlink for Finding ID.
  6. To find information mapped from Trivy, such as the CVE title and what the patched version of the vulnerable package is, scroll down to the Other section, as shown in Figure 10.

    Figure 10: ASFF, other

This was a brief demonstration of exploring findings with Security Hub. You can use custom actions to define response and remediation actions, such as sending these findings to a ticketing system or aggregating them in a security information event management (SIEM) tool.

Push a non-vulnerable Dockerfile

Now that you’ve seen Trivy perform correctly with a vulnerable image, you’ll fix the vulnerabilities. For this stage, you’ll modify the Dockerfile to remove any vulnerable dependencies.

  1. Open a text editor, paste in the code shown below, and save it as Dockerfile. You can overwrite your previous example if desired.
    
    # Minimal Alpine base image without the purposely vulnerable packages
    FROM alpine:3.7
    # Install only the MySQL client; --no-cache avoids a stale package index layer
    RUN apk add --no-cache mysql-client
    ENTRYPOINT ["mysql"]
    	

  2. Upload the new Dockerfile to CodeCommit, as shown earlier in this post.

Clean up

To avoid incurring additional charges from running these services, disable Security Hub and delete the CloudFormation stack after you’ve finished evaluating this solution. This will delete all resources created during this post. Deleting the CloudFormation stack will not remove the findings in Security Hub. If you don’t disable Security Hub, you can archive those findings and wait 90 days for them to be fully removed from your Security Hub instance.

Conclusion

In this post, you learned how to create a CI/CD Pipeline with CodeCommit, CodeBuild, and CodePipeline for building and scanning Docker images for critical vulnerabilities. You also used Security Hub to aggregate scan findings and take action on your container vulnerabilities.

You now have the skills to create similar CI/CD pipelines with AWS Developer Tools and to perform vulnerability scans as part of your container build process. Using these reports, you can efficiently identify CVEs and work with your developers to use up-to-date libraries and binaries for Docker images. To further build upon this solution, you can change the Trivy commands in your build specification to fail the builds on different severity levels of found vulnerabilities. As a final suggestion, you can use ECR as a source for a downstream CI/CD pipeline responsible for deploying your newly-scanned images to Amazon Elastic Container Service (Amazon ECS).

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Amrish Thakkar

Amrish is a Senior Solutions Architect at AWS. He holds the AWS Certified Solutions Architect Professional and AWS Certified DevOps Engineer Professional certifications. Amrish is passionate about DevOps, Microservices, Containerization, and Application Security, and devotes personal time to research and advocacy on these topics. Outside of work, he enjoys spending time with family and frequently rewatching the LOTR trilogy.

Enabling AWS Security Hub integration with AWS Chatbot

Post Syndicated from Ross Warren original https://aws.amazon.com/blogs/security/enabling-aws-security-hub-integration-with-aws-chatbot/

In this post, we show you how to configure AWS Chatbot to send findings from AWS Security Hub to Slack. Security Hub gives you a comprehensive view of your high-priority security alerts and security posture across your Amazon Web Services (AWS) accounts. AWS Chatbot is an interactive agent that makes it easy to monitor and interact with your AWS resources in your Slack channels and Amazon Chime chat rooms. This integration enables your security teams to receive alerts in familiar Slack channels, facilitating collaboration and quick response to events.

We describe the preset formatting integration with AWS Chatbot; if you want to enrich the finding data, you can follow the guide in How to Enable Custom Actions in AWS Security Hub. The steps listed in this post are ideal if you want to use the preset formatting of AWS Chatbot. The second path is a customized integration, which is useful when you need the finding data to be transformed or enriched.

With AWS Chatbot you can receive alerts and run commands to return diagnostic information, invoke AWS Lambda functions, and create AWS support cases so that your team can collaborate and respond to events faster.

This post is a follow up to a previous post, How to Enable Custom Actions in AWS Security Hub, where custom actions in Security Hub enabled notifications to be sent to Slack. In this post, we simplify the workflow by using an AWS CloudFormation template to create the information flow in Figure 1 below. This configures the custom action in Security Hub, an Amazon EventBridge rule, and an Amazon Simple Notification Service (Amazon SNS) topic to tie them all together.

Figure 1: Information flow showing a Slack channel and Amazon Chime as options for AWS Chatbot integration

Configure AWS Chatbot and Security Hub

To get started, you’ll need the following prerequisites: an AWS account with GuardDuty and Security Hub enabled, and a Slack account.

We will now walk through configuring AWS Chatbot and Security Hub. Keep a virtual scratch pad handy to take note of your Slack workspace ID and Slack channel ID to refer to as you configure the integration.

Security Hub supports two types of integration with EventBridge, both of which are supported by AWS Chatbot:

  • Standard Amazon CloudWatch events. Security Hub automatically sends all findings to EventBridge. Use this method to automatically send all Security Hub findings, or a filtered subset of findings, to an Amazon SNS topic to which AWS Chatbot subscribes.
  • Security Hub custom actions. Define custom actions in Security Hub and configure CloudWatch events rules to respond to those actions. The event rule uses its Amazon SNS topic target to forward notifications to the SNS topic AWS Chatbot is subscribed to.

We are going to focus on Security Hub custom actions. You might not initially want to have all Security Hub findings appear in Slack, so we’re going to create a Security Hub custom action to send only relevant findings to Slack. This workflow gives your security team the ability to manually provide notifications to Slack channels through AWS Chatbot. At the end of this post, I share an EventBridge Rule for those users who want all Security Hub findings in a Slack channel. I also provide some filter examples which will help you select the findings you receive in Slack.
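
For reference, the rule that the CloudFormation template creates for the custom action can be approximated with boto3 as follows. This is a hedged sketch that only mirrors what the template does for you; the rule name and ARNs are placeholders:

import json

import boto3

events = boto3.client("events")

# Placeholder ARNs; the template derives the real values from your stack
custom_action_arn = "arn:aws:securityhub:us-east-1:111122223333:action/custom/SendToSlack"
sns_topic_arn = "arn:aws:sns:us-east-1:111122223333:SNSTopicAWSChatBot"

# Match only findings sent through the Security Hub custom action
events.put_rule(
    Name="SecurityHubCustomActionToSlack",
    EventPattern=json.dumps({
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Custom Action"],
        "resources": [custom_action_arn],
    }),
)

# Forward matching events to the SNS topic that AWS Chatbot subscribes to
events.put_targets(
    Rule="SecurityHubCustomActionToSlack",
    Targets=[{"Id": "chatbot-sns-target", "Arn": sns_topic_arn}],
)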

Configure a Slack client

To allow AWS Chatbot to send notifications to your Slack channel, you must configure AWS Chatbot to work with Slack. Owners of Slack workspaces can approve the use of AWS Chatbot, and any workspace user can configure the workspace to receive notifications.

  1. Log in to your AWS console and navigate to the AWS Chatbot console.
  2. Select Slack from the dropdown menu and then Configure client.

    Figure 2: Configure a chat client

  3. If you are not yet logged in to Slack, add your workspace name and log in to Slack.

    Figure 3: Slack workspace login

  4. On the next screen where AWS Chatbot is requesting permission to your Slack workspace, choose Allow.
  5. Copy and save the Workspace ID. You will need it for the CloudFormation Template.

    Figure 4: Console with workspace ID

  6. You can now leave the AWS Chatbot console and log in to your Slack workspace where we can get the channel ID.
    1. If you do not have a Slack channel in your organization for findings, or you want to test this integration before deploying it to production, please follow the steps from Slack for creating a channel.
    2. If you are using the Slack desktop client, right-click on the Slack channel name and select Copy Link.

      Figure 5: Copy link from the desktop client

    3. If you are using the Slack web UI, right click on your Slack channel name, select Additional options, and then select Copy link.

      Figure 6: Copy link from the web UI

  7. The last part of the resulting URL is your channel ID. For example, in the URL https://xxxxxxx.slack.com/archives/CSQRRLTHRT, CSQRRLTHRT is the channel ID. Write down your channel ID to use later.

Tie it all together

The CloudFormation template is going to create the following:

  • A Security Hub custom action named SendToSlack
  • An Amazon SNS topic named AWS Chatbot SNS Topic
  • An EventBridge rule to tie everything together
  1. Open the SecurityHub_to_AWSChatBot.yml CloudFormation template.
  2. Right-click and use Save As to save the template to your workstation.

    Note: The CloudFormation template is going to require your Slack workspace ID and channel ID from the previous step.

  3. Open the CloudFormation console.
  4. Select Create stack.
    Figure 7: Create a CloudFormation stack

    1. Select Upload a template file

      Figure 8: Upload CloudFormation template file

    2. Select Choose file and navigate to the saved CloudFormation template from step 2.
    3. Select Next.
    4. Enter a stack name, such as “SecurityHubToAWSChatBot.”
    5. Enter your Slack channel ID and Slack workspace ID (be careful not to transpose these IDs).
    6. Continue by selecting Next.
    7. On the Configure stack options screen, you can add tags if required by your organization. The rest of the default options will work; select Next.
    8. Review stack details on the Review screen and scroll to the bottom.
    9. Select the I acknowledge that AWS CloudFormation might create IAM resources check box before selecting Create stack.

      Figure 9: IAM capabilities acknowledgment

  5. After the CloudFormation template has completed successfully, you will see CREATE_COMPLETE in the CloudFormation console.

To test the configuration perform the following steps:

  1. Open the AWS Security Hub console, select a finding, and choose the Actions drop-down. Select Send_To_Slack, the custom action that you just created.

    Figure 10: Security Hub custom action drop down

  2. Go to your Slack workspace and channel to verify receipt of the notification.

    Figure 11: Example Security Hub notification in Slack

Bonus: Send all critical findings to Slack

You can also use this workflow to send all critical Security Hub findings to Slack.

To do this, configure an additional CloudWatch rule to use in conjunction with the custom action that we’ve already deployed. For example, suppose your security team requires that all critical severity findings go to your team’s Slack channel, with the ability to also manually send other interesting or relevant findings to Slack.

  1. Go to the EventBridge console.
  2. Underneath Events, select Rules.
  3. Select Create Rule.
  4. Give the rule a name, for example “All_SecurityHub_Findings_to_Slack.”
  5. In the Define Pattern section, select Event pattern and Custom pattern.

    Figure 12: EventBridge event pattern dialogue

  6. Paste the following code into the Event pattern field and select Save.

    Note: You can edit this filter to fit your needs.

    
    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "ProductFields": {
            "aws/securityhub/SeverityLabel": [
              "CRITICAL"
            ]
          }
        }
      }
    }
    

  7. Leave the event bus as “AWS default event bus.”
  8. Under Select Targets, select SNS Topic from the drop down.
  9. Choose the Topic with “SNSTopicAWSChatBot” in the name.
  10. Configure any required tags.
  11. Select Create.

Now, when Security Hub creates a finding with a severity label of CRITICAL, the rule will send it to your Slack channel.

Note: Depending on the volume of critical findings in your Security Hub console, the noise in Slack might overwhelm your ability to produce actionable results. Consider automating the response and remediation of critical findings by following the best practice guidance in the Security Hub console.

Summary

In this post we showed how to send findings from Security Hub to Slack using AWS Chatbot. This can help your team collaborate and respond faster to operational events. In addition, AWS Chatbot enables an easy way to interact with your AWS resources; see Running AWS CLI commands from Slack channels for a list of the commands you can run.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ross Warren

Ross Warren is a Solution Architect at AWS based in Northern Virginia. Prior to his work at AWS, Ross’ areas of focus included cyber threat hunting and security operations. Ross has worked at a handful of startups and has enjoyed the transition to AWS because he can continue to build solutions for customers on today’s most innovative platform.

Author

Jose Ruiz

Jose Ruiz is a Sr. Solutions Architect – Security Specialist at AWS. He often enjoys “the road less traveled” and knows each technology has a security story often not spoken of. He takes this perspective when working with customers on highly complex solutions and driving security at the beginning of each build.

AWS Foundational Security Best Practices standard now available in Security Hub

Post Syndicated from Rima Tanash original https://aws.amazon.com/blogs/security/aws-foundational-security-best-practices-standard-now-available-security-hub/

AWS Security Hub offers a new security standard, AWS Foundational Security Best Practices

This week AWS Security Hub launched a new security standard called AWS Foundational Security Best Practices. This standard implements security controls that detect when your AWS accounts and deployed resources do not align with the security best practices defined by AWS security experts. By enabling this standard, you can monitor your own security posture to ensure that you are using AWS security best practices. These controls closely align to the Top 10 Security Best Practices outlined by AWS Chief Information Security Officer, Stephen Schmidt, at AWS re:Invent 2019.

In the initial release, this standard consists of 31 fully-automated security controls in supported AWS Regions, and 27 controls in AWS GovCloud (US-West) and AWS GovCloud (US-East).

This standard is enabled by default when you enable Security Hub in a new account, so no extra steps are necessary to enable it. If you are an existing Security Hub user, when you open the Security Hub console, you will see a pop-up message recommending that you enable this standard. For more information, see AWS Foundational Security Best Practices standard in the AWS Security Hub User Guide.

As an example, let’s look at one of the new security controls for Amazon Relational Database Service (Amazon RDS), [RDS.1] RDS snapshots should be private. This control checks the resource types AWS::RDS::DBSnapshot and AWS::RDS::DBClusterSnapshot. The relevant AWS Config rule is rds-snapshots-public-prohibited, which checks whether Amazon RDS snapshots are public. The control fails if Security Hub identifies that any existing or new Amazon RDS snapshots are configured to be publicly accessible. The severity label is CRITICAL when the security check fails. The severity indicates the potential impact of not enforcing this rule.

You can find additional details about all the security controls, including the remediation instructions of the misconfigured resource, in the AWS Foundational Security Best Practices standard section of the Security Hub User Guide.

Getting started

In this post, we will cover:

  • How to enable the new AWS Foundational Security Best Practices standard.
  • An overview of the security controls.
  • An explanation of the security control details.
  • How to disable and enable specific security controls.
  • How to navigate to the remediation instructions for a failed security control.

Prerequisites

For the security standards to be functional in Security Hub, when you enable Security Hub in a particular account and AWS Region, you must also enable AWS Config in that account and Region, because the standard's controls use AWS Config rules to evaluate resources. Security Hub is a regional service, so this applies in each Region where you use it.

Enable the new AWS Foundational Security Best Practices Security standard

After you enable AWS Config in your account and Region, you can enable the AWS Foundational Security Best Practices standard in Security Hub. We recommend that you enable Security Hub and this standard in all accounts and in all Regions where you have activity. For a script to enable AWS Security Hub across multiple accounts and Regions, see the AWS Security Hub multi-account scripts page on GitHub.

If you are a new user of Security Hub, when you open the Security Hub console, you are prompted to enable Security Hub. When you enable Security Hub, the AWS Foundational Security Best Practices standard is selected by default, as shown in the following screen shot. Leave the default selection and choose Enable Security Hub to enable the AWS Foundational Security Best Practices standard, as well as the other security standards you select, in your AWS account in your selected AWS Region.

Figure 1: Welcome to AWS Security Hub page

If you are an existing user of Security Hub, when you open the Security Hub console, you are presented with a pop-up to enable the new security standard. You will see the number of new controls that are available in your AWS Region and the number of AWS services and resources that are associated with those controls, as shown in the following screen shot. Choose Enable standard to enable the AWS Foundational Security Best Practices standard in your AWS account in your selected AWS Region.

Figure 2: AWS Foundational Security Best Practices confirmation page

You also have the option to enable the new AWS Foundational Security Best Practices Security standard by using the command line, which we will describe later in this post.

View the security controls

Now that you have successfully enabled the standard, on the Security standards page, you see the new AWS Foundational Security Best Practices v1.0.0 standard displayed alongside the other security standards, CIS AWS Foundations and PCI DSS.

Figure 3: Security standards page in AWS Security Hub

View security findings

Within two hours after you enable the standard, Security Hub begins to evaluate related resources in the current AWS account and Region against the available AWS controls within the AWS Foundational Security Best Practices standard. The scope of the assessment is the AWS account.

To view security findings, on the Security standards page, for AWS Foundational Security Best Practices standard, choose View results. The following image shows an example of the dashboard page you will see that displays all of the available controls in the standard, and the status of each control within the current AWS account and Region.

Figure 4: AWS Foundational Security Best Practices controls page

At a glance, each control card provides you with the following high-level information:

  • Title and unique identifier of the AWS control. This provides you with a synopsis of the purpose and functionality of the control.
  • The current status of the AWS control evaluation. The possible values are Passed, Failed, or Unknown (evaluation is still in progress and not finished).
  • Severity information associated with the AWS control. The possible values are CRITICAL, HIGH, MEDIUM, and LOW. For Passed findings associated with the controls, the severity label is INFORMATIONAL. To learn more about how Security Hub determines the severity score, see Determining the severity of security standards findings.
  • A count of AWS resources that passed or failed the check for this particular AWS control.

You can use the Filter controls to search for specific AWS controls based on their evaluation status and severity. For example, you can search for all controls that have a check status of Failed and a severity of CRITICAL.
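
You can apply the same kind of filter programmatically. The following is a minimal boto3 sketch, assuming default credentials in the account and Region you are inspecting:

import boto3

securityhub = boto3.client("securityhub")

# Retrieve active findings whose check failed with CRITICAL severity
response = securityhub.get_findings(
    Filters={
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=100,
)

for finding in response["Findings"]:
    print(finding["Title"], "-", finding["Compliance"]["Status"])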

Inspect the security finding

To see detailed information about a specific security control and its findings, choose the security control card. Choosing the control displays a page that contains detailed information about the control, including a list of the findings for the security control. The page also indicates whether the resources for the security control are Passed, Failed, or if the compliance evaluation is still in progress (Unknown).

Figure 5: RDS.1 control findings view

For business reasons, you may sometimes need to suppress a particular finding against a particular resource using the workflow status. Setting the Workflow status to SUPPRESSED means that the finding will not be reviewed again and will not be acted upon. If you suppress a FAILED finding, it will stay suppressed as long as it remains failed. However, if the finding moves from FAILED to PASSED, a new passed finding will be generated and the workflow status will be NEW. You can’t un-suppress a finding. If you suppress all findings for a control, the control status will be Unknown until any new finding is generated.

To suppress a finding

  1. In the Findings list, select the finding you want to suppress, for example one for [RDS.1] RDS snapshots should be private.
  2. For Change workflow status, choose Suppressed.

 

Figure 6: RDS.1 control showing change workflow status options

You will no longer see the finding that you suppressed.
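
If you prefer to suppress findings programmatically, the BatchUpdateFindings API sets the same workflow status. Here is a hedged sketch with placeholder identifiers; look up the real finding ID and product ARN with GetFindings first:

import boto3

securityhub = boto3.client("securityhub")

# Placeholder identifiers for a finding generated by Security Hub itself
finding_id = "arn:aws:securityhub:us-east-1:111122223333:subscription/aws-foundational-security-best-practices/v/1.0.0/RDS.1/finding/EXAMPLE"
product_arn = "arn:aws:securityhub:us-east-1::product/aws/securityhub"

securityhub.batch_update_findings(
    FindingIdentifiers=[{"Id": finding_id, "ProductArn": product_arn}],
    Workflow={"Status": "SUPPRESSED"},
    # A note records who suppressed the finding and why
    Note={"Text": "Known exception, reviewed by the security team",
          "UpdatedBy": "security-team"},
)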

If you do not want to generate any findings for a specific control, you can instead choose to disable the control using the Disable feature, described in the next section.

Disable a security control

You can also disable the security check for a particular security control until you manually re-enable it. This disables the control check for all resources in the context of Security Hub in your AWS account and AWS Region. This may be helpful if a particular security control is not applicable for your environment. To disable a security control, on the AWS Foundational Security Best Practices standard dashboard page, on the specific control card, choose Disable. You can always re-enable the control when you need it in the future.

Figure 7: Control cards Disable option

When you disable a particular control, you are required to enter a reason in the Reason for disabling field, so that you or someone else looking into it in the future have a clear record of why the control is not being used.

Figure 8: Reason for disabling a control page – Disabling control ACM.1 example

On the AWS Foundational Security Best Practices controls page, disabled controls are marked with a Disabled badge, as shown in the following screenshot. The cards also display the date when the control was disabled, and the reason that was provided. To re-enable a disabled control, on the control card, choose Enable.

Figure 9: Disabled control example – ACM.1

You can enable the control any time without providing a reason. The evaluation for the control starts from the point in time when the control is re-enabled.
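
You can also disable and re-enable a control programmatically with the UpdateStandardsControl API. A minimal sketch, using a placeholder control ARN:

import boto3

securityhub = boto3.client("securityhub")

# Placeholder control ARN for ACM.1 in the current account and Region
control_arn = ("arn:aws:securityhub:us-east-1:111122223333:control/"
               "aws-foundational-security-best-practices/v/1.0.0/ACM.1")

# Disabling a control requires a reason, just as the console does
securityhub.update_standards_control(
    StandardsControlArn=control_arn,
    ControlStatus="DISABLED",
    DisabledReason="ACM certificates are not used in this account",
)

# Re-enabling needs no reason; evaluation restarts from this point in time
securityhub.update_standards_control(
    StandardsControlArn=control_arn,
    ControlStatus="ENABLED",
)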

Remediate a failed security control

You can get the remediation instructions for a failed control from within the Security Hub console. On the AWS Foundational Security Best Practices standard dashboard page, choose the specific control card, then in the list of findings for a control, choose the finding you want to remediate. In the finding details, expand the Remediation section, and then choose the For directions on how to fix this issue link, as shown in the following screen shot.

Figure 10: Finding remediation link in the console

You can also get to these step-by-step remediation instructions directly from the user guide. Go to the AWS Foundational Security Best Practices controls page and scroll down to the name of the specific control that generated the finding.

Use the AWS CLI to enable or disable the standard

To use the AWS Command Line Interface (AWS CLI) to enable the AWS Foundational Security Best Practices standard in Security Hub programmatically without using the Security Hub console, use the following command. Be sure you are running AWS CLI version 2.0.7 or later, and replace REGION-NAME with your AWS Region:


aws securityhub batch-enable-standards --standards-subscription-requests '[{"StandardsArn":"arn:aws:securityhub:<REGION-NAME>::standards/aws-foundational-security-best-practices/v/1.0.0"}]' --region <REGION-NAME>

To check the status, run the get-enabled-standards command. Be sure to replace REGION-NAME with your AWS Region:


aws securityhub get-enabled-standards --region <REGION-NAME> 

You should see the following “StandardsStatus”: “READY” output to indicate that the AWS Foundational Security Best Practices standard is enabled and ready:


{
    "StandardsSubscriptions": [
        {
            "StandardsSubscriptionArn": "arn:aws:securityhub:us-east-1:012345678912:subscription/aws-foundational-security-best-practices/v/1.0.0",
            "StandardsArn": "arn:aws:securityhub:us-east-1::standards/aws-foundational-security-best-practices/v/1.0.0",
            "StandardsInput": {},
            "StandardsStatus": "READY"
        }
    ]
}

To use the AWS CLI to disable the AWS Foundational Security Best Practices standard in Security Hub, use the following command. Be sure to replace ACCOUNT_ID with your account ID, and replace REGION-NAME with your AWS Region:


aws securityhub batch-disable-standards --standards-subscription-arns "arn:aws:securityhub:<REGION-NAME>:<ACCOUNT_ID>:subscription/aws-foundational-security-best-practices/v/1.0.0" --region <REGION-NAME> 

Conclusion

In this post, you learned about how to implement the new AWS Foundational Security Best Practices standard in Security Hub, and how to interpret the findings. You also learned how to enable the standard by using the Security Hub console and AWS CLI, how to disable and enable specific controls within the standard, and how to follow remediation steps for failed findings. For more information, see the AWS Foundational Security Best Practices standard in the AWS Security Hub User Guide.

If you have comments about this post, submit them in the Comments section below. If you have questions, please start a new thread on the Security Hub forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Karthikeyan Vasuki Balasubramaniam

Karthikeyan is a Software Development Engineer on the Amazon Security Hub service team. At Amazon Web Services, he works on the infrastructure that supports the development and functioning of various security standards. He has a background in computer networking and operating systems. In his free time, you can find him learning Indian classical music and cooking.

Author

Rima Tanash

Rima Tanash is the Lead Security Engineer on the Amazon Security Hub service team. At Amazon Web Services, she applies automated technologies to audit various access and security configurations. She has a research background in data privacy using graph properties and machine learning.

Continuous compliance monitoring with Chef InSpec and AWS Security Hub

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/continuous-compliance-monitoring-with-chef-inspec-and-aws-security-hub/

In this post, I will show you how to run a Chef InSpec scan with AWS Systems Manager and Systems Manager Run Command across your managed instances. InSpec is an open-source runtime framework that lets you create human-readable profiles to define security, compliance, and policy requirements and then test your Amazon Elastic Compute Cloud (Amazon EC2) instances against those profiles. InSpec profiles can also be used to make sure certain network ports aren’t reachable, to verify that certain packages are not installed, or to confirm that certain processes are running on your instances.

InSpec is integrated within AWS Systems Manager, an AWS service that you can use to view and control your infrastructure on AWS. InSpec compliance scans are run by using an AWS Systems Manager document (SSM document), which installs InSpec on your servers and removes InSpec after scans are completed.

In this post, you will create the supporting infrastructure to collect non-compliant findings from AWS Systems Manager Configuration Compliance and send them to AWS Security Hub via a Lambda function. You will also explore methods to correlate finding information in Security Hub for non-compliant resources.

Note: AWS Systems Manager (Systems Manager) was formerly known as “Amazon Simple Systems Manager (SSM)” and “Amazon EC2 Systems Manager (SSM).” The original abbreviated name of the service, “SSM,” is still reflected in various AWS resources. For more information, see Systems Manager Service Name History.

Solution overview

The following diagram shows the flow of events in the solution I describe in this post.
 

Figure 1: Architecture diagram

  1. Invoke the AWS-RunInspecChecks document on demand by using Run Command against your target instances (State Manager is another option for scheduling InSpec scans, but is not covered in this post).
  2. Systems Manager downloads the InSpec Ruby files from Amazon Simple Storage Service (Amazon S3), installs InSpec on your server, runs the scan, and removes InSpec when complete.
  3. AWS Systems Manager pushes scan results to the Compliance API and presents the information in the Systems Manager Compliance console, including severity and compliance state.
  4. A CloudWatch Event is emitted for Compliance state changes.
  5. A CloudWatch Event Rule listens for these state changes and when detected, invokes a Lambda function.
  6. Lambda calls the Compliance APIs for additional data about which InSpec check failed.
  7. Lambda calls the EC2 APIs to further enrich the data about the non-compliant instance.
  8. Lambda maps these details to the AWS Security Finding Format and sends them to Security Hub.

To support the steps above, you will deploy a CloudFormation template that creates a CloudWatch Event Rule and a placeholder Lambda function. You will then create an InSpec profile, upload it to Amazon S3, and use Run Command to invoke an InSpec compliance scan.
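
Steps 6 and 7 are where the Lambda function gathers the detail that ends up in Security Hub. As a rough sketch of step 6, the following boto3 call (with a hypothetical instance ID) pulls the non-compliant Custom:InSpec compliance items for a managed instance:

import boto3

ssm = boto3.client('ssm')

# Hypothetical instance ID - in the deployed solution this comes from the CloudWatch Event
instance_id = 'i-0123456789abcdef0'

# Retrieve the non-compliant InSpec items recorded for this managed instance
response = ssm.list_compliance_items(
    ResourceIds=[instance_id],
    ResourceTypes=['ManagedInstance'],
    Filters=[
        {'Key': 'ComplianceType', 'Values': ['Custom:InSpec'], 'Type': 'EQUAL'},
        {'Key': 'Status', 'Values': ['NON_COMPLIANT'], 'Type': 'EQUAL'}
    ]
)
for item in response['ComplianceItems']:
    print(item['Id'], item['Title'], item['Severity'])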

When the scan completes, you can then search for the findings in Security Hub. You can create saved searches with insights in AWS Security Hub, and use different filtering to correlate InSpec compliance failures with other information from Amazon Inspector and Amazon GuardDuty.

Prerequisites

Getting started

Before creating your CloudFormation stack, you’ll upload your InSpec profile and Lambda function package to an S3 bucket.

To upload the Lambda package:

  1. Download the Lambda function ZIP file from GitHub.
  2. Save the file to your workstation, then in your S3 bucket, choose Upload to upload the InSpecToSecurityHub.zip file.

The next step is to create and upload an InSpec profile to Amazon S3.

To create and upload an InSpec profile:

  1. From your workstation, create a file named Inspec_SSH.rb and paste in the code that follows:
    
    control 'Linux instance check' do
      title 'SSH access'
      desc 'SSH port should not be open to the world'
      impact 0.9
      require 'rbconfig'
      is_windows = (RbConfig::CONFIG['host_os'] =~ /mswin|mingw|cygwin/)
      if !is_windows
        describe port(22) do
          it { should be_listening }
          its('addresses') { should_not include '0.0.0.0' }
        end
      end
    end

  2. Save your file, and in your S3 bucket, choose Upload to upload the Inspec_SSH.rb file.

Note: The InSpec profile in the example code above ensures that SSH (Port 22) is listening on your instance and that SSH access is not publicly available, as noted by {should_not include ‘0.0.0.0’}. For other InSpec profiles, see the DevSec chef-os-hardening project on GitHub for profiles to help you secure your instances and servers, or you can learn more about Compliance Automation with InSpec on the Chef website.

You will now deploy the CloudFormation template to finish setting up the solution.

To deploy the CloudFormation template:

  1. Download the CloudFormation template from GitHub and create a CloudFormation stack.

    Note: For more information about how to create a CloudFormation stack, see Getting Started with AWS CloudFormation in the AWS CloudFormation User Guide.

  2. Under Parameters, enter the name of the bucket you uploaded the package to, as shown in Figure 2, and finish creating your stack.
     
    Figure 2: CloudFormation parameters

After your stack is successfully created, you will run an InSpec scan.

Run an InSpec scan

Now that you have your Lambda function and InSpec profile ready, you will run your InSpec scan against one or more managed instances with Run Command.

  1. Navigate to the AWS Systems Manager Console and in the navigation pane, choose Run Command.
  2. In the search box, enter AWS-RunInspecChecks. In the search results, select AWS-RunInspecChecks.
  3. For Source Type, select S3. The Source Info format is:
    
    	{"path":"https://s3.amazonaws.com/BUCKET_NAME/PATH/ Inspec_SSH.rb"}
    	

    Change BUCKET_NAME, PATH, and the InSpec profile name as appropriate, and enter it in the Source Info box, as shown in Figure 3.

  4. Under Targets, select Choose instances manually and select one or more instances, as shown in Figure 3.
     
    Figure 3: Run Command interface

  5. Scroll down to Output options, and select S3, CloudWatch Logs, or both, then select Run.
  6. In the Run Command console for the specific Command invocation you just sent, you will see your instances and their status, as shown in Figure 4. Refresh the page after a few minutes to see if the invocation was successful.
     
    Figure 4: Run Command invocation status

  7. If you see anything other than Success, you can find information about different statuses in Understanding Command Statuses. You can also see Troubleshooting Systems Manager Run Command.
  8. In the AWS Systems Manager Console, navigate to the Compliance menu to see your full Compliance resources summary, as shown in Figure 5. You may also see additional compliance types and statuses. Only failed Custom:InSpec compliance types will make their way to Security Hub.
     
    Figure 5: SSM Compliance dashboard
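
If you'd rather start the scan programmatically than through the console, the following is a minimal boto3 sketch of the same Run Command invocation. The bucket path and instance ID are placeholders, and the sourceType and sourceInfo parameter names follow the AWS-RunInspecChecks document:

import boto3

ssm = boto3.client('ssm')

# Placeholder values - substitute your bucket path and target instance ID
source_info = '{"path":"https://s3.amazonaws.com/BUCKET_NAME/PATH/Inspec_SSH.rb"}'
instance_id = 'i-0123456789abcdef0'

response = ssm.send_command(
    DocumentName='AWS-RunInspecChecks',
    Targets=[{'Key': 'InstanceIds', 'Values': [instance_id]}],
    Parameters={
        'sourceType': ['S3'],
        'sourceInfo': [source_info]
    }
)
print(response['Command']['CommandId'])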

Analyze with Security Hub

After Lambda has sent your non-compliant InSpec results to Security Hub, you can create saved searches by using Security Hub insights to do basic correlation.

  1. Navigate to the Security Hub console and in the navigation pane, choose Findings.
  2. To find InSpec-related findings, select the search bar, and choose Product fields.
  3. For the Key field, enter Provider Name and for the Value field enter AWS Systems Manager Compliance, then choose Apply, as shown in Figure 6.
     
    Figure 6: User defined field – filter

    This will give you your InSpec-related findings, as shown in Figure 7, but there is not enough data to correlate across to other findings from GuardDuty or Amazon Inspector.
     

    Figure 7: Security Hub InSpec findings

  4. Navigate to the Insights menu on the navigation pane and choose Create insight on the top-right.
  5. Select the search bar, choose Resource type, enter AwsEc2Instance, and select Apply, as shown in Figure 8. This will group all findings related to EC2 instances together.
     
    Figure 8: Filter insights for EC2 instances

  6. Select the search bar again, select Group by, scroll down to select Severity label, and select Apply. You can now see your total findings by severity across all EC2 instances, as shown in Figure 9.
     
    Figure 9: Insights grouped by severity

  7. On the top right, select Create insight, enter a name such as EC2 Instances by Severity, and choose Create insight to save your insight.
  8. Navigate back to the main Insights menu and select your new Insight for more detailed graphics to appear on the right-hand side, as shown in Figure 10.
     
    Figure 10: Detailed Insight menu

    You can hover over the values to get the exact count and select them to filter down further on details such as the ARN of the resource, the name of the product, or what account the finding originated from. This view can be used by security analysts who are looking to do fast correlation of findings for a resource or to take action on high severity findings first.
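
You can create the same saved search with the CreateInsight API. A minimal boto3 sketch that mirrors the console steps above:

import boto3

securityhub = boto3.client('securityhub')

# Group all findings for EC2 instances by severity label
response = securityhub.create_insight(
    Name='EC2 Instances by Severity',
    Filters={
        'ResourceType': [
            {'Value': 'AwsEc2Instance', 'Comparison': 'EQUALS'}
        ]
    },
    GroupByAttribute='SeverityLabel'
)
print(response['InsightArn'])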

Conclusion

In this post, I showed you how to run InSpec scans to monitor the compliance of your instances against your policy requirements, as defined by InSpec profiles. InSpec can help identify when certain ports are improperly configured or publicly accessible. By using Systems Manager, you can continuously monitor the compliance against these profiles with State Manager, and run these checks on demand by using Run Command. Systems Manager allows you to rapidly scale across your managed instances, and enriches instance data through the SSM Agent.

You also learned how to use Security Hub to perform correlation within the findings menu, so you can quickly search based on resource type, or the VPC and subnet that the EC2 instances are in. When you have an instance that fails compliance against an InSpec profile, you can look for any threat-related findings from GuardDuty, or vulnerability information from Amazon Inspector or partner integrations.

To avoid incurring additional charges from the resources created in this blog, delete the CloudFormation stack you deployed. For more information on Security Hub pricing, see the AWS Security Hub Pricing page.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

How to use the AWS Security Hub PCI DSS v3.2.1 standard

Post Syndicated from Rima Tanash original https://aws.amazon.com/blogs/security/how-to-use-the-aws-security-hub-pci-dss-v3-2-1-standard/

On February 13, 2020, AWS added partial support for the Payment Card Industry Data Security Standard (PCI DSS) version 3.2.1 requirements to AWS Security Hub.

This update enables you to validate a subset of PCI DSS’s requirements and helps with ongoing PCI DSS security activities by conducting continuous and automated checks. The new Security Hub standard also makes it easier to proactively monitor AWS resources, which is critical for any company involved with the storage, processing, or transmission of cardholder data. There’s also a Security score feature for the Security Hub standard, which can help support preparations for PCI DSS assessment.

Use this post to learn how to:

  • Enable the AWS Security Hub PCI DSS v3.2.1 standard and navigate the results
  • Interpret your security score
  • Remediate failed security checks
  • Understand requirements related to findings

Enable Security Hub’s PCI DSS v3.2.1 standard and navigate results

Note: This section assumes that you have Security Hub enabled in one or more accounts. To learn how to enable Security Hub, follow these instructions. If you don’t have Security Hub enabled, the first time you enable Security Hub you will be given the option to enable PCI DSS v3.2.1.

To enable the PCI DSS v3.2.1 security standard in Security Hub:

  1. Open Security Hub and enable PCI DSS v3.2.1 Security standards.
    (Once enabled, Security Hub will begin evaluating related resources in the current AWS account and region against the AWS controls within the standard. The scope of the assessment is the current AWS account).
  2. When the evaluation completes, select View results.
  3. Now you are on the PCI DSS v3.2.1 page (Figure 1). You can see all 32 currently implemented security controls in this standard, their severities, and their status for this account and region. Use search and filters to narrow down the controls by status, severity, title, or related requirement.

    Figure 1: PCI DSS v3.2.1 standard results page

  4. Select the name of the control to review detailed information about it. This action will take you to the control’s detail page (Figure 2), which gives you related findings.

    Figure 2: Detailed control information

  5. If a specific control is not relevant for you, you can disable the control by selecting Disable and providing a Reason for disabling. (See Disabling Individual Compliance Controls for instructions).
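
You can also enable the standard programmatically. The following is a minimal boto3 sketch, assuming Security Hub is already enabled in the target Region; adjust the Region in both the client and the standard ARN:

import boto3

region = 'us-east-1'
securityhub = boto3.client('securityhub', region_name=region)

# The PCI DSS standard ARN is regional
response = securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[
        {'StandardsArn': f'arn:aws:securityhub:{region}::standards/pci-dss/v/3.2.1'}
    ]
)
print(response['StandardsSubscriptions'][0]['StandardsStatus'])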

How to interpret and improve your “Security score”

After enabling the PCI DSS v3.2.1 standard in Security Hub, you will notice a Security score appear for the standard itself, and for your account overall. These scores range between 0% and 100%.

Figure 3: Security score for PCI DSS standard (left) and overall (right)

The PCI DSS standard’s Security score represents the proportion of passed PCI DSS controls over enabled PCI DSS controls, displayed as a percentage. For example, if 28 of the 32 enabled controls currently pass, the standard’s score is 87.5%. Similarly, the overall Security score represents the proportion of passed controls over enabled controls, including controls from every enabled Security Hub standard, displayed as a percentage.

Your aim should be to pass all enabled security checks to reach a score of 100%. Reaching a 100% security score for the AWS Security Hub PCI DSS standard will help you prepare for a PCI DSS assessment. The PCI DSS Compliance Standard in Security Hub is designed to help you with your ongoing PCI DSS security activities.

An important note: the controls cannot verify whether your systems are compliant with the PCI DSS standard. They can neither replace internal efforts nor guarantee that you will pass a PCI DSS assessment.

Remediating failed security checks

To remediate a failed control, you need to remediate every failed finding for that control.

  1. To prioritize remediation, we recommend filtering by Failed controls and then remediating issues starting with critical severity controls and ending with low severity controls.
  2. Identify a control you want to remediate and visit the control detail page.
  3. Follow the Remediation instructions link, and then follow the step-by-step remediation instructions, applying them for every failed finding.

    Figure 4: The control detail page, with a link to the remediation instructions

How to interpret “Related requirements”

Every control displays Related requirements in the control card and in the control’s detail page. For PCI DSS, the Related requirements show which PCI DSS requirements are related to the Security Hub PCI DSS control. A single AWS control might relate to multiple PCI DSS requirements.

Figure 5: Related requirements in the control detail page

The user guide lists the related PCI DSS requirements and explains how the specific Security Hub PCI DSS control is related to the requirement.

For example, the AWS Config rule cmk-backing-key-rotation-enabled checks that key rotation is enabled for each customer master key (CMK), but it doesn’t check for CMKs that are using key material imported with the AWS Key Management Service (AWS KMS) BYOK mechanism. The related PCI DSS requirement that is mapped to this rule is PCI DSS 3.6.4 – “Cryptographic keys should be changed once they have reached the end of their cryptoperiod.” Although PCI DSS doesn’t specify the time frame for cryptoperiods, this rule is mapped because, if key rotation is enabled, rotation occurs annually by default with a customer-managed CMK.
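
When this control fails for a customer-managed CMK (one that does not use imported key material), the underlying fix is a single API call. A minimal boto3 sketch with a placeholder key ID:

import boto3

kms = boto3.client('kms')

# Placeholder key ID - substitute the non-compliant customer-managed CMK
key_id = '1234abcd-12ab-34cd-56ef-1234567890ab'

# Turn on automatic annual key rotation
kms.enable_key_rotation(KeyId=key_id)

# Verify that rotation is now enabled
status = kms.get_key_rotation_status(KeyId=key_id)
print(status['KeyRotationEnabled'])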

Conclusion

The new AWS Security Hub PCI DSS v3.2.1 standard is fundamental for any company involved with storing, processing, or transmitting cardholder data. In this post, you learned how to enable the standard to begin proactively monitoring your AWS resources against the Security Hub PCI DSS controls. You also learned how to navigate the PCI DSS results within Security Hub. By frequently reviewing failed security checks, prioritizing their remediation, and aiming to achieve a 100% security score for PCI DSS within Security Hub, you’ll be better prepared for a PCI DSS assessment.

Further reading

If you have feedback about this post, submit comments in the Comments section below. If you have questions, please start a new thread on the Security Hub forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Rima Tanash

Rima Tanash is the Lead Security Engineer on the Amazon Security Hub service team. At Amazon Web Services, she applies automated technologies to audit various access and security configurations. She has a research background in data privacy using graph properties and machine learning.

Author

Michael Guzman

Michael is a Security Assurance Consultant with AWS Security Assurance Services. He is a current Qualified Security Assessor (QSA), certified by the PCI SSC. Michael has 20+ years of experience in IT in the financial, professional services, and retail industry. He helps customers on their cloud journey of critical workloads to the AWS cloud in a PCI DSS compliant manner.

Author

Logan Culotta

Logan Culotta is a Security Assurance Consultant on the AWS Security Assurance team. He is also a current Qualified Security Assessor (QSA), certified by the PCI SSC. Logan enjoys finding ways to automate compliance and security in the AWS cloud. In his free time, you can find him spending time with family, road cycling, and cooking.

Author

Avik Mukherjee

Avik is a Security Architect with over a decade of experience in IT governance, security, risk, and compliance. He’s been a Qualified Security Assessor for PCI DSS and Point-to-Point-Encryption and has deep knowledge of security advisory and assessment work in various industries, including retail, financial, and technology. He loves spending time with family and working on his culinary skills.

Automated Response and Remediation with AWS Security Hub

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/automated-response-and-remediation-with-aws-security-hub/

AWS Security Hub is a service that gives you aggregated visibility into your security and compliance status across multiple AWS accounts. In addition to consuming findings from Amazon services and integrated partners, Security Hub gives you the option to create custom actions, which allow a customer to manually invoke a specific response or remediation action on a specific finding. You can send custom actions to Amazon CloudWatch Events as a specific event pattern, allowing you to create a CloudWatch Events rule that listens for these actions and sends them to a target service, such as a Lambda function or Amazon SQS queue.

By creating custom actions mapped to specific finding types and by developing a corresponding Lambda function for each custom action, you can achieve targeted, automated remediation for these findings. This allows a customer to decide whether to invoke a remediation action on a specific finding. A customer can also use these Lambda functions as the target of fully automated remediation actions that do not require any human review.

In this blog post, I’ll show you how to build custom actions, CloudWatch Event rules, and Lambda functions for a dozen targeted actions that can help you remediate CIS AWS Foundations Benchmark-related compliance findings. I’ll also cover use cases for sending findings to an issue management system and for automating security patching. To promote rapid deployment and adoption of this solution, you’ll deploy a majority of the necessary components via AWS CloudFormation.

Note: The full repository for current and future response and remediation templates is hosted on GitHub and includes additional technical guidance for expanding the solution provided in this post.

Solution architecture

Figure 1 – Solution Architecture Overview

Figure 1 shows how a finding travels from an integrated service to a custom action:

  1. Integrated services send their findings to Security Hub.
  2. From the Security Hub console, you’ll choose a custom action for a finding. Each custom action is then emitted as a CloudWatch Event.
  3. The CloudWatch Event rule triggers a Lambda function. This function is mapped to a custom action based on the custom action’s ARN.
  4. Dependent on the particular rule, the Lambda function that is invoked will perform a remediation action on your behalf.

For the purpose of this blog post, I’ll refer to the end-to-end combination of a custom action, a CloudWatch Event rule, a Lambda function, plus any supporting services needed to perform a specific action as a “playbook.” To demonstrate how a remediation solution works end-to-end, I’ll show you how to build your first playbook manually. You’ll deploy the remainder of the playbooks via CloudFormation.

I’ll also show you how to modify four of the playbooks (three of which are marked with an asterisk in the list below). These playbooks use AWS Lambda environment variables to perform their actions; we’ll walk through populating those variables later.

Based on feedback from Security Hub customers, the following controls from the CIS AWS Foundations Benchmark will be supported by this blog post:

  • 1.3 – “Ensure credentials unused for 90 days or greater are disabled”
  • 1.4 – “Ensure access keys are rotated every 90 days or less”
  • 1.5 – “Ensure IAM password policy requires at least one uppercase letter”
  • 1.6 – “Ensure IAM password policy requires at least one lowercase letter”
  • 1.7 – “Ensure IAM password policy requires at least one symbol”
  • 1.8 – “Ensure IAM password policy requires at least one number”
  • 1.9 – “Ensure IAM password policy requires a minimum length of 14 or greater”
  • 1.10 – “Ensure IAM password policy prevents password reuse”
  • 1.11 – “Ensure IAM password policy expires passwords within 90 days or less”
  • 2.2 – “Ensure CloudTrail log file validation is enabled”
  • 2.3 – “Ensure the S3 bucket CloudTrail logs to is not publicly accessible”
  • 2.4 – “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs”*
  • 2.6 – “Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket”*
  • 2.7 – “Ensure CloudTrail logs are encrypted at rest using AWS KMS CMKs”
  • 2.8 – “Ensure rotation for customer created CMKs is enabled”
  • 2.9 – “Ensure VPC flow logging is enabled in all VPCs”*
  • 4.1 – “Ensure no security groups allow ingress from 0.0.0.0/0 to port 22”
  • 4.2 – “Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389”
  • 4.3 – “Ensure the default security group of every VPC restricts all traffic”

You’ll also deploy and modify an additional playbook, “Send findings to JIRA.” You can find the high-level description of each playbook in each custom action creation script, as well as in the CloudFormation resource descriptions.

Note: If you want to send Security Hub findings to a security information and event management (SIEM) tool such as Amazon Elasticsearch Service or a third-party solution, you must change the CloudWatch Events event pattern to match all findings and use different targets, such as Amazon Kinesis Data Streams to Kinesis Data Firehose, to load your SIEM. This process is out of scope for this post.

Prerequisites

Ensure you have Security Hub and AWS Config turned on in your Region. Also, note that the solution in this blog post is meant to support a single account and will not support cross-account remediation as deployed. Refer to the Knowledge Center article How can I configure a Lambda function to assume a role from another AWS account? for basic information on cross-account roles for Lambda.

For the playbook “Apply Security Patches,” your EC2 instances must be managed by Systems Manager. For more information on managed instances, see AWS Systems Manager Managed Instances in the AWS Systems Manager User Guide.

Manually create a remediation playbook

To demonstrate the end-to-end process of building a playbook, I’ll first show you how to create one manually, before you deploy the remaining playbooks via CloudFormation. You’ll build a playbook to remediate Control 2.7 of the AWS Foundations Benchmark, “ensure CloudTrail logs are encrypted at rest using AWS Key Management Service (KMS) Customer Managed Keys (CMK).” Configuring CloudTrail to use KMS encryption (called SSE-KMS) provides additional confidentiality controls on your log data. To access your CloudTrail logs, users must not only have S3 read permissions for the corresponding log bucket, they must also be granted decrypt permissions by the KMS key policy.

Important note: The way this remediation, and all other remediation code is written, you can only target one finding at a time via the Action Menu.

You’ll achieve automated remediation by using a Lambda function to create a new KMS CMK and an alias that identifies the non-compliant CloudTrail trail. You’ll then attach a KMS key policy that only allows the AWS account that owns the trail to decrypt the logs by using the IAM condition for StringEquals: kms:CallerAccount. You only need to run this playbook once per non-compliant CloudTrail trail.

To get started, follow these steps:

  1. Navigate to the Security Hub console, select Settings from the navigation pane, then select the Custom Actions tab.
  2. Choose Create custom action and enter values for Action name, Description, and Custom action ID, then choose Create custom action again, as shown in Figure 2.

    For the purpose of this blog post, I’ll refer to my action name as “CIS 2.7 RR” where the “RR” stands for “Response and remediation.”
     

    Figure 2 – Create custom action

  3. Copy the Amazon Resource Name (ARN) down, as you’ll need it in step 12.
  4. Navigate to the Lambda console and select Create function.
  5. Enter a function name, choose Python 3.7 runtime, and under Permissions select Create a new role with basic Lambda permissions. Then choose Create function.
  6. Scroll down to Execution role and select the hyperlink under Existing role. This will open a new tab in the IAM console.
  7. From the IAM console, select Add inline policy, then select the JSON tab, paste in the below IAM policy JSON, and select Review Policy.
    
    {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "kmssid",
            "Action": [
              "kms:CreateAlias",
              "kms:CreateKey",
              "kms:PutKeyPolicy"
            ],
            "Effect": "Allow",
            "Resource": "*"
          },
          {
            "Sid": "cloudtrailsid",
            "Action": [
              "cloudtrail:UpdateTrail"
            ],
            "Effect": "Allow",
            "Resource": "*"
          }
        ]
      }
    

  8. Give the in-line policy a name and select Create policy.
  9. Back in the Lambda console, increase Timeout to 1 minute and Memory to 256MB. Scroll up to Function code, paste in the below code, and select Save.
    
     # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
     # SPDX-License-Identifier: MIT-0
     #
     # Permission is hereby granted, free of charge, to any person obtaining a copy of this
     # software and associated documentation files (the "Software"), to deal in the Software
     # without restriction, including without limitation the rights to use, copy, modify,
     # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
     # permit persons to whom the Software is furnished to do so.
     #
     # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
     # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
     # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
     # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
     # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
     # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
    
    import boto3
    import json
    import time
    
    def lambda_handler(event, context):
        # parse non-compliant trail from Security Hub finding
        noncompliantTrail = str(event['detail']['findings'][0]['Resources'][0]['Details']['Other']['name'])
        # parse account ID from Security Hub finding, will be needed for Key Policy
        accountID = str(event['detail']['findings'][0]['AwsAccountId'])
        
        # import boto3 clients for KMS and CloudTrail
        kms = boto3.client('kms')
        cloudtrail = boto3.client('cloudtrail')
    
        # create a new KMS CMK to encrypt the non-compliant trail
        try:
            createKey = kms.create_key(
            Description='Generated by Security Hub to remediate CIS 2.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs',
            KeyUsage='ENCRYPT_DECRYPT',
            Origin='AWS_KMS'
            )
            # save key id as a variable
            cloudtrailKey = str(createKey['KeyMetadata']['KeyId'])
            print("Created Key" + " " + cloudtrailKey)
        except Exception as e:
            print(e)
            print("KMS CMK creation failed")
            raise
            
    # wait 2 seconds for key creation to propagate
        time.sleep(2)
    
        # attach an alias for easy identification to the key - must always begin with "alias/"
        try:
            createAlias = kms.create_alias(
            AliasName='alias/' + noncompliantTrail + '-CMK',
            TargetKeyId=cloudtrailKey
            )
            print(createAlias)
        except Exception as e:
            print(e)
            print("Failed to create KMS Alias")
            raise
        
        # wait 1 second
        time.sleep(1)
    
        # policy name for PutKeyPolicy is always "default"
        policyName = 'default'
        # set Key Policy as JSON object
        keyPolicy={
            "Version": "2012-10-17",
            "Id": "Key policy created by CloudTrail",
            "Statement": [
                {
                    "Sid": "Enable IAM User Permissions",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [ "arn:aws:iam::" + accountID + ":root" ]
                    },
                    "Action": "kms:*",
                    "Resource": "*"
                },
                {
                    "Sid": "Allow CloudTrail to encrypt logs",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "cloudtrail.amazonaws.com"
                    },
                    "Action": "kms:GenerateDataKey*",
                    "Resource": "*",
                    "Condition": {
                        "StringLike": {
                            "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:" + accountID + ":trail/" + noncompliantTrail
                        }
                    }
                },
                {
                    "Sid": "Allow CloudTrail to describe key",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "cloudtrail.amazonaws.com"
                    },
                    "Action": "kms:DescribeKey",
                    "Resource": "*"
                },
                {
                    "Sid": "Allow principals in the account to decrypt log files",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "*"
                    },
                    "Action": [
                        "kms:Decrypt",
                        "kms:ReEncryptFrom"
                    ],
                    "Resource": "*",
                    "Condition": {
                        "StringEquals": {
                            "kms:CallerAccount": accountID
                        },
                        "StringLike": {
                            "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:" + accountID + ":trail/" + noncompliantTrail
                        }
                    }
                },
                {
                    "Sid": "Allow alias creation during setup",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "*"
                    },
                    "Action": "kms:CreateAlias",
                    "Resource": "*",
                    "Condition": {
                        "StringEquals": {
                            "kms:CallerAccount": accountID
                        }
                    }
                }
            ]
        }
        # attaches above key policy to key
        try:
            attachKeyPolicy = kms.put_key_policy(
                KeyId=cloudtrailKey,
                Policy=json.dumps(keyPolicy),
                PolicyName=policyName
            )
            print(attachKeyPolicy)
        except Exception as e:
            print(e)
            print("Failed to attach key policy to Key:" + " " + cloudtrailKey)
    
        # update CloudTrail with the new CMK
        try:
            encryptTrail = cloudtrail.update_trail(
            Name=noncompliantTrail,
            KmsKeyId=cloudtrailKey
            )
            print(encryptTrail)
            print("CloudTrail trail" + " " + noncompliantTrail + " " + "has been successfully encrypted!")
        except Exception as e:
            print(e)
            print("Failed to attach KMS CMK to CloudTrail")
    

  10. Navigate to the CloudWatch console, and choose Events, then Create rule.
  11. On the left, next to Event Pattern Preview, select Edit.
  12. Under Event Source, find Build custom event pattern and paste the JSON below into the text box. Replace the ARN under “resources” with the ARN of the custom action you created in step 3.
    
    {
      "source": [
        "aws.securityhub"
      ],
      "detail-type": [
        "Security Hub Findings - Custom Action"
      ],
      "resources": [
        "arn:aws:securityhub:us-west-2:123456789012:action/custom/test-action1"
      ]
    }
    

  13. On the same screen, under Targets on the right, select Add target, choose the Lambda function you created in step 4, then select Configure details.
  14. On the next screen, enter values for Name and Description, then select Create rule.
  15. Back in the Security Hub console, select Compliance standards from the menu on the left, then select View results to see the CIS AWS Benchmarks.
  16. Find rule 2.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs and select the hyperlink to see all findings related to that control.
  17. Select a finding that is FAILED, with a Resource type that reads AwsCloudTrailTrail.
  18. From the top right, select the Actions dropdown menu, then select CIS 2.7 RR (or whatever you named this action in step 2), as shown in Figure 3.

    Selecting this action will trigger the rule you created in step 14, which will invoke the Lambda function you created in step 4.
     

    Figure 3 – Security Hub Custom Actions

  19. Navigate to the CloudTrail console to ensure your CloudTrail trail is updated with SSE-KMS.

This creation flow via the console is universal for all playbook development with Security Hub. You’ll use the Actions menu in the same way to trigger the playbooks you deploy via CloudFormation in the next section of this walkthrough.

Note: To monitor the actions that are taken by the playbooks’ Lambda functions, refer to the functions’ logs. Both success and error messages will appear to help you diagnose as needed.
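
If you script your playbook setup, the custom action from step 2 can also be created with the CreateActionTarget API. A minimal boto3 sketch that mirrors the console values used above:

import boto3

securityhub = boto3.client('securityhub')

# Mirrors the console inputs: action name, description, and custom action ID
response = securityhub.create_action_target(
    Name='CIS 2.7 RR',
    Description='Response and remediation for CIS 2.7',
    Id='CIS27RR'
)

# The returned ARN is the value you reference in the CloudWatch Events rule pattern
print(response['ActionTargetArn'])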

Deploy remediation playbooks via CloudFormation

Download the CloudFormation template from GitHub and create a CloudFormation stack. For more information about how to create a CloudFormation stack, see Getting Started with AWS CloudFormation in the AWS CloudFormation User Guide.

After your stack has finished deploying, navigate to the Resources tab and select the hyperlink for each resource to be taken to their respective consoles. The logical ID for each resource will be prepended by the action the resource corresponds to. For example, in figure 4, CIS13RRCWEPermissions denotes CloudWatch Event permissions for AWS CIS Benchmark Control 1.3. This logical ID structure is used throughout the template.
 

Figure 4 – CloudFormation Resources

Next, I’ll show you how to modify the Lambda functions associated with the following playbooks: “Send findings to JIRA,” CIS 2.4, CIS 2.6, and CIS 2.9.

Playbook modification: send findings to JIRA

Note: If you don’t currently have a JIRA Software Data Center deployment set up in your account, you can deploy one with a free evaluation period by following this Quick Start. If you are not interested in using JIRA or you use a different issue management tool, you can skip this section.

This playbook works by using the associated Lambda function to execute a Systems Manager automation document called AWS-CreateJiraIssue.

This Systems Manager document will in turn deploy a CloudFormation stack and create a custom Lambda function to map environmental variables (listed below) into JIRA via your playbook’s Lambda function.

Once finished, the CloudFormation stack will self-delete and the automation will be complete. To see this flow, refer to Figure 5, below:
 

Figure 5 – Send to JIRA Architecture Diagram

  1. A finding is selected and the custom action “Create JIRA Issue” is invoked, which triggers a CloudWatch Event.
  2. The CloudWatch Event rule will trigger a Lambda function.
  3. Lambda will invoke the AWS-CreateJiraIssue document via Systems Manager Automation.
  4. Systems Manager Automation will pass your Lambda environmental variables and Security Hub finding as document parameters.
  5. The document will create a CloudFormation Stack that contains another custom Lambda to invoke JIRA APIs for creating issues.
  6. The document’s Lambda function uses your parameters to create an issue in JIRA.
  7. JIRA will send back a response to note failure or success, and the CloudFormation stack will self-delete.
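
Steps 3 and 4 of this flow amount to a single Systems Manager Automation call from the playbook's Lambda function. The following is a rough boto3 sketch; the parameter names reflect the AWS-CreateJiraIssue document as of this writing (verify them against the document's parameters in the Systems Manager console before relying on them), and the summary and description values are placeholders:

import os
import boto3

ssm = boto3.client('ssm')

# The first four values come from the Lambda environment variables described below
response = ssm.start_automation_execution(
    DocumentName='AWS-CreateJiraIssue',
    Parameters={
        'JiraURL': [os.environ['JIRA_URL']],
        'JiraUsername': [os.environ['JIRA_SECURITY_ISSUE_USER']],
        'SSMParameterName': [os.environ['JIRA_API_PARAMETER']],
        'ProjectKey': [os.environ['JIRA_PROJECT']],
        'IssueSummary': ['Security Hub finding'],  # placeholder
        'IssueDescription': ['Details parsed from the Security Hub finding']  # placeholder
    }
)
print(response['AutomationExecutionId'])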

To get started, navigate to the Lambda console and find the function named SendToJIRA, then scroll down to Environmental variables. You should see four variables, each with the text “placeholder” as its value. The following steps walk you through how to populate them:

  1. Fill out the JIRA_API_PARAMETER field:
    1. Refer to Atlassian’s instructions to generate a JIRA API token
    2. Follow the Systems Manager Parameter Store walkthrough to create a parameter.
      1. In Step 6 of the linked instructions, choose Secure String and choose the default KMS key.
      2. In Step 7 of the linked instructions, paste in your newly generated JIRA API token.
    3. Copy the name of the parameter and paste it as the value of the JIRA_API_PARAMETER field.
  2. Fill out the JIRA_PROJECT field by pasting in your JIRA Software project’s Project Key.
  3. Fill out the JIRA_SECURITY_ISSUE_USER field:
    1. This value maps to a user in JIRA Software; refer to Atlassian’s instructions for how to create a user. Use the API key you generated in step 1.A as this user’s password.
    2. Paste the user name into the JIRA_SECURITY_ISSUE_USER field.
  4. Fill out the JIRA_URL field:
    1. From JIRA Software navigate to the System sub-menu of JIRA Administration and look for the Base URL field, underneath Settings – General Settings (see figure 6).
    2. Copy the Base URL value and paste it into the JIRA_URL field.
       
      Figure 6 – Base URL in JIRA

  5. When finished, select Save at the top of the Lambda console, then navigate to the Security Hub console and select Findings from the left-hand menu.
  6. Select any finding, and then, from the Actions dropdown in the top right, select Create JIRA Issue.
  7. Navigate to your JIRA instance and wait a few minutes for the new issue to appear, as shown in Figure 7.
     
    Figure 7 – Security Hub Finding in JIRA

    Note: If your issue hasn’t populated in JIRA after a few minutes, refer to the Systems Manager Automation console or CloudFormation console to find the stack that was created by Systems Manager and refer to the failure messages to troubleshoot further.

CIS 2.4 response & remediation playbook modification

CIS Control 2.4 is “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs.” The intent of this recommendation is to ensure that API activity recorded by CloudTrail is available to query in near real-time for the purpose of troubleshooting or security incident investigation with CloudWatch.

To send your CloudTrail logs to CloudWatch, the Lambda function for this playbook will create a brand new CloudWatch Logs group that has the name of the non-compliant CloudTrail trail in it for easy identification and then update your non-compliant CloudTrail trail to send its logs to the newly created log group.

To accomplish this, CloudTrail needs an IAM role and permissions to be allowed to publish logs to CloudWatch. To avoid creating multiple new IAM roles and policies via Lambda, you’ll populate the ARN of this IAM role in the Lambda environmental variables for this playbook.

Note: If you don’t currently have an IAM role for CloudTrail, follow these instructions from the CloudTrail user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-4_RR, then scroll down to Environmental variables.
  2. Find the variable CLOUDTRAIL_CW_LOGGING_ROLE_ARN with text that reads “placeholder” as the value.
  3. Paste the ARN of the IAM role into the field for this value, then select Save.
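
Under the hood, this playbook's remediation comes down to two API calls: create a log group named for the trail, then update the trail to deliver to it. A minimal boto3 sketch, assuming the trail name has already been parsed from the finding and the role ARN comes from the environment variable you just set:

import os
import boto3

logs = boto3.client('logs')
cloudtrail = boto3.client('cloudtrail')

trail_name = 'my-noncompliant-trail'  # placeholder - parsed from the finding in the playbook
log_group = trail_name + '-cloudtrail-logs'

# Create a log group named after the trail for easy identification
logs.create_log_group(logGroupName=log_group)
group_arn = logs.describe_log_groups(
    logGroupNamePrefix=log_group
)['logGroups'][0]['arn']

# Point the trail at the new log group, using the CloudTrail-to-CloudWatch IAM role
cloudtrail.update_trail(
    Name=trail_name,
    CloudWatchLogsLogGroupArn=group_arn,
    CloudWatchLogsRoleArn=os.environ['CLOUDTRAIL_CW_LOGGING_ROLE_ARN']
)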

CIS 2.6 response & remediation playbook modification

CIS Control 2.6 is “Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket.” An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. AWS recommends that you enable bucket access logging on the CloudTrail S3 bucket. By enabling S3 bucket logging on target S3 buckets, you can capture all the events that might affect objects in the bucket. Configuring logs to be placed in a separate bucket enables centralized collection of access log information, which can be useful in security and incident response workflows.

Note: Security Hub supports CIS AWS Foundations controls only on resources in the same Region and owned by the same account as the one in which Security Hub is enabled and being used. For example, if you’re using Security Hub in the us-east-2 Region, and you’re storing CloudTrail logs in a bucket in the us-west-2 Region, Security Hub cannot find the bucket in the us-west-2 Region. The control returns a warning that the resource cannot be located. Similarly, if you’re aggregating logs from multiple accounts into a single bucket, the CIS control fails for all accounts except the account that owns the bucket.

To ensure the S3 bucket that contains your CloudTrail logs has access logging enabled, the Lambda function for this playbook invokes the Systems Manager document AWS-ConfigureS3BucketLogging. This document will enable access logging for that bucket. To avoid statically populating your S3 access logging bucket in the Lambda function’s code, you’ll pass that value in via an environmental variable.

Note: If you do not currently have an S3 bucket configured to receive access logs, follow the directions from the S3 user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-6_RR, then scroll down to Environmental variables.
  2. Find the variable ACCESS_LOGGING_BUCKET with text that reads ‘placeholder’ as the value.
  3. Paste the name of the S3 bucket that will receive the access logs into the field for this value and select Save.
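
If you wanted to perform the same change directly, rather than through the AWS-ConfigureS3BucketLogging document, it is a single PutBucketLogging request. A minimal boto3 sketch with placeholder bucket names:

import boto3

s3 = boto3.client('s3')

# Placeholders - the trail bucket comes from the finding, the target from the env variable
trail_bucket = 'my-cloudtrail-bucket'
access_log_bucket = 'my-access-logging-bucket'

# Enable server access logging on the CloudTrail bucket
s3.put_bucket_logging(
    Bucket=trail_bucket,
    BucketLoggingStatus={
        'LoggingEnabled': {
            'TargetBucket': access_log_bucket,
            'TargetPrefix': trail_bucket + '/'
        }
    }
)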

CIS 2.9 response & remediation playbook modification

CIS Control 2.9 is “Ensure VPC flow logging is enabled in all VPCs.” The Amazon Virtual Private Cloud (VPC) flow logs feature enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you’ve created a flow log, you can view and retrieve its data in CloudWatch Logs. AWS recommends that you enable flow logging for packet rejects for VPCs. Flow logs provide visibility into network traffic that traverses the VPC and can detect anomalous traffic or provide insight into your security workflow.

To enable VPC flow logging for rejected packets, the Lambda function for this playbook will create a new CloudWatch Logs group. For easy identification, the name of the group will include the non-compliant VPC name. The Lambda function will programmatically update your VPC to enable flow logs to be sent to the newly created log group.

Similar to CloudTrail logging, VPC flow logs need an IAM role and permissions to be allowed to publish logs to CloudWatch. To avoid creating multiple new IAM roles and policies via Lambda, you’ll populate the ARN of this IAM role in the Lambda environmental variables for this playbook.

Note: If you don’t currently have an IAM role that VPC flow logs can use to deliver logs to CloudWatch, follow the directions from the VPC user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-9_RR, then scroll down to Environmental variables.
  2. Find the variable flowLogRoleARN with text that reads ‘placeholder’ as the value.
  3. Paste the ARN of the VPC flow logs IAM role into the field for this value, and select Save.
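
As with the CIS 2.4 playbook, the underlying remediation is a short sequence of API calls. A minimal boto3 sketch, with a placeholder VPC ID:

import os
import boto3

ec2 = boto3.client('ec2')
logs = boto3.client('logs')

vpc_id = 'vpc-0123456789abcdef0'  # placeholder - parsed from the finding in the playbook
log_group = vpc_id + '-flowlogs'

# Create a log group named after the VPC, then send REJECT traffic flow logs to it
logs.create_log_group(logGroupName=log_group)
ec2.create_flow_logs(
    ResourceIds=[vpc_id],
    ResourceType='VPC',
    TrafficType='REJECT',
    LogGroupName=log_group,
    DeliverLogsPermissionArn=os.environ['flowLogRoleARN']
)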

Conclusion

In this blog post, I showed you how to create, deploy, and execute response and remediation playbooks for Security Hub. By combining custom actions, CloudWatch Event rules, and Lambda functions, you can quickly remediate non-compliant resources. I also showed you how to take pre-defined actions such as sending findings to JIRA, and how to deploy an additional seven playbooks via CloudFormation.

You can create playbooks that take other actions, such as updating Network Access Control Lists to help block malicious traffic from a TOR exit node via the UnauthorizedAccess:EC2/TorIPCaller finding from GuardDuty. Using playbooks with Security Hub is one more way to build toward the security and compliance of your AWS resources.

To avoid incurring additional charges from AWS resources, delete the CloudFormation stack you deployed as well as the resources you manually created for the CIS 2.7 response and remediation playbook.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

How to import AWS Config rules evaluations as findings in Security Hub

Post Syndicated from Omar Kahil original https://aws.amazon.com/blogs/security/how-to-import-aws-config-rules-evaluations-findings-security-hub/

In June at re:Inforce 2019, AWS announced the general availability of AWS Security Hub, a security service that enables customers to centrally view and manage compliance checks and security findings across their AWS accounts. AWS Security Hub imports security findings from Amazon GuardDuty, Amazon Inspector, Amazon Macie, and over 30 AWS partner security solutions.

By default, when Security Hub is enabled, it deploys the CIS AWS Foundations standard in your account. CIS AWS Foundations are a set of security configuration best practices for hardening AWS accounts. In order for Security Hub to successfully run compliance checks against the rules included in the CIS AWS Foundations standard, AWS Config must be enabled in the same account where you enabled AWS Security Hub.

If you had previously created your own AWS Config rules before enabling AWS Security Hub, you will need to import them into AWS Security Hub. This will enable your security teams to have all compliance findings across accounts show up in Security Hub, along with the security findings from services like Amazon GuardDuty, Amazon Inspector, and Amazon Macie. In this post, we’ll show you how to set this up.

Solution Overview

You’ll use an AWS CloudFormation template to deploy the following components, so that you can import the findings of AWS Config rules into AWS Security Hub. Below are the main resources created by this template:

  • An AWS Lambda function with the necessary IAM role and IAM policies:

    Config-SecHub-Lambda

    This Lambda function will execute immediately after the CloudFormation stack creation. It will parse and import AWS Config Rule findings into Security Hub.

  • A CloudWatch Event rule with the necessary permission to invoke the AWS Lambda function:

    Config-Sechub-CW-Rule

    This CloudWatch Event rule will match the compliance change event from AWS Config and will route it to the Lambda function for processing.

Here’s the high level setup of the AWS services that will be created from the CloudFormation stack deployment:

Figure 1: Architecture diagram

  1. A new AWS Config rule is deployed in the account after you enable AWS Security Hub.
  2. The AWS Config rule reacts to resource configuration and compliance changes and sends these change items to Amazon CloudWatch.
  3. When AWS CloudWatch receives the compliance change, a CloudWatch event rule triggers the AWS Lambda function.
  4. The Lambda function parses the event received from CloudWatch and imports it into Security Hub as a finding.
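
The glue between steps 3 and 4 is the event pattern that Config-Sechub-CW-Rule matches on. The following is a minimal boto3 sketch of an equivalent rule and target; the Lambda function ARN is a placeholder for the one the template creates:

import json
import boto3

events = boto3.client('events')

# Match AWS Config compliance-change events
events.put_rule(
    Name='Config-Sechub-CW-Rule',
    EventPattern=json.dumps({
        'source': ['aws.config'],
        'detail-type': ['Config Rules Compliance Change']
    })
)

# Route matched events to the Lambda function (placeholder ARN)
events.put_targets(
    Rule='Config-Sechub-CW-Rule',
    Targets=[{
        'Id': '1',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:Config-SecHub-Lambda'
    }]
)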

Deploy the solution

  1. Log in to the AWS Management Console and select the us-east-1 Region (N. Virginia) for this sample deployment.
  2. Make sure that AWS Config and AWS Security Hub are enabled in the region where you want to deploy the template.
  3. Set up the sample deployment by selecting the Launch Stack button below.

The solution must be launched in the region where AWS Config and AWS Security Hub are enabled, and it can be deployed in any region that supports the resources that will be created.

Select this image to open a link that starts building the CloudFormation stack

Follow these steps to execute the stack:

  1. Leave the default location for the template and select Next.
  2. On the Specify stack details page, add your preferred stack name.
  3. On the Configure Stack options page, select the Next button.
  4. On the Review page, select the check box, then select the Create Stack button.
  5. Stack creation will take between 5 and 10 minutes. After the stack is created successfully, select the Resources tab to see the deployed resources.

    Figure 2: Select the Resources tab to see deployed resources

At this point, you’ve successfully created the following resources:

  • A Lambda function that will integrate AWS Config and Security Hub.
  • A Lambda execution role used by the Lambda function to call other services.
  • A CloudWatch Events rule that will trigger the Lambda function based on AWS Config compliance change events.
  • A service permission to allow Amazon CloudWatch to invoke the Lambda function.

Verify the solution

To be certain that everything is set up properly, look at the Lambda code that integrates AWS Config with AWS Security Hub by following the steps below from the console:

  1. Navigate to the AWS Lambda console.
  2. From the list of Lambda functions, open the function with the name Config-SecHub-Lambda to check that the function has been deployed successfully.

    Figure 3: Config-SecHub-Lambda

You can also verify that the CloudWatch Event rule has deployed successfully:

  1. Go to the Amazon CloudWatch service console.
  2. Under Events, select Rules. You should see a rule with the name Config-Sechub-CW-Rule.
     
    Figure 4: Config-Sechub-CW-Rule

Test the solution

  1. Go to the AWS Config console.
  2. On the left, select Rules, then Add rule.
  3. In the search filter, enter ec2 as an input, then select the desired-instance-type rule to test.

    Figure 5: AWS Config “add rule” page

  4. In the rule parameters, enter t2.micro as the value. Any other instance type launched in this account will violate this rule and be noncompliant. Then, select Save.
  5. Navigate to Amazon EC2 console, and under Create Instance, select Launch.
  6. Select any Amazon Linux AMI.

    Figure 6: Select an Amazon Linux AMI

  7. To test the rule, select any instance type except t2.micro. In our example, we’ve selected t2.nano.

    Figure 7: Choose your instance type

  8. Because you’re just testing the rule, select Review and Launch. Then choose Proceed without a key pair, check the “I acknowledge…” box, and select Launch Instances.
     
    Figure 8: Select “Launch Instances”

    You’ll need to wait a few minutes until the instance is launched. You can check the EC2 console to view the instance status.

  9. Once the instance has launched, go back to the AWS Config console, and from the left-hand menu, select Rules again.
  10. On the Rules page, select the Noncompliant filter from the drop-down menu to view the rule you’ve created to check instance types.

    Figure 9: Select the “Noncompliant” filter on the AWS Config Rules page

  11. Under Rule name, look for your desired-instance-type rule, followed by its number of noncompliant resources. In this case, there should be only one noncompliant resource: the EC2 instance you launched above. You can select the rule name to view more details about the noncompliant resource.

    Figure 10: Look for the desired-instance-type rule

  12. Go to the AWS Security Hub console. On the left, select Findings.
  13. Select the search filter, and from the drop-down menu select Title as the filter, then desired-instance-type as the value. Then select Apply.

    Figure 11: Filter by title

  14. You should now see your AWS Config rule under Findings in AWS Security Hub.

    Figure 12: Look for your Config rule under “Findings”

  15. After you’ve verified that the Lambda function is working, terminate the EC2 instance and delete the Config rule.

What’s happening in the background?

When you deploy the CloudFormation stack, a Lambda function is created along with other resources. This function integrates AWS Config with AWS Security Hub. Whenever a Config rule's compliance status changes, the CloudWatch Events rule triggers the Lambda function with the compliance change event. The function parses the event to extract the required data, maps and converts that data into the AWS Security Finding Format (ASFF), and then calls the BatchImportFindings API operation to import the findings into AWS Security Hub. You can also find the CloudFormation template and the code files in the linked GitHub repository.
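As a rough illustration of that flow (not the repository's exact code), a stripped-down version of the function might look like the following sketch. The severity value and finding type are illustrative assumptions.

import boto3
from datetime import datetime

securityhub = boto3.client('securityhub')

def lambda_handler(event, context):
    # Pull the fields we need from the Config compliance change event.
    detail = event['detail']
    account_id = detail['awsAccountId']
    region = event['region']
    rule_name = detail['configRuleName']
    resource_id = detail['resourceId']
    compliance = detail['newEvaluationResult']['complianceType']  # e.g. COMPLIANT or NON_COMPLIANT
    now = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')

    # Map the event into the AWS Security Finding Format (ASFF).
    finding = {
        'SchemaVersion': '2018-10-08',
        'Id': rule_name + '/' + resource_id,
        'ProductArn': 'arn:aws:securityhub:{0}:{1}:product/{1}/default'.format(region, account_id),
        'GeneratorId': rule_name,
        'AwsAccountId': account_id,
        'Types': ['Software and Configuration Checks/AWS Config Analysis'],
        'CreatedAt': now,
        'UpdatedAt': now,
        'Severity': {'Normalized': 40},  # 0-100 scale; illustrative value
        'Title': rule_name,
        'Description': '{0} evaluated {1} as {2}'.format(rule_name, resource_id, compliance),
        'Resources': [{'Type': 'Other', 'Id': resource_id}],
    }

    # Import the finding into Security Hub.
    securityhub.batch_import_findings(Findings=[finding])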

Conclusion

In this blog post, we’ve shown you how to import an AWS Config rule into AWS Security Hub, so that your AWS Config rules show up alongside your other security findings. This allows you to more easily use AWS Config rules in your journey toward continuous compliance across your estate of AWS accounts. If you need help in planning, building, or optimizing your infrastructure, please reach out to AWS Professional Services. If you have questions about this post, let us know in the Comments section below, or start a new thread on the AWS Security Hub forum.

Kahil author photo

Omar Kahil

Omar is a Professional Services consultant who helps customers adopt DevOps culture and best practices. He also works to simplify the adoption of AWS services by automating and implementing complex solutions.

Shahzad author photo

Muhammad Shahzad

Muhammad is a DevOps consultant in ProServe who enables customers to implement DevOps by explaining principles and integrating best practices in their organizations.

Use AWS Fargate and Prowler to send security configuration findings about AWS services to Security Hub

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/use-aws-fargate-prowler-send-security-configuration-findings-about-aws-services-security-hub/

In this blog post, I’ll show you how to integrate Prowler, an open-source security tool, with AWS Security Hub. Prowler provides dozens of security configuration checks related to services such as Amazon Redshift, Amazon ElastiCache, Amazon API Gateway, and Amazon CloudFront. Integrating Prowler with Security Hub will provide posture information about resources not currently covered by existing Security Hub integrations or compliance standards. You can use Prowler checks to supplement the existing CIS AWS Foundations compliance standard Security Hub already provides, as well as other compliance-related findings you may be ingesting from partner solutions.

In this post, I’ll show you how to containerize Prowler using Docker and host it on the serverless container service AWS Fargate. By running Prowler on Fargate, you no longer have to provision, configure, or scale infrastructure, and it will only run when needed. Containers provide a standard way to package your application’s code, configurations, and dependencies into a single object that can run anywhere. Serverless applications automatically run and scale in response to events you define, rather than requiring you to provision, scale, and manage servers.

Solution overview

The following diagram shows the flow of events in the solution I describe in this blog post.
 

Figure 1: Prowler on Fargate Architecture

 

The integration works as follows:

  1. A time-based CloudWatch Event starts the Fargate task on a schedule you define.
  2. Fargate pulls a Prowler Docker image from Amazon Elastic Container Registry (ECR).
  3. Prowler scans your AWS infrastructure and writes the scan results to a CSV file.
  4. Python scripts in the Prowler container convert the CSV to JSON and load an Amazon DynamoDB table with formatted Prowler findings.
  5. A DynamoDB stream invokes an AWS Lambda function.
  6. The Lambda function maps Prowler findings into the AWS Security Finding Format (ASFF) before importing them to Security Hub.
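To make steps 5 and 6 concrete, here is a simplified sketch of what the stream-triggered Lambda function could look like. It is not the repository's exact code; the severity mapping, finding type, and Id format are illustrative.

import os
import boto3
from datetime import datetime

securityhub = boto3.client('securityhub')

def lambda_handler(event, context):
    region = os.environ['AWS_REGION']                        # set by the Lambda runtime
    account_id = context.invoked_function_arn.split(':')[4]  # account running the function
    now = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')

    findings = []
    for record in event['Records']:
        if record['eventName'] != 'INSERT':
            continue  # only map items newly loaded by the Fargate task
        item = record['dynamodb']['NewImage']  # DynamoDB-typed attribute map
        title_id = item['TITLE_ID']['S']
        title_text = item['TITLE_TEXT']['S']
        result = item['RESULT']['S']
        findings.append({
            'SchemaVersion': '2018-10-08',
            'Id': 'prowler/' + title_id,
            'ProductArn': 'arn:aws:securityhub:{0}:{1}:product/{1}/default'.format(region, account_id),
            'GeneratorId': 'prowler-' + title_id,
            'AwsAccountId': account_id,
            'Types': ['Software and Configuration Checks'],
            'CreatedAt': now,
            'UpdatedAt': now,
            'Severity': {'Normalized': 70 if result == 'FAIL' else 0},  # illustrative mapping
            'Title': title_text,
            'Description': title_text,
            'Resources': [{'Type': 'AwsAccount', 'Id': 'AWS::::Account:' + account_id}],
        })

    if findings:
        securityhub.batch_import_findings(Findings=findings)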

Except for an ECR repository, you’ll deploy all of the above via AWS CloudFormation. You’ll also need the following prerequisites to supply as parameters for the CloudFormation template.

Prerequisites

  • A VPC with at least two subnets that have access to the Internet, plus a security group that allows access on port 443 (HTTPS).
  • An ECS task role with the permissions that Prowler needs to complete its scans. You can find more information about these permissions on the official Prowler GitHub page.
  • An ECS task execution IAM role to allow Fargate to publish logs to CloudWatch and to download your Prowler image from Amazon ECR.

Step 1: Create an Amazon ECR repository

In this step, you’ll create an ECR repository. This is where you’ll upload your Docker image for Step 2.

  1. Navigate to the Amazon ECR Console and select Create repository.
  2. Enter a name for your repository (I’ve named my example securityhub-prowler, as shown in figure 2), then choose Mutable as your image tag mutability setting, and select Create repository.
     
    Figure 2: ECR Repository Creation

Keep the browser tab in which you created the repository open so that you can easily reference the Docker commands you’ll need in the next step.

Step 2: Build and push the Docker image

In this step, you’ll create a Docker image that contains scripts to map Prowler findings into DynamoDB. Before you begin, ensure your workstation has the necessary permissions to push images to ECR.

  1. Create a Dockerfile via your favorite text editor, and name it Dockerfile.
    
    FROM python:latest
    
    # Declare env vars
    ENV MY_DYANMODB_TABLE=MY_DYANMODB_TABLE
    ENV AWS_REGION=AWS_REGION
    
    # Install Dependencies
    RUN \
        apt update && \
        apt upgrade -y && \
        pip install awscli && \
        apt install -y python3-pip
    
    # Place scripts
    ADD converter.py /root
    ADD loader.py /root
    ADD script.sh /root
    
    # Installs prowler, moves scripts into prowler directory
    RUN \
        git clone https://github.com/toniblyx/prowler && \
        mv root/converter.py /prowler && \
        mv root/loader.py /prowler && \
        mv root/script.sh /prowler
    
    # Runs prowler, ETLs output with converter, and loads DynamoDB with loader
    WORKDIR /prowler
    RUN pip3 install boto3
    CMD bash script.sh
    

  2. Create a new file called script.sh and paste in the code below. This script will call the remaining scripts, which you’re about to create, in a specific order.

    Note: Change the AWS Region in the Prowler command on line 3 to the region in which you’ve enabled Security Hub.

    
    #!/bin/bash
    echo "Running Prowler Scans"
    ./prowler -b -n -f us-east-1 -g extras -M csv > prowler_report.csv
    echo "Converting Prowler Report from CSV to JSON"
    python converter.py
    echo "Loading JSON data into DynamoDB"
    python loader.py
    

  3. Create a new file called converter.py and paste in the code below. This Python script will convert the Prowler CSV report into JSON, and both versions will be written to the local storage of the Prowler container.
    
    import csv
    import json
    
    # Set variables for within container
    CSV_PATH = 'prowler_report.csv'
    JSON_PATH = 'prowler_report.json'
    
    # Reads prowler CSV output
    csv_file = csv.DictReader(open(CSV_PATH, 'r'))
    
    # Create empty JSON list, read out rows from CSV into it
    json_list = []
    for row in csv_file:
        json_list.append(row)
    
    # Writes row into JSON file, writes out to docker from .dumps
    open(JSON_PATH, 'w').write(json.dumps(json_list))
    
    # open newly converted prowler output
    with open('prowler_report.json') as f:
        data = json.load(f)
    
    # remove data not needed for Security Hub BatchImportFindings    
    for element in data: 
        del element['PROFILE']
        del element['SCORED']
        del element['LEVEL']
        del element['ACCOUNT_NUM']
        del element['REGION']
    
    # writes out to a new file, prettified
    with open('format_prowler_report.json', 'w') as f:
        json.dump(data, f, indent=2)
    

  4. Create your last file, called loader.py, and paste in the code below. This Python script will read values from the JSON file and send them to DynamoDB.
    
    from __future__ import print_function # Python 2/3 compatibility
    import boto3
    import json
    import decimal
    import os
    
    awsRegion = os.environ['AWS_REGION']
    prowlerDynamoDBTable = os.environ['MY_DYANMODB_TABLE']
    
    dynamodb = boto3.resource('dynamodb', region_name=awsRegion)
    
    table = dynamodb.Table(prowlerDynamoDBTable)
    
    # CHANGE FILE AS NEEDED
    with open('format_prowler_report.json') as json_file:
        findings = json.load(json_file, parse_float = decimal.Decimal)
        for finding in findings:
            TITLE_ID = finding['TITLE_ID']
            TITLE_TEXT = finding['TITLE_TEXT']
            RESULT = finding['RESULT']
            NOTES = finding['NOTES']
    
            print("Adding finding:", TITLE_ID, TITLE_TEXT)
    
            table.put_item(
               Item={
                   'TITLE_ID': TITLE_ID,
                   'TITLE_TEXT': TITLE_TEXT,
                   'RESULT': RESULT,
                   'NOTES': NOTES,
                }
            )
    

  5. From the ECR console, within your repository, select View push commands to get operating system-specific instructions and additional resources to build, tag, and push your image to ECR. See Figure 3 for an example.
     
    Figure 3: ECR Push Commands

    Note: If you’ve built Docker images previously within your workstation, pass the --no-cache flag with your docker build command.

  6. After you’ve built and pushed your Image, note the URI within the ECR console (such as 12345678910.dkr.ecr.us-east-1.amazonaws.com/my-repo), as you’ll need this for a CloudFormation parameter in step 3.

Step 3: Deploy CloudFormation template

Download the CloudFormation template from GitHub and create a CloudFormation stack. For more information about how to create a CloudFormation stack, see Getting Started with AWS CloudFormation in the CloudFormation User Guide.

You’ll need the values you noted in Step 2 and during the “Solution overview” prerequisites. The description of each parameter is provided on the Parameters page of the CloudFormation deployment (see Figure 4).
 

Figure 4: CloudFormation Parameters

After the CloudFormation stack finishes deploying, click the Resources tab to find your Task Definition (called ProwlerECSTaskDefinition). You’ll need this during Step 4.
 

Figure 5: CloudFormation Resources

Step 4: Manually run ECS task

In this step, you’ll run your ECS Task manually to verify the integration works. (Once you’ve tested it, this step will be automatic based on CloudWatch events.)

  1. Navigate to the Amazon ECS console and from the navigation pane select Task Definitions.
  2. As shown in Figure 6, select the check box for the task definition you deployed via CloudFormation, then select the Actions dropdown menu and choose Run Task.
     
    Figure 6: ECS Run Task

  3. Configure the following settings (shown in Figure 7), then select Run Task:
    1. Launch Type: FARGATE
    2. Platform Version: Latest
    3. Cluster: Select the cluster deployed by CloudFormation
    4. Number of tasks: 1
    5. Cluster VPC: Enter the VPC of the subnets you provided as CloudFormation parameters
    6. Subnets: Select 1 or more subnets in the VPC
    7. Security groups: Enter the same security group you provided as a CloudFormation parameter
    8. Auto-assign public IP: ENABLED
       
      Figure 7: ECS Task Settings
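    If you prefer to start the task programmatically instead of through the console, the equivalent boto3 call might look like the following sketch; the cluster name, task definition, subnet, and security group IDs are placeholders for the values your CloudFormation stack created.

    import boto3

    ecs = boto3.client('ecs')

    # Placeholders: substitute the values from your CloudFormation stack.
    response = ecs.run_task(
        cluster='ProwlerCluster',                   # cluster deployed by CloudFormation
        taskDefinition='ProwlerECSTaskDefinition',  # task definition noted in Step 3
        launchType='FARGATE',
        platformVersion='LATEST',
        count=1,
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': ['subnet-0123456789abcdef0'],
                'securityGroups': ['sg-0123456789abcdef0'],
                'assignPublicIp': 'ENABLED',
            }
        },
    )
    print(response['tasks'][0]['taskArn'])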

  4. Depending on the size of your account and the resources within it, your task can take up to an hour to complete. To follow the progress, select your task and view the Logs tab within the Task view (Figure 8). The stdout from Prowler will appear in the logs.

    Note: Once the task has completed it will automatically delete itself. You do not need to take additional actions for this to happen during this or subsequent runs.

     

    Figure 8: ECS Task Logs

  5. Under the Details tab, monitor the status. When the status reads Stopped, navigate to the DynamoDB console.
  6. Select your table, then select the Items tab. Your findings will be indexed under the primary key NOTES, as shown in Figure 9. From here, the Lambda function will trigger each time new items are written into the table from Fargate and will load them into Security Hub.
     
    Figure 9: DynamoDB Items

  7. Finally, navigate to the Security Hub console, select the Findings menu, and wait for findings from Prowler to arrive in the dashboard as shown in figure 10.
     
    Figure 10: Prowler Findings in Security Hub

If you run into errors when running your Fargate task, refer to the Amazon ECS Troubleshooting guide. Log errors commonly come from missing permissions or disabled Regions; refer back to the Prowler GitHub page for troubleshooting information.

Conclusion

In this post, I showed you how to containerize Prowler, run it manually, create a schedule with CloudWatch Events, and use custom Python scripts along with DynamoDB streams and Lambda functions to load Prowler findings into Security Hub. By using Security Hub, you can centralize and aggregate security configuration information from Prowler alongside findings from AWS and partner services.

From Security Hub, you can use custom actions to send one or a group of findings from Prowler to downstream services such as ticketing systems or to take custom remediation actions. You can also use Security Hub custom insights to create saved searches from your Prowler findings. Lastly, you can use Security Hub in a master-member format to aggregate findings across multiple accounts for centralized reporting.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Security Hub forum.

Want more AWS Security news? Follow us on Twitter.

Rau author photo

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

AWS Security Profiles: Dan Plastina, VP of Security Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-dan-plastina-vp-security-services/

In the weeks leading up to re:Invent 2019, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do as the VP of Security Services?

I’ve been at Amazon for just over two years. I lead the External Security Services organization—our team builds AWS services that help customers improve the security of their workloads. Our services include Amazon Macie, Amazon GuardDuty, Amazon Inspector, and AWS Security Hub.

What drew me to Amazon is the culture of ownership and accountability. I wake up every day and get to help AWS customers do things that transform their world—and I get to do that work with a whole bunch of people who feel the same way and take the same level of ownership. It’s very energizing.

What’s your favorite part of your job?

That’s hard! I love most aspects of my job. Forced to pick one, I’d have to say my favorite part is helping customers. Our Shared Responsibility Model says that AWS is accountable for the security of AWS, and customers are responsible for the resources and workloads they manage in AWS. My job allows me to sit on the customer side of the shared responsibility model. Our team builds the services that help customers improve the security of their workloads on AWS. Being able to help in that way is very rewarding.

One of Amazon’s widely-known leadership principles is Customer Obsession. Can you speak to what that looks like in the context of your work?

Being customer obsessed means that you’re in tune with the needs of the customer you’re working with. In the case of external security services, “customer obsessed” requires you to deeply understand what it means for individual customers to protect their assets in AWS, to empathize with those needs, and then to help them figure out how to get from where they are, to where they want to be. Because of this, I spend a lot of time with customers.

Our team participates in many in-person executive customer briefings. We hold a lot of conference calls. I’m flying to the UK on Monday to meet with customers—and I was there three weeks ago. I’ve spent over six weeks this fall traveling to talk with customers.  That much travel time can be hard, but it’s necessary to be in front of customers and listen to what they tell us. I’m fortunate to have a really strong team and so when I’m not traveling, I’m still able to spend a lot of time thinking about customer needs and about what my team should do next.

You’re on an elevator packed with CISOs, and they want you to explain the difference between Security Hub, GuardDuty, Macie, and Inspector before the doors open. What do you say?

First, I would tell them that the services are best understood as a suite of security services, and that AWS Security Hub offers a single pane of glass [Editor: a management tool that integrates information and offers a unified view] into everything else: Use it to understand the severity and sensitivity of findings across the other services you’re using.

Amazon GuardDuty is a continuous security monitoring and threat detection service. You simply choose to have it on or off in your AWS accounts. When it’s enabled, it detects highly suspicious activity and unauthorized access across the entirety of your AWS workloads. While GuardDuty alerts you to potential threats, Amazon Inspector helps you ensure that you address publicly known software vulnerabilities in a timely manner, removing them as a potential entry point for unauthorized users. Amazon Macie offers a particular focus on protecting your sensitive data by giving you a highly scalable and cost effective way to scan AWS for sensitive data and report back what is found and how it is being protected with access controls and encryption.

Then, I’d invite the entire elevator to come to re:Invent, to learn more about the new work my team is doing.

What can you tell us about your team’s re:Invent plans?

We have some exciting things planned for re:Invent this year. I can’t go into specifics yet, but we’re excited about it. A lot of my team will be present, and we’re looking forward to speaking with customers and learning more about what we should work on for next year.

We’ve got a variety of sessions about Security Hub, GuardDuty, and Inspector. If you can only make it to three security-specific sessions, I recommend Threat management in the cloud (SEC206-R), Automating threat detection and response in AWS (SEC301-R), and Use AWS Security Hub to act on your compliance and security posture (SEC342-R).

Is there some connecting thread to all of the various projects that your teams are working on right now?

I see a few threads. One is the concept of security being priority zero. It’s a theme that we live by at AWS, but customers ask us to stretch a little bit further and include their workloads in our security considerations. So workload security is now priority zero too. We’re spending a lot of time working that out and looking for ways to improve our services.

Another thread is that customers are asking us for prescriptive guidance. They’re saying, “Just tell me how I can ensure that my environment is safe. I promise you won’t offend me. Guide me as much as you can, and I’ll disregard anything that isn’t relevant to my environment.”

What’s one currently available security feature that you wish more customers were aware of?

A service, not a feature: AWS Security Hub. It has the ability to bring together security findings from many different AWS, partner, and customer security detection services. Security Hub takes security findings and normalizes them into the AWS Security Finding Format (ASFF), and then sends them all back out through Amazon CloudWatch Events to many partners that are capable of consuming them.

I think customers underestimate the value of having all of these security events normalized into a format that they can use to write a Splunk Phantom runbook, for example, or a Demisto runbook, or a Lambda function, or to send it to Rapid7 or cut a ticket in Jira. There’s a lot of power in what Security Hub does and it’s very cost effective. Many customers have started to use these capabilities, but I know that not everyone knows about it yet.

How do you stay up to date on important cloud security developments across the industry?

I get a lot of insight from customers. Customers have a lot of questions, and I can take these questions as a good indicator of what’s on peoples’ minds. I then do the research needed to get them smart answers, and in the process I learn things myself.

I also subscribe to a number of newsletters, such as Last Week In AWS, that give some interesting information about what’s trending. Reading our AWS blogs also helps because just keeping up with AWS is hard. There’s a lot going on! Listening to the various feeds and channels that we have is very informative.

And then there’s tinkering. I tinker with home automation / Internet of Things projects and with vendor-provided offerings, such as those recently provided to me by Splunk, Palo Alto, and Check Point. It’s been fun learning partner offerings by building out VLANs, site-to-site tunnels, VPNs, DNS filters, SSL inspection, gateway-level anonymizers, central logging, and intrusion detection systems. You know, the home networking ‘essentials’ we all need.

You’re into riding Superbikes as a hobby. What’s the appeal?

I ride fast bikes on well-known race tracks all around the US several times a year. I love how speed and focus must come together. Going through different corners requires orchestrating all kinds of different motor and mental skills. It flushes the brain and clears your thoughts like nothing else. So, I appreciate the hobby as a way of escaping from normal day-to-day routine. Honestly, there’s nothing like doing 160 mph down a straightaway to teach you how to focus on what is needed, now.

You’re originally from Montreal. What’s one thing a visitor should eat on a trip there?

Let me give you two, eh. If you find yourself in a small rural Quebec restaurant, you must have poutine, the local ‘delicacy’. If you find yourself downtown, near my alma mater, Concordia University, you must enjoy our local student staple, Kojax. That said, it’s honestly hard to make a mistake when you’re eating in Montreal. They have a lot of good food there.

Want more AWS Security news? Follow us on Twitter.

The AWS External Security Services team is hiring! Want to find out more? Check out our career page.

Dan Plastina

Dan is Vice President, Head of Security Services for Amazon Web Services (AWS). He’s most often seen working alongside his team leaders on product design, management, and engineering development efforts to enable business and government customers to secure themselves when using AWS. He has travelled extensively, meeting C-suite and security leaders in all corners of the globe.

Nine AWS Security Hub best practices

Post Syndicated from Ketan Srivastava original https://aws.amazon.com/blogs/security/nine-aws-security-hub-best-practices/

AWS Security Hub is a security and compliance service that became generally available on June 25, 2019. It provides you with extensive visibility into your security and compliance status across multiple AWS accounts, in a single dashboard per region. The service helps you monitor critical settings to ensure that your AWS accounts remain secure, allowing you to notice and react quickly to any changes in your environment.

AWS Security Hub aggregates, organizes, and prioritizes security findings from supported AWS services—that’s Amazon GuardDuty, Amazon Inspector, and Amazon Macie at the time this post was published—and from various AWS partner security solutions. AWS Security Hub also generates its own findings, based on automated, resource-level and account-level configuration and compliance checks using service-linked AWS Config rules plus other analytic techniques. These checks help you keep your AWS accounts compliant with industry standards and best practices, such as the Center for Internet Security (CIS) AWS Foundations standard.

In this post, I’ll provide nine best practices to help you use AWS Security Hub as effectively as possible.

1. Use the AWS Labs script to turn on Security Hub in all your AWS accounts in all regions and to establish your existing Amazon GuardDuty master/member hierarchy

As a best practice, you should continuously monitor all regions across all of your AWS accounts for unauthorized behavior or misconfigurations, even in regions that you don’t use heavily. AWS already recommends that you do this when using monitoring services like AWS Config and AWS CloudTrail. I recommend that you enable Security Hub in every region available in your AWS accounts.

In addition, you can also invite other AWS accounts to enable Security Hub and share findings with your AWS account. If you send an invitation and it is accepted by the other account owner, your Security Hub account is designated as the master account, and any associated Security Hub accounts become your member accounts. Users from the master account will then be able to view Security Hub findings from member accounts.

To simplify these configurations, you can utilize the AWS Labs script available on GitHub, which provides a step-by-step guide to automate this process. This script allows you to enable (and disable) AWS Security Hub simultaneously across a list of associated AWS accounts and bulk-add them to become your Security Hub members; it sends invitations from the master account and automatically accepts invitations in all member accounts. To run the script, you must have the AWS account IDs and root email addresses of the AWS accounts that you want as your Security Hub members. (Note that you should only share your root email address and account ID with AWS accounts that you trust. Visit the IAM best practices page to learn more about how to keep access to your AWS accounts secured.)

By default, the Security Hub master/member association is independent of the relationships that you’ve established between your Amazon GuardDuty or Amazon Macie accounts and other associated accounts. If you have an existing master/member hierarchy in GuardDuty or Macie, you can export that list of accounts into a CSV file and then use it with the script. For example, in GuardDuty, use the ListMembers API to export the AWS account ID and email of all member accounts, as follows:

aws guardduty list-members --detector-id <Detector ID> --query "Members[].[AccountId, Email]" --output text | awk '{print $1 "," $2}'

The output of the above command will be your GuardDuty member account IDs and their corresponding root email addresses, one per line and separated with a comma as shown below:

12345678910,[email protected]
98765432101,[email protected]
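If you prefer not to use the script, the same association can be made with a few Security Hub API calls from the master account. A minimal boto3 sketch follows; the account IDs and emails are placeholders, and each member account still has to accept the invitation.

import boto3

securityhub = boto3.client('securityhub')

# Placeholder member details; use the IDs and root emails you exported above.
members = [
    {'AccountId': '123456789012', 'Email': 'member1@example.com'},
    {'AccountId': '210987654321', 'Email': 'member2@example.com'},
]

# Associate the accounts with this (master) account, then invite them.
securityhub.create_members(AccountDetails=members)
securityhub.invite_members(AccountIds=[m['AccountId'] for m in members])

# In each member account, the invitation is then accepted, for example:
# securityhub.accept_invitation(MasterId='<master account ID>', InvitationId='<invitation ID>')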

2. Enable AWS Config in all AWS accounts and regions and leave the AWS CIS Foundations standard check enabled

When you enable Security Hub in any region, the AWS CIS standard checks are enabled by default. I recommend leaving them enabled; they are important security measures that are applicable to all AWS accounts.

To run most of these checks, Security Hub uses service-linked AWS Config rules. Because of this, you should make sure that AWS Config is turned on and recording all supported resources, including global resources, in all accounts and regions where Security Hub is deployed. You are not charged by AWS Config for these service-linked rules. You are only charged via Security Hub’s pricing model.
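To verify programmatically that AWS Config is recording in a region, including global resources, you can check the recorder status with boto3; a small sketch:

import boto3

config = boto3.client('config', region_name='us-east-1')  # repeat per region

# Verify that a configuration recorder exists and is recording.
for status in config.describe_configuration_recorder_status()['ConfigurationRecordersStatus']:
    print(status['name'], 'recording' if status['recording'] else 'NOT recording')

# Verify that all supported resources, including global ones, are recorded.
for recorder in config.describe_configuration_recorders()['ConfigurationRecorders']:
    group = recorder['recordingGroup']
    print(recorder['name'], 'allSupported:', group['allSupported'],
          'includeGlobalResourceTypes:', group['includeGlobalResourceTypes'])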

3. Use specific managed IAM policies for different types of Security Hub users

You can choose to allow a large group of users to access List and Read Security Hub actions, which will permit them to view your security findings. However, you should allow only a small group of users to access the Security Hub Write actions. This will permit only authorized users to archive, resolve, or remediate the findings.

You can use AWS managed policies to give your employees the permissions they need to get started. These policies are already available in your account and are maintained and updated by AWS. To grant more granular permission to your Security Hub users, I recommend that you create your own customer managed policies. A great place to start with this is to import an existing AWS managed policy. That way, you know that the policy is initially correct, and all you need to do is customize it for your environment.

AWS categorizes each service action into one of five access levels based on what each action does: List, Read, Write, Permissions management, or Tagging. To determine which access level to include in the IAM policies that you assign to your users, you can view the policy summary by navigating from the IAM Console to Policies, then selecting any AWS managed or customer managed policy. Next, on the Summary page, under the Permissions tab, select Policy summary (see Figure 1). For more details and examples of access level classification, see Understanding Access Level Summaries Within Policy Summaries.
 

Figure 1: Policy summary of AWSSecurityHubReadOnlyAccess AWS managed policy
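As a starting point for such a customer managed policy, the following boto3 sketch creates a policy limited to the List and Read access levels. The action list is abbreviated and the policy name is a placeholder, so tailor both to your environment.

import json
import boto3

iam = boto3.client('iam')

# Abbreviated read-only Security Hub policy; extend the action list as needed.
policy_document = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': [
            'securityhub:Get*',
            'securityhub:List*',
            'securityhub:Describe*',
        ],
        'Resource': '*',
    }],
}

iam.create_policy(
    PolicyName='SecurityHubReadOnly-Custom',  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)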

4. Use tags for access controls and cost allocation

A SecurityHub::Hub resource represents the implementation of the AWS Security Hub service per region in your AWS account. Security Hub allows you to assign metadata to your SecurityHub::Hub resource in the form of tags. Each tag consists of a user-defined key and an optional value, which makes it easier for you to identify and manage the AWS resources in your environment.

You can control access permissions by using tags on your SecurityHub::Hub resource. For example, you can allow a group of developer IAM entities to manage and update only the SecurityHub::Hub resources that have the tag key developer associated with them. This can help you restrict access to your production SecurityHub::Hub resources, while allowing your developers to continue testing in their developer environment.

For more information on the supported tag-based conditions which you can use with the Security Hub APIs, refer to Condition Keys for AWS Security Hub. Please note that when you use tag-based conditions for access control, you must define who can modify those tags.

To make it easier to categorize and track your AWS costs, you can also activate cost allocation tags. This helps you organize your SecurityHub::Hub resource costs. AWS generates a cost allocation report as a CSV file, with your usage and costs grouped according to your active tags. You can apply tags that represent business categories (such as cost centers, application names, or project environments) to organize your costs.

For more information on commonly used tagging categories and effective tagging strategies, read about AWS Tagging Strategies.
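Applying tags to the SecurityHub::Hub resource itself takes a single API call; a brief sketch follows (the hub ARN uses placeholder region and account values, and the tag keys are examples):

import boto3

securityhub = boto3.client('securityhub')

# Tag the regional hub resource; substitute your region and account ID.
securityhub.tag_resource(
    ResourceArn='arn:aws:securityhub:us-east-1:123456789012:hub/default',
    Tags={
        'developer': 'true',        # example key for tag-based access control
        'cost-center': 'security',  # example key for cost allocation reporting
    },
)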

5. Integrate and enable your existing security products (with 34 integrations today and more to come)

Numerous tools can help you understand the security and compliance posture of your AWS accounts, but these tools generate their own set of findings, often in different formats. Security Hub normalizes the findings.

With Security Hub, findings generated from integrated providers (both third-party services and AWS services) are ingested using a standard findings format, which eliminates the need for security teams to convert the data. You can currently integrate 34 findings providers to import and/or export findings with Security Hub. Some partner products, like PagerDuty, Splunk, and Slack, can receive findings from Security Hub, although they don’t generate findings.

If you want to add a third-party partner product to your AWS environment, you can choose the Purchase link from the Security Hub console’s Integrations page and navigate to AWS Marketplace. Once purchased, choose the Configure link to navigate to step-by-step instructions to install the product and configure its integration with Security Hub. Then choose Enable integration to create a product subscription in your account for that third-party provider (see Figure 2).

After you enable a subscription, a resource policy is automatically attached to it. The resource policy defines the permissions that Security Hub needs to accept and process the product’s findings. You can also enable the subscription via the API and CloudFormation.
 

Figure 2: Integrating partner findings provider with Security Hub

6. Build out customized remediation playbooks using Amazon CloudWatch Events, AWS Systems Manager Automation documents, and AWS Step Functions to automatically resolve findings that don’t require human intervention

Security Hub automatically sends all findings to Amazon CloudWatch Events. This integration helps you automate your response to threat incidents by allowing you to take specific actions using AWS Systems Manager Automation documents, OpsItems, and AWS Step Functions. Using these tools, you can create your own incident handling plan. This will allow your security team to focus on strengthening the security of your AWS environments rather than on remediating the current findings.
 

Figure 3: Creating a CloudWatch Events Rule for sending matched Security Hub findings to specific Targets
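As an illustration, a rule that matches every finding Security Hub sends to CloudWatch Events and routes it to a Step Functions state machine could be created along the following lines; the rule name, state machine ARN, and role ARN are placeholders.

import json
import boto3

events = boto3.client('events')

# Match all findings that Security Hub sends to CloudWatch Events.
events.put_rule(
    Name='SecurityHubAllFindings',  # placeholder rule name
    EventPattern=json.dumps({
        'source': ['aws.securityhub'],
        'detail-type': ['Security Hub Findings - Imported'],
    }),
    State='ENABLED',
)

# Route matched findings to your remediation workflow.
events.put_targets(
    Rule='SecurityHubAllFindings',
    Targets=[{
        'Id': 'RemediationStateMachine',
        'Arn': 'arn:aws:states:us-east-1:123456789012:stateMachine:RemediateFinding',  # placeholder
        'RoleArn': 'arn:aws:iam::123456789012:role/CWEInvokeStepFunctionsRole',        # placeholder
    }],
)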

7. Create custom actions to send a copy of a Security Hub finding to a resource that is internal or external to your AWS account, enabling additional visibility and remediation options for the finding

Because of its integration with CloudWatch Events, you can use Security Hub to create custom actions that will send specific findings to ticketing, chat, email, or automated remediation systems. Custom actions can also be sent to your own AWS resources, such as AWS Systems Manager OpsCenter, AWS Lambda or Amazon Kinesis, allowing you to do your own remediation or data capture related to the finding.

For an in-depth look at this architecture, plus specific examples of how to implement custom actions, see How to Integrate AWS Security Hub Custom Actions with PagerDuty and How to Enable Custom Actions in AWS Security Hub.

In addition, Security Hub gives you the option to choose a language-specific AWS SDK so that you can use custom actions to resolve findings programmatically. Below, I’ll demonstrate how you can implement this using AWS Lambda and AWS SDK for Python (Boto3). In my example, I’ll remediate the finding generated by Security Hub for CIS check 2.4, “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs.” For this walk-through, I assume that you have the necessary AWS IAM permissions to work with Security Hub, CloudWatch Events, Lambda and AWS CloudTrail.
 

Figure 4: Data flow supporting remediation of Security Hub findings using custom actions

As shown in figure 4:

  1. When findings against CIS check 2.4 are generated in Security Hub, Security Hub will send them to CloudWatch Events using custom actions that I’ll describe below.
  2. CloudWatch Events will send the findings to a Lambda function that has been configured as the target.
  3. The Lambda function will utilize a Python script to check whether the finding has been generated against CIS check 2.4. If it has, the Lambda function will identify the affected CloudTrail trail and configure it with CloudWatch Logs to monitor the trail logs.
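For orientation, the event that the Lambda function receives from the custom action has roughly the following shape. This is a trimmed, illustrative example, reduced to the fields the function code below inspects.

# Trimmed, illustrative "Security Hub Findings - Custom Action" event.
sample_event = {
    'source': 'aws.securityhub',
    'detail-type': 'Security Hub Findings - Custom Action',
    'detail': {
        'actionName': 'RemediateCIS24',  # placeholder custom action name
        'findings': [{
            'Title': '2.4 Ensure CloudTrail trails are integrated with CloudWatch Logs',
            'Compliance': {'Status': 'FAILED'},
            'Resources': [{
                'Id': 'arn:aws:cloudtrail:us-east-1:123456789012:trail/my-trail',
                'Details': {'Other': {'name': 'my-trail'}},
            }],
        }],
    },
}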

Prerequisites

  1. You must configure an IAM Role for AWS CloudTrail to assume so that it can deliver events to your CloudWatch Logs log group. For more information about how to do this, refer to the AWS CloudTrail documentation. I’ll refer to this role as the CloudTrail role.
  2. To deploy the Lambda function, you must configure an IAM Role for the Lambda function to assume. I’ll refer to this role as the Lambda execution role. The following sample policy includes the permissions that you’ll assign to it for this example. Please replace <CloudTrail_CloudWatchLogs_Role> with the CloudTrail role that you created in the previous step. Depending on your use case, you can restrict this IAM policy further to grant least privilege, which is a recommended IAM Best Practice.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "cloudtrail:UpdateTrail",
                "iam:GetRole"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::012345678910:role/<CloudTrail_CloudWatchLogs_Role>"
        }
    ]
}     

Solution deployment

  1. Create a custom action in AWS Security Hub and associate it with a CloudWatch Events rule that you configure for your Security Hub findings. Follow the instructions laid out in the Security Hub user guide for the exact steps to do this.
  2. Create a Lambda Function, which will complete the auto-remediation of the CIS 2.4 findings:
    1. Open the Lambda Console and select Create function.
    2. On the next page, choose Author from scratch.
    3. Under Basic information, enter a name for your function. For Runtime, select Python 3.7.
       
      Figure 5: Updating basic information to create the Lambda function

    4. Under Permissions, expand Choose or create an execution role.
    5. Under Execution role, select the drop down menu and change the setting to Use an existing role.
    6. Under Existing role, select the Lambda execution role that you created earlier, then select Create function.
       
      Figure 6: Updating basic information and permissions to create the Lambda function

    7. Delete the default function code and paste the code I’ve provided below:
      
              import json, boto3
              cloudtrail_client = boto3.client('cloudtrail')
              cloudwatchlogs_client = boto3.client('logs')
              iam_client = boto3.client('iam')
              
              role_details = iam_client.get_role(RoleName='<CloudTrail_CloudWatchLogs_Role>')
              
              def lambda_handler(event, context):
                  # First of all, let us see if the JSON sent by CWE has any Security Hub findings.
                  if 'detail' in event.keys() and 'findings' in event['detail'].keys() and len(event['detail']['findings']) > 0:
                      print("There are some findings. Let's check them!")
                      print("Number of findings: %i" % len(event['detail']['findings']))
              
                      # Then we need to filter out the findings. In this code snippet, we'll handle only findings related to CloudTrail trails for integration with CloudWatch Logs.
                      for finding in event['detail']['findings']:
                          if 'Title' in finding.keys():
                              if 'Ensure CloudTrail trails are integrated with CloudWatch Logs' in finding['Title']:
                                  print("There's a CloudTrail-related finding. I can handle it!")
              
                                  if 'Compliance' in finding.keys() and 'Status' in finding['Compliance'].keys():
                                      print("Compliance Status: %s" % finding['Compliance']['Status'])
              
                                      # We can skip compliant findings, and evaluate only the non-compliant ones.                        
                                      if finding['Compliance']['Status'] == 'PASSED':
                                          continue
              
                                      # For each non-compliant finding, we need to get specific pieces of information so as to create the correct log group and update the CloudTrail trail.                        
                                      for resource in finding['Resources']:
                                          resource_id = resource['Id']
                                          cloudtrail_name = resource['Details']['Other']['name']
                                          loggroup_name = 'CloudTrail/' + cloudtrail_name
                                          print("ResourceId for the finding is %s" % resource_id)
                                          print("LogGroup name: %s" % loggroup_name)
              
                                          # At this point, we can create the log group using the name extracted from the finding.
                                          try:
                                              response_logs = cloudwatchlogs_client.create_log_group(logGroupName=loggroup_name)
                                          except Exception as e:
                                              print("Exception: %s" % str(e))
              
                                          # For updating the CloudTrail trail, we need to have the ARN of the log group. Let's retrieve it now.                            
                                          response_logsARN = cloudwatchlogs_client.describe_log_groups(logGroupNamePrefix = loggroup_name)
                                          print("LogGroup ARN: %s" % response_logsARN['logGroups'][0]['arn'])
                                          print("The role used by CloudTrail is: %s" % role_details['Role']['Arn'])
              
                                          # Finally, let's update the CloudTrail trail so that it sends logs to the new log group created.
                                          try:
                                              response_cloudtrail = cloudtrail_client.update_trail(
                                                  Name=cloudtrail_name,
                                                  CloudWatchLogsLogGroupArn = response_logsARN['logGroups'][0]['arn'],
                                                  CloudWatchLogsRoleArn = role_details['Role']['Arn']
                                              )
                                          except Exception as e:
                                              print("Exception: %s" % str(e))
                              else:
                                  print("Title: %s" % finding['Title'])
                                  print("This type of finding cannot be handled by this function. Skipping it…")
                          else:
                              print("This finding doesn't have a title and so cannot be handled by this function. Skipping it…")
                  else:
                      print("There are no findings to remediate.")            
              

    8. After pasting the code, replace <CloudTrail_CloudWatchLogs_Role> with your CloudTrail role and select Save to save your Lambda function.
       
      Figure 7: Editing Lambda code to replace the correct CloudTrail role

  3. Go to your CloudWatch console and select Rules in the navigation pane on the left.
    1. From the list of CloudWatch rules that you see, select the rule which you created in Step 1 of this solution deployment.
    2. Then, select Actions on the top right of the page and choose Edit.
    3. On the Step 1: Create rule page, under Targets, choose Lambda function and select the Lambda function you created in Step 2.
    4. Select Configure details.
    5. On the Step 2: Configure rule details page, select Update rule.
       
      Figure 8: Adding your created Lambda function as target for the CloudWatch rule

  4. Configuration is now complete, and you can test your rule. Go to your AWS Security Hub console and select Compliance standards in the navigation pane.
    1. Next, select CIS AWS Foundations.
       
      Figure 9: Compliance standards page in the Security Hub console

    2. Search for the rule 2.4 Ensure CloudTrail trails are integrated with CloudWatch Logs and select it.
       
      Figure 10: Locating CIS check 2.4 in the Security Hub console

    3. If you’ve left the default AWS Security Hub CIS checks enabled (along with AWS Config service in the same region), and if you have CloudTrail trails in that region which are not yet configured to deliver events to CloudWatch Logs, you should see a low severity finding with a Failed Compliance status.
    4. Select the failed finding by selecting the checkbox and choosing the Actions button.
    5. Finally, from the dropdown menu, select the custom action that you created in Step 1 to send the finding to CloudWatch Events. CloudWatch Events will send the finding to your Lambda function, which you configured as the target for the rule in step 3. The Lambda function will automatically identify the affected CloudTrail trail and configure CloudWatch Logs log group for you. The log group will have the same name as your trail for identification purposes. You can modify the code to suit your needs further.

    Note: There may be a delay before the compliance status of the remediated resource changes. Once the CIS AWS Foundations Standard is enabled, Security Hub will run the checks within 2 hours. After that, the checks are automatically run once every 24 hours.

     

    Figure 11: Findings generated against CIS check 2.4 in the Security Hub console

8. Customize your insights using the default “managed insights” as templates and use them to prioritize resources and findings to act upon

A Security Hub “insight” is a collection of related findings to which one or more Security Hub filters have been applied. Insights can help you organize your findings and identify security risks that need immediate attention.

Security Hub offers several managed (default) insights. You can use these as templates for new insights, and modify them depending on your use case. You can save these modified queries as new custom insights to gain even greater visibility into your AWS accounts. Please refer to the documentation for step-by-step instructions on how to create custom insights; a programmatic sketch follows Figure 12.

Figure 12: Creating a Security Hub custom insight
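Custom insights can also be created programmatically. As a minimal sketch, the following groups all active, failed compliance findings by resource; the insight name and filter values are illustrative.

import boto3

securityhub = boto3.client('securityhub')

# Group failed compliance findings by the affected resource.
response = securityhub.create_insight(
    Name='Failed compliance findings by resource',  # placeholder insight name
    Filters={
        'ComplianceStatus': [{'Value': 'FAILED', 'Comparison': 'EQUALS'}],
        'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}],
    },
    GroupByAttribute='ResourceId',
)
print(response['InsightArn'])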

9. Use the free trial to evaluate what your costs could be

Security Hub provides a 30-day free trial for all AWS accounts and regions. The trial is a good way to evaluate how much Security Hub will cost, on average, to monitor threats and compliance in your environments. You can view an estimate by navigating from the Security Hub console to Settings, then Usage (see Figure 13).

Figure 13: Estimating your Security Hub costs

Conclusion

AWS Security Hub allows you to have more visibility into the security and compliance status of your AWS environments. Using the Security Hub best practices discussed here, security teams can spend more time on incident remediation and recovery rather than incident detection and organization. Security Hub has undergone HIPAA, ISO, PCI, and SOC certification. To learn more about Security Hub, refer to the AWS Security Hub documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Srivastava author photo

Ketan Srivastava

Ketan is a Cloud Support Engineer at AWS. He enjoys the fact that, at AWS, there are always so many opportunities to build things better for our customers and learn from these opportunities. Outside of work, he plays MOBAs and travels to new places with his wife. He holds a Master of Science degree from Rochester Institute of Technology.