Tag Archives: Security Hub

Integrate Kubernetes policy-as-code solutions into Security Hub

Post Syndicated from Joaquin Manuel Rinaudo original https://aws.amazon.com/blogs/security/integrate-kubernetes-policy-as-code-solutions-into-security-hub/

Using Kubernetes policy-as-code (PaC) solutions, administrators and security professionals can enforce organization policies on Kubernetes resources. Several PaC solutions are publicly available for Kubernetes, such as Gatekeeper, Polaris, and Kyverno.

PaC solutions usually implement two features:

  • Use Kubernetes admission controllers to validate or modify objects before they’re created to help enforce configuration best practices for your clusters.
  • Provide a way for you to scan resources that were created before policies were deployed, or to evaluate existing resources against new policies.

This post presents a solution to send policy violations from PaC solutions using Kubernetes policy report format (for example, using Kyverno) or from Gatekeeper’s constraints status directly to AWS Security Hub. With this solution, you can visualize Kubernetes security misconfigurations across your Amazon Elastic Kubernetes Service (Amazon EKS) clusters and your organizations in AWS Organizations. This can also help you implement standard security use cases—such as unified security reporting, escalation through a ticketing system, or automated remediation—on top of Security Hub to help improve your overall Kubernetes security posture and reduce manual efforts.

Solution overview

The solution uses the approach described in A Container-Free Way to Configure Kubernetes Using AWS Lambda to deploy an AWS Lambda function that periodically synchronizes the security status of a Kubernetes cluster from a Kubernetes or Gatekeeper policy report with Security Hub. Figure 1 shows the architecture diagram for the solution.

Figure 1: Diagram of solution

This solution works using the following resources and configurations:

  1. A scheduled event invokes a Lambda function on a 10-minute interval.
  2. The Lambda function iterates through each running EKS cluster that you want to integrate, authenticating by using a Kubernetes Python client and the AWS Identity and Access Management (IAM) role of the Lambda function.
  3. For each running cluster, the Lambda function retrieves the selected Kubernetes policy reports (or the Gatekeeper constraint status, depending on the policy selected) and sends active violations, if present, to Security Hub (a simplified sketch of this conversion appears after this list). With Gatekeeper, if more violations exist than those reported in the constraint, an additional INFORMATIONAL finding is generated in Security Hub to let security teams know of the missing findings.

    Optional: EKS cluster administrators can raise the limit of reported policy violations by using the --constraint-violations-limit flag in their Gatekeeper audit operation.

  4. For each running cluster, the Lambda function archives previously raised and resolved findings in Security Hub.
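The exact conversion lives in the solution's Lambda code, but the following Python sketch illustrates the general idea behind step 3: take the failed results from a policy report, convert each one into an AWS Security Finding Format (ASFF) finding, and import them with the BatchImportFindings API. The ID scheme, finding type, and fixed MEDIUM severity used here are illustrative assumptions rather than the repository's exact mapping, and the sketch assumes boto3 credentials with securityhub:BatchImportFindings permissions.

import datetime
import boto3

securityhub = boto3.client("securityhub")

def report_result_to_asff(result, account_id, region, cluster_name):
    """Map one failed ClusterPolicyReport result to an ASFF finding (illustrative mapping)."""
    resource = result["resources"][0]
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {
        "SchemaVersion": "2018-10-08",
        # The "default" product ARN is the documented target for importing your own findings.
        "Id": f"{cluster_name}/{result['policy']}/{resource['kind']}/{resource['name']}",
        "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
        "GeneratorId": result["policy"],
        "AwsAccountId": account_id,
        "Types": ["Software and Configuration Checks/Kubernetes Policies"],
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Label": "MEDIUM"},  # assumption; a real mapping would derive this
        "Title": f"{result['policy']}: {result['rule']}",
        "Description": result.get("message", "Kubernetes policy violation"),
        "Resources": [{
            "Type": "Other",
            "Id": f"{cluster_name}/{resource['kind']}/{resource['name']}",
            "Region": region,
        }],
        "Compliance": {"Status": "FAILED"},
    }

def send_failed_results(results, account_id, region, cluster_name):
    findings = [
        report_result_to_asff(r, account_id, region, cluster_name)
        for r in results
        if r.get("result") == "fail"
    ]
    # BatchImportFindings accepts at most 100 findings per call.
    for i in range(0, len(findings), 100):
        securityhub.batch_import_findings(Findings=findings[i:i + 100])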

You can download the solution from this GitHub repository.

Walkthrough

In this walkthrough, I show you how to deploy a Kubernetes policy-as-code solution and forward its findings to Security Hub. You'll configure Kyverno in an existing EKS cluster, deploy a demo environment that generates findings, and send those findings to Security Hub.

The code provided includes an example constraint and noncompliant resource to test against.

Prerequisites

An EKS cluster is required to set up this solution in your AWS environment. The cluster should be configured with either the aws-auth ConfigMap or access entries. Optional: You can use eksctl to create a cluster.

The following resources need to be installed on your computer:

Step 1: Set up the environment

The first step is to install Kyverno on an existing Kubernetes cluster. Then deploy examples of a Kyverno policy and noncompliant resources.

Deploy Kyverno example and policy

  1. Deploy Kyverno in your Kubernetes cluster according to its installation manual using the Kubernetes CLI.
    kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.10.0/install.yaml

  2. Set up a policy that requires namespaces to use the label thisshouldntexist.
    kubectl create -f - << EOF
    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: require-ns-labels
    spec:
      validationFailureAction: Audit
      background: true
      rules:
      - name: check-for-labels-on-namespace
        match:
          any:
          - resources:
              kinds:
              - Namespace
        validate:
          message: "The label thisshouldntexist is required."
          pattern:
            metadata:
              labels:
                thisshouldntexist: "?*"
    EOF

Deploy a noncompliant resource to test this solution

  1. Create a noncompliant namespace.
    kubectl create namespace non-compliant

  2. Check the Kubernetes policy report status using the following command:
    kubectl get clusterpolicyreport -o yaml

You should see output similar to the following:

apiVersion: v1
items:
- apiVersion: wgpolicyk8s.io/v1alpha2
  kind: ClusterPolicyReport
  metadata:
    creationTimestamp: "2024-02-20T14:00:37Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: kyverno
      cpol.kyverno.io/require-ns-labels: "3734083"
    name: cpol-require-ns-labels
    resourceVersion: "3734261"
    uid: 3cfcf1da-bd28-453f-b2f5-512c26065986
  results:
   ...
  - message: 'validation error: The label thisshouldntexist is required. rule check-for-labels-on-namespace
      failed at path /metadata/labels/thisshouldntexist/'
    policy: require-ns-labels
    resources:
    - apiVersion: v1
      kind: Namespace
      name: non-compliant
      uid: d62eb1ad-8a0b-476b-848d-ff6542c57840
    result: fail
    rule: check-for-labels-on-namespace
    scored: true
    source: kyverno
    timestamp:
      nanos: 0
      seconds: 1708437615
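If you want to inspect the report programmatically, similar in spirit to what the solution's Lambda function does with the Kubernetes Python client, the following sketch lists ClusterPolicyReport objects and prints the failed results. It assumes the kubernetes Python package is installed and that your local kubeconfig points at the cluster; the Lambda function in the solution authenticates differently, through its IAM role.

from kubernetes import client, config

# Load credentials from your local kubeconfig (for local experimentation only).
config.load_kube_config()

api = client.CustomObjectsApi()
reports = api.list_cluster_custom_object(
    group="wgpolicyk8s.io",
    version="v1alpha2",
    plural="clusterpolicyreports",
)

for report in reports.get("items", []):
    for result in report.get("results", []):
        if result.get("result") == "fail":
            resource = result["resources"][0]
            print(f"{result['policy']}/{result['rule']}: "
                  f"{resource['kind']}/{resource['name']} - {result.get('message', '')}")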

Step 2: Solution code deployment and configuration

The next step is to clone and deploy the solution that integrates with Security Hub.

To deploy the solution

  1. Clone the GitHub repository by using your preferred command line terminal:
    git clone https://github.com/aws-samples/securityhub-k8s-policy-integration.git

  2. Open the parameters.json file and configure the following values:
    1. Policy – Name of the product that you want to enable, in this case policyreport, which is supported by tools such as Kyverno.
    2. ClusterNames – List of EKS clusters. When AccessEntryEnabled is enabled, this solution deploys an access entry for the integration to access your EKS clusters.
    3. SubnetIds – (Optional) A comma-separated list of your subnets. If you’ve configured the API endpoints of your EKS clusters as private only, then you need to configure this parameter. If your EKS clusters have public endpoints enabled, you can remove this parameter.
    4. SecurityGroupId – (Optional) A security group ID that allows connectivity to the EKS clusters. This parameter is only required if you’re running private API endpoints; otherwise, you can remove it. This security group should be allowed ingress from the security group of the EKS control plane.
    5. AccessEntryEnabled – (Optional) If you're using EKS access entries, the solution automatically deploys access entries that grant the read-only-group permissions created in the next step. This parameter is True by default.
  3. Save the changes and close the parameters file.
  4. Set up your AWS_REGION (for example, export AWS_REGION=eu-west-1) and make sure that your credentials are configured for the delegated administrator account.
  5. Enter the following command to deploy:
    ./deploy.sh

You should see the following output:

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - aws-securityhub-k8s-policy-integration

Step 3: Set up EKS cluster access

You need to create the Kubernetes Group read-only-group to allow read-only permissions to the IAM role of the Lambda function. If you aren’t using access entries, you will also need to modify the aws-auth ConfigMap of the Kubernetes clusters.

To configure access to EKS clusters

  1. For each cluster that’s running in your account, run the kube-setup.sh script to create the Kubernetes read-only cluster role and cluster role binding.
  2. (Optional) Configure aws-auth ConfigMap using eksctl if you aren’t using access entries.

Step 4: Verify AWS service integration

The next step is to verify that the Lambda integration to Security Hub is running.

To verify the integration is running

  1. Open the Lambda console, and navigate to the aws-securityhub-k8s-policy-integration-<region> function.
  2. Start a test to import your cluster’s noncompliant findings to Security Hub.
  3. In the Security Hub console, review the recently created findings from Kyverno (a programmatic check is sketched after these steps).
     
    Figure 2: Sample Kyverno findings in Security Hub
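If you also want to confirm the import from the command line, the following boto3 sketch queries Security Hub for active, failed findings. The GeneratorId prefix in the filter is an assumption made for illustration; check the integration's code to confirm which ASFF fields it actually populates.

import boto3

securityhub = boto3.client("securityhub")

paginator = securityhub.get_paginator("get_findings")
pages = paginator.paginate(
    Filters={
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
        # Assumption: the integration sets GeneratorId from the Kyverno policy name.
        "GeneratorId": [{"Value": "require-ns-labels", "Comparison": "PREFIX"}],
    }
)

for page in pages:
    for finding in page["Findings"]:
        print(finding["Title"], finding["Resources"][0]["Id"])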

Step 5: Clean up

The final step is to clean up the resources that you created for this walkthrough.

To destroy the stack

  • Use the command line terminal on your laptop to run the following command:
    ./cleanup.sh

Conclusion

In this post, you learned how to integrate Kubernetes policy report findings with Security Hub and tested this setup by using the Kyverno policy engine. If you want to test the integration of this solution with Gatekeeper, you can find alternative commands for step 1 of this post in the GitHub repository’s README file.

Using this integration, you can gain visibility into your Kubernetes security posture across EKS clusters and join it with a centralized view, together with other security findings such as those from AWS Config, Amazon Inspector, and more across your organization. You can also try this solution with other tools, such as kube-bench or Gatekeeper. You can extend this setup to notify security teams of critical misconfigurations or implement automated remediation actions by using AWS Security Hub.

For more information on how to use PaC solutions to secure Kubernetes workloads in the AWS Cloud, see the Amazon Elastic Kubernetes Service (Amazon EKS) workshop, Amazon EKS best practices, Using Gatekeeper as a drop-in Pod Security Policy replacement in Amazon EKS, and Policy-based countermeasures for Kubernetes.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Author

Joaquin Manuel Rinaudo

Joaquin is a Principal Security Architect with AWS Professional Services. He is passionate about building solutions that help developers improve their software quality. Prior to AWS, he worked across multiple domains in the security industry, from mobile security to cloud and compliance related topics. In his free time, Joaquin enjoys spending time with family and reading science fiction novels.

How to generate security findings to help your security team with incident response simulations

Post Syndicated from Jonathan Nguyen original https://aws.amazon.com/blogs/security/how-to-generate-security-findings-to-help-your-security-team-with-incident-response-simulations/

Continually reviewing your organization’s incident response capabilities can be challenging without a mechanism to create security findings with actual Amazon Web Services (AWS) resources within your AWS estate. As prescribed within the AWS Security Incident Response whitepaper, it’s important to periodically review your incident response capabilities to make sure your security team is continually maturing internal processes and assessing capabilities within AWS. Generating sample security findings is useful to understand the finding format so you can enrich the finding with additional metadata or create and prioritize detections within your security information and event management (SIEM) solution. However, if you want to conduct an end-to-end incident response simulation, including the creation of real detections, sample findings might not create actionable detections that will start your incident response process because of alerting suppressions you might have configured, or imaginary metadata (such as synthetic Amazon Elastic Compute Cloud (Amazon EC2) instance IDs), which might confuse your remediation tooling.

In this post, we walk through how to deploy a solution that provisions resources to generate simulated security findings for actual provisioned resources within your AWS account. Generating simulated security findings in your AWS account gives your security team an opportunity to validate their cyber capabilities, investigation workflow and playbooks, escalation paths across teams, and exercise any response automation currently in place.

Important: It’s strongly recommended that the solution be deployed in an isolated AWS account with no additional workloads or sensitive data. No resources deployed within the solution should be used for any purpose outside of generating the security findings for incident response simulations. Although the activities that generate the security findings are non-destructive to existing resources, they should still be performed in isolation. For any AWS solution deployed within your AWS environment, your security team should review the resources and configurations within the code.

Conducting incident response simulations

Before deploying the solution, it’s important that you know what your goal is and what type of simulation to conduct. If you’re primarily curious about the format of active Amazon GuardDuty findings, you should generate sample findings with GuardDuty. At the time of this writing, Amazon Inspector doesn’t generate sample findings.

If you want to validate your incident response playbooks, make sure you have playbooks for the security findings the solution generates. If those playbooks don’t exist, it might be a good idea to start with a high-level tabletop exercise to identify which playbooks you need to create.

Because you’re running this sample in an AWS account with no workloads, it’s recommended to run the sample solution as a purple team exercise. Purple team exercises should be periodically run to support training for new analysts, validate existing playbooks, and identify areas of improvement to reduce the mean time to respond or identify areas where processes can be optimized with automation.

Now that you have a good understanding of the different simulation types, you can create security findings in an isolated AWS account.

Prerequisites

  1. [Recommended] A separate AWS account containing no customer data or running workloads
  2. GuardDuty must be enabled, along with GuardDuty Kubernetes Protection
  3. Amazon Inspector must be enabled
  4. [Optional] AWS Security Hub can be enabled to show a consolidated view of security findings generated by GuardDuty and Inspector

Solution architecture

The architecture of the solution can be found in Figure 1.

Figure 1: Sample solution architecture diagram

  1. A user specifies the type of security findings to generate by passing an AWS CloudFormation parameter.
  2. An Amazon Simple Notification Service (Amazon SNS) topic is created for finding notifications. Subscribed users are notified of findings through the deployed SNS topic.
  3. Upon user selection of the CloudFormation parameter, EC2 instances are provisioned to run commands to generate security findings.

    Note: If the parameter inspector is provided during deployment, then only one EC2 instance is deployed. If the parameter guardduty is provided during deployment, then two EC2 instances are deployed.

  4. For Amazon Inspector findings:
    1. The Amazon EC2 user data creates a .txt file with vulnerable images, pulls down Docker images from open source vulhub, and creates an Amazon Elastic Container Registry (Amazon ECR) repository with the vulnerable images.
    2. The EC2 user data pushes and tags the images in the ECR repository, which results in Amazon Inspector findings being generated.
    3. An Amazon EventBridge cron-style trigger rule, inspector_remediation_ecr, invokes an AWS Lambda function.
    4. The Lambda function, ecr_cleanup_function, cleans up the vulnerable images in the deployed Amazon ECR repository based on applied tags and sends a notification to the Amazon SNS topic.

      Note: The ecr_cleanup_function Lambda function is also invoked as a custom resource to clean up vulnerable images during deployment. If there are issues with cleanup, the EventBridge rule continually attempts to clean up vulnerable images.

  5. For GuardDuty, the following actions are taken and resources are deployed:
    1. An AWS Identity and Access Management (IAM) user named guardduty-demo-user is created with an IAM access key that is INACTIVE.
    2. An AWS Systems Manager parameter stores the IAM access key for guardduty-demo-user.
    3. An AWS Secrets Manager secret stores the inactive IAM secret access key for guardduty-demo-user.
    4. An Amazon DynamoDB table is created, and the table name is stored in a Systems Manager parameter to be referenced within the EC2 user data.
    5. An Amazon Simple Storage Service (Amazon S3) bucket is created, and the bucket name is stored in a Systems Manager parameter to be referenced within the EC2 user data.
    6. A Lambda function adds a threat list to GuardDuty that includes the IP addresses of the EC2 instances deployed as part of the sample.
    7. EC2 user data generates GuardDuty findings for the following:
      1. Amazon Elastic Kubernetes Service (Amazon EKS)
        1. Installs eksctl from GitHub.
        2. Creates an EC2 key pair.
        3. Creates an EKS cluster (dependent on availability zone capacity).
        4. Updates EKS cluster configuration to make a dashboard public.
      2. DynamoDB
        1. Adds an item to the DynamoDB table for Joshua Tree.
      3. EC2
        1. Creates an AWS CloudTrail trail named guardduty-demo-trail-<GUID> and subsequently deletes the same CloudTrail trail. The <GUID> is randomly generated by using the $RANDOM function.
        2. Runs port scans against 172.31.37.171 (an RFC 1918 private IP address) and the private IP address of the EKS Deployment EC2 instance provisioned as part of the sample. Port scans are primarily used by bad actors to search for potential vulnerabilities. The targets of the port scans are internal IP addresses, and the scans don’t leave the sample VPC.
        3. Curls DNS domains that are labeled for bitcoin, command and control, and other domains associated with known threats.
      4. Amazon S3
        1. Disables Block Public Access and server access logging for the S3 bucket provisioned as part of the solution.
      5. IAM
        1. Deletes the existing account password policy and creates a new password policy with a minimum length of six characters.
  6. The following Amazon EventBridge rules are created:
    1. guardduty_remediation_eks_rule – When a GuardDuty finding for EKS is created, a Lambda function attempts to delete the EKS resources. Subscribed users are notified of the finding through the deployed SNS topic.
    2. guardduty_remediation_credexfil_rule – When a GuardDuty finding for InstanceCredentialExfiltration is created, a Lambda function is used to revoke the IAM role’s temporary security credentials and AWS permissions. Subscribed users are notified of the finding through the deployed SNS topic.
    3. guardduty_respond_IAMUser_rule – When a GuardDuty finding for IAM is created, subscribed users are notified through the deployed SNS topic. There is no remediation activity triggered by this rule.
    4. Guardduty_notify_S3_rule – When a GuardDuty finding for Amazon S3 is created, subscribed users are notified through the deployed Amazon SNS topic. This rule doesn’t invoke any remediation activity.
  7. The following Lambda functions are created:
    1. guardduty_iam_remediation_function – This function revokes active sessions and sends a notification to the SNS topic (a sketch of one common revocation approach follows this list).
    2. eks_cleanup_function – This function deletes the EKS resources in the EKS CloudFormation template.

      Note: Upon attempts to delete the overall sample CloudFormation stack, this runs to delete the EKS CloudFormation template.

  8. An S3 bucket stores the EC2 user data scripts that are run from the EC2 instances.
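The deployed guardduty_iam_remediation_function contains the actual logic, but one common way to revoke active sessions is to attach an inline deny policy that blocks any request made with credentials issued before the current time. The following sketch shows that pattern under the assumption that this is broadly how the function works; the role name and policy name are placeholders.

import datetime
import json
import boto3

iam = boto3.client("iam")

def revoke_active_sessions(role_name):
    # Deny every action for credentials issued before now, which effectively
    # revokes existing temporary credentials for the role.
    revoke_time = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"DateLessThan": {"aws:TokenIssueTime": revoke_time}},
        }],
    }
    iam.put_role_policy(
        RoleName=role_name,
        PolicyName="RevokeOlderSessions",  # placeholder name
        PolicyDocument=json.dumps(policy),
    )

In the deployed solution, the role name would come from the GuardDuty finding payload delivered by the EventBridge rule.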

Solution deployment

You can deploy the SecurityFindingGeneratorStack solution by using either the AWS Management Console or the AWS Cloud Development Kit (AWS CDK).

Option 1: Deploy the solution with AWS CloudFormation using the console

Use the console to sign in to your chosen AWS account and then choose the Launch Stack button to open the AWS CloudFormation console pre-loaded with the template for this solution. It takes approximately 10 minutes for the CloudFormation stack to complete.


Option 2: Deploy the solution by using the AWS CDK

You can find the latest code for the SecurityFindingGeneratorStack solution in the SecurityFindingGeneratorStack GitHub repository, where you can also contribute to the sample code. For instructions and more information on using the AWS Cloud Development Kit (AWS CDK), see Get Started with AWS CDK.

To deploy the solution by using the AWS CDK

  1. To build the app when navigating to the project’s root folder, use the following commands:
    npm install -g aws-cdk
    npm install

  2. Run the following command in your terminal while authenticated in your separate deployment AWS account to bootstrap your environment. Be sure to replace <INSERT_AWS_ACCOUNT> with your account number and replace <INSERT_REGION> with the AWS Region that you want the solution deployed to.
    cdk bootstrap aws://<INSERT_AWS_ACCOUNT>/<INSERT_REGION>

  3. Deploy the stack to generate findings based on a specific parameter that is passed. The following parameters are available:
    1. inspector
    2. guardduty
    cdk deploy SecurityFindingGeneratorStack --parameters securityserviceuserdata=inspector

Reviewing security findings

After the solution successfully deploys, security findings should start appearing in your AWS account’s GuardDuty console within a couple of minutes.

Amazon GuardDuty findings

In order to create a diverse set of GuardDuty findings, the solution uses Amazon EC2 user data to run scripts. Those scripts can be found in the sample repository. You can also review and change scripts as needed to fit your use case or to remove specific actions if you don’t want specific resources to be altered or security findings to be generated.

A comprehensive list of active GuardDuty finding types and details for each finding can be found in the Amazon GuardDuty user guide. In this solution, activities are performed that cause the following GuardDuty findings to be generated:

To generate the EKS security findings, the EKS Deployment EC2 instance runs eksctl commands that deploy CloudFormation templates. If the EKS cluster doesn’t deploy, it might be because of capacity constraints in a specific Availability Zone. If this occurs, manually delete the failed EKS CloudFormation templates.

If you want to create the EKS cluster and security findings manually, you can do the following:

  1. Sign in to the Amazon EC2 console.
  2. Connect to the EKS Deployment EC2 instance using an IAM role that has access to start a session through Systems Manager. After connecting to the ssm-user, issue the following commands in the Session Manager session:
    1. sudo chmod 744 /home/ec2-user/guardduty-script.sh
    2. chown ec2-user /home/ec2-user/guardduty-script.sh
    3. sudo /home/ec2-user/guardduty-script.sh

It’s important that your security analysts have an incident response playbook. If playbooks don’t exist, you can refer to the GuardDuty remediation recommendations or AWS sample incident response playbooks to get started building playbooks.

Amazon Inspector findings

The findings for Amazon Inspector are generated by using the open source Vulhub collection. This open source collection provides pre-built vulnerable Docker environments; the solution pulls these images and pushes them into Amazon ECR.

The Amazon Inspector findings that are created vary depending on what exists within the open source library at deployment time. The following are examples of findings you will see in the console:

For Amazon Inspector findings, you can refer to parts 1 and 2 of Automate vulnerability management and remediation in AWS using Amazon Inspector and AWS Systems Manager.

Clean up

If you deployed the security finding generator solution by using the Launch Stack button in the console or the CloudFormation template security_finding_generator_cfn, do the following to clean up:

  1. In the CloudFormation console for the account and Region where you deployed the solution, choose the SecurityFindingGeneratorStack stack.
  2. Choose the option to Delete the stack.

If you deployed the solution by using the AWS CDK, run the command cdk destroy.

Important: The solution uses eksctl to provision EKS resources, which deploys additional CloudFormation templates. There are custom resources within the solution that will attempt to delete the provisioned CloudFormation templates for EKS. If there are any issues, you should verify and manually delete the following CloudFormation templates:

  • eksctl-GuardDuty-Finding-Demo-cluster
  • eksctl-GuardDuty-Finding-Demo-addon-iamserviceaccount-kube-system-aws-node
  • eksctl-GuardDuty-Finding-Demo-nodegroup-ng-<GUID>
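If you prefer to check for leftovers programmatically rather than in the console, the following boto3 sketch finds CloudFormation stacks whose names start with the eksctl prefix shown above and deletes them. The nodegroup stack usually has to finish deleting before the cluster stack can be removed, and you should review the matched stack names before running anything like this.

import boto3

cloudformation = boto3.client("cloudformation")

paginator = cloudformation.get_paginator("describe_stacks")
for page in paginator.paginate():
    for stack in page["Stacks"]:
        name = stack["StackName"]
        # Only target the sample's eksctl stacks; everything else is left alone.
        if name.startswith("eksctl-GuardDuty-Finding-Demo"):
            print(f"Deleting {name}")
            cloudformation.delete_stack(StackName=name)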

Conclusion

In this blog post, I showed you how to deploy a solution to provision resources in an AWS account to generate security findings. This solution provides a technical framework to conduct periodic simulations within your AWS environment. By having real, rather than simulated, security findings, you can enable your security teams to interact with actual resources and validate existing incident response processes. Having a repeatable mechanism to create security findings also provides your security team the opportunity to develop and test automated incident response capabilities in your AWS environment.

AWS has multiple services to assist with increasing your organization’s security posture. Security Hub provides native integration with AWS security services as well as partner services. From Security Hub, you can also implement automation to respond to findings using custom actions as seen in Use Security Hub custom actions to remediate S3 resources based on Amazon Macie discovery results. In part two of a two-part series, you can learn how to use Amazon Detective to investigate security findings in EKS clusters. Amazon Security Lake automatically normalizes and centralizes your data from AWS services such as Security Hub, AWS CloudTrail, VPC Flow Logs, and Amazon Route 53, as well as custom sources to provide a mechanism for comprehensive analysis and visualizations.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Incident Response re:Post or contact AWS Support.

Author

Jonathan Nguyen

Jonathan is a Principal Security Architect at AWS. His background is in AWS security with a focus on threat detection and incident response. He helps enterprise customers develop a comprehensive AWS security strategy and deploy security solutions at scale, and trains customers on AWS security best practices.

Now available: Building a scalable vulnerability management program on AWS

Post Syndicated from Anna McAbee original https://aws.amazon.com/blogs/security/now-available-how-to-build-a-scalable-vulnerability-management-program-on-aws/

Vulnerability findings in a cloud environment can come from a variety of tools and scans depending on the underlying technology you’re using. Without processes in place to handle these findings, they can begin to mount, often leading to thousands to tens of thousands of findings in a short amount of time. We’re excited to announce the Building a scalable vulnerability management program on AWS guide, which includes how you can build a structured vulnerability management program, operationalize tooling, and scale your processes to handle a large number of findings from diverse sources.

Building a scalable vulnerability management program on AWS focuses on the fundamentals of building a cloud vulnerability management program, including traditional software and network vulnerabilities and cloud configuration risks. The guide covers how to build a successful and scalable vulnerability management program on AWS through preparation, enabling and configuring tools, triaging findings, and reporting.

Targeted outcomes

This guide can help you and your organization with the following:

  • Develop policies to streamline vulnerability management and maintain accountability.
  • Establish mechanisms to extend the responsibility of security to your application teams.
  • Configure relevant AWS services according to best practices for scalable vulnerability management.
  • Identify patterns for routing security findings to support a shared responsibility model.
  • Establish mechanisms to report on and iterate on your vulnerability management program.
  • Improve security finding visibility and help improve overall security posture.

Using the new guide

We encourage you to read the entire guide before taking action or building a list of changes to implement. After you read the guide, assess your current state compared to the action items and check off the items that you’ve already completed in the Next steps table. This will help you assess the current state of your AWS vulnerability management program. Then, plan short-term and long-term roadmaps based on your gaps, desired state, resources, and business needs. Building a cloud vulnerability management program often involves iteration, so you should prioritize key items and regularly revisit your backlog to keep up with technology changes and your business requirements.

Further information

For more information and to get started, see the Building a scalable vulnerability management program on AWS guide.

We greatly value feedback and contributions from our community. To share your thoughts and insights about the guide, your experience using it, and what you want to see in future versions, select Provide feedback at the bottom of any page in the guide and complete the form.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Anna McAbee

Anna is a Security Specialist Solutions Architect focused on threat detection and incident response at AWS. Before AWS, she worked as an AWS customer in financial services on both the offensive and defensive sides of security. Outside of work, Anna enjoys cheering on the Florida Gators football team, wine tasting, and traveling the world.

Author

Megan O’Neil

Megan is a Principal Security Specialist Solutions Architect focused on Threat Detection and Incident Response. Megan and her team enable AWS customers to implement sophisticated, scalable, and secure solutions that solve their business challenges. Outside of work, Megan loves to explore Colorado, including mountain biking, skiing, and hiking.

Consolidating controls in Security Hub: The new controls view and consolidated findings

Post Syndicated from Emmanuel Isimah original https://aws.amazon.com/blogs/security/consolidating-controls-in-security-hub-the-new-controls-view-and-consolidated-findings/

In this blog post, we focus on two recently released features of AWS Security Hub: the consolidated controls view and consolidated control findings. You can use these features to manage controls across standards and to consolidate findings, which can help you significantly reduce finding noise and administrative overhead.

Security Hub is a cloud security posture management service that you can use to apply security best practice controls, such as “EC2 instances should not have a public IP address.” With Security Hub, you can check that your environment is properly configured and that your existing configurations don’t pose a security risk. Security Hub has more than 200 controls that cover more than 30 AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and AWS Lambda. In addition, Security Hub has integrations with more than 85 partner products. Security Hub can centralize findings across your AWS accounts and AWS Regions into a single delegated administrator account in your aggregation Region of choice, creating a single pane of glass to view findings. This can help you to triage, investigate, and respond to findings in a simpler way and improve your security posture.

The Security Hub controls are grouped into the following security standards:

With the new features — consolidated controls view and consolidated control findings — you can now do the following:

  • Enable or disable controls across standards in a single action. Previously, if you wanted to maintain the same enablement status of controls between standards, you had to take the same action across multiple standards (up to six times!).
  • If you choose to turn on consolidated control findings, you will receive only a single finding for a security check, even if the security check is enabled across several standards. This reduces the number of findings and helps you focus on the most important misconfigured resources in your AWS environment. It allows you to apply actions and updates (such as suppressing the finding or changing its severity) one time rather than having to do so multiple times across non-consolidated findings.

Overview of new features

Now we’ll discuss some of the details of how you can use the two new features to streamline the management of controls.

The new consolidated controls view

On the new Controls page, now available in the Security Hub console as shown in Figure 1, you can view and configure security controls across standards from one central location.

Figure 1: Security Hub Controls page

Before this release, controls had to be managed within the context of individual security standards. Even if the same control was part of multiple standards, the control had different IDs in each of them. With this recent release, Security Hub now assigns controls a unique security control ID across standards, so that it’s simpler for you to reference the controls and view their findings. Following the current naming convention of the AWS Foundational Security Best Practices (FSBP) standard, the consolidated control IDs start with the relevant service in scope for the control. In fact, whenever possible, the new consolidated control ID is the same as the previous FSBP control ID.

For example, before this release, control IAM.6 in FSBP was also referenced as 1.14 in CIS 1.2, 1.6 in CIS 1.4, PCI.IAM.4 in PCI DSS, and CT.IAM.6 in the AWS Control Tower standard. After the release, the control is referenced as IAM.6 across the Security Hub standards. This change does not affect the pre-existing API calls for Security Hub, such as UpdateStandardsControl, where you can still provide the previous StandardControlARN in order to make the call.

By using the new Controls view, you can understand the status of controls across your system, view control findings, and prioritize next steps without context switching. The following information is available on the Controls page of the Security Hub console:

  • An overall security score, which is based on the proportion of passed controls to the total number of enabled controls.
  • A breakdown of security checks across controls, with the percentage of failed security checks highlighted. Because many controls can contain multiple security checks and multiple findings, this value might be different from the security score, which considers controls as a single object. You can use this metric, as well as your security score, to monitor your progress as you work to remediate findings.
  • A list of controls that are categorized into different tabs based on enablement and compliance status. If you are an administrator of an organization within Security Hub, the enablement and compliance status will reflect the aggregate status of the entire organization. In your finding aggregation Region, the status will also be aggregated across linked Regions.

From the controls page, you can select a control to view its details (including its title and the standards it belongs to), and view and act on the findings generated by the control.

Security Hub also offers new API operations that match the capabilities of the controls page. Unlike the pre-existing API operations, these new API operations use the consolidated control IDs (also known as security control IDs) to provide a way to know and manage the relationship between controls and standards. You can use these API operations to manage each Security Hub control across standards, to make sure that the status of controls in the standards is aligned. The new API operations include the following:

We also provide an example script that makes use of these API calls and applies them across accounts and Regions so that your configuration is consistent. You can use our script to enable or disable Security Hub controls across your various accounts or Regions.
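As an illustration of how these API operations fit together, the following boto3 sketch looks up every standard that a control belongs to and disables the control across all of them. The control ID and UpdatedReason text are placeholders, and unlike the example script mentioned above, this sketch covers only a single account and Region.

import boto3

securityhub = boto3.client("securityhub")

control_id = "IAM.6"  # placeholder consolidated (security control) ID

# Find every standard the control is associated with.
associations = securityhub.list_standards_control_associations(
    SecurityControlId=control_id
)["StandardsControlAssociationSummaries"]

# Disable the control in each standard where it is currently enabled.
updates = [
    {
        "StandardsArn": assoc["StandardsArn"],
        "SecurityControlId": control_id,
        "AssociationStatus": "DISABLED",
        "UpdatedReason": "Example: handled by a compensating control",  # placeholder
    }
    for assoc in associations
    if assoc["AssociationStatus"] == "ENABLED"
]

if updates:
    securityhub.batch_update_standards_control_associations(
        StandardsControlAssociationUpdates=updates
    )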

Consolidating control findings between standards

Before we released the consolidated control findings feature, Security Hub generated separate findings per standard for each related control. Now, you can turn on consolidated control findings, and after doing so, Security Hub will produce a single finding per security check, even when the underlying control is shared across multiple standards. Having a single finding per check across standards will help you investigate, update, and remediate failed findings more quickly, while also reducing finding noise.

As an example, we can look at control CloudTrail.2, which is shared between standards supported by Security Hub. Before you turn on this capability, you might receive up to six findings for each security check generated by this control—with one finding for each security standard. After you turn on consolidated control findings, these older findings will be archived and Security Hub will generate one finding per security check in this control, regardless of how many security standards you have enabled. For an example of how the standard-specific findings compare to the new consolidated finding, see Sample control findings. The following is an example of a consolidated finding for the CloudTrail.2 control; we’ve highlighted the part that shows this finding is shared across standards.

{
  "SchemaVersion": "2018-10-08",
  "Id": "arn:aws:securityhub:us-east-2:123456789012:security-control/CloudTrail.2/finding/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
  "ProductArn": "arn:aws:securityhub:us-east-2::product/aws/securityhub",
  "ProductName": "Security Hub",
  "CompanyName": "AWS",
  "Region": "us-east-2",
  "GeneratorId": "security-control/CloudTrail.2",
  "AwsAccountId": "123456789012",
  "Types": [
    "Software and Configuration Checks/Industry and Regulatory Standards"
  ],
  "FirstObservedAt": "2022-10-06T02:18:23.076Z",
  "LastObservedAt": "2022-10-28T16:10:06.956Z",
  "CreatedAt": "2022-10-06T02:18:23.076Z",
  "UpdatedAt": "2022-10-28T16:10:00.093Z",
  "Severity": {
    "Label": "MEDIUM",
    "Normalized": "40",
    "Original": "MEDIUM"
  },
  "Title": "CloudTrail should have encryption at-rest enabled",
  "Description": "This AWS control checks whether AWS CloudTrail is configured to use the server-side encryption (SSE) AWS Key Management Service (AWS KMS) customer master key (CMK) encryption. The check will pass if the KmsKeyId is defined.",
  "Remediation": {
    "Recommendation": {
      "Text": "For directions on how to correct this issue, consult the AWS Security Hub controls documentation.",
      "Url": "https://docs.aws.amazon.com/console/securityhub/CloudTrail.2/remediation"
    }
  },
  "ProductFields": {
    "RelatedAWSResources:0/name": "securityhub-cloud-trail-encryption-enabled-fe95bf3f",
    "RelatedAWSResources:0/type": "AWS::Config::ConfigRule",
    "aws/securityhub/ProductName": "Security Hub",
    "aws/securityhub/CompanyName": "AWS",
    "Resources:0/Id": "arn:aws:cloudtrail:us-east-2:123456789012:trail/AWSMacieTrail-DO-NOT-EDIT",
    "aws/securityhub/FindingId": "arn:aws:securityhub:us-east-2::product/aws/securityhub/arn:aws:securityhub:us-east-2:123456789012:security-control/CloudTrail.2/finding/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
  },
  "Resources": [
    {
      "Type": "AwsCloudTrailTrail",
      "Id": "arn:aws:cloudtrail:us-east-2:123456789012:trail/AWSMacieTrail-DO-NOT-EDIT",
      "Partition": "aws",
      "Region": "us-east-2"
    }
  ],
  "Compliance": {
    "Status": "FAILED",
    "RelatedRequirements": [
        "PCI DSS v3.2.1/3.4",
        "CIS AWS Foundations Benchmark v1.2.0/2.7",
        "CIS AWS Foundations Benchmark v1.4.0/3.7"
    ],
    "SecurityControlId": "CloudTrail.2",
    "AssociatedStandards": [
  { "StandardsId": "standards/aws-foundational-security-best-practices/v/1.0.0"},
  { "StandardsId": "standards/pci-dss/v/3.2.1"},
  { "StandardsId": "ruleset/cis-aws-foundations-benchmark/v/1.2.0"},
  { "StandardsId": "standards/cis-aws-foundations-benchmark/v/1.4.0"},
  { "StandardsId": "standards/service-managed-aws-control-tower/v/1.0.0"},
  ]
  },
  "WorkflowState": "NEW",
  "Workflow": {
    "Status": "NEW"
  },
  "RecordState": "ACTIVE",
  "FindingProviderFields": {
    "Severity": {
      "Label": "MEDIUM",
      "Normalized": "40",
      "Original": "MEDIUM"
    },
    "Types": [
      "Software and Configuration Checks/Industry and Regulatory Standards"
    ]
  }
}

To turn on consolidated control findings

  1. Open the Security Hub console.
  2. In the left navigation pane, choose Settings, and then choose the General tab.
  3. Under Controls, turn on Consolidated control findings, and then choose Save.
Figure 2: Turn on consolidated control findings

If you are using the Security Hub integration with AWS Organizations or have invited member accounts through a manual invitation process, consolidated control findings can only be turned on by the administrator account. When this action is taken in the administrator account, the action will also be reflected in each member account in the current Region. It can take up to 18 hours for Security Hub to archive existing standard-specific findings and generate the new, standard-agnostic, findings.

You can also enable consolidated control findings by using the API (calling the UpdateSecurityHubConfiguration API with the ControlFindingGenerator parameter equal to SECURITY_CONTROL), or by using the AWS CLI (running the update-security-hub-configuration command with control-finding-generator equal to SECURITY_CONTROL), as in the following example.

aws securityhub --region <Region of choice> update-security-hub-configuration --control-finding-generator SECURITY_CONTROL

Much like the console behavior, if you have an organizational setup in Security Hub, this API action can only be taken by the administrator, and it will be reflected in each member account in the same Region.

What to expect when you enable consolidated control findings

To allow for these new capabilities to be launched, changes to the AWS Security Finding Format (ASFF) are required. This format is used by Security Hub for findings it generates from its controls or ingests from external providers. When you turn on finding consolidation, Security Hub will archive old standard-specific findings and generate standard-agnostic findings instead. This action will only affect control findings that Security Hub generates, and it will not affect findings ingested from partner products. However, turning on consolidated control findings might cause some updates that you previously made to Security Hub findings to be archived. Despite this one-time change, after the migration is complete (it can take up to 18 hours), you will be able to update finding fields in a single action and the updates will apply across standards, without the need to make multiple updates.

One field affected by the new capabilities is the Workflow field, which provides information about the status of the investigation into a finding. Manipulating this field can also update the overall compliance status of the control that the finding is related to. For example, if you have a control with one failed finding (and the rest have passed), and the failed finding comes from a resource for which you’d like to make an exception, you can decide to suppress that failed finding by updating the Workflow field. If you suppress failed findings in a control, its compliance status can change to pass.

Before turning on consolidated control findings, if you want to maintain an aligned compliance status in controls that belong to multiple standards, you have to update the Workflow status of findings in each standard. After turning on finding consolidation, you will only have to update the Workflow status once, and the suppression will be applied across standards, helping you to reduce the number of steps needed to suppress the same findings across standards.
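For example, the following boto3 sketch suppresses a single consolidated finding by updating its Workflow status through the BatchUpdateFindings API; the finding identifier values and the note text are placeholders.

import boto3

securityhub = boto3.client("securityhub")

securityhub.batch_update_findings(
    FindingIdentifiers=[
        {
            "Id": "arn:aws:securityhub:us-east-2:123456789012:security-control/CloudTrail.2/finding/EXAMPLE",
            "ProductArn": "arn:aws:securityhub:us-east-2::product/aws/securityhub",
        }
    ],
    Workflow={"Status": "SUPPRESSED"},
    Note={"Text": "Exception approved for this resource", "UpdatedBy": "security-team"},
)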

As mentioned earlier, when you turn on this new capability, some updates made to the previous, standard-specific findings will be archived and will not be included in the new consolidated control findings generated by Security Hub. In the case of the Workflow status, the new consolidated findings will be created with a value of NEW (for failed findings) or RESOLVED (for passed findings) in the Workflow field. However, after you have onboarded to the new finding format, you can update the value of the Workflow field, as well as other fields, and this value will be maintained without requiring you to make continuous updates. For the full list of fields that can be affected by the migration to the consolidated finding format, see Consolidated control findings – ASFF changes. Before you turn on finding consolidation, we suggest that you check if your custom automations refer to those affected fields. If they do, you can update your automations and test them by using the Sample control findings in the documentation.

Conclusion

This blog post covers new Security Hub features that make it simpler for you to manage controls across standards. With the new consolidated control findings feature, you can focus on the most relevant findings and reduce noise, which is why we recommend that you review the new feature and its associated changes and turn it on at your earliest convenience.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Security Hub forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Emmanuel Isimah

Emmanuel Isimah

Emmanuel is a solutions architect covering hypergrowth customers in the Digital Native Business sector. He has a background in networking, security, and containers. Emmanuel helps customers build and secure innovative cloud solutions, solving their business problems by using data-driven approaches. Emmanuel’s areas of depth include security and compliance, cloud operations, and containers.

How to set up and track SLAs for resolving Security Hub findings

Post Syndicated from Maisie Fernandes original https://aws.amazon.com/blogs/security/how-to-set-up-and-track-slas-for-resolving-security-hub-findings/

Your organization can use AWS Security Hub to gain a comprehensive view of your security and compliance posture across your Amazon Web Services (AWS) environment. Security Hub receives security findings from AWS security services and supported third-party products and centralizes them, providing a single view for identifying and analyzing security issues. Security Hub correlates findings and breaks them down into five severity categories: INFORMATIONAL, LOW, MEDIUM, HIGH, and CRITICAL. In this blog post, we provide step-by-step instructions for tracking Security Hub findings in each severity category against service-level agreements (SLAs) through visual dashboards.

SLAs are defined collaboratively by the Business, IT, and Security and Compliance teams within an organization. You can track Security Hub findings against your specific SLAs, and any findings that are in breach of an SLA can be escalated. You can also apply automation to alert the owners of the resources and remediate common security findings to improve your overall security posture.

Prerequisites

Security Hub uses service-linked AWS Config rules to perform security checks behind the scenes. To support these controls, you must enable AWS Config on all accounts, including the administrator and member accounts, in each AWS Region where Security Hub is enabled.

As a best practice, we recommend that you enable AWS Config and Security Hub across all of your accounts and Regions. For more information on how to do this, see Enabling and configuring AWS Config and Setting up Security Hub.

Solution overview

In this solution, you will learn two different ways to track your findings in Security Hub against the pre-defined SLA for each severity category.

Option 1: Use custom insights

Security Hub offers managed insights, which include a collection of related findings that identify a security issue that requires attention and intervention. You can view and take action on the insight findings. In addition to the managed insights, you can create custom insights to track issues and findings related to your resources in your environment.

Create a custom insight for SLA tracking

In this example, you set an SLA of 30 days for HIGH severity findings. This example will provide you with a view of the HIGH severity findings that were generated within the last 30 days and haven’t been resolved.

To create a custom insight to view HIGH severity findings from the last 30 days

  1. In the Security Hub console, in the left navigation pane, choose Insights.
  2. On the Insights page, choose Create insight, as shown in Figure 1.
    Figure 1: Create insight in the Security Hub console

  3. On the Create insight page, in the search box, leave the following default filters: Workflow status is NEW, Workflow status is NOTIFIED, and Record state is ACTIVE, as shown in Figure 2.
  4. To select the required grouping attribute for the insight, choose the search box to display the filter options. In the search box, choose the following filters and settings:
    1. Choose the Group by filter, and select WorkflowStatus.
      Figure 2: Create insights using filters

    2. Choose the Severity label filter and enter HIGH.
    3. Choose the Created at filter and enter 30 to indicate the number of days you want to set as your SLA.
  5. Choose Create insight again.
  6. For Insight name, enter a meaningful name (for this example, we entered UnresolvedHighSevFindings), and then choose Create insight again.

You can repeat the same steps for other finding severities – CRITICAL, MEDIUM, LOW, and INFORMATIONAL; you can change the number of days you specify for the Created at filter to meet your SLA requirements; or specify different workflow status settings. Note that the workflow status can have the following values:

  • NEW – The initial state of a finding before you review it.
  • NOTIFIED – Indicates that the resource owner has been notified about the security issue.
  • SUPPRESSED – Indicates that you have reviewed the finding and no action is required.
  • RESOLVED – Indicates that the finding has been reviewed and remediated.

Your custom insight will show the findings that meet the criteria you defined. For more information about creating custom insights, see Module 2: Custom Insights in the Security Hub Workshop.
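If you want to create the same insight programmatically instead of through the console, the following boto3 sketch uses the CreateInsight API with filters equivalent to the console steps above; the insight name matches the earlier example.

import boto3

securityhub = boto3.client("securityhub")

securityhub.create_insight(
    Name="UnresolvedHighSevFindings",
    GroupByAttribute="WorkflowStatus",
    Filters={
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
        "WorkflowStatus": [
            {"Value": "NEW", "Comparison": "EQUALS"},
            {"Value": "NOTIFIED", "Comparison": "EQUALS"},
        ],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        # Findings created within the last 30 days, matching the SLA in this example.
        "CreatedAt": [{"DateRange": {"Value": 30, "Unit": "DAYS"}}],
    },
)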

Option 2: Build visualizations for Security Hub findings data by using Amazon QuickSight

We hear from our customers that they are looking for a solution to quickly visualize the status of their Security Hub findings, to see which findings they need to take action on (NEW and NOTIFIED) and which they do not (SUPPRESSED and RESOLVED). You can achieve this by building a data analytics pipeline that uses Amazon EventBridge, Amazon Kinesis Data Firehose, Amazon Simple Storage Service (Amazon S3), Amazon Athena, and Amazon QuickSight. The data analytics pipeline enables you to detect, analyze, contain, and mitigate issues quickly.

This solution integrates Security Hub with EventBridge to set SLA rules to a specified period of your choice for each severity level. For example, you can set the SLA to 5 days for CRITICAL severity findings, 10 days for HIGH severity findings, 14 days for MEDIUM severity findings, 30 days for LOW severity findings, and 60 days for INFORMATIONAL severity findings.

Architecture overview

Figure 3 shows the architectural overview of the QuickSight solution workflow.

Figure 3: Architecture diagram for option 2, the QuickSight solution

In the QuickSight solution, Security Hub publishes the findings to EventBridge, and then an EventBridge rule (based on the SLA) is configured to deliver the findings to Kinesis Data Firehose. For example, if the SLA is 14 days for all MEDIUM severity findings, then those findings will be filtered by the rule and sent to Kinesis Data Firehose. Security Hub findings follow the AWS Security Finding Format (ASFF).

The following is a sample EventBridge rule that filters the Security Hub findings for MEDIUM severity and workflow status NEW, before publishing the findings to Kinesis Data Firehose, and then finally to Amazon S3 for storage. A workflow status of NEW and NOTIFIED should be included to catch all findings that require action.

{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "Severity": {
        "Label": ["MEDIUM"]
      },
      "Workflow": {
        "Status": ["NEW"]
      }
    }
  }
}
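If you prefer to create the rule programmatically, the following boto3 sketch registers the same event pattern (including NOTIFIED, as noted above) and points it at a Kinesis Data Firehose delivery stream. The rule name, delivery stream ARN, and IAM role ARN are placeholders; the role must allow EventBridge to put records to the stream.

import json
import boto3

events = boto3.client("events")

rule_name = "sechub-medium-findings-to-firehose"  # placeholder

events.put_rule(
    Name=rule_name,
    State="ENABLED",
    EventPattern=json.dumps({
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Imported"],
        "detail": {
            "findings": {
                "Severity": {"Label": ["MEDIUM"]},
                "Workflow": {"Status": ["NEW", "NOTIFIED"]},
            }
        },
    }),
)

events.put_targets(
    Rule=rule_name,
    Targets=[{
        "Id": "firehose-delivery",
        "Arn": "arn:aws:firehose:us-east-1:123456789012:deliverystream/sechub-findings",  # placeholder
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-to-firehose",  # placeholder
    }],
)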

After the findings are exported and stored in Amazon S3, you can use Athena to run queries on the data and you can use Amazon QuickSight to display the findings that violate your organization’s SLA. With Athena, you can create views of the original table as a logical table. You can also create a view for CRITICAL, HIGH, MEDIUM, LOW, and INFORMATIONAL severity findings.

For details about how to export findings and build a dashboard, see the blog post How to build a multi-Region AWS Security Hub analytic pipeline and visualize Security Hub data.

Visualize an SLA by using QuickSight

The QuickSight dashboard shown in Figure 4 is an example that shows all the MEDIUM severity findings that should be resolved within a 14-day SLA.

Figure 4: QuickSight table showing medium severity findings over a 14-day SLA

Using QuickSight, you can create different types of data visualizations to represent the exported Security Hub findings, which enables the decision makers in your organization to explore and interpret information in an interactive visual environment. For example, Figure 5 shows findings categorized by service.

Figure 5: QuickSight visual showing MEDIUM severity findings for each service

As another example, Figure 6 shows findings categorized by severity.

Figure 6: QuickSight visual showing findings by severity

For more information about visualizing Security Hub findings by using Amazon OpenSearch Service and Kibana, see the blog post Visualize Security Hub Findings using Analytics and Business Intelligence Tools.

Changing a finding’s severity

Over time, your organization might discover that there are certain findings that should be tracked at a lower or higher severity level than what is auto-generated from Security Hub. You can implement EventBridge rules with AWS Lambda functions to automatically update the severity of the findings as soon as they are generated.

To automate the finding severity change

  1. On the EventBridge console, create an EventBridge rule. For detailed instructions, see Getting started with Amazon EventBridge.
    Figure 7: Create an EventBridge rule in the console

  2. Define the event pattern, including the finding generator ID or any other identifying fields for which you want to redefine the severity. Review the fields in the AWS Security Finding Format (ASFF), and choose the filters you want. The following is a sample of the event pattern.
    {
      "source": ["aws.securityhub"],
      "detail-type": ["Security Hub Findings - Imported"],
      "detail": {
        "findings": {
          "GeneratorId": ["aws-foundational-security-best-practices/v/1.0.0/S3.4"],
          "RecordState": ["ACTIVE"],
          "Workflow": {
            "Status": ["NEW"]
          }
        }
      }
    }

  3. Specify the target as a Lambda function that will host the code to update the finding severity.
    Figure 8: Select a target Lambda function

  4. In the Lambda function, use the BatchUpdateFindings API action to update the severity label as desired.

    The following example Lambda code updates the finding severity to INFORMATIONAL. This function requires Amazon CloudWatch Logs write permissions, and requires permission to invoke the Security Hub API action BatchUpdateFindings.

    import logging
    import os

    import boto3
    import botocore.exceptions as boto3exceptions

    logger = logging.getLogger()
    logger.setLevel(os.environ.get('LOGLEVEL', 'INFO').upper())

    def lambda_handler(event, context):

        logger.info(event)

        sec_hub_client = boto3.client('securityhub')

        for finding in event['detail']['findings']:

            # Determine and log this finding's ID and product ARN
            finding_id = finding["Id"]
            product_arn = finding["ProductArn"]
            logger.info("Finding ID: " + finding_id)

            # Determine and log this finding's resource type
            resource_type = finding["Resources"][0]["Type"]
            logger.info("Resource Type is: " + resource_type)

            try:
                # Update the finding's severity to INFORMATIONAL
                sec_hub_client.batch_update_findings(
                    FindingIdentifiers=[
                        {
                            'Id': finding_id,
                            'ProductArn': product_arn
                        }
                    ],
                    Severity={"Label": "INFORMATIONAL"}
                )

            except boto3exceptions.ClientError as error:
                logger.exception(f"Client error invoking batch update findings {error}")
            except boto3exceptions.ParamValidationError as error:
                logger.exception(f"The parameters you provided are incorrect: {error}")

        return {"statusCode": 200}

  5. The finding now carries the new severity level set by the Lambda function. For example, Figure 9 shows a finding that is generated as MEDIUM by default, but the configured EventBridge rule and Lambda function update the severity level to INFORMATIONAL.
    Figure 9: Security Hub findings generated with updated severity level

Conclusion

This blog post walked you through two different solutions for setting up and tracking the SLAs for the findings generated by Security Hub. Reporting Security Hub findings for a given SLA in a dashboard view can help you prioritize findings and track whether findings are being remediated on time. This post also provided example code that you can use to modify the Security Hub severity for a specific finding. To further extend the solution and enable custom actions to remediate the findings, see the following:

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Maisie Fernandes

Maisie is a Senior Solutions Architect at AWS based in London. She is focused on helping public sector customers design, build, and secure scalable applications on AWS. Outside of work, Maisie enjoys traveling, running, and gardening.

Krati Singh

Krati is a Senior Solutions Architect at AWS based in San Francisco Bay Area. She collaborates with small and medium business customers on their cloud journey and is passionate about security in the cloud. Outside of work, Krati enjoys reading, and an occasional hike on a nice weather day.

Use AWS Chatbot in Slack to remediate security findings from AWS Security Hub

Post Syndicated from Vikas Purohit original https://aws.amazon.com/blogs/security/use-aws-chatbot-in-slack-to-remediate-security-findings-from-aws-security-hub/

You can use AWS Chatbot and its integration with Slack and Amazon Chime to receive and remediate security findings from AWS Security Hub. To learn about how to configure AWS Chatbot to send findings from Security Hub to Slack, see the blog post Enabling AWS Security Hub integration with AWS Chatbot.

In this blog post, you’ll learn how to extend the solution so you can use AWS Chatbot to remediate the findings in your Slack channel. You’ll receive the findings from Security Hub and then run AWS CLI commands from your Slack channel to remediate the reported security findings.

AWS Chatbot works by acting as a subscriber to an Amazon Simple Notification Service (Amazon SNS) topic that can receive notifications from either Amazon CloudWatch or Amazon EventBridge, and have them delivered to the configured Slack channels or Amazon Chime chat rooms. You can apply standard AWS Identity and Access Management (IAM) permissions to the Slack channel or Amazon Chime chatroom, and you can also associate channel guardrails to provide granular control over which commands can be run from the channel. For example, you might want to allow commands that get more details on findings reported from Security Hub and that remediate and archive those findings, but use channel guardrails to prevent anyone from disabling Security Hub. As another example, you might want to allow channel members to query AWS CloudTrail logs to get more details on findings, but use channel guardrails to prevent them from disabling AWS CloudTrail or changing the destination Amazon Simple Storage Service (Amazon S3) bucket.
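
To make the guardrail idea concrete, the following sketch creates a customer managed IAM policy that you could select as a channel guardrail or attach to the channel IAM role: it allows reading and updating Security Hub findings and managing account-level S3 Block Public Access, while explicitly denying the ability to disable Security Hub. The policy name and the exact action list are illustrative assumptions, not part of the original walkthrough.

import json

import boto3

iam = boto3.client("iam")

# Illustrative guardrail-style policy. Adjust the allowed actions to match the
# commands your channel members actually need to run.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "securityhub:GetFindings",
                "securityhub:BatchUpdateFindings",
                "s3:GetAccountPublicAccessBlock",
                "s3:PutAccountPublicAccessBlock",
            ],
            "Resource": "*",
        },
        {
            # Guardrail: never allow disabling Security Hub from the channel.
            "Effect": "Deny",
            "Action": ["securityhub:DisableSecurityHub"],
            "Resource": "*",
        },
    ],
}

iam.create_policy(
    PolicyName="chatbot-channel-securityhub-remediation",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)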

Overview of ChatOps and ChatSecOps concepts

ChatOps, also known as Chat Operations, refers to using chatbots, tools, and clients to communicate, notify, assign, and launch operational tasks and issues. You can use your existing Slack channels and Amazon Chime chatrooms to receive alerts and notifications about operational issues or tasks, and you can also respond to those incidents or tasks in real time from the same chat room. SecOps is a philosophy of encouraging collaboration between the ITOps and Security teams of an organization. ChatSecOps, also known as Chat Security Operations, uses the ChatOps technology to enable customers to put SecOps in practice.

ChatSecOps facilitates this collaboration by allowing security-related notifications to be delivered to common chat rooms used by SecOps teams, providing visibility on the issues and actions that are taken to investigate and remediate the reported issues. SecOps teams can share threat analysis reports, compliance finding reports, and information on security vulnerabilities in these channels and work closely with DevOps teams to perform further analysis, investigation, and remediation of the issues and findings. This helps to ensure visibility and collaboration across the SecOps and DevOps teams and promotes the philosophy of DevSecOps.

Prerequisites

To get started, you’ll need the following prerequisites:

Set up Slack permissions

You need to grant permissions to the users in Slack channels, which you can do in one of the following ways:

  • Associate a channel IAM role with AWS Chatbot. This method provides similar permissions to all the members of the Slack channel. A channel IAM role is more useful if all your channel members require the same set of permissions. The channel IAM role can also be used to restrict the permissions provided by the user IAM role.
  • Define user roles. User roles require channel members to choose their own roles. This allows different users in your channel to have different sets of permissions. User roles are also useful when you don’t want new channel members to perform actions as soon as they join the channel.

For detailed instructions about setting up AWS Chatbot and defining permissions, see Getting started with AWS Chatbot. For more information about setting boundaries on the permissions that can be allowed by the channel and user IAM roles, see Channel guardrails.

Integrate the Slack channel with AWS Chatbot

After you set up the Slack channel with required permissions, you integrate the ChatOps for AWS app with your channel by using the following steps.

To integrate the Slack channel with AWS Chatbot

  1. Log in to Slack by using either the Slack app or web browser.
  2. In the Slack sidebar, from the Channels section, choose the channel name.
  3. In the right pane, choose the channel name to open the channel configuration window.
  4. Choose the Integrations tab, then choose Add an App.
  5. In the search bar, enter AWS Chatbot. In the search results list, choose the Add button for AWS Chatbot.
  6. On the Integrations tab, under Apps, you should see ChatOps for AWS, as shown in Figure 1.
    Figure 1: Integrate the ChatOps for AWS app with your Slack channel

The step-by-step process for integrating a Slack channel with AWS Chatbot is described in more detail in the blog post Enabling AWS Security Hub integration with AWS Chatbot.

Now you’re ready to start running the commands. Note that you need to prefix every command with @aws. For more information, see Running AWS CLI commands from Slack channels.

Use case: Amazon S3 Block Public Access enabled at the account level

The Amazon S3 Block Public Access feature provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. With S3 Block Public Access, account administrators and bucket owners can set up centralized controls to limit public access to your S3 resources. These controls are enforced regardless of how the resources are created.

Amazon GuardDuty tracks and reports S3 Block Public Access feature configurations at the account level, as well as the bucket level. These findings are automatically sent to Security Hub.

For the purpose of this walkthrough, consider the following use case: your organization has compliance requirements to disable public access to all the S3 buckets at the account level. You do not want to allow individual bucket owners to configure this access policy. You get a notification that the S3 Block Public Access feature is disabled at the account level for a specific account. This walkthrough shows how you can run AWS CLI commands from the Slack channel to investigate and remediate this issue.

To remediate the finding for Amazon S3 Block Public Access from the Slack channel

  1. You receive a Security Hub notification that Amazon S3 Block Public Access was disabled for an account in your designated Slack channel.
    Figure 2: Notification received from Security Hub in Slack channel

    This notification indicates that S3 Block Public Access was disabled for a specific account.

    Note: Your Slack channel members require permissions to investigate and remediate the findings received in the Slack channel. As described earlier, you can grant permissions using a channel IAM role or a user IAM role. You should follow the principle of least privilege when granting access, and use IAM Access Analyzer to review the permissions that are granted through the channel or user IAM role.

  2. Before you take any action, you need to find the current S3 Block Public Access configuration for the account. To do this, run the following AWS CLI command from the Slack channel, replacing <your_account_id> with the AWS account ID you are investigating.

    @aws s3control get-public-access-block --account-id <your_account_id>

  3. Review the response in the Slack channel.
    Figure 3: AWS CLI command output in Slack channel indicating that S3 Block Public Access is disabled

    The output in Figure 3 shows that all the parameters of PublicAccessBlockConfiguration are set to false, which indicates that the Block Public Access feature is disabled at the account level.

  4. To remediate this issue, run the following AWS CLI command in your Slack channel, replacing <your_account_id> with the AWS account ID you are investigating.

    @aws s3control put-public-access-block --account-id <your_account_id> --public-access-block-configuration {"RestrictPublicBuckets": true, "BlockPublicPolicy": true, "BlockPublicAcls": true, "IgnorePublicAcls": true}

  5. In the response from AWS Chatbot, look for Result was null to verify that the command was run without any errors.
    Figure 4: AWS CLI command run from Slack channel to enable S3 Block Public Access

  6. To check the current status of the configuration, and to validate whether the issue has been resolved, again run the following AWS CLI command from the Slack channel, replacing <your_account_id> with the AWS account ID you are investigating:

    @aws s3control get-public-access-block --account-id <your_account_id>

  7. In the response, you see that all the parameters of PublicAccessBlockConfiguration are now set to true, which indicates that the Block Public Access feature is enabled at the account level.
    Figure 5: AWS CLI command output in Slack channel indicating S3 Block Public Access is enabled

Another example use case is that you get a security finding notifying you about unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. You can remediate the finding by running AWS CLI commands to encrypt the volume. In addition to interacting with AWS services by running standard AWS CLI commands in the Slack channel, you can further extend this capability to run operating system (OS)-level commands by using AWS Systems Manager runbooks, using the same mechanism described in this post. For more information, see AWS Systems Manager Runbooks in the AWS Chatbot Administrator Guide.

Conclusion

In this blog post, you learned how to run AWS CLI commands from Slack channels to remediate your security findings. This allows you to receive alerts and notifications from Security Hub and other security services such as Amazon GuardDuty, then investigate and remediate the issues from a single platform. You can integrate AWS Chatbot with your security operation team’s Slack channel or Amazon Chime chatroom, and manage your security operations in a more collaborative, transparent, and automated manner.

If you have any questions about this post, let us know in the Comments section below. For more information about AWS Chatbot, see the AWS Chatbot Administrator Guide.

Want more AWS Security news? Follow us on Twitter.

Vikas Purohit

Vikas works as a Partner Solution Architect with AISPL, India. He is passionate about helping customers and partners in their cloud journeys. He is particularly passionate in Cloud Security, hybrid networking and migrations.

Use Security Hub custom actions to remediate S3 resources based on Macie discovery results

Post Syndicated from Jonathan Nguyen original https://aws.amazon.com/blogs/security/use-security-hub-custom-actions-to-remediate-s3-resources-based-on-macie-discovery-results/

The amount of data available to be collected, stored, and processed within an organization’s AWS environment can grow rapidly and exponentially. This increases operational complexity and the need to identify and protect sensitive data. If your security teams have to review and remediate security risks manually, they either need a large team or risk not acting in a timely manner. Manual operation also increases the chance that a step is missed or the wrong action is taken. As a result, your security teams need an automated and scalable way to support these operations efficiently.

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie generates findings for sensitive data in an S3 object or a potential issue with the security or privacy of an S3 bucket. AWS Security Hub allows you to gain a centralized view into the security posture across your AWS environment by aggregating security findings from various AWS services and partner products, including Amazon Macie. Security Hub also includes the custom actions feature, which you can use to create actions for response and remediation to selected findings within the Security Hub console in an efficient and consistent manner.

It is important for your security teams to create effective and standardized mechanisms for taking action against Macie findings to ensure that data remains secure. By using Security Hub custom actions, you can have predefined actions for the security team to take against Macie findings without having to manually find and remediate the resources.

This blog post provides you with an example solution for responding to Macie sensitive data findings and policy findings in Security Hub by using custom actions. I will walk through the components of the solution, as well as opportunities where resources can be customized for your specific use case.

Prerequisites

You must have AWS Security Hub and Amazon Macie enabled in the AWS account where you are deploying this solution.

Solution overview

In this solution, you’ll use a combination of Security Hub custom actions, Amazon EventBridge, and AWS Lambda to take action on Macie findings in Security Hub. You will be working with the findings within the same AWS account where you deployed the solution.

Macie generates two categories of findings relating to different resources, which will require different remediation actions.

  1. A policy finding is a detailed report of a potential policy violation or issue with the security or privacy of an Amazon Simple Storage Service (Amazon S3) bucket.
  2. A sensitive data finding is a detailed report of sensitive data in an S3 object.

A full list of Macie finding types can be found in the Macie User guide.

For the two Macie finding categories, there is an associated Security Hub custom action:

  1. Custom action for sensitive data finding (S3 object) – When the security team selects this custom action, the action invokes a Lambda function that will take the following steps on the S3 object in the Macie finding:
    1. Tag the object with the Security Hub finding ID
    2. Encrypt the S3 object with a different customer-managed KMS key
    3. Update the Security Hub finding workflow status to RESOLVED
  2. Custom action for policy finding (S3 bucket) – When the security team selects this custom action, the action invokes a Lambda function that will take the following steps on the S3 bucket in the Macie finding:
    1. Tag the bucket with the Security Hub finding ID
    2. Update the S3 bucket configuration to:
      • Enable default encryption
      • Enable public access block
    3. Update the Security Hub finding workflow status to RESOLVED

The solution is configured to take action within the AWS account where the finding and corresponding resource is generated. In order to enable cross-account remediation, you will need to deploy an additional IAM role for the automation to assume and provision a KMS key to use for encryption.

Note: The custom actions in this solution are meant to be examples of actions to take against Macie policy and sensitive data findings. These actions will be different depending on your use-case and environment. You will also need to review and update the associated Lambda function execution role IAM policies accordingly.
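
As an illustration of the kind of logic such a function might contain, the following minimal sketch remediates a sensitive data (S3 object) finding: it tags the object, re-encrypts it with a customer managed KMS key, and resolves the finding in Security Hub. The key alias and tag key match the values referenced later in the validation steps, but the finding parsing and error handling are simplified assumptions; the actual function in the repository differs.

import boto3

s3 = boto3.client("s3")
securityhub = boto3.client("securityhub")

KMS_KEY_ALIAS = "alias/macie_key"   # customer managed key used by the solution
TAG_KEY = "SH_Finding_ID"


def lambda_handler(event, context):
    """Simplified remediation for a Macie sensitive data (S3 object) finding."""
    for finding in event["detail"]["findings"]:
        finding_id = finding["Id"]
        product_arn = finding["ProductArn"]

        # Locate the S3 object resource in the ASFF finding and split its ARN
        # (arn:aws:s3:::bucket-name/key-name) into bucket and key.
        s3_object = next(r for r in finding["Resources"] if r["Type"] == "AwsS3Object")
        bucket, key = s3_object["Id"].split(":::", 1)[1].split("/", 1)

        # 1. Tag the object with the Security Hub finding ID.
        s3.put_object_tagging(
            Bucket=bucket,
            Key=key,
            Tagging={"TagSet": [{"Key": TAG_KEY, "Value": finding_id}]},
        )

        # 2. Re-encrypt the object in place with the customer managed KMS key.
        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId=KMS_KEY_ALIAS,
        )

        # 3. Mark the finding as resolved in Security Hub.
        securityhub.batch_update_findings(
            FindingIdentifiers=[{"Id": finding_id, "ProductArn": product_arn}],
            Workflow={"Status": "RESOLVED"},
        )

    return {"statusCode": 200}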

Solution architecture

Figure 1: Resources deployed in the Security AWS account taking action on resources identified in the Workload AWS account

Figure 1 shows the architecture for the solution. The workflow is as follows:

  1. A Macie job runs and creates findings, which are sent to Security Hub in the same AWS account as the Macie finding.
  2. The delegated administrator Security Hub account combines findings across all member Security Hub accounts, including Macie findings.
  3. The security team reviews the Macie findings in the Security Hub delegated administrator account and determines to take remediation actions for a finding by selecting the finding and then selecting the appropriate Security Hub custom action.
  4. The Security Hub custom action sends the finding to the EventBridge rule, which is linked to the Lambda function.
  5. The EventBridge rule invokes the Lambda function to take action against the resources from the Macie finding.
  6. The Lambda function will:
    1. Take action for the S3 resource
    2. Mark the Macie finding as resolved in the delegated administrator Security Hub account

The solution is currently intended to work in a single Region. To enable this solution across Regions, you need to change the remediation Lambda function code for any Region-scoped resources that are used for remediation actions (for example, the AWS Key Management Service (AWS KMS) key used for encryption).
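
One way to handle this, sketched below, is to create the service client in the Region of the affected resource rather than in the Lambda function's own Region. The helper name and the fallback Region are illustrative assumptions; the Region lookup relies on the ASFF Resources[].Region field.

import boto3


def client_for_finding(service: str, finding: dict):
    """Create a boto3 client in the Region where the affected resource lives."""
    # Fall back to us-east-1 only for illustration; pick a sensible default for your environment.
    region = finding["Resources"][0].get("Region", "us-east-1")
    return boto3.client(service, region_name=region)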

Deploy the solution

You can deploy the solution through either the AWS Management Console or the AWS Cloud Development Kit (AWS CDK).

To deploy the solution by using the AWS Management Console

  • In your security tooling account, launch the AWS CloudFormation template by choosing the following Launch Stack button. It will take approximately 10 minutes for the CloudFormation stack to complete.
    Select this image to open a link that starts building the CloudFormation stack

    Note: The stack will launch in the N. Virginia (us-east-1) Region. To deploy this solution into other AWS Regions, download the solution’s CloudFormation template, modify it, and deploy it to the selected Region.

  • (OPTIONAL) If you want to enable cross-account remediation, launch the following AWS CloudFormation template in the AWS account where you want to be able to take remediation actions. You can also use AWS CloudFormation StackSets if deploying to multiple AWS accounts.
    Select this image to open a link that starts building the CloudFormation stack

To deploy the solution by using AWS CDK

You can find the latest code in our GitHub repository, where you can also contribute to the sample code. The following commands show how to deploy the solution by using the AWS CDK. First, the CDK initializes your environment and uploads the AWS Lambda assets to Amazon S3. Then, you can deploy the solution to your account. Make sure to replace the account placeholders with the appropriate AWS account numbers, and replace <REGION> with the AWS Region that you want the solution deployed to.

  1. Run the following commands in your terminal while authenticated in the security tooling AWS account:

    cdk bootstrap aws://<Security_Tooling_AWS_ACCOUNT>/<REGION>

    cdk deploy MacieRemediationStack

  2. (OPTIONAL) If you want to enable cross-account remediation, run the following commands in your terminal while authenticated to the member AWS account:

    cdk bootstrap aws://<Member_AWS_ACCOUNT>/<REGION>

    cdk deploy MacieRemediationIAMStack --parameters solutionaccount=<Security_Tooling_AWS_ACCOUNT>

Solution walkthrough and validation

Now that you’ve successfully deployed the solution, you can see things in action. You have two options for testing the workflow on your own:

  1. Use a sample event, generated by a Macie finding in Security Hub, and invoke the Lambda function that is tied to the Security Hub custom action.

    Note: If using sample events, you can replace the values for the resources with real resources. Otherwise, you will not be able to see the Lambda function successfully take action because the resource in your sample event may not exist.

  2. Generate demo Macie findings in Security Hub by using this sample data for Amazon Macie.

I have existing findings for Macie generated in my AWS account, and in the procedures in this section, I’ll walk through taking action against these.

Note: If you set up Macie and Security Hub in a delegated administrator and member model that ingests findings from other AWS accounts, the IAM remediation roles for the S3 bucket and S3 objects must be deployed in the member accounts.

Review deployed resources in the AWS console

Before taking action on your sample findings, review the deployed resources that you’ll use.

To review deployed resources

  1. In the AWS account console where the automation was deployed, go to Security Hub, choose Settings, and then choose Custom actions. You should see two custom actions:
    • Macie Policy Finding
      • arn:aws:securityhub:<region>:<account-id>:action/custom/MacieS3BucketPolicy
    • Macie Data Finding
      • arn:aws:securityhub:<region>:<account-id>:action/custom/MacieSensitiveData
        Figure 2: Custom actions in Security Hub

  2. Navigate to the EventBridge console and then choose Rules. You should see four rules:
    • Disabled – These are disabled by default during deployment
      • Autoremediate_Macie_Policy_Finding
      • Autoremediate_Macie_Sensitive_Data_Finding
        Figure 3: Disabled EventBridge rules for autoremediation of Macie findings in Security Hub

    • Enabled – These are enabled by default during deployment:
      • Custom_Action_Macie_Policy_Finding
      • Custom_Action_Macie_Sensitive_Data_Finding
        Figure 4: Enabled EventBridge rules tied to the Security Hub custom actions

    In the enabled EventBridge rules, you should see the corresponding Security Hub custom action Amazon Resource Names (ARNs) in the rule event pattern.

    Figure 5: Enabled EventBridge rule event pattern for the Security Hub custom action

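If you can't view the figure, the pattern on those enabled rules looks roughly like the following (shown here as the Python dictionary you would serialize to JSON for the rule). Security Hub emits custom action events with the detail type Security Hub Findings - Custom Action and the custom action ARN in the resources field; the Region and account ID below are placeholders.

# Approximate event pattern for the Custom_Action_Macie_Sensitive_Data_Finding rule.
custom_action_event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Custom Action"],
    "resources": [
        "arn:aws:securityhub:<region>:<account-id>:action/custom/MacieSensitiveData"
    ],
}
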
Take action on an Amazon Macie object or policy finding

Each Security Hub custom action invokes a corresponding Lambda function that is configured as a target in the EventBridge rule. The Lambda function parses the information in the Macie finding from Security Hub to take action.

Each Security Hub custom action is specific to either an S3 object or an S3 bucket. If you attempt a custom action meant for an S3 object against a Macie policy finding, the custom action will still be initiated, but the Lambda function that it invokes will fail.

If the Macie finding is specific to an S3 object, the title will display “The S3 object …,” whereas if the Macie finding is for a policy finding, the title will display information for an S3 bucket.

To take action on findings

  1. In the AWS account console where the automation was deployed, navigate to AWS Security Hub, and then choose Findings.
  2. Filter the findings by setting Product Name to Macie.
    Figure 6: Filter for Macie findings in Security Hub

  3. Select the checkbox for either a Macie policy finding or a sensitive data finding, and then choose the corresponding custom action. There is no confirmation step; choosing the action immediately invokes the Lambda function.
    Figure 7: Validate Custom Action has sent the finding to Amazon CloudWatch Events (EventBridge rule)

Review and validate the Security Hub custom action on target resources

In order to validate or troubleshoot the solution, you need to review whether the Lambda function was able to take action against the resources in the Security Hub finding for Macie.

To validate or troubleshoot the custom action

  1. For validation of sensitive data finding remediation, review S3 object configuration:
    1. Navigate to the Amazon S3 console.
    2. Choose the S3 object in the Macie finding.
    3. Choose the Properties tab and review the following fields:
      • Tags should be set to SH_Finding_ID.
      • AWS KMS key ARN should be set to the KMS key with the alias `macie_key`
        1. Choose the KMS key ARN and validate that the key's alias matches the key deployed by the solution
  2. For validation of policy finding remediation, review the S3 bucket configuration:
    1. Navigate to the Amazon S3 console.
    2. Choose the S3 bucket in the Macie finding.
    3. Choose the Properties tab and review the following fields:
      • Tags should be set to SH_Finding_ID.
      • Default Encryption should be set to Enabled.
    4. Choose the Permissions tab and review the following fields:
      • Block public access should be set to On.
  3. For troubleshooting, you can review the CloudWatch logs for the Lambda function:
    1. Navigate to the CloudWatch console.
    2. Choose /aws/lambda/Remediate_Macie_S3_Bucket.
    3. Choose the most recent log stream and review the logs to see what actions were taken on the resources.

Next steps and customization

The solution in this post has a custom action for an S3 object and an S3 bucket, and is meant to serve as a template. You could modify the Lambda functions associated with the custom actions to take different or additional actions that are specific to your environment and data classification.

Additionally, I walked through specific Security Hub custom actions for Macie policy (bucket) or sensitive data (objects) findings. If you have defined actions to take for both, you could consolidate the custom actions and invoke a Lambda function that parses information from the Security Hub Macie finding to determine if it is a policy or sensitive data finding.
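
As a rough sketch of what that consolidation could look like, the following example routes a finding to different remediation paths based on the ASFF resource type (AwsS3Object for sensitive data findings, AwsS3Bucket for policy findings). The object path is left as a stub, and the bucket remediation shown here (default encryption plus Block Public Access) only mirrors the actions described earlier; treat it as illustrative rather than the code shipped in the repository.

import boto3

s3 = boto3.client("s3")


def remediate_object(finding: dict) -> None:
    # Placeholder for the sensitive data (S3 object) path: tag and re-encrypt
    # the object, as in the earlier sketch.
    ...


def remediate(finding: dict) -> None:
    """Route a Security Hub Macie finding to the appropriate remediation."""
    resource = finding["Resources"][0]

    if resource["Type"] == "AwsS3Object":
        remediate_object(finding)
    elif resource["Type"] == "AwsS3Bucket":
        bucket = resource["Id"].split(":::", 1)[1]

        # Policy finding remediation: enable default encryption on the bucket...
        s3.put_bucket_encryption(
            Bucket=bucket,
            ServerSideEncryptionConfiguration={
                "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
            },
        )

        # ...and turn on Block Public Access for the bucket.
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )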

The two disabled EventBridge rules deployed as part of the solution are examples that you can use for auto-remediation. After your security team has used the Security Hub custom actions for a while, you might see a trend where you always want to take a specific action for certain findings. In that case, you can enable the following EventBridge rules so that remediation happens automatically, without requiring your security team to select a custom action in the Security Hub console:

  • Autoremediate_Macie_Policy_Finding
  • Autoremediate_Macie_Sensitive_Data_Finding

Conclusion

In this post, you deployed a solution that allows your security team to take automated actions against Macie sensitive data and policy findings in Security Hub by using custom actions in the AWS console. We walked through what the solution does and how it can be customized to your use case.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum or Amazon Macie forum.

Want more AWS Security news? Follow us on Twitter.

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on threat detection and incident response. Today, he helps enterprise customers develop a comprehensive security strategy and deploy security solutions at scale, and he trains customers on AWS Security best practices.