Tag Archives: Amazon GuardDuty

How Security Operation Centers can use Amazon GuardDuty to detect malicious behavior

Post Syndicated from Darren House original https://aws.amazon.com/blogs/security/how-security-operation-centers-can-use-amazon-guardduty-to-detect-malicious-behavior/

The Security Operations Center (SOC) has a tough job. As customers modernize and shift to cloud architectures, monitoring, detecting, and responding to risks poses new challenges.

In this post, we look at how Amazon GuardDuty can address some common SOC concerns: the number of security tools and the overhead required to integrate and manage them. We describe the GuardDuty service; how the SOC can use GuardDuty threat lists, filtering, and suppression rules to tune detections and reduce noise; and the intentional model used to define and categorize GuardDuty finding types so you can quickly get detailed information about detections.

Today, the typical SOC has between 10 and 60 tools for managing security. Some larger SOCs can have more than 100 tools, which are mostly point solutions that don’t integrate with each other.

The security market is flush with niche security tools you can deploy to monitor, detect, and respond to events. Each tool has technical and operational overhead in the form of designing system uptime, sensor deployment, data aggregation, tool integration, deployment plans, server and software maintenance, and licensing.

Tuning your detection systems is an example of a process with both technical and operational overhead. To improve your signal-to-noise ratio (S/N), the security systems you deploy have to be tuned to your environment and to emerging risks that are relevant to your environment. Improving the S/N matters for SOC teams because it reduces time and effort spent on activities that don’t bring value to an organization. Spending time tuning detection systems reduces the exhaustion factors that impact your SOC analysts. Tuning is highly technical, yet it’s also operational because it’s a process that continues to evolve, which means you need to manage the operations and maintenance lifecycle of the infrastructure and tools that you use in tuning your detections.

Amazon GuardDuty

GuardDuty is a core part of the modern FedRAMP-authorized cloud SOC, because it provides SOC analysts with a broad range of cloud-specific detective capabilities without requiring the overhead associated with a large number of security tools.

GuardDuty is a continuous security monitoring service that analyzes and processes data from Amazon Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail event logs that record Amazon Web Services (AWS) API calls, and DNS logs to provide analysis and detection using threat intelligence feeds, signatures, anomaly detection, and machine learning in the AWS Cloud. GuardDuty also helps you to protect your data stored in S3. GuardDuty continuously monitors and profiles S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities. Detections include unusual geo-location, disabling of preventative controls such as S3 block public access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. For a full list of GuardDuty S3 threat detections, see GuardDuty S3 finding types. GuardDuty integrates threat intelligence feeds from CrowdStrike, Proofpoint, and AWS Security to detect network and API activity from known malicious IP addresses and domains. It uses machine learning to identify unknown and potentially unauthorized and malicious activity within your AWS environment.

The GuardDuty team continually monitors and manages the tuning of detections for threats related to modern cloud deployments, but your SOC can use trusted IP and threat lists and suppression rules to implement your own custom tuning to fit your unique environment.

GuardDuty is integrated with AWS Organizations, and customers can use AWS Organizations to associate member accounts with a GuardDuty management account. AWS Organizations helps automate the process of enabling and disabling GuardDuty simultaneously across a group of AWS accounts that are in your control. Note that, as of this writing, you can have one management account and up to 5,000 member accounts.

Operational overhead is near zero. There are no agents or sensors to deploy or manage. There are no servers to build, deploy, or manage. There's nothing to patch or upgrade. There aren't any highly available architectures to build. You don't have to buy a subscription to a threat intelligence provider or manage the influx of threat data, and most importantly, you don't have to invest in normalizing all of the datasets to facilitate correlation. Your SOC can enable GuardDuty with a single click or API call. It is a multi-account service where you can create a management account, typically in the security account, that can read all findings information from the member accounts for easy centralization of detections. When deployed in a Management/Member design, GuardDuty provides a flexible model for centralizing your enterprise threat detection capability. The management account can control certain member settings, like update intervals for Amazon CloudWatch Events, use of threat and trusted lists, creation of suppression rules, opening tickets, and automating remediations.
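For example, here's a minimal boto3 sketch of enabling GuardDuty in a single account and Region; the Region choice is an assumption for the example:

import boto3

# A minimal sketch of enabling GuardDuty in one account and Region.
# The Region below is an assumption for the example.
guardduty = boto3.client("guardduty", region_name="us-east-1")

# Create (enable) a detector; an account has at most one detector per Region.
response = guardduty.create_detector(Enable=True)
print("Detector ID:", response["DetectorId"])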

Filters and suppression rules

When GuardDuty detects potential malicious activity, it uses a standardized finding format to communicate the details about the specific finding. The details in a finding can be queried in filters, displayed as saved rules, or used for suppression to improve visibility and reduce analyst fatigue.

Suppress findings from vulnerability scanners

A common example of tuning your GuardDuty deployment is to use suppression rules to automatically archive new Recon:EC2/Portscan findings from vulnerability assessment tools in your accounts. This is a best practice designed to improve S/N and reduce analyst fatigue.

The first criterion in the suppression rule should use the Finding type attribute with a value of Recon:EC2/Portscan. The second filter criterion should match the instance or instances that host these vulnerability assessment tools. You can use the Instance image ID attribute, the Network connection remote IPv4 address, or the Tag value attribute, depending on which criteria identify the instances that host these tools. In the example shown in Figure 1, we used the remote IPv4 address.

Figure 1: GuardDuty filter for vulnerability scanners
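For teams that manage GuardDuty programmatically, a hedged boto3 sketch of an equivalent suppression rule might look like the following; the rule name and scanner IP address are placeholders:

import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Archive new Recon:EC2/Portscan findings generated by a known
# vulnerability scanner. The IP address below is a placeholder.
guardduty.create_filter(
    DetectorId=detector_id,
    Name="suppress-vuln-scanner-portscan",  # hypothetical rule name
    Action="ARCHIVE",  # ARCHIVE makes this a suppression rule
    FindingCriteria={
        "Criterion": {
            "type": {"Eq": ["Recon:EC2/Portscan"]},
            "service.action.networkConnectionAction.remoteIpDetails.ipAddressV4": {
                "Eq": ["203.0.113.10"]
            },
        }
    },
)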

Filter on activity that was not blocked by security groups or NACLs

If you want visibility into the GuardDuty detections that weren’t blocked by preventative measures such as a network ACL (NACL) or security group, you can filter by the attribute Network connection blocked = False, as shown in Figure 2. This can provide visibility into potential changes in your filtering strategy to reduce your risk.

Figure 2: GuardDuty filter for activity not blocked by security groups or NACLs

Filter on specific malicious IP addresses

Some customers want to track specific malicious IP addresses to see whether they are generating findings. If you want to see whether a single source IP address is responsible for CloudTrail-based findings, you can filter by the API caller IPv4 address attribute.

Figure 3: GuardDuty filter for specific malicious IP address

Filter on specific threat provider

Maybe you want to know how many findings are generated from a threat intelligence provider or from your own threat lists. You can filter by the attribute Threat list name to see whether the potential attacker is on a threat list from CrowdStrike, Proofpoint, or AWS, or on one of your own threat lists uploaded to GuardDuty.

Figure 4: GuardDuty filter for specific threat intelligence list provider
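The same filter attributes are available through the GuardDuty API. The following boto3 sketch assumes a single detector in the Region and retrieves findings whose network connection wasn't blocked; the other attributes described above can be queried the same way:

import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Find activity that preventative controls did not block.
# Boolean criteria are passed as strings in this API.
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={
        "Criterion": {
            "service.action.networkConnectionAction.blocked": {"Eq": ["false"]}
        }
    },
)["FindingIds"]

# Retrieve full finding details (get_findings accepts up to 50 IDs);
# threat list name and API caller IP can be filtered the same way.
if finding_ids:
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids[:50]
    )["Findings"]
    for finding in findings:
        print(finding["Type"], finding["Severity"])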

Finding types and formats

Now that you know a little more about GuardDuty and tuning findings by using filters and suppression rules, let’s dive into the finding types that are generated by a GuardDuty detection. The first thing to know is that all GuardDuty findings use the following model:


ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact

This model is intended to communicate core information to security teams on the nature of the potential risk, the AWS resource types that are potentially impacted, and the threat family name, variants, and artifacts of the detected activity in your account. The Threat Purpose field describes the primary purpose of a threat or a potential attempt on your environment.

Let’s take the Backdoor:EC2/C&CActivity.B!DNS finding as an example.


ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact
Backdoor     :EC2                 /C&CActivity     .B                  !DNS

The Backdoor threat purpose indicates an attempt to bypass normal security controls on a specific Amazon Elastic Compute Cloud (EC2) instance. In this example, the EC2 instance in your AWS environment is making a DNS query for a domain associated with a known command and control (C&CActivity) server. This is a high-severity finding, because there are enough indicators that malware is on your host and acting with malicious intent.
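To make the model concrete, here is a small illustrative Python sketch; the helper function and regular expression are my own, not part of GuardDuty:

import re

# Illustrative only: a regex for the documented finding-type model
# ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact
# The variant and artifact parts are optional in real finding types.
PATTERN = re.compile(
    r"^(?P<purpose>[^:]+):(?P<resource>[^/]+)/(?P<family>[^.!]+)"
    r"(?:\.(?P<variant>[^!]+))?(?:!(?P<artifact>.+))?$"
)

def parse_finding_type(finding_type: str) -> dict:
    match = PATTERN.match(finding_type)
    return match.groupdict() if match else {}

print(parse_finding_type("Backdoor:EC2/C&CActivity.B!DNS"))
# {'purpose': 'Backdoor', 'resource': 'EC2', 'family': 'C&CActivity',
#  'variant': 'B', 'artifact': 'DNS'}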

GuardDuty, at the time of this writing, supports the following finding types:

  • Backdoor finding types
  • Behavior finding types
  • CryptoCurrency finding types
  • PenTest finding types
  • Persistence finding types
  • Policy finding types
  • PrivilegeEscalation finding types
  • Recon finding types
  • ResourceConsumption finding types
  • Stealth finding types
  • Trojan finding types
  • Unauthorized finding types

OK, now you know about the model for GuardDuty findings, but how does GuardDuty work?

When you enable GuardDuty, the service evaluates events in both a stateless and stateful manner, which allows us to use machine learning and anomaly detection in addition to signatures and threat intelligence. Some stateless examples include the Backdoor:EC2/C&CActivity.B!DNS or the CryptoCurrency:EC2/BitcoinTool.B finding types, where GuardDuty only needs to see a single DNS query, VPC Flow Log entry, or CloudTrail record to detect potentially malicious activity.

Stateful detections are driven by anomaly detection and machine learning models that identify behaviors that deviate from a baseline. The machine learning detections typically require more time to train the models and potentially use multiple events for triggering the detection.

The typical triggers for behavioral detections vary based on the log source and the detection in question. The behavioral detections learn about typical network or user activity to set a baseline, after which the anomaly detections change from a learning mode to an active mode. In active mode, you only see findings generated from these detections if the service observes behavior that suggests a deviation. Some examples of machine learning–based detections include the Backdoor:EC2/DenialOfService.Dns, UnauthorizedAccess:IAMUser/ConsoleLogin, and Behavior:EC2/NetworkPortUnusual finding types.

Why does this matter?

We know the SOC has the tough job of keeping organizations secure with limited resources, and with a high degree of technical and operational overhead due to a large portfolio of tools. This overhead can impair the ability to detect and respond to security events. For example, CrowdStrike tracks the concept of breakout time—the window of time from when an outside party first gains unauthorized access to an endpoint machine, to when they begin moving laterally across your network. They found that average breakout times range from 19 minutes to 10 hours. If the SOC is overburdened with undifferentiated technical and operational overhead, it can struggle to improve monitoring, detection, and response.

To act quickly, you have to decrease detection time and reduce the overhead the SOC carries from its many tools. The best way to decrease detection time is with threat intelligence and machine learning. Threat intelligence provides context to alerts and gives a broader perspective on cyber risk. Machine learning uses baselines to learn what normal looks like, enabling detection of anomalies in user or resource behavior and of heuristic threats that change over time. The best way to reduce SOC overhead is to share the load, letting AWS services manage the undifferentiated heavy lifting while the SOC focuses on more specific tasks that add value to the organization.

GuardDuty is a cost-optimized service that is in scope for the FedRAMP and DoD compliance programs in the US commercial and GovCloud Regions. It leverages threat intelligence and machine learning to provide detection capabilities without you having to manage, maintain, or patch any infrastructure or manage yet another security tool. With a 30-day trial period, you can evaluate the service at no risk and discover how it can support your cyber risk strategy.

If you want to receive automated updates about GuardDuty, you can subscribe to an SNS notification that will email you whenever new features and detections are released.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Darren House

Darren brings over 20 years’ experience building secure technology architectures and technical strategies to support customer outcomes. He has held several roles including CTO, Director of Technology Solutions, Technologist, Principal Solutions Architect, and Senior Network Engineer for USMC. Today, he is focused on enabling AWS customers to adopt security services and automations that increase visibility and reduce risk.

Author

Trish Cagliostro

Trish is a leader in the security industry where she has spent 10 years advising public and private sector customers like DISA, DHS, and US Senate and commercial entities like Bank of America and United Airlines. Trish is a subject matter expert on a variety of topics, including integrating threat intelligence and has testified before the House Homeland Security Committee about information sharing.

New third-party test compares Amazon GuardDuty to network intrusion detection systems

Post Syndicated from Tim Winston original https://aws.amazon.com/blogs/security/new-third-party-test-compares-amazon-guardduty-to-network-intrusion-detection-systems/

A new whitepaper is available that summarizes the results of tests by Foregenix comparing Amazon GuardDuty with network intrusion detection systems (IDS) on threat detection of network layer attacks. GuardDuty is a cloud-centric IDS service that uses Amazon Web Services (AWS) data sources to detect a broad range of threat behaviors. Security engineers need to understand how Amazon GuardDuty compares to traditional solutions for network threat detection. Assessors have also asked for clarity on the effectiveness of GuardDuty for meeting compliance requirements, like Payment Card Industry (PCI) Data Security Standard (DSS) requirement 11.4, which requires intrusion detection techniques to be implemented at critical points within a network.

A traditional IDS typically relies on monitoring network traffic at specific network traffic control points, like firewalls and host network interfaces. This allows the IDS to use a set of preconfigured rules to examine incoming data packet information and identify patterns that closely align with network attack types. Traditional IDS have several challenges in the cloud:

  • Networks are virtualized. Data traffic control points are decentralized and traffic flow management is a shared responsibility with the cloud provider. This makes it difficult or impossible to monitor all network traffic for analysis.
  • Cloud applications are dynamic. Features like auto-scaling and load balancing continuously change how a network environment is configured as demand fluctuates.

Most traditional IDS require experienced technicians to maintain their effective operation and to avoid the common problem of an overwhelming number of false positive findings. As a compliance assessor, I have often seen IDS intentionally de-tuned to cope with the volume of false positives when expert, continuous support isn't available.

GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon Virtual Private Cloud (Amazon VPC) flow logs, and Amazon Route 53 DNS logs. This gives GuardDuty the ability to analyze event data, such as AWS API calls and AWS Identity and Access Management (IAM) login events, which is beyond the capabilities of traditional IDS solutions. Monitoring AWS API calls from CloudTrail also enables threat detection for AWS serverless services, which sets it apart from traditional IDS solutions. However, without inspection of packet contents, the question remained, “Is GuardDuty truly effective in detecting network level attacks that more traditional IDS solutions were specifically designed to detect?”

AWS asked Foregenix to conduct a test comparing GuardDuty to market-leading IDS to help answer this question for us. AWS didn't prescribe the attacks or the architecture to be implemented in the test. It was left to the independent tester to determine the threat space covered by market-leading IDS and to construct a test of the threat detection effectiveness of GuardDuty and traditional IDS solutions, including open-source and commercial IDS.

Foregenix configured a lab environment to support tests that used extensive and complex attack playbooks. The lab environment simulated a real-world deployment composed of a web server, a bastion host, and an internal server used for centralized event logging. The environment was left running under normal operating conditions for more than 45 days. This allowed all tested solutions to build up a baseline of normal data traffic patterns prior to the anomaly detection testing exercises that followed this activity.

Foregenix determined that GuardDuty is at least as effective at detecting network level attacks as other market-leading IDS. They found GuardDuty simple to deploy, requiring no specialized skills to configure the service to function effectively. Also, with its inherent capability of analyzing DNS requests, VPC flow logs, and CloudTrail events, they concluded that GuardDuty was able to effectively identify threats that other IDS could not natively detect, or could only detect after extensive manual customization in the test environment. Foregenix recommended that adding a host-based IDS agent on Amazon Elastic Compute Cloud (Amazon EC2) instances would provide an enhanced level of threat defense when coupled with Amazon GuardDuty.

As a PCI Qualified Security Assessor (QSA) company, Foregenix states that they consider GuardDuty as a qualifying network intrusion technique for meeting PCI DSS requirement 11.4. This is important for AWS customers whose applications must maintain PCI DSS compliance. Customers should be aware that individual PCI QSAs might have different interpretations of the requirement, and should discuss this with their assessor before a PCI assessment.

Customer PCI QSAs can also speak with AWS Security Assurance Services, an AWS Professional Services team of PCI QSAs, to obtain more information on how customers can leverage AWS services to help them maintain PCI DSS Compliance. Customers can request Security Assurance Services support through their AWS Account Manager, Solutions Architect, or other AWS support.

We invite you to download the Foregenix Amazon GuardDuty Security Review whitepaper to see the details of the testing and the conclusions provided by Foregenix.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tim Winston

Tim is a long-time security and compliance consultant and currently a PCI QSA with AWS Security Assurance Services.

New – Using Amazon GuardDuty to Protect Your S3 Buckets

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-using-amazon-guardduty-to-protect-your-s3-buckets/

As we anticipated in this post, the anomaly and threat detection for Amazon Simple Storage Service (S3) activities that was previously available in Amazon Macie has now been enhanced and reduced in cost by over 80% as part of Amazon GuardDuty. This expands GuardDuty threat detection coverage beyond workloads and AWS accounts to also help you protect your data stored in S3.

This new capability enables GuardDuty to continuously monitor and profile S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities such as requests coming from an unusual geo-location, disabling of preventative controls such as S3 block public access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. To detect possibly malicious behavior, GuardDuty uses a combination of anomaly detection, machine learning, and continuously updated threat intelligence. For your reference, here's the full list of GuardDuty S3 threat detections.

When threats are detected, GuardDuty delivers detailed security findings to the console and to Amazon EventBridge, making alerts actionable and easy to integrate into existing event management and workflow systems, or to trigger automated remediation actions using AWS Lambda. You can optionally deliver findings to an S3 bucket to aggregate findings from multiple Regions and to integrate with third-party security analysis tools.

If you are not using GuardDuty yet, S3 protection will be on by default when you enable the service. If you are already using GuardDuty, you can enable this new capability with one click in the GuardDuty console or through the API. For simplicity, and to optimize your costs, GuardDuty has now been integrated directly with S3. In this way, you don't need to manually enable or configure S3 data event logging in AWS CloudTrail to take advantage of this new capability. GuardDuty also intelligently processes only the data events that can be used to generate threat detections, significantly reducing the number of events processed and lowering your costs.
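If you prefer the API, a hedged boto3 sketch of enabling S3 protection on an existing detector might look like this:

import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Turn on S3 protection for an existing detector in this Region.
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"S3Logs": {"Enable": True}},
)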

If you are part of a centralized security team that manages GuardDuty across your entire organization, you can manage all accounts from a single account using the integration with AWS Organizations.

Enabling S3 Protection for an AWS Account
I already have GuardDuty enabled for my AWS account in this Region. Now, I want to add threat detection for my S3 buckets. In the GuardDuty console, I select S3 Protection and then Enable. That's it. For broader coverage, I repeat this process for all Regions enabled in my account.

After a few minutes, I start seeing new findings related to my S3 buckets. I can select each finding to get more information on the possible threat, including details on the source actor and the target action.

After a few days, I select the Usage section of the console to monitor the estimated monthly costs of GuardDuty in my account, including the new S3 protection. I can also see which S3 buckets contribute most to the costs. As it turns out, I didn't have much traffic on my buckets recently.

Enabling S3 Protection for an AWS Organization
To simplify management of multiple accounts, GuardDuty uses its integration with AWS Organizations to allow you to delegate an account to be the administrator for GuardDuty for the whole organization.

Now, the delegated administrator can enable GuardDuty for all accounts in the organization in a Region with one click. You can also set Auto-enable to ON to automatically include new accounts in the organization. If you prefer, you can add accounts by invitation. You can then go to the S3 Protection page under Settings to enable S3 protection for the entire organization.

When selecting Auto-enable, the delegated administrator can also choose to enable S3 protection automatically for new member accounts.
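From the delegated administrator account, the same settings can be applied programmatically. Here's a hedged boto3 sketch; the detector lookup is an assumption for the example:

import boto3

guardduty = boto3.client("guardduty")
admin_detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Auto-enable GuardDuty, including S3 protection, for new member
# accounts that join the organization in this Region.
guardduty.update_organization_configuration(
    DetectorId=admin_detector_id,
    AutoEnable=True,
    DataSources={"S3Logs": {"AutoEnable": True}},
)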

Available Now
As always, with Amazon GuardDuty, you only pay for the quantity of logs and events processed to detect threats. This includes API control plane events captured in CloudTrail, network flow captured in VPC Flow Logs, DNS request and response logs, and with S3 protection enabled, S3 data plane events. These sources are ingested by GuardDuty through internal integrations when you enable the service, so you don’t need to configure any of these sources directly. The service continually optimizes logs and events processed to reduce your cost, and displays your usage split by source in the console. If configured in multi-account, usage is also split by account.

There is a 30-day free trial for the new S3 threat detection capabilities. This also applies to accounts that already have GuardDuty enabled and add the new S3 protection capability. During the trial, the estimated cost based on your S3 data event volume is calculated in the GuardDuty console Usage tab. In this way, you can evaluate these new capabilities at no cost and understand what your monthly spend would be.

GuardDuty for S3 protection is available in all regions where GuardDuty is offered. For regional availability, please see the AWS Region Table. To learn more, please see the documentation.

Danilo

Building a Self-Service, Secure, & Continually Compliant Environment on AWS

Post Syndicated from Japjot Walia original https://aws.amazon.com/blogs/architecture/building-a-self-service-secure-continually-compliant-environment-on-aws/

Introduction

If you’re an enterprise organization, especially in a highly regulated sector, you understand the struggle to innovate and drive change while maintaining your security and compliance posture. In particular, your banking customers’ expectations and needs are changing, and there is a broad move away from traditional branch and ATM-based services towards digital engagement.

With this shift, customers now expect personalized product offerings and services tailored to their needs. To achieve this, a broad spectrum of analytics and machine learning (ML) capabilities is required. With security and compliance at the top of financial services customers' agendas, being able to rapidly innovate and stay secure is essential. To achieve exactly that, AWS Professional Services engaged with a major global systemically important bank (G-SIB) customer to help develop ML capabilities and implement a Defense in Depth (DiD) security strategy. This blog post provides an overview of this solution.

The machine learning solution

The following architecture diagram shows the ML solution we developed for a customer. This architecture is designed to achieve innovation, operational performance, and security performance in line with customer-defined control objectives, as well as meet the regulatory and compliance requirements of supervisory authorities.

Machine learning solution developed for customer

This solution is built and automated using AWS CloudFormation templates with pre-configured security guardrails, and abstracted through AWS Service Catalog. Service Catalog lets your users quickly deploy approved IT services while ensuring that governance, compliance, and security best practices are enforced during the provisioning of resources.

Further, it leverages Amazon SageMaker, Amazon Simple Storage Service (S3), and Amazon Relational Database Service (RDS) to facilitate the development of advanced ML models. As security is paramount for this workload, data in S3 is encrypted using client-side encryption, and column-level encryption is applied in RDS. Our customer also codified their security controls via AWS Config rules to achieve continual compliance.

Compute and network isolation

To enable our customer to rapidly explore new ML models while achieving the highest standards of security, separate VPCs were used to isolate infrastructure, with access controlled by security groups. Core to this solution is Amazon SageMaker, a fully managed service that provides the ability to rapidly build, train, and deploy ML models. Amazon SageMaker notebooks are managed Jupyter notebooks that:

  1. Prepare and process data
  2. Write code to train models
  3. Deploy models to SageMaker hosting
  4. Test or validate models

In our solution, notebooks run in an isolated VPC with no egress connectivity other than VPC endpoints, which enable private communication with AWS services. When used in conjunction with VPC endpoint policies, these endpoints control which services the notebooks can access. In our solution, this is used to allow the SageMaker notebook to communicate only with resources owned by AWS Organizations through the use of the aws:PrincipalOrgID condition key. AWS Organizations helps provide governance to meet strict compliance regulations, and you can use the aws:PrincipalOrgID condition key in your resource-based policies to easily restrict access to IAM principals from accounts in your organization.
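As an illustrative sketch of this idea, the following boto3 snippet attaches an endpoint policy that uses the aws:PrincipalOrgID condition key; the endpoint ID and organization ID are placeholders:

import json
import boto3

ec2 = boto3.client("ec2")

# Restrict a VPC endpoint so only principals from our organization
# can use it. Both IDs below are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "*",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }
    ],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    PolicyDocument=json.dumps(policy),
)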

Data protection

Amazon S3 is used to store training data, model artifacts, and other data sets. Our solution uses server-side encryption with customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS) to protect data at rest. SSE-KMS leverages KMS and uses an envelope encryption strategy with CMKs. Envelope encryption is the practice of encrypting data with a data key and then encrypting that data key with another key, the CMK. CMKs are created in KMS and never leave KMS unencrypted. This approach allows fine-grained control over access to the CMK and logs all access and attempted access to the key in AWS CloudTrail. In our solution, the age of the CMK is tracked by AWS Config and the key is regularly rotated. AWS Config enables you to assess, audit, and evaluate the configurations of deployed AWS resources by continuously monitoring and recording AWS resource configurations. This allows you to automate the evaluation of recorded configurations against desired configurations.
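A hedged boto3 sketch of configuring SSE-KMS as a bucket default; the bucket name and key ARN are placeholders:

import boto3

s3 = boto3.client("s3")

# Default all new objects in the bucket to SSE-KMS with a specific CMK.
# The bucket name and key ARN below are placeholders.
s3.put_bucket_encryption(
    Bucket="example-training-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
                }
            }
        ]
    },
)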

Amazon S3 Block Public Access is also used at the account level to ensure that bucket policies or access control lists (ACLs) on existing and newly created buckets don't allow public access. Service control policies (SCPs) are used to prevent users from modifying this setting. AWS Config continually monitors S3 and remediates any attempt to make a bucket public.
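The account-level setting can be applied through the S3 Control API. A hedged sketch, with the account ID as a placeholder:

import boto3

s3control = boto3.client("s3control")

# Block public access for every bucket in the account.
# The account ID below is a placeholder.
s3control.put_public_access_block(
    AccountId="111122223333",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)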

Data in the solution is classified according to its sensitivity, corresponding to the customer's data classification hierarchy. Classification in the solution is achieved through resource tagging, and tags are used in conjunction with AWS Config to ensure adherence to encryption, data retention, and archival requirements.

Continuous compliance

Our solution adopts a continuous compliance approach, whereby the compliance status of the architecture is continuously evaluated and auto-remediated if a configuration change attempts to violate the compliance posture. To achieve this, AWS Config and config rules are used to confirm that resources are configured in compliance with defined policies. AWS Lambda is used to implement a custom rule set that extends the rules included in AWS Config.
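As an illustration of such a custom rule, here is a minimal Lambda handler sketch; the compliance check itself is a placeholder you would replace with your own logic:

import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    # AWS Config passes the evaluated resource's details as a JSON string.
    invoking_event = json.loads(event["invokingEvent"])
    configuration_item = invoking_event["configurationItem"]

    # Placeholder check: replace with your own policy logic.
    compliant = configuration_item["configuration"] is not None
    compliance_type = "COMPLIANT" if compliant else "NON_COMPLIANT"

    # Report the evaluation result back to AWS Config.
    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": configuration_item["resourceType"],
                "ComplianceResourceId": configuration_item["resourceId"],
                "ComplianceType": compliance_type,
                "OrderingTimestamp": configuration_item["configurationItemCaptureTime"],
            }
        ],
        ResultToken=event["resultToken"],
    )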

Data exfiltration prevention

In our solution, VPC Flow Logs are enabled on all accounts to record information about the IP traffic going to and from network interfaces in each VPC. This allows us to watch for abnormal and unexpected outbound connection requests, which could be an indication of attempts to exfiltrate data. Amazon GuardDuty analyzes VPC Flow Logs, AWS CloudTrail event logs, and DNS logs to identify unexpected and potentially malicious activity within the AWS environment. For example, GuardDuty can detect compromised Amazon Elastic Compute Cloud (EC2) instances communicating with known command-and-control servers.

Conclusion

Financial services customers are using AWS to develop machine learning and analytics solutions to solve key business challenges while ensuring security and compliance needs. This post outlined how Amazon SageMaker, along with multiple security services (AWS Config, GuardDuty, KMS), enables building a self-service, secure, and continually compliant data science environment on AWS for a financial service use case.


Updates to the security pillar of the AWS Well-Architected Framework

Post Syndicated from Ben Potter original https://aws.amazon.com/blogs/security/updates-to-security-pillar-aws-well-architected-framework/

We have updated the security pillar of the AWS Well-Architected Framework, based on customer feedback and new best practices. In this post, I’ll take you through the highlights of the updates to the security information in the Security Pillar whitepaper and the AWS Well-Architected Tool, and explain the new best practices and guidance.

AWS developed the Well-Architected Framework to help customers build secure, high-performing, resilient, and efficient infrastructure for their applications. The Well-Architected Framework is based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization. We periodically update the framework as we get feedback from customers, partners, and other teams within AWS. Along with the AWS Well-Architected whitepapers, there is an AWS Well-Architected Tool to help you review your architecture for alignment to these best practices.

Changes to identity and access management

The biggest changes are related to identity and access management, and how you operate your workload. Both the security pillar whitepaper, and the review in the tool, now start with the topic of how to securely operate your workload. Instead of just securing your root user, we want you to think about all aspects of your AWS accounts. We advise you to configure the contact information for your account, so that AWS can reach out to the right contact if needed. And we recommend that you use AWS Organizations with Service Control Policies to manage the guardrails on your accounts.

A new best practice is to identify and validate control objectives. This new best practice is about having objectives built from your own threat model, not just a list of controls, so you can measure the effectiveness of your risk mitigation. Adding to that is the best practice to automate testing and validation of security controls in pipelines.

Identity and access management is no longer split between credentials and authentication, human and programmatic access. There are now just two questions which have a simpler approach, and which focus on identity and permissions. These questions don’t draw a distinction between humans and machines. You should think about how identity and permissions are applied to your AWS environment, to the systems supporting your workload, and to the workload itself. New best practices include the following:

  • Define permission guardrails for your organization – Establish common controls that restrict access to all identities in your organization, such as restricting which AWS Regions can be used.
  • Reduce permissions continuously – As teams and workloads determine what access they need, you should remove the permissions they no longer use, and you should establish review processes to achieve least privilege permissions.
  • Establish an emergency access process – Have a process that allows emergency access to your workload, so that you can react appropriately in the unlikely event of an issue with an automated process or the pipeline.
  • Analyze public and cross account access – Continuously monitor findings that highlight public and cross-account access.
  • Share resources securely – Govern the consumption of shared resources across accounts, or within your AWS Organization.

Changes to detective controls

The section that was previously called detective controls is now simply called detection. The main update in this section is a new best practice called automate response to events, which covers automating your investigation processes, alerts, and remediation. For example, you can use Amazon GuardDuty managed threat detection findings, or the new AWS Security Hub foundational best practices, to trigger a notification to your team with Amazon CloudWatch Events, and you can use AWS Step Functions to coordinate or automatically remediate what was found.
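As a sketch of the trigger half of that pattern, the following boto3 snippet creates a CloudWatch Events rule that matches GuardDuty findings and targets an SNS topic; the rule name and topic ARN are placeholders:

import json
import boto3

events = boto3.client("events")

# Match every GuardDuty finding; narrow the pattern by finding
# type or severity as needed. The rule name is a placeholder.
events.put_rule(
    Name="guardduty-finding-alerts",
    EventPattern=json.dumps(
        {"source": ["aws.guardduty"], "detail-type": ["GuardDuty Finding"]}
    ),
    State="ENABLED",
)

# Notify the security team; the topic ARN is a placeholder.
events.put_targets(
    Rule="guardduty-finding-alerts",
    Targets=[
        {"Id": "security-sns", "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts"}
    ],
)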

Changes to infrastructure protection

From a networking and compute perspective, there are only a few minor refinements. A new best practice in this section is to enable people to perform actions at a distance. This aligns to the design principle: keep people away from data. For example, you can use AWS Systems Manager to remotely run commands on an EC2 instance without needing to open ports outside your network or interactively connect to instances, which reduces the risk of human error.

Changes to data protection

One update in the best practice for data at rest is that we changed provide mechanisms to keep people away from data to use mechanisms to keep people away from data. Simply providing mechanisms is good, but it’s better if they are actually used.

One issue in data protection that hasn't changed is how to enforce encryption at rest and in transit. For encryption in transit, you should never allow insecure protocols, such as HTTP. You can configure security groups and network access control lists (network ACLs) to block insecure protocols, and you can use Amazon CloudFront to serve your content only over HTTPS with certificates deployed and automatically rotated by AWS Certificate Manager (ACM). For enforcing encryption at rest, you can use default EBS encryption and Amazon S3 encryption using AWS Key Management Service (AWS KMS) customer master keys (CMKs), and you can detect insecure configurations using AWS Config rules or conformance packs.
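For example, here's a hedged boto3 sketch of turning on default EBS encryption, which is a per-Region setting:

import boto3

# Default EBS encryption is configured per Region.
ec2 = boto3.client("ec2", region_name="us-east-1")

# All newly created EBS volumes in this Region will be encrypted.
ec2.enable_ebs_encryption_by_default()

# Verify that the setting took effect.
status = ec2.get_ebs_encryption_by_default()
print("Encryption by default:", status["EbsEncryptionByDefault"])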

Changes to incident response

The final section of the security pillar is incident response. The new best practice in this section is automate containment and recovery capability. Your incident response should already include plans, pre-deployed tools, and access, so your next step should be automation. The incident response section of the whitepaper has been rewritten and incorporates many parts of the very helpful AWS Security Incident Response Guide. You should also check out the incident response hands-on lab Incident Response Playbook with Jupyter – AWS IAM. This lab uses the power of Jupyter notebooks to perform interactive queries against AWS APIs for semi-automated playbooks. As always, a best practice is to plan and practice your incident response so you can continue to learn and improve as you operate your workloads securely in AWS.

A huge thank you to all our customers who give us feedback on the tool and whitepapers. I encourage you to review your workloads in the AWS Well-Architected Tool, and take a look at all of the updated Well-Architected whitepapers.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Well-Architected Tool forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Potter

Ben is the global security leader for the AWS Well-Architected Framework and is responsible for sharing best practices in security with customers and partners. Ben is also an ambassador for the No More Ransom initiative helping fight cyber crime with Europol, McAfee and law enforcement across the globe. You can learn more about him in this interview.

How to perform automated incident response in a multi-account environment

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/how-to-perform-automated-incident-response-multi-account-environment/

How quickly you respond to security incidents is key to minimizing their impacts. Automating incident response helps you scale your capabilities, rapidly reduce the scope of compromised resources, and reduce repetitive work by security teams. But when you use automation, you also must manage exceptions to standard response procedures.

In this post, I provide a pattern and ready-made templates for a scalable multi-account setup of an automated incident response process with minimal code base, using native AWS tools. I also explain how to set up exception handling for approved deviations based on resource tags.

Because security response is a broad topic, I provide an overview of some alternative approaches to consider. Incident response is one part of an overarching governance, risk, and compliance (GRC) program that every organization should implement. For more information, see Scaling a governance, risk, and compliance program for the cloud.

Important: Use caution when introducing automation. Carefully test each of the automated responses in a non-production environment, as you should not use untested automated incident response for business-critical applications.

Solution benefits and deliverables

In the solution described below, you use AWS Systems Manager automation to execute most of the incident response steps. In general, you can either write your own or use pre-written runbooks. AWS maintains ready-made operational runbooks (automation documents) so you don’t need to maintain your own code base for them. The AWS runbooks cover many predefined use cases, such as enabling Amazon Simple Storage Service (S3) bucket encryption, opening a Jira ticket, or terminating an Amazon Elastic Compute Cloud (EC2) instance. Every execution for these is well documented and repeatable in AWS Systems Manager. For a few cases where there are no ready-made automation documents, I provide three additional AWS Lambda functions with the required response actions in the templates. The Lambda functions require minimal code, with no external dependencies outside AWS native tools.
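To give a flavor of how these runbooks are invoked, here is a hedged boto3 sketch that runs the AWS-EnableS3BucketEncryption document; the parameter names reflect my reading of that document, and the bucket name and role ARN are placeholders:

import boto3

ssm = boto3.client("ssm")

# Execute an AWS managed automation runbook against a bucket.
# Bucket name and role ARN are placeholders; parameter names are
# my reading of the AWS-EnableS3BucketEncryption document.
execution = ssm.start_automation_execution(
    DocumentName="AWS-EnableS3BucketEncryption",
    Parameters={
        "BucketName": ["example-unencrypted-bucket"],
        "SSEAlgorithm": ["AES256"],
        "AutomationAssumeRole": ["arn:aws:iam::111122223333:role/IR-Automation"],
    },
)
print("Execution ID:", execution["AutomationExecutionId"])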

You use a central security account to execute the incident response actions in the automation runbooks. You don’t need to do anything in the service accounts where you monitor and respond to incidents. In this post, you learn about and receive patterns for:

  • Responding to an incident based on Amazon GuardDuty alerts or AWS Config findings.
  • Deploying templates for a multi-account setup with a central security account and multiple service accounts. All account resources must be in the same AWS Region and part of the same AWS organization.
  • AWS Systems Manager automation to execute many existing AWS managed automation runbooks (you need access to the AWS Management Console to see all documents at this link). Systems Manager automation is available in these AWS Regions.
  • Prewritten Lambda functions to:
    • Confine permissive (open) security groups to a more-constrained CIDR (classless interdomain routing) range, such as the VPC (virtual private cloud) range for that security group. This prevents known network configuration errors, such as overly open security groups.
    • Isolate a potentially compromised EC2 instance by attaching it to a single, empty security group and removing its existing security groups (a minimal sketch of this action appears after this list).
    • Block an AWS Identity and Access Management (IAM) principal (an IAM user or role) by attaching a deny all policy to that principal.
    • Send notifications to the Amazon Simple Notification Service (Amazon SNS) for alerting in addition to (and in concert with) automated response actions.
  • Exception handling when actions should not be executed. I suggest decentralized exception handling based on AWS resource tags.
  • Integrating AWS Security Hub with custom actions for GuardDuty findings. This can be used for manual triggering of remediations that should not receive an automatic response.
  • Custom AWS Config rule for detecting permissive (open) security groups on any port.
  • The mapping of security findings to the response actions defined in the CloudWatch Events rule patterns is extendable. I provide an example of how to extend to new findings and responses.
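A minimal sketch of the isolation action mentioned above, assuming a pre-created empty quarantine security group in the instance's VPC (both IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

def isolate_instance(instance_id: str, quarantine_sg_id: str) -> None:
    """Replace all security groups on the instance with one empty group.

    Both IDs are placeholders; the quarantine group is assumed to be
    an empty security group in the instance's VPC.
    """
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[quarantine_sg_id],
    )

isolate_instance("i-0123456789abcdef0", "sg-0123456789abcdef0")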

Out-of-scope for this solution

This post only shows you how to get started with security response automation for accounts that are part of the same AWS organization and Region. For more information on developing a comprehensive program for incident response, see How to prepare for & respond to security incidents in your AWS environment.

Understanding the difference between this solution and AWS Config remediation

The AWS Config remediation feature provides remediation of non-compliant resources using AWS Systems Manager within a single AWS account. The solution you learn in this post includes both GuardDuty alerts and AWS Config, a multi-account approach managed from a central security account, rules exception handling based on resource tags, and additional response actions using AWS Lambda functions.

Choosing the right approach for incident response

There are many technical patterns and solutions for responding to security incidents. When considering the right one for your organization, you must consider how much flexibility for response actions you need, what resources are in scope, and the skill level of your security engineers. Also consider dependencies on external code and applications, software licensing costs, and your organization’s experience level with AWS.

If you want the maximum flexibility and extensibility beyond AWS Systems Manager used in this post, I recommend AWS Step Functions as described in the session How to prepare for & respond to security incidents in your AWS environment and in DIY guide to runbooks, incident reports, and incident response. You can create workflows for your runbooks and have full control of all possible actions.

Architecture

Figure 1: Architecture diagram

The architecture works as follows:

In the service account (the environment for which we want to monitor and respond to incidents):

  1. GuardDuty findings are forwarded to CloudWatch Events.
  2. Changes in the AWS Config compliance status are forwarded to CloudWatch Events.
  3. CloudWatch Events for GuardDuty and AWS Config are forwarded to the central security account, via a CloudWatch Events bus.

In the central security account:

  1. Each event from the service account is mapped to one or more response actions using CloudWatch Events rules. Every rule is an incident response action that is executed on one or more security findings, as defined in the event pattern. If there is an exception rule, the response actions are not executed.

    The list of possible actions that could be taken includes:

    1. Trigger a Systems Manager automation document, invoked by the Lambda function StratSsmAutomation within the security account.
    2. Isolate an EC2 instance by attaching an empty security group to it and removing any prior security groups, invoked by the Lambda function IsolateEc2. This assumes the incident response role in the target service account.
    3. Block the IAM principal by attaching a deny all policy, invoked by the Lambda function BlockPrincipal, by assuming the incident response role in the target service account.
    4. Confine security group to safe CIDR, invoked by the Lambda function ConfineSecurityGroup, by assuming the incident response role in the target service account.
    5. Send the finding to an SNS topic for processing outside this solution; for example, by manual evaluation or simply for information.
  2. Invoke AWS Systems Manager within the security account, with a target of the service account and the same AWS Region.
  3. The Systems Manager automation document is executed from the security account against the resources in the service account; for example, EC2, S3, or IAM resources.
  4. The response actions are executed directly from the security account to the service account. This is done by assuming an IAM role, for example, to isolate a potentially compromised EC2 instance.
  5. Manually trigger security responses using Security Hub custom actions. This can be suitable for manual handling of complex findings that need investigation before action.

Response actions decision

Your organization should articulate its information security policy decisions and then create a list of corresponding automated security responses. Factors to consider include the classification of the resources and the technical complexity of the automation required. Start with common, less complex cases to get immediate gains, and increase complexity as you gain experience. Many stakeholders in your organization, such as business, IT operations, information security, and risk and compliance, should be involved in deciding which incident response actions should be automated. You also need executive support to provide the political will to execute these policies.

Creating exceptions to automated responses

There may be cases in which it’s unwise to take an automated response. For example, you might not want an automated response to incidents involving a core production database server that is critical to business operations. Instead, you’d want to use human judgment calls before responding. Or perhaps you know there are alarms that you don’t need for certain resources, like alerting for an open security group when you intentionally use it as a public web server. To address these exceptions, there is a carve-out. If the AWS resource has a tag with the name SecurityException, a response action isn’t executed. The tag name is defined during installation.
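A hedged sketch of that carve-out check for EC2 resources; the helper function is my own illustration, and the tag name matches the solution's default:

import boto3

ec2 = boto3.client("ec2")

def has_security_exception(resource_id: str, tag_name: str = "SecurityException") -> bool:
    """Return True if the resource carries the exception tag.

    Illustrative helper; the solution's Lambda functions perform an
    equivalent check before executing a response action.
    """
    tags = ec2.describe_tags(
        Filters=[
            {"Name": "resource-id", "Values": [resource_id]},
            {"Name": "key", "Values": [tag_name]},
        ]
    )["Tags"]
    return len(tags) > 0

# Skip the automated response for tagged resources.
if not has_security_exception("i-0123456789abcdef0"):
    print("No exception tag; proceed with the automated response.")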

Table 1 provides an example of the responses implemented in this solution template. You might decide on different actions and different priorities for your organization. A current list of GuardDuty findings can be found at Active Finding Types and for AWS Config at AWS Config Managed Rules.

Table 1: Findings and response actions

  1. Source: GuardDuty
     Findings: Backdoor:EC2/Spambot, Backdoor:EC2/C&CActivity.B!DNS, Backdoor:EC2/DenialOfService.Tcp, Backdoor:EC2/DenialOfService.Udp, Backdoor:EC2/DenialOfService.Dns, Backdoor:EC2/DenialOfService.UdpOnTcpPorts, Backdoor:EC2/DenialOfService.UnusualProtocol, Trojan:EC2/BlackholeTraffic, Trojan:EC2/DropPoint, Trojan:EC2/BlackholeTraffic!DNS, Trojan:EC2/DriveBySourceTraffic!DNS, Trojan:EC2/DropPoint!DNS, Trojan:EC2/DGADomainRequest.B, Trojan:EC2/DGADomainRequest.C!DNS, Trojan:EC2/DNSDataExfiltration, Trojan:EC2/PhishingDomainRequest!DNS (see Backdoor Finding Types and Trojan Finding Types)
     Response: Isolate EC2 with empty security group. Archive the GuardDuty finding. Send SNS notification.
  2. Source: GuardDuty
     Findings: UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration, UnauthorizedAccess:IAMUser/TorIPCaller, UnauthorizedAccess:IAMUser/MaliciousIPCaller.Custom, UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B, UnauthorizedAccess:IAMUser/MaliciousIPCaller (see Unauthorized Finding Types)
     Response: Block IAM principal by attaching deny all policy. Archive the GuardDuty finding. Send SNS notification.
  3. Source: AWS Config
     Finding: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED (see the documentation for s3-bucket-server-side-encryption-enabled)
     Response: Enable server-side encryption with Amazon S3-managed keys (SSE-S3) with SSM document (AWS-EnableS3BucketEncryption). Send SNS notification.
  4. Source: AWS Config
     Finding: S3_BUCKET_PUBLIC_READ_PROHIBITED (see the documentation for s3-bucket-public-read-prohibited)
     Response: Disable S3 PublicRead and PublicWrite with SSM document (AWS-DisableS3BucketPublicReadWrite). Send SNS notification.
  5. Source: AWS Config
     Finding: S3_BUCKET_PUBLIC_WRITE_PROHIBITED (see the documentation for s3-bucket-public-write-prohibited)
     Response: Disable S3 PublicRead and PublicWrite with SSM document (AWS-DisableS3BucketPublicReadWrite). Send SNS notification.
  6. Source: AWS Config
     Finding: SECURITY_GROUP_OPEN_PROHIBITED (see template, custom configuration)
     Response: Confine security group to safe CIDR 172.31.0.0/16. Send SNS notification.
  7. Source: AWS Config
     Finding: ENCRYPTED_VOLUMES (see the documentation for encrypted-volumes)
     Response: Send SNS notification.
  8. Source: AWS Config
     Finding: RDS_STORAGE_ENCRYPTED (see the documentation for rds-storage-encrypted)
     Response: Send SNS notification.

Installation

  1. In the security account, launch the template by selecting Launch Stack.
    Additionally, you can find the latest code on GitHub, where you can also contribute to the sample code.
  2. Provide the following parameters for the security account (see Figure 2):
    • S3 bucket with sources: This bucket contains all sources, such as the Lambda function and templates. If you’re not customizing the sources, you can leave the default text.
    • Prefix for S3 bucket with sources: Prefix for all objects. If you’re not customizing the sources, you can leave the default.
    • Security IR-Role names: This is the role assumed for the response actions by the Lambda functions in the security account. The role is created by the stack launched in the service account.
    • Security exception tag: This defines the tag name for security exceptions. Resources marked with this tag are not automatically changed in response to a security finding. For example, you could add an exception tag for a valid open security group for a public website.
    • Organization ID: This is your AWS organization ID used to authorize forwarding of CloudWatch Events to the security account. Your accounts must be members of the same AWS organization.
    • Allowed network range IPv4: This CIDRv4 range is used to confine all open security groups that are not tagged for exception.
    • Allowed network range IPv6: This CIDRv6 range is used to confine all open security groups that are not tagged for exception.
    • Isolate EC2 findings: This is a list of all GuardDuty findings that should lead to an EC2 instance being isolated. Comma delimited.
    • Block principal finding: This is a list of all GuardDuty findings that should lead to blocking this role or user by attaching a deny all policy. Comma delimited.
    Figure 2: Stack launch in security account

  3. In each service account, launch the template by selecting Launch Stack.

    Additionally, you can find the latest code on GitHub, where you can also contribute to the sample code.

  4. Provide the following parameters for each service account:
    • S3 bucket with sources: This bucket contains all sources, such as Lambda functions and templates. If you’re not customizing the sources, you can leave the default text.
    • Prefix for S3 bucket with sources: Prefix for all objects. If you’re not customizing the sources, you can leave the default text.
    • IR-Security role: This is the role that is created and used by the security account to execute response actions.
    • Security account ID: The CloudWatch Events are forwarded to this central security account.
    • Enable AWS Config: Define whether you want this stack to enable AWS Config. If you have already enabled AWS Config, then leave this value false.
    • Create SNS topic: Provide the name of an SNS topic only if you enable AWS Config and want to stream the configuration change to SNS (optional). Otherwise, leave this field blank.
    • SNS topic name: This is the name of the SNS topic to be created only if enabling AWS Config. The default text is Config.
    • Create S3 bucket for AWS Config: If you enable AWS Config, the template creates an S3 bucket for AWS Config.
    • Bucket name for AWS Config: The name of the S3 bucket created for AWS Config. The default text is config-bucket-{AccountId}.
    • Enable GuardDuty: If you have not already enabled GuardDuty in the service account, then you can do it here.

Testing

After you have deployed both stacks, you can test your environment by following these example steps in one of your service accounts. Before you test, you can subscribe to the SNS topic with prefix Security_Alerts_[Your_Stack] to be notified of a security event.

  1. Create a security group that is open to 0.0.0.0/0, without creating an exception tag. After several minutes, the security group will be confined to the safe CIDR that you defined in your stack.
  2. Create an S3 bucket without enabling encryption. After several minutes, the default encryption AES-256 will be set on the bucket.
  3. For GuardDuty blocking of an IAM principal, you can define a list of malicious IPs under Threat Lists in the GuardDuty panel. Create a test role or user. When you execute an API call from one of those IPs with the test role or user, a GuardDuty finding is generated that triggers blocking of the IAM role or user.
  4. You can deploy the Amazon GuardDuty tester and generate findings such as Trojan:EC2/DNSDataExfiltration or CryptoCurrency:EC2/BitcoinTool.B!DN. These GuardDuty findings trigger isolation of an EC2 instance by removing all of its current security groups and attaching an empty one. The empty group can later be configured for forensic access.
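
To script the first test, the sketch below creates a security group and deliberately opens it to the world, without the exception tag, so the automated response should confine it. The group name and VPC ID are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Create a test group in one of your service accounts.
sg = ec2.create_security_group(
    GroupName='ir-test-open-sg',                      # placeholder
    Description='Test group for automated response',
    VpcId='vpc-0123456789abcdef0',                    # replace with a real VPC ID
)

# Open SSH to the world on purpose, without the SecurityException tag.
ec2.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)
# After several minutes, the rule should be confined to your allowed CIDR range.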

Exceptions for response actions

If the resource has the tag name SecurityException, a response action is not executed. The tag name is a parameter of the CloudFormation stack in the security account and can be customized at installation. The value of the tag is not validated, but it is good practice for the value to refer to an approval document, such as a Jira issue, so that you build an audit chain for the approval. For example:


Tag name:  SecurityException
Tag value: Jira-1234
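
To illustrate how a response function can honor this tag, here is a sketch of the check; the solution's actual Lambda functions may implement it differently, and the tag name is whatever you configured at installation.

import boto3

EXCEPTION_TAG = 'SecurityException'  # tag name configured at installation

def has_exception_tag(instance_id):
    # Look for the exception tag on the resource before taking any action.
    ec2 = boto3.client('ec2')
    response = ec2.describe_tags(Filters=[
        {'Name': 'resource-id', 'Values': [instance_id]},
        {'Name': 'key', 'Values': [EXCEPTION_TAG]},
    ])
    return len(response['Tags']) > 0

def respond(instance_id):
    if has_exception_tag(instance_id):
        print('Exception tag present; skipping response action.')
        return
    # ... execute the response action, such as isolating the instance ...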

Make sure that the security tag can only be set or modified by an appropriate role. You can do this in two ways. The first way is to attach a deny statement to the policies of any principals that should not be allowed to assign this tag. An example policy statement that denies setting, removing, and editing of this tag for the IAM, EC2, and S3 services is shown below. This policy does not prevent working with the resources themselves, such as starting or stopping an EC2 instance that has this tag. See Controlling access based on tag keys for more information.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3-Sec-Tag",
            "Effect": "Deny",
            "Action": [
                "s3:PutBucketTagging",
                "s3:PutObjectTagging"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringLikeIfExists": {
                    "aws:TagKeys": "TAG-NAME-SecurityException"
                }
            }
        },
        {
            "Sid": "EC2-VPC-Sec-Tag",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringLikeIfExists": {
                    "aws:TagKeys": "TAG-NAME-SecurityException"
                }
            }
        }
    ]
}

In the above policy, replace TAG-NAME-SecurityException with your own tag name.

The second way to restrict the attachment of this tag is to use Tag policies to manage tags across multiple AWS accounts.

Centralized versus decentralized exception management

Attaching security tags is a decentralized approach in which you don't need a centralized database to record remediation exceptions. A centralized exception database requires that you know each individual resource ID to set exceptions, and this is not always possible. Amazon EC2 Auto Scaling is a good example: on a scale-out event, the instance ID can't be known in advance and preapproved. Furthermore, a central database must be kept in sync with the lifecycle of the resources, such as on a scale-in event. Using tags on resources avoids both problems. If needed, the Auto Scaling launch configuration can propagate the security exception tag; see Tag your auto scaled EC2 instances.

You can manage the IAM policies and roles for tagging either centrally or within an AWS CodePipeline. In this way, you can implement a centralized enforcement point for security exceptions but decentralize storing of the resource tags to the resources themselves.

Using AWS Resource Groups, you can always find all resources that have the security exception tag for inventory and auditing purposes. For more information, see Build queries and groups in AWS Resource Groups.
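
You can also query for the tag programmatically. Here is a brief sketch using the Resource Groups Tagging API, which works across resource types:

import boto3

tagging = boto3.client('resourcegroupstaggingapi')

# List every resource carrying the exception tag, with its approval reference.
paginator = tagging.get_paginator('get_resources')
for page in paginator.paginate(TagFilters=[{'Key': 'SecurityException'}]):
    for resource in page['ResourceTagMappingList']:
        tags = {t['Key']: t['Value'] for t in resource['Tags']}
        print(resource['ResourceARN'], tags.get('SecurityException'))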

Monitoring for security alerts

You can subscribe to the SNS topic with prefix Security_Alerts_[Your_Stack] to be notified of a security event or an execution that has failed.

Every execution is triggered from CloudWatch Events, and each of these events generates CloudWatch metrics. Under CloudWatch rules, you can see the rules for event forwarding from service accounts, or for triggering the responses in the central security account. For each CloudWatch rule, you can view the metrics by checking the box next to the rule, as shown in Figure 3, and then select Show metrics for the rule.
 


Figure 3: Metrics for Forwarding GuardDuty Events in CloudWatch
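
You can also read the same metrics programmatically. In the sketch below, the rule name is an assumption; CloudWatch Events rules publish metrics such as TriggeredRules and FailedInvocations in the AWS/Events namespace.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

# Hourly count of rule matches over the last day.
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Events',
    MetricName='TriggeredRules',
    Dimensions=[{'Name': 'RuleName', 'Value': 'ForwardGuardDutyEvents'}],  # assumed rule name
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=['Sum'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Sum'])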

Creating new rules and customization

The mapping of findings to responses is defined in the CloudWatch Events rules in the security account. If you define new responses, you only update the central security account. You don’t adjust the settings of service accounts because they only forward events to the security account.

Here’s an example: your team decides that a new SSM action, Terminate EC2 instance, should be triggered on the GuardDuty finding Backdoor:EC2/C&CActivity.B!DNS. In the security account, go to CloudWatch in the AWS Management Console and create a new rule as described in the following steps (and shown in Figure 4):
 


Figure 4: CloudWatch rule creation

  1. Go to CloudWatch Events and create a new rule. In the Event Pattern, under Build custom event pattern, define the following JSON:
    
    {
      "detail-type": [
        "GuardDuty Finding"
      ],
      "source": [
        "aws.guardduty"
      ],
      "detail": {
        "type": [
          "Backdoor:EC2/C&CActivity.B!DNS"
        ]
      }
    }
    

  2. Set the target for the rule to StratSsmAutomation, the Lambda function that executes the SSM automation. Use Input transformer to customize the parameters passed when the Lambda function is invoked.

    For the Input paths map:

    
    {
       "resourceId":"$.detail.resource.instanceDetails.instanceId",
       "account":"$.account"
    } 
    

    The expression extracts the instanceId from the event as resourceId, and the Account ID as account.

    For the field Input template, enter the following:

    
    {
       "account":<account>,
       "resourseType":"ec2:instance",
       "resourceId":<resourceId>,
       "AutomationDocumentName":"AWS-TerminateEC2Instance",
       "AutomationParameters":{
          "InstanceId":[
          <resourceId>   
          ]
       }
    }
    

You pass the name of the SSM automation document that you want to execute as an input parameter. In this case, the document is AWS-TerminateEC2Instance and the document input parameters are a JSON structure named AutomationParameters.

You have now created a new response action that is ready to test.
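
If you prefer to script the rule rather than click through the console, the sketch below creates the same rule and target with boto3. The rule name and the Lambda function ARN are assumptions; the input template is copied verbatim from the steps above (including the resourseType spelling it uses).

import boto3, json

events = boto3.client('events')

RULE_NAME = 'Response-C2Activity-TerminateEC2'  # assumed name

events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps({
        'detail-type': ['GuardDuty Finding'],
        'source': ['aws.guardduty'],
        'detail': {'type': ['Backdoor:EC2/C&CActivity.B!DNS']},
    }),
    State='ENABLED',
)

events.put_targets(
    Rule=RULE_NAME,
    Targets=[{
        'Id': 'StratSsmAutomation',
        'Arn': 'arn:aws:lambda:eu-west-1:111122223333:function:StratSsmAutomation',  # assumed ARN
        'InputTransformer': {
            'InputPathsMap': {
                'resourceId': '$.detail.resource.instanceDetails.instanceId',
                'account': '$.account',
            },
            # Copied from the input template above; resourseType is spelled as in the solution.
            'InputTemplate': '{"account":<account>,"resourseType":"ec2:instance",'
                             '"resourceId":<resourceId>,'
                             '"AutomationDocumentName":"AWS-TerminateEC2Instance",'
                             '"AutomationParameters":{"InstanceId":[<resourceId>]}}',
        },
    }],
)
# If the target is new, also grant CloudWatch Events permission to invoke the
# function (for example, with the Lambda add_permission API).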

Troubleshooting

The automation execution that takes place in the security account is documented in AWS Systems Manager under Automation Execution. You can also refer to the AWS Systems Manager automation documentation.

If a Lambda function execution fails, a CloudWatch alarm is triggered and a notification is sent to the SNS topic with prefix Security_Alerts_[Your_Stack]. Lambda function execution in the security account is documented in the CloudWatch log group for each Lambda function. You can use the logs to understand whether the function executed successfully.
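
For a quick look at failures, you can filter the function's log group for errors. A small sketch, assuming the standard /aws/lambda/<function-name> log group naming:

import boto3

logs = boto3.client('logs')

# Show recent error lines from the response function's log group.
response = logs.filter_log_events(
    logGroupName='/aws/lambda/StratSsmAutomation',  # assumed function name
    filterPattern='ERROR',
)
for event in response['events']:
    print(event['timestamp'], event['message'].strip())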

Security Hub integration

AWS Security Hub aggregates findings from GuardDuty and other providers. To have Security Hub aggregate findings for this solution, you must activate Security Hub before deploying the solution. Then invite your environment accounts as member accounts, with the security account as the master. For more information, see Master and member accounts in AWS Security Hub.

The integration with Security Hub is for manual triggering and is not part of the automated response itself. This can be useful if you have findings that require security investigation before triggering the action.

From the Security Hub console, select the finding and use the Actions drop down menu in the upper right corner to trigger a response, as shown in Figure 5. A CloudWatch event invokes the automated response, such as a Lambda function to isolate an EC2 instance. The Lambda function fetches the GuardDuty finding from the service account and executes one of the following: Isolate EC2 Instance, Block Principal, or Send SNS Notification.

Only one resource can be selected for execution. If you select multiple resources, the response is not executed, and an explanatory Lambda log message is written to CloudWatch Logs.
 


Figure 5: Security Hub integration

Cost of the solution

The cost of the solution depends on the events generated in your AWS account and your chosen AWS Region. Costs might be as little as several U.S. dollars per month per account. You can model your costs by understanding GuardDuty pricing, automation pricing in AWS Systems Manager, and AWS Config pricing for the six AWS Config rules. The costs for AWS Lambda and Amazon CloudWatch are expected to be minimal and might be included in your free tier. You can optionally use Security Hub; see AWS Security Hub pricing.

Summary

In this post, you learned how to deploy an automated incident response framework using AWS native features. You can easily extend this framework to meet your future needs. If you would like to extend it further, contact AWS Professional Services or an AWS Partner. If you have technical questions, please use the Amazon GuardDuty or AWS Config forums. Remember, this solution is only an introduction to automated security response and is not a comprehensive solution.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU-Darmstadt and a M.S. in electrical engineering from Bochum University in Germany.

AWS Security Profiles: Dan Plastina, VP of Security Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-dan-plastina-vp-security-services/

In the weeks leading up to re:Invent 2019, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do as the VP of Security Services?

I’ve been at Amazon for just over two years. I lead the External Security Services organization—our team builds AWS services that help customers improve the security of their workloads. Our services include Amazon Macie, Amazon GuardDuty, Amazon Inspector, and AWS Security Hub.

What drew me to Amazon is the culture of ownership and accountability. I wake up every day and get to help AWS customers do things that transform their world—and I get to do that work with a whole bunch of people who feel the same way and take the same level of ownership. It’s very energizing.

What’s your favorite part of your job?

That’s hard! I love most aspects of my job. Forced to pick one, I’d have to say my favorite part is helping customers. Our Shared Responsibility Model says that AWS is accountable for the security of AWS, and customers are responsible for the resources and workloads they manage in AWS. My job allows me to sit on the customer side of the shared responsibility model. Our team builds the services that help customers improve the security of their workloads on AWS. Being able to help in that way is very rewarding.

One of Amazon’s widely-known leadership principles is Customer Obsession. Can you speak to what that looks like in the context of your work?

Being customer obsessed means that you’re in tune with the needs of the customer you’re working with. In the case of external security services, “customer obsessed” requires you to deeply understand what it means for individual customers to protect their assets in AWS, to empathize with those needs, and then to help them figure out how to get from where they are, to where they want to be. Because of this, I spend a lot of time with customers.

Our team participates in many in-person executive customer briefings. We hold a lot of conference calls. I’m flying to the UK on Monday to meet with customers—and I was there three weeks ago. I’ve spent over six weeks this fall traveling to talk with customers.  That much travel time can be hard, but it’s necessary to be in front of customers and listen to what they tell us. I’m fortunate to have a really strong team and so when I’m not traveling, I’m still able to spend a lot of time thinking about customer needs and about what my team should do next.

You’re on an elevator packed with CISOs, and they want you to explain the difference between Security Hub, GuardDuty, Macie, and Inspector before the doors open. What do you say?

First, I would tell them that the services are best understood as a suite of security services, and that AWS Security Hub offers a single pane of glass [Editor: a management tool that integrates information and offers a unified view] into everything else: Use it to understand the severity and sensitivity of findings across the other services you’re using.

Amazon GuardDuty is a continuous security monitoring and threat detection service. You simply choose to have it on or off in your AWS accounts. When it’s enabled, it detects highly suspicious activity and unauthorized access across the entirety of your AWS workloads. While GuardDuty alerts you to potential threats, Amazon Inspector helps you ensure that you address publicly known software vulnerabilities in a timely manner, removing them as a potential entry point for unauthorized users. Amazon Macie offers a particular focus on protecting your sensitive data by giving you a highly scalable and cost effective way to scan AWS for sensitive data and report back what is found and how it is being protected with access controls and encryption.

Then, I’d invite the entire elevator to come to re:Invent, to learn more about the new work my team is doing.

What can you tell us about your team’s re:Invent plans?

We have some exciting things planned for re:Invent this year. I can’t go into specifics yet, but we’re excited about it. A lot of my team will be present, and we’re looking forward to speaking with customers and learning more about what we should work on for next year.

We’ve got a variety of sessions about Security Hub, GuardDuty, and Inspector. If you can only make it to three security-specific sessions, I recommend Threat management in the cloud (SEC206-R), Automating threat detection and response in AWS (SEC301-R), and Use AWS Security Hub to act on your compliance and security posture (SEC342-R).

Is there some connecting thread to all of the various projects that your teams are working on right now?

I see a few threads. One is the concept of security being priority zero. It’s a theme that we live by at AWS, but customers ask us to stretch a little bit further and include their workloads in our security considerations. So workload security is now priority zero too. We’re spending a lot of time working that out and looking for ways to improve our services.

Another thread is that customers are asking us for prescriptive guidance. They’re saying, “Just tell me how I can ensure that my environment is safe. I promise you won’t offend me. Guide me as much as you can, and I’ll disregard anything that isn’t relevant to my environment.”

What’s one currently available security feature that you wish more customers were aware of?

A service, not a feature: AWS Security Hub. It has the ability to bring together security findings from many different AWS, partner, and customer security detection services. Security Hub takes security findings and normalizes them into the AWS Security Finding Format (ASFF), and then sends them all back out through Amazon CloudWatch Events to many partners that are capable of consuming them.

I think customers underestimate the value of having all of these security events normalized into a format that they can use to write a Splunk Phantom runbook, for example, or a Demisto runbook, or a Lambda function, or to send it to Rapid7 or cut a ticket in Jira. There’s a lot of power in what Security Hub does and it’s very cost effective. Many customers have started to use these capabilities, but I know that not everyone knows about it yet.

How do you stay up to date on important cloud security developments across the industry?

I get a lot of insight from customers. Customers have a lot of questions, and I can take these questions as a good indicator of what’s on peoples’ minds. I then do the research needed to get them smart answers, and in the process I learn things myself.

I also subscribe to a number of newsletters, such as Last Week In AWS, that give some interesting information about what’s trending. Reading our AWS blogs also helps because just keeping up with AWS is hard. There’s a lot going on! Listening to the various feeds and channels that we have is very informative.

And then there’s tinkering. I tinker with home automation / Internet of Things projects and with vendor-provided offers, such as those recently provided to me by Splunk, Palo Alto Networks, and Check Point. It’s been fun learning partner offerings by building out VLANs, site-to-site tunnels, VPNs, DNS filters, SSL inspection, gateway-level anonymizers, central logging, and intrusion detection systems. You know, the home networking ‘essentials’ we all need.

You’re into riding Superbikes as a hobby. What’s the appeal?

I ride fast bikes on well-known race tracks all around the US several times a year. I love how speed and focus must come together. Going through different corners requires orchestrating all kinds of different motor and mental skills. It flushes the brain and clears your thoughts like nothing else. So, I appreciate the hobby as a way of escaping from normal day-to-day routine. Honestly, there’s nothing like doing 160 mph down a straightaway to teach you how to focus on what is needed, now.

You’re originally from Montreal. What’s one thing a visitor should eat on a trip there?

Let me give you two, eh. If you find yourself in a small rural Quebec restaurant, you must have poutine, the local ‘delicacy’. If you find yourself downtown, near my alma mater Concordia University, you must enjoy our local student staple, Kojax. That said, it’s honestly hard to make a mistake when you’re eating in Montreal. They have a lot of good food there.

Want more AWS Security news? Follow us on Twitter.

The AWS External Security Services team is hiring! Want to find out more? Check out our career page.

Dan Plastina

Dan is Vice President, Head of Security Services for Amazon Web Services (AWS). He’s most often seen working alongside his team leaders on product design, management and engineering development efforts to enable business and government customers to secure themselves, when using AWS. He has travelled extensively, meeting c-suite and security leaders at all corners of the globe.

AWS Security Profiles: Greg McConnel, Senior Manager, Security Specialists Team

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-greg-mcconnel-senior-manager-security-specialists-team/

Greg McConnel, Senior Manager
In the weeks leading up to re:Invent 2019, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS nearly seven years. I was previously a Solutions Architect on the Security Specialists Team for the Americas, and I recently took on an interim management position for the team. Once we find a permanent team lead, I’ll become the team’s West Coast manager.

How do you explain your job?

The Security Specialist team is a customer-facing role where we get to talk to customers about the security, identity, and compliance of AWS. We help enable customers to securely and compliantly adopt the AWS Cloud. Some of what we do is one-to-one with customers, but we also engage in a lot of one-to-many activities. My team talks to customers about their security concerns, how best to secure their workloads on AWS, and how to use our services alongside partner solutions to improve their security posture. A big part of my role is building content and delivering that to customers. This includes things like white papers, blog posts, hands-on workshops, Well-Architected reviews, and presentations.

What are you currently working on that you’re most excited about?

In the Security Specialist team, the one-to-one engagements we have with customers are primarily short-term and centered around particular questions or topics. Although this can benefit customers a great deal, it’s a reactive model. In order to move to a more proactive stance, we plan to start working with individual customers on a longer-term basis to help them solve particular security challenges or opportunities. This will remove any barriers to working directly with a Security Specialist and allow us to dive deep into the security concerns of our customers.

What’s the most challenging part of your job?

Keeping up with the pace of innovation. Just about everyone who works with AWS experiences this, whether you’re a customer or working in the AWS field. AWS is launching so many amazing things all the time it can be challenging to keep up. The specialist SA role is attractive because the area we cover is somewhat limited. It’s a carved-out space within the larger, continuously expanding body of AWS knowledge. So it’s a little easier, and it allows me to dive deeper into particular security topics. But even within this carved out area of security, it takes some dedication to stay current and informed for my customers. I find things like Jeff Barr’s AWS News Blog and the AWS Security Blog really valuable. Also, I’m a big fan of hands on learning, so just playing around with new services, especially by following along with examples in new blog posts, can be an interesting way to keep up.

What’s your favorite part of your job?

Creating content, particularly workshops. For me, hands-on learning is not only an effective way to learn, it’s simply more fun. The process of creating workshops with a team can be rewarding and educational. It’s a creative, interactive process. Getting to see others try out your workshop and provide feedback is so satisfying, especially when you can tell that people now understand new concepts because of the work you did. Watching other people deliver a workshop that you built is also rewarding.

In your opinion, what’s the biggest challenge facing cloud security right now?

AWS allows you to move very fast. One of the primary advantages of cloud computing is the speed and agility it provides. When you’re moving fast though, security can be overlooked a bit. It’s so easy to start building on AWS that customers might not invest the necessary cycles on security, or they might think that doing so will slow them down. The reality, though, is that AWS provides the tools to allow you to stay agile while maintaining—and in many cases improving—your security. Automation and continuous monitoring are the keys here. AWS makes it possible to automate many basic security tasks, like patching. With the right tooling, you also gain the visibility needed to accurately identify and monitor critical assets and data. We provide great solutions for logging and monitoring that are highly integrated with our other services. So it’s quite possible now to move fast and stay secure, and it’s important that you do both.

What security best practices do you recommend to customers?

There are certain services we recommend that customers look into enabling because they’re an easy win when it comes to security. These aren’t requirements, because there are many great partner solutions that customers can also use. But these are solutions that I’d encourage customers to at least consider:

  • Turn on Amazon GuardDuty, which is a threat detection service that continuously monitors for malicious and unauthorized activity. The benefits it provides versus the effort involved to enable it makes it a pretty easy choice. You literally click a button to turn on GuardDuty, and the service starts analyzing tens of billions of events across a number of AWS data sources.
  • Work with your account team to get a Well-Architected review of your key workloads, especially with a focus on the Security pillar. You can even run well-architected reviews yourself with the AWS Well-Architected Tool. A well-architected review can help you build secure, high-performing, resilient, and efficient infrastructure for your application.
  • Use IAM access advisor to help ensure that all the credentials in your AWS accounts are not overly broad by examining your service last accessed information.
  • Use AWS Organizations for centralized administration of your credentials management and governance and move to full-feature mode to take advantage of everything the service offers.
  • Practice the principle of least privilege.

What does cloud security mean to you, personally?

This is the first time in a while that I’ve worked for a company where I actually like the product. I’m constantly surprised by the new services and features we release and what our customers are building on AWS. My background is in security, especially identity, so that’s the area I am most passionate about within AWS. I want people to use AWS because they can build amazing things at a pace that was just not possible before the cloud—but I want people to do all of this securely. Being able to help customers use our services and do so securely keeps me motivated.

You have a passion for building great workshops. What are you and Jesse Fuchs doing to help make workshops at re:Invent better?

Jesse and I have been working on a central site for AWS security workshops. We post a curated list of workshops, and everything on the site has gone through an internal bar raiser review. This is a formal process for reviewing the workshops and other content to make sure they meet certain standards, that they’re up to date, and that they follow a certain format so customers will find them familiar no matter which workshop they go through.

We’re now implementing a version of this bar raiser review for every workshop at re:Invent this year. In the past, there were people who helped review workshops, but there was no formal, independent bar raising process. Now, every re:Invent workshop will go through it. Eventually, we want to extend this review at other AWS events.

Over time, you’ll see a lot more workshops added to the site. All of these workshops can either be delivered by AWS to the customer, or by customers to their own internal teams. It’s all Open Source too. Finally, we’ll keep the workshops up to date as new features are added.

The workshop you and Jesse Fuchs will host at re:Invent promises that attendees will build an end-to-end functional app with a secure identity provider. How can you build such an app in such a relatively short time?

I know it sounds like a tall order, but the workshop is designed to be completed in the allotted time. And since the content will be up on GitHub, you can also work on it afterwards on your own. It’s a level 400 workshop, so we’re expecting people to come into the session with a certain level of familiarity with some of these services. The session is two and half hours, and a big part of this workshop is the one-on-one interaction with the facilitators. So if attendees feel like this is a lot to take in, they should keep in mind that they’ll get one-on-one interaction with experts in these fields. Our facilitators will sit down with attendees and address their questions as they go through the hands-on work. A lot of the learning actually comes out of that facilitation. The session starts with an initial presentation and detailed instructions for doing the workshop. Then the majority of the time will be spent hands-on with that added one-on-one interaction. I don’t think we stress enough the value that customers get from the facilitation that occurs in workshops and how much learning occurs during that process.

What else should we know about your workshop?

One of the things I concentrate on is identity, and that’s not just Identity and Access Management. It’s identity across the board. If you saw Quint Van Deman’s session last year at re:Invent, he used an analogy about identity being a cake with three layers. There’s the platform layer on the bottom, which includes access to the console. There’s the infrastructure layer, which is identity for services like EC2 and RDS, for example. And then there’s the application identity layer on the top. That’s sometimes the layer that customers are most interested in because it’s related to the applications they’re building. Our workshop is primarily about that top layer. It’s one of the more difficult topics to cover, but again, an interesting one for customers.

What is your advice to first-time attendees at re:Invent?

Take advantage of your time at re:Invent and plan your week because there’s really no fluff. It’s not about marketing, it’s about learning. re:Invent is an educational event first and foremost. Also, take advantage of meeting people because it’s also a great networking event. Having conversations with people from companies that are doing similar things to what you’re doing on AWS can be very enlightening.

You like obscure restaurants and even more obscure movies. Can you share some favorites?

From a movie standpoint, I like Thrillers and Sci-Fi—especially unusual Sci-Fi movies that are somewhat out of the mainstream. I do like blockbusters, but I definitely like to find great independent movies. I recently saw a movie called Coherence that was really interesting. It’s about what would happen if a group of friends having a dinner party could move between parallel or alternate universes. A few others I’d recommend to people who share my taste for the obscure, and that are maybe even a little thought-provoking, are Perfect Sense and The Quiet Earth.

From a restaurant standpoint, I am always looking for new Asian food restaurants, especially Vietnamese and Thai food. There are a lot of great ones hidden here in Los Angeles.
 
Want more AWS Security news? Follow us on Twitter.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Greg McConnel

In his nearly 7 years at AWS, Greg has been in a number of different roles, which gives him a unique viewpoint on this company. He started out as a TAM in Enterprise Support, moved to a Gaming Specialist SA role, and then found a fitting home in AWS Security as a Security Specialist SA focusing on identity. Greg will continue to contribute to the team in his new role as a manager. Greg currently lives in LA where he loves to kayak, go on road trips, and in general, enjoy the sunny climate.

Tips for building a cloud security operating model in the financial services industry

Post Syndicated from Stephen Quigg original https://aws.amazon.com/blogs/security/tips-for-building-a-cloud-security-operating-model-in-the-financial-services-industry/

My team helps financial services customers understand how AWS services operate so that you can incorporate AWS into your existing processes and security operations centers (SOCs). As soon as you create your first AWS account for your organization, you’re live in the cloud. So, from day one, you should be equipped with certain information: you should understand some basics about how our products and services work, you should know how to spot when something bad could happen, and you should understand how to recover from that situation. Below is some of the advice I frequently offer to financial services customers who are just getting started.

How to think about cloud security

Security is security – the principles don’t change. Many of the on-premises security processes that you have now can extend directly to an AWS deployment. For example, your processes for vulnerability management, security monitoring, and security logging can all be transitioned over.

That said, AWS is more than just infrastructure. I sometimes talk to customers who are only thinking about the security of their AWS Virtual Private Clouds (VPCs), and about the Amazon Elastic Compute Cloud (EC2) instances running in those VPCs. And that’s good; it’s traditional network security, and it remains quite standard. But I also ask my customers questions that focus on other services they may be using. For example:

  • How are you thinking about who has Database Administrator (DBA) rights for Amazon Aurora Serverless? Aurora Serverless is a managed database service that lets AWS do the heavy lifting for many DBA tasks.
  • Do you understand how to configure (and monitor the configuration of) your Amazon Athena service? Athena lets you query large amounts of information that you’ve stored in Amazon Simple Storage Service (S3).
  • How will you secure and monitor your AWS Lambda deployments? Lambda is a serverless platform that has no infrastructure for you to manage.

Understanding AWS security services

As a customer, it’s important to understand the information that’s available to you about the state of your cloud infrastructure. Typically, AWS delivers much of that information via the Amazon CloudWatch service. So, I encourage my customers to get comfortable with CloudWatch, alongside our AWS security services. The key services that any security team needs to understand include:

  • Amazon GuardDuty, which is a threat detection system for the cloud.
  • AWS CloudTrail, which records the AWS API calls made in your account.
  • VPC Flow Logs, which enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
  • AWS Config, which records all the configuration changes that your teams have made to AWS resources, allowing you to assess those changes.
  • AWS Security Hub, which offers a “single pane of glass” that helps you assess AWS resources and collect information from across your security services. It gives you a unified view of resources per Region, so that you can more easily manage your security and compliance workflow.

These tools make it much quicker for you to get up to speed on your cloud security status and establish a position of safety.
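
As a small first step toward that baseline, the sketch below enables GuardDuty in one region and confirms the detector is active; the region is only an example.

import boto3

guardduty = boto3.client('guardduty', region_name='eu-west-2')  # example region

# Reuse the existing detector if there is one; otherwise create and enable one.
detector_ids = guardduty.list_detectors()['DetectorIds']
if not detector_ids:
    detector_ids = [guardduty.create_detector(Enable=True)['DetectorId']]

detector = guardduty.get_detector(DetectorId=detector_ids[0])
print(detector_ids[0], detector['Status'])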

Getting started with automation in the cloud

You don’t have to be a software developer to use AWS. You don’t have to write any code; the basics are straightforward. But to optimize your use of AWS and to get faster at automating, there is a real advantage if you have coding skills. Automation is the core of the operating model. We have a number of tutorials that can help you get up to speed.

Self-service cloud security resources for financial services customers

There are people like me who can come and talk to you. But to keep you from having to wait for us, we also offer a lot of self-service cloud security resources on our website.

We offer a free digital training course on AWS security fundamentals, plus webinars on financial services topics. We also offer an AWS security certification, which lets you show that your security knowledge has been validated by a third-party.

There are also a number of really good videos you can watch. For example, we had our inaugural security conference, re:Inforce, in Boston this past June. The videos and slides from the conference are now on YouTube, so you can sit and watch at your own pace. If you’re not sure where to start, try this list of popular sessions.

Finding additional help

You can work with a number of technology partners to help extend your security tools and processes to the cloud.

  • Our AWS Professional Services team can come and help you on site. In addition, we can simulate security incidents with you to help you get comfortable with security and cloud technology and how to respond to incidents.
  • AWS security consulting partners can also help you develop processes or write the code that you might need.
  • The AWS Marketplace is a wonderful self-service location where you can get all sorts of great security solutions, including finding a consulting partner.

And if you’re interested in speaking directly to AWS, you can always get in touch. There are forms on our website, or you can reach out to your AWS account manager and they can help you find the resources that are necessary for your business.

Conclusion

Financial services customers face some tough security challenges. You handle large amounts of data, and it’s really important that this data is stored securely and that its privacy is respected. We know that our customers do lots of due diligence of AWS before adopting our services, and they have many different regulatory environments within which they have to work. In turn, we want to help customers understand how they can build a cloud security operating model that meets their needs while using our services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Stephen Quigg

Stephen Quigg is a Principal Securities Solutions Architect within AWS Financial Services. Quigg started his AWS career in Sydney, Australia, but returned home to Scotland three years ago having missed the wind and rain too much. He manages to fit some work in between being a husband and father to two angelic children and making music.

Nine AWS Security Hub best practices

Post Syndicated from Ketan Srivastava original https://aws.amazon.com/blogs/security/nine-aws-security-hub-best-practices/

AWS Security Hub is a security and compliance service that became generally available on June 25, 2019. It provides you with extensive visibility into your security and compliance status across multiple AWS accounts, in a single dashboard per region. The service helps you monitor critical settings to ensure that your AWS accounts remain secure, allowing you to notice and react quickly to any changes in your environment.

AWS Security Hub aggregates, organizes, and prioritizes security findings from supported AWS services—that’s Amazon GuardDuty, Amazon Inspector, and Amazon Macie at the time this post was published—and from various AWS partner security solutions. AWS Security Hub also generates its own findings, based on automated, resource-level and account-level configuration and compliance checks using service-linked AWS Config rules plus other analytic techniques. These checks help you keep your AWS accounts compliant with industry standards and best practices, such as the Center for Internet Security (CIS) AWS Foundations standard.

In this post, I’ll provide nine best practices to help you use AWS Security Hub as effectively as possible.

1. Use the AWS Labs script to turn on Security Hub in all your AWS accounts in all regions and to establish your existing Amazon GuardDuty master/member hierarchy

As a best practice, you should continuously monitor all regions across all of your AWS accounts for unauthorized behavior or misconfigurations, even in regions that you don’t use heavily. AWS already recommends that you do this when using monitoring services like AWS Config and AWS CloudTrail. I recommend that you enable Security Hub in every region available in your AWS accounts.

In addition, you can also invite other AWS accounts to enable Security Hub and share findings with your AWS account. If you send an invitation and it is accepted by the other account owner, your Security Hub account is designated as the master account, and any associated Security Hub accounts become your member accounts. Users from the master account will then be able to view Security Hub findings from member accounts.

To simplify these configurations, you can utilize the AWS Labs script available on GitHub, which provides a step-by-step guide to automate this process. This script allows you to enable (and disable) AWS Security Hub simultaneously across a list of associated AWS accounts and bulk-add them to become your Security Hub members; it sends invitations from the master account and automatically accepts invitations in all member accounts. To run the script, you must have the AWS account IDs and root email addresses of the AWS accounts that you want as your Security Hub members. (Note that you should only share your root email address and account ID with AWS accounts that you trust. Visit the IAM best practices page to learn more about how to keep access to your AWS accounts secured.)

By default, the Security Hub master/member association is independent of the relationships that you’ve established between your Amazon GuardDuty or Amazon Macie accounts and other associated accounts. If you have an existing master/member hierarchy in GuardDuty or Macie, you can export that list of accounts into a CSV file and then use it with the script. For example, in GuardDuty, use the ListMembers API to export the AWS account IDs and emails of all member accounts, as follows:

aws guardduty list-members --detector-id <Detector ID> --query "Members[].[AccountId, Email]" --output text | awk '{print $1 "," $2}'

The output of the above command will be your GuardDuty member account IDs and their corresponding root email addresses, one per line and separated with a comma as shown below:

12345678910,[email protected]
98765432101,[email protected]
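
If you prefer Python, a boto3 equivalent of the command above might look like the following sketch, which writes the member list to a CSV file for use with the script:

import boto3, csv

guardduty = boto3.client('guardduty')
detector_id = guardduty.list_detectors()['DetectorIds'][0]

# Write one "account ID,email" line per GuardDuty member account.
with open('members.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    paginator = guardduty.get_paginator('list_members')
    for page in paginator.paginate(DetectorId=detector_id):
        for member in page['Members']:
            writer.writerow([member['AccountId'], member['Email']])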

2. Enable AWS Config in all AWS accounts and regions and leave the AWS CIS Foundations standard check enabled

When you enable Security Hub in any region, the AWS CIS standard checks are enabled by default. I recommend leaving them enabled; they are important security measures that are applicable to all AWS accounts.

To run most of these checks, Security Hub uses service-linked AWS Config rules. Because of this, you should make sure that AWS Config is turned on and recording all supported resources, including global resources, in all accounts and regions where Security Hub is deployed. You are not charged by AWS Config for these service-linked rules. You are only charged via Security Hub’s pricing model.
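
A quick way to verify this per region is to check the configuration recorder status, as in this sketch:

import boto3

config = boto3.client('config', region_name='us-east-1')  # repeat per region

# Confirm that a configuration recorder exists and is recording.
statuses = config.describe_configuration_recorder_status()['ConfigurationRecordersStatus']
if not statuses:
    print('No configuration recorder in this region.')
for status in statuses:
    print(status['name'], 'recording' if status['recording'] else 'NOT recording')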

3. Use specific managed IAM policies for different types of Security Hub users

You can choose to allow a large group of users to access List and Read Security Hub actions, which will permit them to view your security findings. However, you should allow only a small group of users to access the Security Hub Write actions. This will permit only authorized users to archive, resolve, or remediate the findings.

You can use AWS managed policies to give your employees the permissions they need to get started. These policies are already available in your account and are maintained and updated by AWS. To grant more granular permission to your Security Hub users, I recommend that you create your own customer managed policies. A great place to start with this is to import an existing AWS managed policy. That way, you know that the policy is initially correct, and all you need to do is customize it for your environment.

AWS categorizes each service action into one of five access levels based on what each action does: List, Read, Write, Permissions management, or Tagging. To determine which access level to include in the IAM policies that you assign to your users, you can view the policy summary by navigating from the IAM Console to Policies, then selecting any AWS managed or customer managed policy. Next, on the Summary page, under the Permissions tab, select Policy summary (see Figure 1). For more details and examples of access level classification, see Understanding Access Level Summaries Within Policy Summaries.
 


Figure 1: Policy summary of AWSSecurityHubReadOnlyAccess AWS managed policy
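
As a starting point, you can attach the managed read-only policy shown in Figure 1 to the group of users who only need to view findings. The group name in this sketch is an assumption.

import boto3

iam = boto3.client('iam')

# Grant List/Read Security Hub access to a viewers group.
iam.attach_group_policy(
    GroupName='security-hub-viewers',  # assumed group name
    PolicyArn='arn:aws:iam::aws:policy/AWSSecurityHubReadOnlyAccess',
)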

4. Use tags for access controls and cost allocation

A SecurityHub::Hub resource represents the implementation of the AWS Security Hub service per region in your AWS account. Security Hub allows you to assign metadata to your SecurityHub::Hub resource in the form of tags. Each tag consists of a user-defined key and an optional value, which makes it easier for you to identify and manage the AWS resources in your environment.

You can control access permissions by using tags on your SecurityHub::Hub resource. For example, you can allow a group of developer IAM entities to manage and update only the SecurityHub::Hub resources that have the tag key developer associated with them. This can help you restrict access to your production SecurityHub::Hub resources, while allowing your developers to continue testing in their developer environment.

For more information on the supported tag-based conditions which you can use with the Security Hub APIs, refer to Condition Keys for AWS Security Hub. Please note that when you use tag-based conditions for access control, you must define who can modify those tags.

To make it easier to categorize and track your AWS costs, you can also activate cost allocation tags. This helps you organize your SecurityHub::Hub resource costs. AWS generates a cost allocation report as a CSV file, with your usage and costs grouped according to your active tags. You can apply tags that represent business categories (such as cost centers, application names, or project environments) to organize your costs.

For more information on commonly used tagging categories and effective tagging strategies, read about AWS Tagging Strategies.
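
Here is a short sketch of tagging the hub resource and reading the tags back; the tag keys and values are examples only.

import boto3

securityhub = boto3.client('securityhub')

# The hub ARN has the form arn:aws:securityhub:<region>:<account-id>:hub/default.
hub_arn = securityhub.describe_hub()['HubArn']

securityhub.tag_resource(
    ResourceArn=hub_arn,
    Tags={'environment': 'developer', 'cost-center': '1234'},  # example tags
)
print(securityhub.list_tags_for_resource(ResourceArn=hub_arn)['Tags'])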

5. Integrate and enable your existing security products (with 34 integrations today and more to come)

Numerous tools can help you understand the security and compliance posture of your AWS accounts, but these tools generate their own set of findings, often in different formats. Security Hub normalizes the findings.

With Security Hub, findings generated from integrated providers (both third-party services and AWS services) are ingested using a standard findings format, which eliminates the need for security teams to convert the data. You can currently integrate 34 findings providers to import and/or export findings with Security Hub. Some partner products, like PagerDuty, Splunk, and Slack, can receive findings from Security Hub, although they don’t generate findings.

If you want to add a third-party partner product to your AWS environment, you can choose the Purchase link from the Security Hub console’s Integrations page and navigate to AWS Marketplace. Once purchased, choose the Configure link to navigate to step-by-step instructions to install the product and configure its integration with Security Hub. Then choose Enable integration to create a product subscription in your account for that third-party provider (see Figure 2).

After you enable a subscription, a resource policy is automatically attached to it. The resource policy defines the permissions that Security Hub needs to accept and process the product’s findings. You can also enable the subscription via the API and CloudFormation.
 


Figure 2: Integrating partner findings provider with Security Hub
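
For the API route mentioned above, a minimal sketch looks like this; the product ARN is only an example, and you can list the valid ARNs with describe_products.

import boto3

securityhub = boto3.client('securityhub')

# Subscribe this account to a partner product's findings. Example ARN only;
# call securityhub.describe_products() to find the real one.
subscription = securityhub.enable_import_findings_for_product(
    ProductArn='arn:aws:securityhub:us-east-1::product/crowdstrike/crowdstrike-falcon'
)
print(subscription['ProductSubscriptionArn'])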

6. Build out customized remediation playbooks using Amazon CloudWatch Events, AWS Systems Manager Automation documents, and AWS Step Functions to automatically resolve findings that don’t require human intervention

Security Hub automatically sends all findings to Amazon CloudWatch Events. This integration helps you automate your response to threat incidents by allowing you to take specific actions using AWS Systems Manager Automation documents, OpsItems, and AWS Step Functions. Using these tools, you can create your own incident handling plan. This will allow your security team to focus on strengthening the security of your AWS environments rather than on remediating the current findings.
 


Figure 3: Creating a CloudWatch Events Rule for sending matched Security Hub findings to specific Targets

7. Create custom actions to send a copy of a Security Hub finding to a resource that is internal or external to your AWS account, enabling additional visibility and remediation options for the finding

Because of its integration with CloudWatch Events, you can use Security Hub to create custom actions that will send specific findings to ticketing, chat, email, or automated remediation systems. Custom actions can also be sent to your own AWS resources, such as AWS Systems Manager OpsCenter, AWS Lambda, or Amazon Kinesis, allowing you to do your own remediation or data capture related to the finding.

For an in-depth look at this architecture, plus specific examples of how to implement custom actions, see How to Integrate AWS Security Hub Custom Actions with PagerDuty and How to Enable Custom Actions in AWS Security Hub.

In addition, Security Hub gives you the option to choose a language-specific AWS SDK so that you can use custom actions to resolve findings programmatically. Below, I’ll demonstrate how you can implement this using AWS Lambda and the AWS SDK for Python (Boto3). In my example, I’ll remediate the finding generated by Security Hub for CIS check 2.4, “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs.” For this walk-through, I assume that you have the necessary AWS IAM permissions to work with Security Hub, CloudWatch Events, Lambda, and AWS CloudTrail.
 


Figure 4: Data flow supporting remediation of Security Hub findings using custom actions

As shown in figure 4:

  1. When findings against CIS check 2.4 are generated in Security Hub, Security Hub will send them to CloudWatch Events using custom actions that I’ll describe below.
  2. CloudWatch Events will send the findings to a Lambda function that has been configured as the target.
  3. The Lambda function will utilize a Python script to check whether the finding has been generated against CIS check 2.4. If it has, the Lambda function will identify the affected CloudTrail trail and configure it to deliver its logs to a CloudWatch Logs log group.
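
Step 1 of the solution deployment below uses the console to create the custom action, but it can also be created through the API, as in this hedged sketch (the names are illustrative):

import boto3

securityhub = boto3.client('securityhub')

# Create a custom action that can be selected from the Security Hub console.
action = securityhub.create_action_target(
    Name='Remediate CIS 2.4',                                       # illustrative
    Description='Send CIS 2.4 findings to the remediation Lambda',  # illustrative
    Id='RemediateCIS24',
)
print(action['ActionTargetArn'])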

Prerequisites

  1. You must configure an IAM Role for AWS CloudTrail to assume so that it can deliver events to your CloudWatch Logs log group. For more information about how to do this, refer to the AWS CloudTrail documentation. I’ll refer to this role as the CloudTrail role.
  2. To deploy the Lambda function, you must configure an IAM Role for the Lambda function to assume. I’ll refer to this role as the Lambda execution role. The following sample policy includes the permissions that you’ll assign to it for this example. Please replace <CloudTrail_CloudWatchLogs_Role> with the CloudTrail role that you created in the previous step. Depending on your use case, you can restrict this IAM policy further to grant least privilege, which is a recommended IAM Best Practice.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "cloudtrail:UpdateTrail",
                "iam:GetRole"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::012345678910:role/<CloudTrail_CloudWatchLogs_Role>"
        }
    ]
}     

Solution deployment

  1. Create a custom action in AWS Security Hub and associate it with a CloudWatch Events rule that you configure for your Security Hub findings. Follow the instructions laid out in the Security Hub user guide for the exact steps to do this.
  2. Create a Lambda Function, which will complete the auto-remediation of the CIS 2.4 findings:
    1. Open the Lambda Console and select Create function.
    2. On the next page, choose Author from scratch.
    3. Under Basic information, enter a name for your function. For Runtime, select Python 3.7.
       

      Figure 5: Updating basic information to create the Lambda function

    4. Under Permissions, expand Choose or create an execution role.
    5. Under Execution role, select the drop down menu and change the setting to Use an existing role.
    6. Under Existing role, select the Lambda execution role that you created earlier, then select Create function.
       

      Figure 6: Updating basic information and permissions to create the Lambda function

    7. Delete the default function code and paste the code I’ve provided below:
      
              import json, boto3
              cloudtrail_client = boto3.client('cloudtrail')
              cloudwatchlogs_client = boto3.client('logs')
              iam_client = boto3.client('iam')
              
              role_details = iam_client.get_role(RoleName='<CloudTrail_CloudWatchLogs_Role>')
              
              def lambda_handler(event, context):
                  # First of all, let us see if the JSON sent by CWE has any Security Hub findings.
                  if 'detail' in event.keys() and 'findings' in event['detail'].keys() and len(event['detail']['findings']) > 0:
                      print("There are some findings. Let's check them!")
                      print("Number of findings: %i" % len(event['detail']['findings']))
              
                      # Then we need to filter out the findings. In this code snippet, we'll handle only findings related to CloudTrail trails for integration with CloudWatch Logs.
                      for finding in event['detail']['findings']:
                          if 'Title' in finding.keys():
                              if 'Ensure CloudTrail trails are integrated with CloudWatch Logs' in finding['Title']:
                                  print("There's a CloudTrail-related finding. I can handle it!")
              
                                  if 'Compliance' in finding.keys() and 'Status' in finding['Compliance'].keys():
                                      print("Compliance Status: %s" % finding['Compliance']['Status'])
              
                                      # We can skip compliant findings, and evaluate only the non-compliant ones.                        
                                      if finding['Compliance']['Status'] == 'PASSED':
                                          continue
              
                                      # For each non-compliant finding, we need to get specific pieces of information so as to create the correct log group and update the CloudTrail trail.                        
                                      for resource in finding['Resources']:
                                          resource_id = resource['Id']
                                          cloudtrail_name = resource['Details']['Other']['name']
                                          loggroup_name = 'CloudTrail/' + cloudtrail_name
                                          print("ResourceId for the finding is %s" % resource_id)
                                          print("LogGroup name: %s" % loggroup_name)
              
                                          # At this point, we can create the log group using the name extracted from the finding.
                                          try:
                                              response_logs = cloudwatchlogs_client.create_log_group(logGroupName=loggroup_name)
                                          except Exception as e:
                                              print("Exception: %s" % str(e))
              
                                          # For updating the CloudTrail trail, we need to have the ARN of the log group. Let's retrieve it now.                            
                                          response_logsARN = cloudwatchlogs_client.describe_log_groups(logGroupNamePrefix = loggroup_name)
                                          print("LogGroup ARN: %s" % response_logsARN['logGroups'][0]['arn'])
                                          print("The role used by CloudTrail is: %s" % role_details['Role']['Arn'])
              
                                          # Finally, let's update the CloudTrail trail so that it sends logs to the new log group created.
                                          try:
                                              response_cloudtrail = cloudtrail_client.update_trail(
                                                  Name=cloudtrail_name,
                                                  CloudWatchLogsLogGroupArn = response_logsARN['logGroups'][0]['arn'],
                                                  CloudWatchLogsRoleArn = role_details['Role']['Arn']
                                              )
                                          except Exception as e:
                                              print("Exception: %s" % str(e))
                              else:
                                  print("Title: %s" % finding['Title'])
                                  print("This type of finding cannot be handled by this function. Skipping it…")
                          else:
                              print("This finding doesn't have a title and so cannot be handled by this function. Skipping it…")
                  else:
                      print("There are no findings to remediate.")            
              

    8. After pasting the code, replace <CloudTrail_CloudWatchLogs_Role> with the name of your CloudTrail role and select Save to save your Lambda function.
       
      Figure 7: Editing Lambda code to replace the correct CloudTrail role

  3. Go to your CloudWatch console and select Rules in the navigation pane on the left.
    1. From the list of CloudWatch rules that you see, select the rule that you created in Step 1 of this solution deployment.
    2. Then, select Actions on the top right of the page and choose Edit.
    3. On the Step 1: Create rule page, under Targets, choose Lambda function and select the Lambda function you created in Step 2.
    4. Select Configure details.
    5. On the Step 2: Configure rule details page, select Update rule.
       
      Figure 8: Adding your created Lambda function as target for the CloudWatch rule

  4. Configuration is now complete, and you can test your rule. Go to your AWS Security Hub console and select Compliance standards in the navigation pane.
    1. Next, select CIS AWS Foundations.
       
      Figure 9: Compliance standards page in the Security Hub console

    2. Search for the rule 2.4 Ensure CloudTrail trails are integrated with CloudWatch Logs and select it.
       
      Figure 10: Locating CIS check 2.4 in the Security Hub console

    3. If you’ve left the default AWS Security Hub CIS checks enabled (along with the AWS Config service in the same region), and you have CloudTrail trails in that region that aren’t yet configured to deliver events to CloudWatch Logs, you should see a low-severity finding with a Failed compliance status.
    4. Select the failed finding by selecting the checkbox and choosing the Actions button.
    5. Finally, from the dropdown menu, select the custom action that you created in Step 1 to send the finding to CloudWatch Events. CloudWatch Events will send the finding to your Lambda function, which you configured as the target for the rule in step 3. The Lambda function will automatically identify the affected CloudTrail trail and configure a CloudWatch Logs log group for you. The log group will have the same name as your trail for identification purposes. You can modify the code further to suit your needs. (A sketch of the event pattern behind this flow appears after figure 11.)

    Note: There may be a delay before the compliance status of the remediated resource changes. Once the CIS AWS Foundations Standard is enabled, Security Hub will run the checks within 2 hours. After that, the checks are automatically run once every 24 hours.

     

    Figure 11: Findings generated against CIS check 2.4 in the Security Hub console
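
    For reference, when a finding is sent through a Security Hub custom action, the event arrives in CloudWatch Events with the source aws.securityhub and the detail-type Security Hub Findings - Custom Action. The following boto3 sketch shows one way to create a matching rule programmatically; the rule name and custom action ARN are hypothetical placeholders, not values from this solution.

        import json, boto3

        events_client = boto3.client('events')

        # Placeholder rule name and custom action ARN -- substitute your own values.
        events_client.put_rule(
            Name='SecurityHubCustomActionRule',
            EventPattern=json.dumps({
                'source': ['aws.securityhub'],
                'detail-type': ['Security Hub Findings - Custom Action'],
                # Optionally scope the rule to a single custom action by its ARN.
                'resources': ['arn:aws:securityhub:us-east-1:111122223333:action/custom/CIS24Remediation']
            }),
            State='ENABLED'
        )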

    8. Customize your insights using the default “managed insights” as templates and use them to prioritize resources and findings to act upon

    A Security Hub “insight” is a collection of related findings to which one or more Security Hub filters have been applied. Insights can help you organize your findings and identify security risks that need immediate attention.

    Security Hub offers several managed (default) insights. You can use these as templates for new insights and modify them depending on your use case. You can save these modified queries as new custom insights to gain even greater visibility into your AWS accounts. Refer to the documentation for step-by-step instructions on how to create custom insights.
     

    Figure 12: Creating a Security Hub custom insight
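
    If you manage Security Hub programmatically, the same capability is available through the API. Here's a minimal boto3 sketch that creates a hypothetical custom insight grouping active, failed compliance findings by account; the insight name and filter values are illustrative, not prescriptive.

        import boto3

        securityhub = boto3.client('securityhub')

        # Hypothetical custom insight: failed checks grouped by AWS account.
        response = securityhub.create_insight(
            Name='Failed compliance checks by account',
            Filters={
                'ComplianceStatus': [{'Value': 'FAILED', 'Comparison': 'EQUALS'}],
                'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}]
            },
            GroupByAttribute='AwsAccountId'
        )
        print('Created insight: %s' % response['InsightArn'])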

    9. Use the free trial to evaluate what your costs could be

    Security Hub provides a 30-day free trial for all AWS accounts and regions. The trial is a good way to evaluate how much Security Hub will cost, on average, to monitor threats and compliance in your environments. You can view an estimate by navigating from the Security Hub console to Settings, then Usage (see Figure 13).
     

    Figure 13: Estimating your Security Hub costs

    Conclusion

    AWS Security Hub allows you to have more visibility into the security and compliance status of your AWS environments. Using the Security Hub best practices discussed here, security teams can spend more time on incident remediation and recovery rather than incident detection and organization. Security Hub has undergone HIPAA, ISO, PCI, and SOC certification. To learn more about Security Hub, refer to the AWS Security Hub documentation.

    If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

    Want more AWS Security news? Follow us on Twitter.

    Author

    Ketan Srivastava

    Ketan is a Cloud Support Engineer at AWS. He enjoys the fact that, at AWS, there are always so many opportunities to build things better for our customers and learn from these opportunities. Outside of work, he plays MOBAs and travels to new places with his wife. He holds a Master of Science degree from Rochester Institute of Technology.

How to visualize Amazon GuardDuty findings: serverless edition

Post Syndicated from Ben Romano original https://aws.amazon.com/blogs/security/how-to-visualize-amazon-guardduty-findings-serverless-edition/

Note: This blog provides an alternate solution to Visualizing Amazon GuardDuty Findings, in which the authors describe how to build an Amazon Elasticsearch Service-powered Kibana dashboard to ingest and visualize Amazon GuardDuty findings.

Amazon GuardDuty is a managed threat detection service powered by machine learning that can monitor your AWS environment with just a few clicks. GuardDuty can identify threats such as unusual API calls or potentially unauthorized users attempting to access your servers. Many customers also like to visualize their findings in order to generate additional meaningful insights. For example, you might track resources affected by security threats to see how they evolve over time.

In this post, we provide a solution to ingest, process, and visualize your GuardDuty finding logs in a completely serverless fashion. Serverless applications automatically run and scale in response to events you define, rather than requiring you to provision, scale, and manage servers. Our solution covers how to build a pipeline that ingests findings into Amazon Simple Storage Service (Amazon S3), transforms their nested JSON structure into tabular form using Amazon Athena and AWS Glue, and creates visualizations using Amazon QuickSight. We aim to provide both an easy-to-implement and cost-effective solution for consuming and analyzing your GuardDuty findings, and to more generally showcase a repeatable example for processing and visualizing many types of complex JSON logs.

Many customers already maintain centralized logging solutions using Amazon Elasticsearch Service (Amazon ES). If you want to incorporate GuardDuty findings with an existing solution, we recommend referencing this blog post to get started. If you don’t have an existing solution or previous experience with Amazon ES, if you prefer to use serverless technologies, or if you’re familiar with more traditional business intelligence tools, read on!

Before you get started

To follow along with this post, you’ll need to enable GuardDuty in order to start generating findings. See Setting Up Amazon GuardDuty for details if you haven’t already done so. Once enabled, GuardDuty will automatically generate findings as events occur. If you have public-facing compute resources in the same region in which you’ve enabled GuardDuty, you may soon find that they are being scanned quite often. All the more reason to continue reading!

You’ll also need Amazon QuickSight enabled in your account for the visualization sections of this post. You can find instructions in Setting Up Amazon QuickSight.

Architecture from end to end

 

Figure 1: Complete architecture from findings to visualization

Figure 1 highlights the solution architecture, from finding generation all the way through final visualization. The steps are as follows:

  1. Deliver GuardDuty findings to Amazon CloudWatch Events
  2. Push GuardDuty Events to S3 using Amazon Kinesis Data Firehose
  3. Use AWS Lambda to reorganize S3 folder structure
  4. Catalog your GuardDuty findings using AWS Glue
  5. Configure Views with Amazon Athena
  6. Build a GuardDuty findings dashboard in Amazon QuickSight

Below, we’ve included an AWS CloudFormation template to launch a complete ingest pipeline (Steps 1-4) so that we can focus this post on the steps dedicated to building the actual visualizations (Steps 5-6). We cover steps 1-4 briefly in the next section to provide context, and we provide links to the pertinent pages in the documentation for those of you interested in building your own pipeline.
 
Select this image to open a link that starts building the CloudFormation stack

Ingest (Steps 1-4): Get Amazon GuardDuty findings into Amazon S3 and AWS Glue Data Catalog

 

Figure 2: In this section, we’ll cover the services highlighted in blue

Step 1: Deliver GuardDuty findings to Amazon CloudWatch Events

GuardDuty integrates with Amazon CloudWatch Events and can deliver findings to it. To perform this step manually, follow the instructions in Creating a CloudWatch Events Rule and Target for GuardDuty.
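
If you'd rather script this step than use the console, a minimal boto3 sketch might look like the following; the rule name is a hypothetical placeholder.

    import json, boto3

    events_client = boto3.client('events')

    # Placeholder rule name; the pattern matches every GuardDuty finding event.
    events_client.put_rule(
        Name='GuardDutyFindingsRule',
        EventPattern=json.dumps({
            'source': ['aws.guardduty'],
            'detail-type': ['GuardDuty Finding']
        }),
        State='ENABLED'
    )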

Step 2: Push GuardDuty events to Amazon S3 using Kinesis Data Firehose

Amazon CloudWatch Events can write to a Kinesis Data Firehose delivery stream to store your GuardDuty events in S3, where you can use AWS Lambda, AWS Glue, and Amazon Athena to build the queries you’ll need to visualize the data. You can create your own delivery stream by following the instructions in Creating a Kinesis Data Firehose Delivery Stream and then adding it as a target for CloudWatch Events.
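
With the delivery stream in place, you attach it to the rule as a target. The sketch below assumes the rule from step 1, plus a delivery stream and an IAM role that permits CloudWatch Events to write to Firehose; every name and ARN shown is a placeholder.

    import boto3

    events_client = boto3.client('events')

    # Placeholder ARNs -- substitute your delivery stream and CloudWatch Events role.
    events_client.put_targets(
        Rule='GuardDutyFindingsRule',
        Targets=[{
            'Id': 'GuardDutyToFirehose',
            'Arn': 'arn:aws:firehose:us-east-1:111122223333:deliverystream/guardduty-findings',
            'RoleArn': 'arn:aws:iam::111122223333:role/CWEtoFirehoseRole'
        }]
    )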

Step 3: Use AWS Lambda to reorganize Amazon S3 folder structure

Kinesis Data Firehose will automatically create a datetime-based file hierarchy to organize the findings as they come in. Due to the variability of the GuardDuty finding types, we recommend reorganizing the file hierarchy with a folder for each finding type, with separate datetime subfolders for each. This will make it easier to target findings that you want to focus on in your visualization. The provided AWS CloudFormation template utilizes an AWS Lambda function to rewrite the files in a new hierarchy as new files are written to S3. You can use the code provided in it along with Using AWS Lambda with S3 to trigger your own function that reorganizes the data. Once the Lambda function has run, the S3 bucket structure should look similar to the structure we show in figure 3.
 

Figure 3: Sample S3 bucket structure
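
The CloudFormation template ships its own function for this step, but to make the idea concrete, here's a pared-down sketch of a handler that copies each new object under a per-finding-type prefix. It assumes one finding per object (Firehose may batch several) and that the S3 trigger is scoped to the raw Firehose prefix so the function doesn't re-trigger on its own writes.

    import json, boto3

    s3 = boto3.client('s3')

    def lambda_handler(event, context):
        # Invoked by S3 PUT notifications from the Firehose bucket.
        for record in event['Records']:
            bucket = record['s3']['bucket']['name']
            key = record['s3']['object']['key']

            # Read the finding and derive a folder name from its type,
            # e.g. Recon:EC2/PortProbeUnprotectedPort -> recon_ec2_portprobeunprotectedport
            body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
            finding = json.loads(body)
            folder = finding['detail']['type'].replace(':', '_').replace('/', '_').lower()

            # Rewrite the object under the finding-type prefix, keeping the datetime path.
            new_key = '%s/%s' % (folder, key)
            s3.copy_object(Bucket=bucket, CopySource={'Bucket': bucket, 'Key': key}, Key=new_key)
            s3.delete_object(Bucket=bucket, Key=key)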

Step 4: Catalog the GuardDuty findings using AWS Glue

With the reorganized findings stored in S3, use an AWS Glue crawler to scan and catalog each finding type. The CloudFormation template we provided schedules the crawler to run once a day. You can also run it on demand as needed. To build your own crawler, refer to Cataloging Tables with a Crawler. Assuming GuardDuty has generated findings in your account, you can navigate to the GuardDuty findings database in the AWS Glue Data Catalog. It should look something like figure 4:
 

Figure 4: List of finding type tables in the AWS Glue Catalog

Note: Because AWS Glue crawlers will attempt to combine similar data into one table, you might need to generate sample findings to ensure enough variability for each finding type to have its own table. If you only intend to build your dashboard from a small subset of finding types, you can opt to just edit the crawler to have multiple data sources and specify the folder path for each desired finding type.
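
Both suggestions in the note above can be scripted. The following sketch generates sample findings for a couple of types and then runs the crawler on demand; it assumes a single GuardDuty detector in the region, and the crawler name is a placeholder for the one the CloudFormation template created.

    import boto3

    guardduty = boto3.client('guardduty')
    glue = boto3.client('glue')

    # Assumes one detector in this region.
    detector_id = guardduty.list_detectors()['DetectorIds'][0]

    # Create enough variability for the crawler to build per-type tables.
    guardduty.create_sample_findings(
        DetectorId=detector_id,
        FindingTypes=[
            'Recon:EC2/PortProbeUnprotectedPort',
            'UnauthorizedAccess:EC2/SSHBruteForce'
        ]
    )

    # Placeholder crawler name -- use the crawler from the CloudFormation template.
    glue.start_crawler(Name='guardduty-findings-crawler')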

Explore the table structure

Before moving on to the next step, take some time to explore the schema structure of the tables. Selecting one of the tables will bring you to a page that looks like what’s shown in figure 5.
 

Figure 5: Schema information for a single finding table

You should see that most of the columns contain basic information about each finding, but there’s a column named detail that is of type struct. Select it to expand, as shown in figure 6.
 

Figure 6: The “detail” column expanded

Ah, this is where the interesting information is tucked away! The tables for each finding may differ slightly, but in all cases the detail column will hold the bulk of the information you’ll want to visualize. See GuardDuty Active Finding Types for information on what you should expect to find in the logs for each finding type. In the next step, we’ll focus on unpacking detail to prepare it for visualization!

Process (Step 5): Unpack nested JSON and configure views with Amazon Athena

 

Figure 7: In this section, we’ll cover the services highlighted in blue

Note: This step picks up where the CloudFormation template finishes.

Explore the table structure (again) in the Amazon Athena console

Begin by navigating to Athena from the AWS Management Console. Once there, you should see a drop-down menu with a list of databases. These are the same databases that are available in the AWS Glue Data Catalog. Choose the database with your GuardDuty findings and expand a table.
 

Figure 8: Expanded table in the Athena console

This should look very similar to the table information you explored in step 4, including the detail struct!

You’ll need a method to unpack the struct in order to effectively visualize the data. There are many methods and tools to approach this problem. One that we recommend (and will show) is to use SQL queries within Athena to construct tabular views. This approach will allow you to push the bulk of the processing work to Athena. It will also allow you to simplify building visualizations when using Amazon QuickSight by providing a more conventional tabular format.

Extract details for use in visualization using SQL

The following examples contain SQL statements that provide everything necessary to extract the relevant fields from the detail struct of the Recon:EC2/PortProbeUnprotectedPort finding to build the Amazon QuickSight dashboard we showcase in the next section. The examples also cover most of the operations you’ll need to work with the elements found in GuardDuty findings (such as deeply nested data with lists), and they serve as a good starting point for constructing your own custom queries. In general, you’ll want to traverse the nested layers (for example, root.detail.service.count) and use the UNNEST function to create new records for each item in an embedded list that you want to target. See this blog for even more examples of constructing queries on complex JSON data using Amazon Athena.

Simply copy the SQL statements that you want into the Athena query field to build the port_probe_geo and affected_instances views.

Note: If your account has yet to generate Recon:EC2/PortProbeUnprotectedPort findings, you can generate sample findings to follow along.


CREATE OR REPLACE VIEW "port_probe_geo" AS

WITH getportdetails AS (
    SELECT id, portdetails
    FROM by_finding_type
    CROSS JOIN UNNEST(detail.service.action.portProbeAction.portProbeDetails) WITH ORDINALITY AS p (portdetails, portdetailsindex)
)

SELECT 
    root.id AS id,
    root.region AS region,
    root.time AS time,
    root.detail.type AS type,
    root.detail.service.count AS count, 
    portdetails.localportdetails.port AS localport, 
    portdetails.localportdetails.portname AS localportname, 
    portdetails.remoteipdetails.geolocation.lon AS longitude, 
    portdetails.remoteipdetails.geolocation.lat AS latitude, 
    portdetails.remoteipdetails.country.countryname AS country, 
    portdetails.remoteipdetails.city.cityname AS city 

FROM 
    by_finding_type  as root, getPortDetails
    
WHERE 
    root.id = getportdetails.id

CREATE OR REPLACE VIEW "affected_instances" AS

SELECT 
    max(root.detail.service.count) AS count,
    date_parse(root.time,'%Y-%m-%dT%H:%i:%sZ') as time,
    root.detail.resource.instancedetails.instanceid

FROM 
    recon_ec2_portprobeunprotectedport  AS root

GROUP BY  
    root.detail.resource.instancedetails.instanceid, 
    time

Visualize (Step 6): Build a GuardDuty findings dashboard in Amazon QuickSight

 

Figure 9: In this section we will cover the services highlighted in blue

Now that you’ve created tabular views using Athena, you can jump into Amazon QuickSight from the AWS Management Console and begin visualizing! If you haven’t already done so, enable Amazon QuickSight in your account by following the instructions for Setting Up Amazon QuickSight.

For this example, we’ll leverage the port_probe_geo view to build a geographic visualization and see the locations from which nefarious actors are launching port probes.

Creating an analysis

In the upper left-hand corner of the Amazon QuickSight console, select New analysis and then New data set.
 

Figure 10: Create a new analysis

To utilize the views you built in the previous step, select Athena as the data source. Give your data source a name (in our example, we use “port probe geo”), and select the database that contains the views you created in the previous section. Then select Visualize.
 

Figure 11: Available data sources in Amazon QuickSight. Be sure to choose Athena!

 

Figure 12: Select the “port_probe_geo” view you created in step 5

Viz time!

From the Visual types menu in the bottom left corner, select the globe icon to create a map. Then select the latitude and longitude geospatial coordinates. Choose count (with a max aggregation) for size. Finally, select localportname to break the data down by color.
 

Figure 13: A visual containing a map of port probe scans in Amazon QuickSight

Voila! A detailed map of your environment’s attackers!

Build out a dashboard

Once you like how everything looks, you can move on to adding more visuals to create a full monitoring dashboard.

To add another visual to the analysis, select Add and then Add visual.
 

Figure 14: Add another visual using the ‘Add’ option from the Amazon QuickSight menu bar

If the new visual will use the same dataset, then you can immediately start selecting fields to build it. If you want to create a visual from a different data set (our example dashboard below adds the affected_instances view), follow the Creating Data Sets guide to add a new data set. Then return to the current analysis and associate the data set with the analysis by selecting the pencil icon shown below and selecting Add data set.
 

Figure 15: Adding a new data set to your Amazon QuickSight analysis

Repeat this process until you’ve built out everything you need in your monitoring dashboard. Once it’s completed, you can publish the dashboard by selecting Share and then Publish dashboard.
 

Figure 16: Publish your dashboard using the “Share” option of the Amazon QuickSight menu

Here’s an example of a dashboard we created using the port_probe_geo and affected_instances views:
 

Figure 17: An example dashboard created using the “port_probe_geo” and “affected_instances” views

What does something like this cost?

To get an idea of the scale of the cost, we’ve provided a small pricing example (accurate as of the writing of this blog) that assumes 10,000 GuardDuty findings per month with an average payload size of 5KB.

Service | Pricing structure | Amount consumed | Total cost
Amazon CloudWatch Events | $1 per million events | 10,000 events | $0.01
Amazon Kinesis Data Firehose | $0.029 per GB ingested | 0.05 GB ingested | $0.00145
Amazon S3 | $0.023 per GB stored per month | 0.1 GB stored | $0.0023
AWS Lambda | First million invocations free | ~200 invocations | $0
Amazon Athena | $5 per TB scanned | 0.003 TB scanned (assumes 2 full data scans per day to refresh views) | $0.015
AWS Glue | $0.44 per DPU-hour (2 DPU minimum and 10 minute minimum) = $0.15 per crawler run | 30 crawler runs | $4.50
Total processing cost | | | $4.53

Oh, the joys of a consumption-based model: Less than five dollars per month for all of that processing!

From here, all that remains are your visualization costs using Amazon QuickSight. This pricing is highly dependent upon your number of users and their respective usage patterns. See the Amazon QuickSight pricing page for more specific details.

Summary

In this post, we demonstrated how you can ingest your GuardDuty findings into S3, process them with AWS Glue and Amazon Athena, and visualize with Amazon QuickSight. All serverless! Each portion of what we showed can be used in tandem or on its own for this or many other data sets. Go launch the template and get started monitoring your AWS environment!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Romano

Ben is a Solutions Architect in AWS supporting customers in their journey to the cloud with a focus on big data solutions. Ben loves to delight customers by diving deep on AWS technologies and helping them achieve their business and technology objectives.

Author

Jimmy Boyle

Jimmy is a Solutions Architect in AWS with a background in software development. He enjoys working with all things serverless because he doesn’t have to maintain infrastructure. Jimmy enjoys delighting customers to drive their business forward and design solutions that will scale as their business grows.

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than we ever have before. There were over 50,000 people in attendance spread across the Las Vegas strip, over 2,000 breakout sessions, and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. Get started with AWS Security Hub with just a few clicks in the Management Console and once enabled, Security Hub will begin aggregating and prioritizing findings. You can enable Security Hub on a single account with one click in the AWS Security Hub console or a single API call.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview, here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation, here.
  • Next up, AWS IoT Greengrass enables enhanced security through hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware root-of-trust-level security to existing AWS IoT Greengrass security features that include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified of when registration opens.

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud Security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, WAF, S3, AWS CloudTrail, and AWS Config to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

Visualizing Amazon GuardDuty findings

Post Syndicated from Mike Fortuna original https://aws.amazon.com/blogs/security/visualizing-amazon-guardduty-findings/

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help protect your AWS accounts and workloads. Enable GuardDuty and it begins monitoring for:

  • Anomalous API activity
  • Potentially unauthorized deployments and compromised instances
  • Reconnaissance by attackers

GuardDuty analyzes and processes VPC flow log, AWS CloudTrail event log, and DNS log data sources. You don’t need to manually manage these data sources because the data is automatically leveraged and analyzed when you activate GuardDuty. For example, GuardDuty consumes VPC Flow Log events directly from the VPC Flow Logs feature through an independent and duplicative stream of flow logs. As a result, you don’t incur any operational burden on existing workloads.

GuardDuty helps find potential threats in your AWS environment by producing security findings that you can view in the GuardDuty console or consume through Amazon CloudWatch Events, which is a service that makes alerts actionable and easier to integrate into existing event management and workflow systems. One common question we hear from customers is “how do I visualize these findings to generate meaningful insights?” In this post, we’re going to show you how to create a dashboard that includes visualizations like this:
 

Figure 1: Example visualization

You’ll learn how you can use AWS services to create a pipeline for your GuardDuty findings so you can log and visualize them. The services include:

  • Amazon CloudWatch Events
  • Amazon Kinesis Data Firehose
  • Amazon Simple Storage Service (S3)
  • Amazon Elasticsearch Service, with its built-in Kibana plugin
  • Amazon Simple Notification Service (SNS)
  • Amazon Cognito

Architecture

The architectural diagram below illustrates the pipeline we’ll create.
 

Figure 2: Architectural diagram

We’ll walk through the data flow to explain the architecture and highlight the additional customizations available to you.

  1. Amazon GuardDuty is enabled in an account and begins monitoring CloudTrail logs, VPC flow logs, and DNS query logs. If a threat is detected, GuardDuty forwards a finding to CloudWatch Events. For a newly generated finding, GuardDuty sends a notification based on its CloudWatch event within 5 minutes of the finding. CloudWatch Events allows you to send upstream notifications to various services filtered on your configured event patterns. We’ll configure an event pattern that only forwards events coming from the GuardDuty service.
  2. We define two targets in our CloudWatch Event Rule. The first target is a Kinesis Data Firehose stream for delivery into an Elasticsearch domain and an S3 bucket. The second target is an SNS Topic for email/SMS notification of findings. We’ll send all findings to our targets; however, you can filter and format the findings you send by using a Lambda function (or by event pattern matching with a CloudWatch Event Rule). For example, you could send only high-severity alarms (that is, findings with detail.severity > 7); a sketch of such a filter follows this list.
  3. The Firehose stream delivers findings to Amazon Elasticsearch, which provides visualization and analysis for our event findings. The stream also delivers findings to an S3 bucket. The S3 bucket is used for long term archiving. This data can augment your data lake and you can use services such as Amazon Athena to perform advanced analytics.
  4. We’ll search, explore, and visualize the GuardDuty findings using Kibana and the Elasticsearch query Domain Specific Language (DSL) to gain valuable insights. Amazon Elasticsearch has a built-in Kibana plugin to visualize the data and perform operational analyses.
  5. To provide a simplified and secure authentication method, we provide user authentication to Kibana with Amazon Cognito User Pools. This method provides improved security compared with traditional IP whitelists or proxy infrastructure.
  6. Our second CloudWatch Event target is SNS, which has subscribed email endpoint(s) that allow your operations teams to receive email (or SMS messages) when a new GuardDuty Event is received.
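
To make the filtering idea from step 2 concrete, here's a hedged sketch of a Lambda target that drops everything below high severity and forwards a one-line summary to SNS; the topic ARN is a placeholder, and the cutoff follows the detail.severity > 7 example above.

    import boto3

    sns = boto3.client('sns')

    # Placeholder topic ARN -- substitute the topic created for this pipeline.
    TOPIC_ARN = 'arn:aws:sns:us-east-1:111122223333:guardduty-alerts'

    def lambda_handler(event, context):
        detail = event.get('detail', {})

        # Forward only high-severity findings (severity > 7).
        if detail.get('severity', 0) <= 7:
            return

        message = '%s | severity %s | %s' % (
            detail.get('type'), detail.get('severity'), detail.get('title'))
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject='GuardDuty high-severity finding',
            Message=message
        )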

If you would like to centralize your findings from multiple regions into a single S3 bucket, you can adapt this pipeline. You would deploy the frontend of the pipeline by configuring Kinesis Firehose in the remote regions to point to the S3 bucket in the centralized region. You can leverage prefixes in the Kinesis Firehose configuration to identify the source region. For example, you would configure a prefix of us-west-1 for events originating from the us-west-1 region. Analytic queries from tools such as Athena can then selectively target the desired region.
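
As a sketch of that adaptation, the following creates a delivery stream in a remote region that writes into the central bucket under a region prefix; the bucket, role, and stream names are all placeholders.

    import boto3

    # Run in the remote region, for example us-west-1.
    firehose = boto3.client('firehose', region_name='us-west-1')

    firehose.create_delivery_stream(
        DeliveryStreamName='guardduty-findings-us-west-1',
        DeliveryStreamType='DirectPut',
        ExtendedS3DestinationConfiguration={
            # The central bucket lives in the aggregation region.
            'BucketARN': 'arn:aws:s3:::central-guardduty-findings',
            'RoleArn': 'arn:aws:iam::111122223333:role/FirehoseDeliveryRole',
            # The region prefix lets Athena queries target a single source region.
            'Prefix': 'us-west-1/'
        }
    )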

Deployment Steps

This CloudFormation template will install the pipeline and components required for GuardDuty visualization:
 
Select button to launch stack

When you start the stack creation process you will be prompted for the following information:

  • Stack name — This is the name of the stack you will create
  • EmailAddress — This email address is used to create a username in Cognito and a subscriber to the SNS topic.
  • ESDomainName — This will be the name given to the Elasticsearch Domain.
  • IndexName — This will be the Index created by Firehose to load data into Elasticsearch.

 

Figure 3: The “Create stack” interface

Once the infrastructure is installed, you’ll follow two main steps, each of which is described in detail later:

  1. Add Cognito authentication to Kibana, which is hosted on the Elasticsearch domain. At the time of writing, this can’t be done natively in CloudFormation. We’ll also confirm the SNS subscription so we can start to receive GuardDuty Findings via email.
  2. Configure Kibana with the index, the appropriate scripted fields, and the dashboard to provide the visualizations. We’ll also enable GuardDuty to start monitoring your account and send sample findings to test the pipeline.

Step 1: Enable Cognito authentication in Kibana

To enable user authentication to your dashboards hosted in Kibana, you need to enable the integration from the Elasticsearch domain that was created by the CloudFormation template.

  1. Open the AWS Console and select the Cognito service. Select Manage User Pools to access the User Pool that was created by CloudFormation. Select the user pool beginning with the name VisualizeGuardDutyUserPool and, under the App Integration menu item, select Domain name.
     
    Figure 4: The “Domain name” interface

  2. You need to create a unique domain prefix to allow Kibana to authenticate using Cognito. Enter a unique domain prefix (it can only contain lowercase letters, numbers, and hyphens). After entering the prefix, select the Check Availability button to ensure it’s available in the region. If it’s available, select the Save Changes button.
  3. From the AWS Console, select the CloudFormation service.
  4. Select the template you created for your pipeline, select the Outputs tab, and then, under Value, copy the value of ESCognitoRole. You’ll use this role when you enable Cognito authentication of Elasticsearch.
     
    Figure 5: The “Outputs” tab and the “ESCognitoRole” key

  5. Next, browse to the Elasticsearch Service, select the domain you created from the CloudFormation template, and select the Configure cluster button:
     
    Figure 6: The “Configure cluster” button

  6. Under the Kibana authentication section, select the Enable Amazon Cognito for authentication checkbox. You’ll be presented with several fields you need to configure, including: Cognito User Pool (the name of the user pool should start with VisualizeGuardDutyUserPool), Cognito Identity Pool (the name of the identity pool should start with VisualizeGuardDutyIDPool), and IAM Role Name (this was copied in step 4 earlier). A Cognito User Pool is a user directory in Amazon Cognito; we use this to create a user account that provides authentication to Kibana. Amazon Cognito Identity Pools (federated identities) enable you to create unique identities for your users and federate them with identity providers. The Cognito Identity Pool in our case is used to provide federated access to Kibana. After you’ve provided values for these fields, select the Submit button.
     
    Figure 7: The “Kibana authentication” interface

  7. The cluster reconfiguration will take several minutes to complete processing. When you see Domain status as Active, you can proceed.
  8. Finally, confirm the subscription email you received from SNS. Look for an email from: AWS Notifications <[email protected]>, open the message and select Confirm subscription to allow SNS to send you email when the SNS Topic receives a notification for new GuardDuty findings.

Step 2: Set up the Kibana dashboard and enable GuardDuty

Now, you can set up the Kibana dashboard with custom visualizations.

  1. Open the CloudFormation service page and select the stack you created earlier.
  2. Under the Outputs section, copy the Kibana URL.
     
    Figure 8: Copy the Kibana URL

  3. Paste the Kibana URL in a new browser window.
  4. Check your email client. You should have an email containing the temporary password from Cognito. Copy the temporary password and use it to log in to Cognito. If you haven’t received the email, check your email junk folder. You can also create additional users in the Cognito User Pool that was created from the CloudFormation Stack to provide additional users Kibana access.
     
    Figure 9: Example email with temporary password

  5. At the login prompt, enter the email address and password for the Cognito user the CloudFormation template created. A prompt to change your password will appear. Change your password to proceed. The Cognito User Pool requires: upper case letters, lower case letters, special characters, and numbers with a minimum length of 8 characters.
  6. It’s time to add mapping information to your index to instruct Kibana that some of the fields are delivered as geopoints. This allows these fields to be properly visualized with a Coordinate Map. Select Dev Tools in the menu on the left side:

    Figure 10: Select “Dev tools”

  7. Paste the following API call in the text box to provide the appropriate mappings for the networkConnectionAction & portProbeAction geolocation fields. This calls the Elasticsearch API and updates the geolocation mapping for those fields:
    
    PUT _template/gdt
    {
      "template": "gdt*",
      "settings": {},
      "mappings": {
        "_default_": {
          "properties": {
            "detail.service.action.portProbeAction.portProbeDetails.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            },
            "detail.service.action.networkConnectionAction.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            }        
          }
        }
      }
    }
    

  8. After you paste the API call, be sure to remove any whitespace after the ending brace. You can then select the green arrow to execute it. You should receive a message that the call was successful.
     
    Figure 11: Paste the API call

  9. Next, enable GuardDuty and send sample findings so you can create the Kibana dashboard with data present. Find the GuardDuty service in the AWS Console and select the Get started button.
  10. From the Welcome to GuardDuty page, select the Enable GuardDuty button.
  11. Next, send some sample events. From the GuardDuty service, select Settings in the left-hand menu, and then select Generate sample findings as shown here:
     
    Figure 12: The “Generate sample findings” button

  12. Optionally, if you want to test with real GuardDuty findings, you can leverage the Amazon GuardDuty Tester. This AWS CloudFormation template creates an isolated environment with a bastion host, a tester EC2 instance, and two target EC2 instances to simulate five types of common attacks that GuardDuty is built to detect, notifying you with generated findings. Once deployed, you would use the tester EC2 instance to execute a shell script that generates GuardDuty findings. Additional detail about this option can be found in the GuardDuty documentation.
  13. On the Kibana landing page, in the menu on the left side, create the Index by selecting Management.
  14. On the Management page, select Index Patterns.
  15. On the Create index pattern page, under Index patterns, enter gdt-* (if you used a different IndexName in the CloudFormation template, use that here), and then select Next Step.

    Note: It takes several minutes for the GuardDuty findings to generate a CloudWatch Event, work through the pipeline, and create the index in Elasticsearch. If the index doesn’t appear initially, please wait a few minutes and try again.

     

    Figure 13: The “Create index pattern” page

  16. Under Time Filter field name, select time from the drop-down list, and then select Create index pattern.
     
    Figure 14: The “Time Filter field name” list

Create scripted fields

With the Index defined, we will now create two scripted fields that your dashboard visualizations will use.

Define the severity level

  1. Select the Index you just created, and then select scripted fields.
     
    Figure 15: The “scripted fields” tab

  2. Select Add Scripted Field, and enter the following information:
    • Name — sevLevel
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — copy and paste this script into the text-entry field:
      
       // GuardDuty severity ranges: Low is 1.0-3.9, Medium is 4.0-6.9, High is 7.0 and above.
       if (doc['detail.severity'].value < 4.0) {
           return "Low";
       } else if (doc['detail.severity'].value < 7.0) {
           return "Medium";
       } else {
           return "High";
       }
      

  3. After entering the information, select Create Field.

The sevLevel field provides a value-to-level mapping as defined by GuardDuty Severity Levels. This allows you to visualize the severity levels in a more user-friendly format (High, Medium, and Low) instead of a cryptic numerical value. To generate sevLevel, we used Kibana painless scripting, which allows custom field creation.

Define the attack type

  1. Now create a second scripted field for typeCategory. The typeCategory field extracts the finding attack type. Enter the following information:
    • Name — typeCategory
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — Copy and paste this script into the text-entry field:
      
       // Return the portion of the finding type before the first colon,
       // for example "Recon" from "Recon:EC2/PortProbeUnprotectedPort".
       def path = doc['detail.type.keyword'].value;
       if (path != null) {
           int firstColon = path.indexOf(":");
           if (firstColon > 0) {
               return path.substring(0, firstColon);
           }
       }
       return "";
      

  2. After entering the information, select Create Field.

The typeCategory field is used to define the broad category “attack type.” The source field (detail.type.keyword) provides a lot of detailed information (for example: Recon:EC2/PortProbeUnprotectedPort), but we want to visualize the category of “attack type” in the high-level dashboard (that is, only Recon). We can still visualize on a more granular level, if necessary.

Create the Kibana Dashboard

  1. Create the Kibana dashboard by importing a JSON file containing its definition. To do this, download the Kibana dashboard and visualizations definition JSON file from here.
  2. Select Management in the menu on the left, and then select Saved Objects. On the right, select Import.
  3. Select the JSON file you downloaded and select Open. This imports the GuardDuty dashboard and visualizations. Select Yes, overwrite all objects.
  4. In the Index Pattern Conflicts section, under New index pattern, select gdt-*, and then select Confirm all changes.

Dashboard in action

  1. Select Dashboard in the menu on the left.
  2. Select the Guard Duty Summary link.

Your GuardDuty Dashboard will look like this:
 

Figure 16: The GuardDuty dashboard with callouts

The dashboard provides the following visualizations:

  1. This filter allows you to filter sample findings from real findings. If you generate sample findings from the GuardDuty AWS console, this filter allows you to remove the sample findings from the dashboard.
  2. The GuardDuty — Affected Instances chart shows which EC2 instances have associated findings. This visualization allows you to filter specific instances from display by selecting them in the graphic.
  3. The Guard Duty — Threat Type chart allows you to filter on the general attack type (inner circle) as well as the specific attack type (outer circle).
  4. The Guard Duty — Events Per Day graph allows you to visualize and filter on a specific time or date to show findings for that specific time, as well as search for temporal patterns in findings.
  5. GuardDuty — Top10 Findings provides a list of the top 10 findings by count.
  6. GuardDuty — Total Events provides the total number of events based on the criteria chosen. This value will change based on the filters defined.
  7. The GuardDuty — Heatmap — Port Probe Source Countries visualizes the countries where port probes are issued from. This is a Coordinate Map visualization that allows you to see the source and volume of the port probes targeting your instances.
  8. The GuardDuty — Network Connection Source Countries visualizes where brute force attacks are coming from. This is a Region Map visualization that allows you to highlight the country the brute force attacks are sourced from.
  9. GuardDuty — Severity Levels is a pie chart that show findings by severity levels (High, Med, Low), and you can filter by a specific level (that is, only show high-severity findings). This visualization uses the scripted field we created earlier for simplified visualization.
  10. The All-GuardDuty table includes the raw findings for all events. This provides complete raw event detail and the ability to filter at very granular levels.

In a previous blog, we saw how you can create a Kibana dashboard to visualize your network security posture by visualizing your VPC flow logs. This GuardDuty dashboard augments that dashboard. You can use a single Elasticsearch cluster to host both of these dashboards, in addition to other data sources you want to analyze and report on.

Conclusion

We’ve outlined an approach to rapidly build a pipeline to help you archive, analyze, and visualize your GuardDuty findings for rapid insight and actionable intelligence. You can extend this solution in a number of ways, including:

  • Modifying the alert email sent with a structured message (instead of raw JSON)
  • Adding additional visualizations, such as heatmaps or time-series charts
  • Extending the solution across AWS accounts or regions.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Fortuna

Michael is a Solutions Architect in AWS supporting enterprise customers and their journey to the cloud. Prior to his work on AWS and cloud technologies, Michael’s areas of focus included software-defined networking, security, collaboration, and virtualization technologies. He’s very excited to work as an SA because it allows him to dive deep on technology while helping customers.

Author

Ravi Sakaria

Ravi is a Senior Solutions Architect at AWS based in New York. He works with enterprise customers as they transform their business and journey to the cloud. He enjoys the culture of innovation at Amazon because it’s similar to his prior experiences building startup companies. Outside of work, Ravi enjoys spending time with his family, cooking, and watching the New Jersey Devils.

How to automate the import of third-party threat intelligence feeds into Amazon GuardDuty

Post Syndicated from Rajat Ravinder Varuni original https://aws.amazon.com/blogs/security/how-to-automate-import-third-party-threat-intelligence-feeds-into-amazon-guardduty/

Amazon GuardDuty is an AWS threat detection service that helps protect your AWS accounts and workloads by continuously monitoring them for malicious and unauthorized behavior. You can enable Amazon GuardDuty through the AWS Management Console with one click. It analyzes billions of events across your AWS accounts and uses machine learning to detect anomalies in account and workload activity. Then it references integrated threat intelligence feeds to identify suspected attackers. Within an AWS region, GuardDuty processes data from AWS CloudTrail Logs, Amazon Virtual Private Cloud (VPC) Flow Logs, and Domain Name System (DNS) Logs. All log data is encrypted in transit. GuardDuty extracts various fields from the logs for profiling and anomaly detection and then discards the logs. GuardDuty’s threat intelligence findings are based on ingested threat feeds from AWS threat intelligence and from third-party vendors CrowdStrike and Proofpoint.

However, beyond these built-in threat feeds, you have two ways to customize your protection. Customization is useful if you need to enforce industry-specific threat feeds, such as those for the financial services or healthcare spaces. The first customization option is to provide your own list of whitelisted IPs. The second is to generate findings based on third-party threat intelligence feeds that you own or have the rights to share and upload into GuardDuty. However, keeping a third-party threat list in GuardDuty up to date requires several manual steps (the boto3 sketch after the following list shows the API calls involved). You would need to:

  • Authorize administrator access
  • Download the list from a third-party provider
  • Upload the generated file to the service
  • Replace outdated threat feeds

In this post, we'll show you how to automate these steps when using a third-party feed. We'll use FireEye iSIGHT Threat Intelligence as an example of how to upload a feed you have licensed to GuardDuty, but this solution also works with other threat intelligence feeds. If you deploy this solution with the default parameters, it builds the following environment:
 

Figure 1: Diagram of the solution environment

The following resources are used in this solution:

  • An Amazon CloudWatch Events rule that periodically invokes an AWS Lambda function. By default, the rule invokes the function every six days, but you can change this frequency if you'd like.
  • An AWS Systems Manager Parameter Store that securely stores the public and private keys that you provide. These keys are required to download the threat feeds.
  • An AWS Lambda function that consists of a script that programmatically imports a licensed FireEye iSIGHT Threat Intelligence feed into Amazon GuardDuty.
  • An AWS Identity and Access Management (IAM) role that gives the Lambda function access to the following:
    1. GuardDuty, to list, create, obtain, and update threat lists.
    2. CloudWatch Logs, to monitor, store, and access log files generated by AWS Lambda.
    3. Amazon S3, to upload threat lists to Amazon S3 and ingest them into GuardDuty.
  • An Amazon Simple Storage Service (S3) bucket to store your threat lists. After the solution is deployed, the bucket is retained unless you delete it manually.
  • Amazon GuardDuty, which needs to be enabled in the same AWS region in which you want to deploy the solution.

    Note: It’s a security best practice to enable GuardDuty in all regions.
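Following that best practice, here is a minimal sketch that enables a detector in every Region visible to the account (Regions where GuardDuty isn't available will raise an error, which you'd want to handle in real use):

```python
import boto3

# Enable a GuardDuty detector in every Region that doesn't already have one.
ec2 = boto3.client("ec2")
for region in (r["RegionName"] for r in ec2.describe_regions()["Regions"]):
    gd = boto3.client("guardduty", region_name=region)
    if not gd.list_detectors()["DetectorIds"]:
        gd.create_detector(Enable=True)
        print(f"Enabled GuardDuty in {region}")
```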

Deploy the solution

Once you’ve taken care of the prerequisites, follow these steps:

  1. Select the Launch Stack button to launch a CloudFormation stack in your account. It takes approximately 5 minutes for the CloudFormation stack to complete:
     
    Select this image to open a link that starts building the CloudFormation stack
  2. Note: If you've invited other accounts to enable GuardDuty and become associated with your AWS account (such that you can view and manage their GuardDuty findings on their behalf), run this solution from the master account. Find more information on managing master and member GuardDuty accounts here. Running this solution from the master account ensures that GuardDuty reports findings from all the member accounts as well, using the imported threat list.

    The template will launch in the US East (N. Virginia) Region. To launch the solution in a different AWS Region, use the Region selector in the console navigation bar; this matters because GuardDuty is a Region-specific service.

    The code is available on GitHub.

  3. On the Select Template page, select Next.
  4. On the Specify Details page, give your solution stack a name.
  5. Under Parameters, review the default parameters for the template and modify the values, if you’d like.

    Parameter | Value | Description
    Public Key | <Requires input> | FireEye iSIGHT Threat Intelligence public key.
    Private Key | <Requires input> | FireEye iSIGHT Threat Intelligence private key.
    Days Requested | 7 | The maximum age (in days) of the threats you want to collect (min 1, max 30).
    Frequency | 6 | The number of days between executions, when the solution downloads a new threat feed (min 1, max 29).
  6. Select Next.
  7. On the Options page, you can specify tags (key-value pairs) for the resources in your stack, if you’d like, and then select Next.
  8. On the Review page, review and confirm the settings. Be sure to select the box acknowledging that the template will create AWS Identity and Access Management (IAM) resources with custom names.
  9. To deploy the stack, select Create.

After approximately 5 minutes, the stack creation should be complete. You can verify this on the Events tab:
 

Figure 2: Check the status of the stack creation on the “Events” tab

The Lambda function that updates your GuardDuty threat lists is invoked right after you provision the solution. It's also set to run periodically to keep your environment updated. However, in scenarios that require faster updates to your threat intelligence lists, such as the discovery of a new zero-day vulnerability, you can run the Lambda function manually rather than waiting for the next scheduled update. To run it manually, follow the steps described here to create and ingest the newly downloaded threat feeds into Amazon GuardDuty.
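If you'd rather script the manual run than use the console, here's a one-call sketch (the function name is hypothetical; use the name the stack actually created):

```python
import boto3

# Kick off an immediate threat-feed refresh instead of waiting for the
# scheduled CloudWatch Events rule. The function name is illustrative.
boto3.client("lambda").invoke(
    FunctionName="GuardDutyThreatFeedImporter",
    InvocationType="Event",  # asynchronous invocation
)
```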

Summary

We’ve described how to deploy an automated solution that downloads the latest threat intelligence feeds you have licensed from a third-party provider such as FireEye. This gives GuardDuty a substantial set of additional threat indicators to process and report findings on. Furthermore, as newer threat feeds are published by FireEye (or the threat intelligence provider of your choice), they are automatically ingested into GuardDuty.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

How to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hosts

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-use-amazon-guardduty-and-aws-web-application-firewall-to-automatically-block-suspicious-hosts/

When you’re implementing security measures across your AWS resources, you should use a holistic approach that incorporates controls across multiple areas. In the Cloud Adoption Framework (CAF) Security perspective whitepaper, we define these controls across four categories.

  • Directive controls. Establish the governance, risk, and compliance models the environment will operate within.
  • Preventive controls. Protect your workloads and mitigate threats and vulnerabilities.
  • Detective controls. Provide full visibility and transparency over the operation of your deployments in AWS.
  • Responsive controls. Drive remediation of potential deviations from your security baselines.

The use of security automation is also a key principle outlined in the whitepaper. It helps reduce operational overhead and creates repeatable, predictable approaches to monitoring and responding to events. You can take advantage of AWS services to build powerful solutions for the automated detection and remediation of threats against your AWS environments. For example, you can configure Amazon CloudWatch Events to invoke a Lambda action in response to suspicious or unexpected behavior in your AWS environment detected by Amazon GuardDuty. You can configure automated flows that use both detective and responsive controls and might also feed into preventive controls to help mitigate the threat in the future. Depending on the type of source event, you can automatically invoke specific actions, such as modifying access controls, terminating instances, or revoking credentials.

In this blog post, we’ll show you how to use Amazon GuardDuty to automatically update the AWS Web Application Firewall Web Access Control Lists (WebACLs) and VPC Network Access Control Lists (NACLs) in response to GuardDuty findings. After GuardDuty detects a suspicious activity, the solution updates these resources to block communication from the suspicious host while you perform additional investigation and remediation. Once communication has been blocked, further occurrences of a finding are reduced, allowing security and operations teams to focus more on higher priority tasks.

Amazon GuardDuty is a continuous security monitoring and threat detection service that incorporates threat intelligence, anomaly detection, and machine learning to help protect your AWS resources, including your AWS accounts. Amazon CloudWatch Events delivers a near-real-time stream of system events that describe changes in AWS resources. Amazon GuardDuty sends notifications based on Amazon CloudWatch Events when any change in the findings takes place. In the context of GuardDuty, such changes include newly generated findings and all subsequent occurrences of these existing findings. Using rules that you can quickly set up, you can match CloudWatch events and route them to one or more target actions. This solution routes matched events to AWS Lambda, which then performs updates to AWS Web Application Firewall (WAF) and VPC NACLs. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. It supports both managed rules as well as a powerful rule language for custom rules. A Network Access Control List (NACL) is an optional layer of security for your Amazon Virtual Private Cloud (VPC) that acts as a firewall for controlling traffic in and out of one or more subnets.
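To make that routing concrete, here's a minimal sketch, assuming the target Lambda function already exists (the rule name and target ARN are illustrative), of wiring GuardDuty findings to a Lambda target with boto3:

```python
import json
import boto3

events = boto3.client("events")

# Match all GuardDuty findings; a narrower pattern could also filter on
# specific finding types under "detail": {"type": [...]}.
pattern = {"source": ["aws.guardduty"], "detail-type": ["GuardDuty Finding"]}

events.put_rule(
    Name="guardduty-finding-to-lambda",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)
events.put_targets(
    Rule="guardduty-finding-to-lambda",
    Targets=[{"Id": "gd2acl-lambda",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:GuardDutytoACL"}],
)
# Note: the Lambda function also needs a resource-based permission
# (lambda add_permission) allowing events.amazonaws.com to invoke it.
```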

Solution overview

The solution assumes that Amazon GuardDuty is enabled in your AWS account. If it isn’t enabled, you can find more info about the free trial and pricing here, and you can follow the steps in the GuardDuty documentation to set up the service and start monitoring your account.

Figure 1 shows how the CloudFormation template creates the sample solution:

Figure 1: How the CloudFormation template works

Here’s how the solution works, as shown in the diagram:

  1. A GuardDuty finding is raised with suspected malicious activity.
  2. A CloudWatch Event is configured to filter for GuardDuty Finding type.
  3. A Lambda function is invoked by the CloudWatch Event and parses the GuardDuty finding.
  4. State data for blocked hosts is stored in an Amazon DynamoDB table. The Lambda function checks the state table for an existing host entry.
  5. The Lambda function creates a rule inside AWS WAF and an entry in a VPC NACL.
  6. A notification email is sent via Amazon Simple Notification Service (SNS).

A second Lambda function runs on a 5-minute recurring schedule and removes entries that are past the configurable retention period from the WAF IPSets (lists that contain the blacklisted IPs or CIDRs), the VPC NACLs, and the DynamoDB table.
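Here's a compressed sketch of that pruning pass, covering only the NACL and DynamoDB cleanup (the table, key, and attribute names are illustrative; the WAF-side removal is analogous, using update_ip_set with an Action of DELETE):

```python
import time
import boto3

dynamodb = boto3.resource("dynamodb")
ec2 = boto3.client("ec2")

def prune_expired(table_name: str, nacl_id: str, retention_minutes: int) -> None:
    table = dynamodb.Table(table_name)
    cutoff = time.time() - retention_minutes * 60
    for item in table.scan()["Items"]:
        # "CreatedAt", "RuleNumber", and "HostIp" are illustrative attributes.
        if float(item["CreatedAt"]) < cutoff:
            # Remove the corresponding NACL DENY entry and the state record.
            ec2.delete_network_acl_entry(NetworkAclId=nacl_id,
                                         RuleNumber=int(item["RuleNumber"]),
                                         Egress=False)
            table.delete_item(Key={"HostIp": item["HostIp"]})
```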

GuardDuty findings referenced in this solution

This solution’s CloudWatch Event Rule pattern is configured to match the following GuardDuty Finding types:

  1. UnauthorizedAccess:EC2/SSHBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to SSH services on Linux-based systems.
  2. UnauthorizedAccess:EC2/RDPBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to RDP services on Windows-based systems.
  3. Recon:EC2/PortProbeUnprotectedPort
    This finding informs you that a port on an EC2 instance in your AWS environment isn’t blocked by a security group, access control list (ACL), or an on-host firewall (for example, Linux IPChains), and known scanners on the internet are actively probing it.
  4. Trojan:EC2/BlackholeTraffic
    This finding informs you that an EC2 instance in your AWS environment might be compromised because it’s trying to communicate with an IP address of a black hole. Black holes refer to places in the network where incoming or outgoing traffic is silently discarded without informing the source that the data didn’t reach its intended recipient.
  5. Backdoor:EC2/XORDDOS
    This finding informs you that an EC2 instance in your AWS environment is attempting to communicate with an IP address that’s associated with XOR DDoS malware. XOR DDoS is Trojan malware that hijacks Linux systems.
  6. UnauthorizedAccess:EC2/TorIPCaller
    This finding informs you that an EC2 instance in your AWS environment is receiving inbound connections from a Tor exit node. Tor is software for enabling anonymous communication. It encrypts and randomly bounces communications through relays between a series of network nodes.
  7. Trojan:EC2/DropPoint
    This finding informs you that an EC2 instance in your AWS environment is trying to communicate with an IP address of a remote host that’s known to hold credentials and other stolen data captured by malware.

When one of these GuardDuty finding types is matched by the CloudWatch Event Rule, an entry is created in the target ACLs to deny the suspicious host, and this solution’s Lambda function sends a notification to an email address. Blocking traffic from the suspicious host helps to mitigate the threat while you perform additional investigation and remediation. For more information, see Remediating a Compromised EC2 Instance.
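To illustrate the WAF half of the block, here is a minimal sketch, assuming the classic (global) WAF API and an existing IPSet (the IPSet ID is illustrative). For network-connection findings such as SSHBruteForce, the remote host's address sits under service.action.networkConnectionAction in the event detail; port-probe findings expose it under a different path:

```python
import boto3

waf = boto3.client("waf")  # classic WAF, global (CloudFront) scope

def block_ip_in_waf(ip_set_id: str, host_ip: str) -> None:
    # Classic WAF requires a fresh change token for every mutation.
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=ip_set_id,
        ChangeToken=token,
        Updates=[{"Action": "INSERT",
                  "IPSetDescriptor": {"Type": "IPV4", "Value": f"{host_ip}/32"}}],
    )

def handler(event, context):
    # Path for network-connection findings; varies by finding type.
    ip = (event["detail"]["service"]["action"]
          ["networkConnectionAction"]["remoteIpDetails"]["ipAddressV4"])
    block_ip_in_waf("your-ipset-id", ip)  # IPSet ID is illustrative
```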

Solution deployment

This sample solution includes six main steps:

  1. Deploy the CloudFormation template.
  2. Create and run a Lambda GuardDuty finding test event.
  3. Confirm the entry in the VPC Network ACL.
  4. Confirm the entry in the AWS WAF IPSets.
  5. Confirm the SNS notification subscription.
  6. Apply the WAF Web ACLs to resources.

Step 1: Deploy the CloudFormation template

For this next step, make sure you deploy the template within the AWS account and region where you want to monitor GuardDuty findings.

  1. Select this link to launch a CloudFormation stack in your account.

    Note: The stack will launch in the N. Virginia (us-east-1) Region. It takes approximately 15 minutes for the CloudFormation stack to complete. To deploy this solution into other AWS Regions, first upload the solution’s Lambda deployment packages (zip files with code) to an S3 bucket in the selected Region. Once you have uploaded the zip files in the target Region, update the CloudFormation ArtifactsBucket and ArtifactsPrefix parameters referenced in step 3 below.

  2. In the CloudFormation console, on the Select Template page, select Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    Input parameter | Description
    AdminEmail | Email address to receive notifications. Must be a valid email address.
    Retention | How long to retain IP addresses in the blacklist (in minutes). Default is 12 hours.
    CloudFrontIPSetId | ID of an existing WAF IPSet on CloudFront. Enter the ID here if there’s an existing WAF IPSet on CloudFront you want to use. Leave set to the default value of False to create a new WebACL and IPSet.
    ALBIPSetId | ID of an existing WAF IPSet on ALB. Enter the ID if there’s an existing WAF IPSet on ALB. Leave set to False to create a new WebACL and IPSet.
    ArtifactsBucket | S3 bucket with artifact files (Lambda functions, templates, HTML files, etc.). Leave set to the default value for deployment into the N. Virginia Region.
    ArtifactsPrefix | Path in the S3 bucket containing artifact files. Leave set to the default value for deployment into the N. Virginia Region.

    Note: AWS WAF is not currently available in all regions. For more information about where it’s available, refer to this page.

    Figure 2 shows an example of values entered on this screen:

    Figure 2: CloudFormation parameters on the “Specify Details” page

  4. Enter values for all of the input parameters, and then select Next.
  5. On the Options page, accept the defaults, and then select Next.
  6. On the Review page, confirm the details, and then select Create.
  7. While the stack is being created, check the email inbox for the value you gave for the AdminEmail address parameter. Look for an email message with the subject “AWS Notification – Subscription Confirmation”. Select the link to confirm the subscription to the SNS topic. You should see a message similar to this:

    Figure 3: Subscription confirmation

Once the Status field for the CloudFormation stack changes to CREATE_COMPLETE, the solution is implemented and is ready for testing.

Figure 4: The “Status” displays “CREATE_COMPLETE”

Step 2: Create and run a Lambda GuardDuty finding test event

Once the CloudFormation stack has completed deployment, you can test the functionality using a Lambda test event.

  1. In the console, select Services > VPC > Subnets and locate a subnet suitable for testing the solution. On the Summary tab, copy the Subnet ID to the clipboard or to a text editor.

    Figure 5: The “Subnet ID” value on the “Summary” tab

  2. In the console, select Services > CloudFormation > GuardDutytoACL stack. In the stack Outputs tab, look for the GuardDutytoACLLambda entry, similar to Figure 6 below:

    Figure 6: The “GuardDutytoACLLambda” entry on the “Outputs” tab

  3. Select the link and you’ll be redirected to the Lambda console, with the Lambda function already open, similar to Figure 7:

    Figure 7: The Lambda function open in the Lambda console

  4. In the top right, select the Select a test event… drop-down list, and then select Configure test events.

    Figure 8: Select “Configure test events” from the drop-down list

  5. To facilitate testing, a test event file has been provided. On the Configure test event page, provide a name for Event name, and then paste the provided test event JSON in the body of the event.
  6. Update the value of the subnetId key (line 34 of the test event) to the value of your Subnet ID from step 2.1, and then select Create.

    Figure 9: Update the value of the “subnetId” key

  7. Select Test to invoke the Lambda function with the test event. You should see the message “Execution result: succeeded,” similar to the following:

    Figure 10: The “Test” button and the “succeeded” message

Step 3: Confirm the entry in the VPC Network ACL (NACL)

In this step, you’ll confirm the DENY entry was created in the NACL. This solution is configured to create up to 10 entries in an ACL ranging between rule numbers 71 and 80. Since NACL rules are processed in order, it’s important that the DENY rule is placed before the ALLOW rule.

  1. In the console, select Services > VPC > Subnets and locate the subnet you provided for the test event.
  2. Select the Network ACL tab and confirm the new entry generated from the test event.
    Figure 11: Check the entry from the test event on the “Network” tab

    Note that VPC NACL entries are created in the rule number range between 71 and 80. Older entries are aged out to create a “sliding window” of blocked hosts.
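    For illustration, here is a sketch of how a DENY entry in that range might be created (parameter handling is illustrative; the real solution tracks rule numbers in its DynamoDB state table so it can cycle through 71–80):

```python
import boto3

ec2 = boto3.client("ec2")

def add_deny_entry(nacl_id: str, host_ip: str, rule_number: int) -> None:
    # rule_number should cycle through 71-80 so the oldest entry is evicted;
    # because NACL rules evaluate in rule-number order, these DENY rules
    # must carry lower numbers than the subnet's ALLOW rules.
    ec2.create_network_acl_entry(
        NetworkAclId=nacl_id,
        RuleNumber=rule_number,
        Protocol="-1",          # all protocols
        RuleAction="deny",
        Egress=False,           # inbound rule
        CidrBlock=f"{host_ip}/32",
    )
```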

Step 4: Confirm the entry in the AWS WAF IPSets

In this step, you’ll verify that the entry was added to the CloudFront WAF IPSet and to the ALB WAF IPSet.

  1. In the console, select Services > WAF & Shield, and then select IP addresses.
  2. For Filter, select Global (CloudFront), and then select the IPSet named GD2ACL CloudFront IPSet for Blacklisted IP addresses.

    Figure 12: Filter the list and then select “GD2ACL CloudFront IPSet for Blacklisted IP addresses”

  3. Confirm the IP address that was added to the list in the IPSet:

    Figure 13: Confirm the IP address was added

  4. In the console, select Services > WAF & Shield, and then select IP addresses.
  5. For Filter, select US East (N. Virginia), or the Region in which you deployed this solution, and then select the IPSet named GD2ACL ALB IPSet for blacklisted IP addresses.
  6. Confirm the IP address added to the ALB IPSet:

    Figure 14: Make sure the IP address was added

There might be specific host addresses that you want to prevent from being added to the blacklist. You can do this within GuardDuty by using a trusted IP list. Trusted IP lists consist of IP addresses that you have whitelisted for secure communication with your AWS infrastructure and applications. GuardDuty doesn’t generate findings for IP addresses on trusted IP lists. For additional information, see Working with Trusted IP Lists and Threat Lists.
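Registering such a list is a single API call; here is a minimal sketch with an illustrative detector ID and S3 location (GuardDuty allows one active trusted IP list per detector, and the list itself is a plain text file of addresses in S3):

```python
import boto3

guardduty = boto3.client("guardduty")

# Register a trusted IP list so GuardDuty stops generating findings for
# these addresses. Detector ID, bucket, and key are illustrative.
guardduty.create_ip_set(
    DetectorId="12abc34d567e8fa901bc2d34e56789f0",
    Name="TrustedHosts",
    Format="TXT",
    Location="https://s3.amazonaws.com/my-bucket/trusted-ips.txt",
    Activate=True,
)
```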

Step 5: Confirm the SNS notification subscription

In this step, you’ll view the SNS notification that was sent to the email address you set up.

    1. Review the email inbox for the value you provided for the AdminEmail parameter and look for a message with the subject line “AWS GD2ACL Alert.” The contents of the message from SNS should be similar to this:

      Figure 15: SNS message example

Step 6: Apply the WAF Web ACLs to resources

The final task is to associate the Web ACL with the CloudFront Distributions and Application Load Balancers that you want to automatically update with this solution. To learn how to do this, see Associating or Disassociating a Web ACL with a CloudFront Distribution or an Application Load Balancer.

You can also use AWS Firewall Manager to associate the Web ACLs. AWS Firewall Manager simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. With Firewall Manager, you set up your firewall rules just once. The service automatically applies your rules across your accounts and resources, even as you add new resources.

Summary

You’ve learned how to use Amazon GuardDuty to automatically update AWS Web Application Firewall (AWS WAF) and VPC Network Access Control Lists (ACLs) in response to GuardDuty findings. With just a few steps, you can use this sample solution to help mitigate threats by blocking communication with suspicious hosts. You can explore additional solutions that use other GuardDuty finding types and CloudWatch Events target actions. This solution’s code is available on GitHub. Feel free to play around with the code to add more GuardDuty findings to this solution and to build bigger and better solutions!

If you have comments about this blog post, submit them in the Comments section below. If you have questions about using this solution, start a thread in the GuardDuty, WAF, or CloudWatch forums, or contact AWS Support.

Recovering from a rough Monday morning: An Amazon GuardDuty threat detection and remediation scenario

Post Syndicated from Greg McConnel original https://aws.amazon.com/blogs/security/amazon-guardduty-threat-detection-and-remediation-scenario/

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. Given the many log types that Amazon GuardDuty analyzes (Amazon Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail, and DNS logs), you never know what it might discover in your AWS account. After enabling GuardDuty, you might quickly find serious threats lurking in your account or, preferably, just end up staring at a blank dashboard for weeks…or even longer.

A while back at an AWS Loft event, one of the customers enabled GuardDuty in their AWS account for a lab we were running. Soon after, GuardDuty alerts (findings) popped up that indicated multiple Amazon Elastic Compute Cloud (EC2) instances were communicating with known command and control servers. This means that GuardDuty detected activity commonly seen when an EC2 instance has been taken over as part of a botnet. The customer asked if this was part of the lab, and we explained it wasn’t and that the findings should be immediately investigated. This led to an investigation by that customer’s security team, and luckily the issue was resolved quickly.

Then there was the time we spoke to a customer that had been running GuardDuty for a few days but had yet to see any findings in the dashboard. They were concerned that the service wasn’t working. We explained that the lack of findings was actually a good thing, and we discussed how to generate sample findings to test GuardDuty and their remediation pipeline.

This post, and the corresponding GitHub repository, will help prepare you for either type of experience by walking you through a threat detection and remediation scenario. The scenario will show you how to quickly enable GuardDuty, generate and examine test findings, and then review automated remediation examples using AWS Lambda.

Scenario overview

The instructions and AWS CloudFormation template for setting everything up are provided in a GitHub repository. The CloudFormation template sets up a test environment in your AWS Account, configures everything needed to run through the scenario, generates GuardDuty findings and provides automatic remediation for the simulated threats in the scenario. All you need to do is run the CloudFormation template in the GitHub repository and then follow the instructions to investigate what occurred.

The scenario presented is that you manage an IT organization and Alice, your security engineer, has enabled GuardDuty in a production AWS Account and configured a few automated remediations. In threat detection and remediation, the standard pattern starts with a threat, which is then investigated and finally remediated. These remediations can be manual or automated. Alice focused on a few specific attack vectors, which represent a small sample of what GuardDuty is capable of detecting. Alice set all this up on Thursday but isn’t in the office on Monday. Unfortunately, as soon as you arrive at the office, GuardDuty notifies you that multiple threats have been detected (and given the automated remediation setup, these threats have been addressed, but you still need to investigate). The documentation in GitHub will guide you through the analysis of the findings and discuss how the automatic remediation works. You will also have the opportunity to manually trigger a GuardDuty finding and view that automated remediation.

The GuardDuty findings generated in the scenario are listed in the GitHub repository’s documentation. You can view all of the GuardDuty finding types in the GuardDuty documentation.

You can get started immediately by browsing to the GitHub repository for this scenario where you will find the instructions and AWS CloudFormation template. This scenario will show you how easy it is to enable GuardDuty in addition to demonstrating some of the threats GuardDuty can discover. To learn more about Amazon GuardDuty please see the GuardDuty site and GuardDuty documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Podcast: We developed Amazon GuardDuty to meet scaling demands, now it could assist with compliance considerations such as GDPR

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/podcast-we-developed-amazon-guardduty-to-meet-scaling-demands-now-it-could-assist-with-compliance-considerations-such-as-gdpr/

It isn’t simple to meet the scaling requirements of AWS when creating a threat detection monitoring service. Our service teams have to maintain the ability to deliver at a rapid pace. That led to the question: what can be done to make a security service as frictionless as possible for business demands?

Core parts of our internal solution can now be found in Amazon GuardDuty, which doesn’t require deployment of software or security infrastructure. Instead, GuardDuty uses machine learning to monitor metadata for access activity such as unusual API calls. This method turned out to be highly effective. Because it worked well for us, we thought it would work well for our customers, too. Additionally, when we externalized the service, we enabled it to be turned on with a single click. The customer response to Amazon GuardDuty has been positive with rapid adoption since launch in late 2017.

The service’s monitoring capabilities and threat detections could become increasingly helpful to customers concerned with data privacy or facing regulations such as the EU’s General Data Protection Regulation (GDPR). Listen to the podcast with Senior Product Manager Michael Fuller to learn how Amazon GuardDuty could be leveraged to meet your compliance considerations.

AWS Online Tech Talks – April & Early May 2018

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-april-early-may-2018/

We have several upcoming tech talks in the month of April and early May. Come join us to learn about AWS services and solution offerings. We’ll have AWS experts online to help answer questions in real time. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

April & early May — 2018 Schedule

Compute

April 30, 2018 | 01:00 PM – 01:45 PM PT – Best Practices for Running Amazon EC2 Spot Instances with Amazon EMR (300) – Learn best practices for scaling big data workloads, as well as how to process, store, and analyze big data securely and cost-effectively with Amazon EMR and Amazon EC2 Spot Instances.

May 1, 2018 | 01:00 PM – 01:45 PM PT – How to Bring Microsoft Apps to AWS (300) – Learn more about how to save significant money by bringing your Microsoft workloads to AWS.

May 2, 2018 | 01:00 PM – 01:45 PM PT – Deep Dive on Amazon EC2 Accelerated Computing (300) – Get a technical deep dive on how AWS’s GPU- and FPGA-based compute services can help you optimize and accelerate your ML/DL and HPC workloads in the cloud.

Containers

April 23, 2018 | 11:00 AM – 11:45 AM PT – New Features for Building Powerful Containerized Microservices on AWS (300) – Learn how these new features work and how you can start using them to build and run modern, containerized applications on AWS.

Databases

April 23, 2018 | 01:00 PM – 01:45 PM PT – ElastiCache: Deep Dive Best Practices and Usage Patterns (200) – Learn about Amazon ElastiCache, a Redis-compatible in-memory data store and cache.

April 25, 2018 | 01:00 PM – 01:45 PM PT – Intro to Open Source Databases on AWS (200) – Learn how to tap the benefits of open source databases on AWS without the administrative hassle.

DevOps

April 25, 2018 | 09:00 AM – 09:45 AM PT – Debug your Container and Serverless Applications with AWS X-Ray in 5 Minutes (300) – Learn how AWS X-Ray makes debugging your Container and Serverless applications fun.

Enterprise & Hybrid

April 23, 2018 | 09:00 AM – 09:45 AM PT – An Overview of Best Practices of Large-Scale Migrations (300) – Learn about the tools and best practices for migrating to AWS at scale.

April 24, 2018 | 11:00 AM – 11:45 AM PT – Deploy your Desktops and Apps on AWS (300) – Learn how to deploy your desktops and apps on AWS with Amazon WorkSpaces and Amazon AppStream 2.0.

IoT

May 2, 2018 | 11:00 AM – 11:45 AM PT – How to Easily and Securely Connect Devices to AWS IoT (200) – Learn how to easily and securely connect devices to the cloud and reliably scale to billions of devices and trillions of messages with AWS IoT.

Machine Learning

April 24, 2018 | 09:00 AM – 09:45 AM PT – Automate for Efficiency with Amazon Transcribe and Amazon Translate (200) – Learn how you can increase the efficiency and reach of your operations with Amazon Translate and Amazon Transcribe.

April 26, 2018 | 09:00 AM – 09:45 AM PT – Perform Machine Learning at the IoT Edge using AWS Greengrass and Amazon SageMaker (200) – Learn more about developing machine learning applications for the IoT edge.

Mobile

April 30, 2018 | 11:00 AM – 11:45 AM PT – Offline GraphQL Apps with AWS AppSync (300) – Come learn how to enable real-time and offline data in your applications with GraphQL using AWS AppSync.

Networking

May 2, 2018 | 09:00 AM – 09:45 AM PT – Taking Serverless to the Edge (300) – Learn how to run your code closer to your end users in a serverless fashion. Also, David Von Lehman from Aerobatic will discuss how they used Lambda@Edge to reduce latency and cloud costs for their customers’ websites.

Security, Identity & Compliance

April 30, 2018 | 09:00 AM – 09:45 AM PT – Amazon GuardDuty – Let’s Attack My Account! (300) – An Amazon GuardDuty test drive: practical steps for generating test findings.

May 3, 2018 | 09:00 AM – 09:45 AM PT – Protect Your Game Servers from DDoS Attacks (200) – Learn how to use the new AWS Shield Advanced for EC2 to protect your internet-facing game servers against network layer DDoS attacks and application layer attacks of all kinds.

Serverless

April 24, 2018 | 01:00 PM – 01:45 PM PT – Tips and Tricks for Building and Deploying Serverless Apps In Minutes (200) – Learn how to build and deploy apps in minutes.

Storage

May 1, 2018 | 11:00 AM – 11:45 AM PT – Building Data Lakes That Cost Less and Deliver Results Faster (300) – Learn how Amazon S3 Select and Amazon Glacier Select increase application performance by up to 400% and reduce total cost of ownership by extending your data lake into cost-effective archive storage.

May 3, 2018 | 11:00 AM – 11:45 AM PT – Integrating On-Premises Vendors with AWS for Backup (300) – Learn how to work with AWS and technology partners to build backup & restore solutions for your on-premises, hybrid, and cloud native environments.