All posts by Cameron Worrell

How to get started with security response automation on AWS

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-get-started-security-response-automation-aws/

At AWS, we encourage you to use automation to help quickly detect and respond to security events within your AWS environments. In addition to increasing the speed of detection and response, automation also helps you scale your security operations as you expand your workloads running on AWS. For these reasons, security automation is a key ­principle outlined in both the Well-Architected and Cloud Adoption frameworks as well as in the AWS Security Incident Response Guide.

In this blog post, you’ll learn to implement automated security response mechanisms within your AWS environments. This post will include common patterns, implementation considerations, and an example solution. Security response automation is a broad topic that spans many areas. The goal of this blog post is to introduce you to core concepts and help you get started.

A word from our lawyers: Please note that you are responsible for making your own independent assessment of the information in this post. This post: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors.

What is security response automation?

Security response automation is a planned and programmed action taken to achieve a desired state for an application or resource based on a condition or event. When you implement security response automation, you should adopt an approach that draws from existing security frameworks. Frameworks are published materials consisting of standards, guidelines, and best practices that help organizations manage cybersecurity-related risk. Using frameworks helps you achieve consistency and scalability and enables you to focus more on the strategic aspects of your security program. You should work with compliance professionals within your organization to understand any specific security frameworks that may also be relevant for your AWS environment.

Our example solution is based on the NIST Cybersecurity Framework (CSF), which is designed to help organizations assess and improve their ability to prevent, detect, and respond to security events. According to the CSF, “cybersecurity incident response” supports your ability to contain the impact of potential cybersecurity incidents. Although automation is not a CSF requirement, automating responses to events enables you to create repeatable, predictable approaches to monitoring and responding to threats.

The five main steps in the CSF are identify, protect, detect, respond and recover. We’ve expanded the detect and respond steps to include automation and investigation activities.
 

Figure 1: The five steps in the CSF

The following definitions for each step in the diagram above are based on the CSF but have been adapted for our example in this blog post. Although we will focus on the detect, automate and respond steps, it’s important to understand the entire process flow.

  • Identify: Identify and understand the resources, applications, and data within your AWS environment.
  • Protect: Develop and implement appropriate controls and safeguards to ensure delivery of services.
  • Detect: Develop and implement appropriate activities to identify the occurrence of a cybersecurity event. This step includes the implementation of monitoring capabilities which will be discussed further in the next section.
  • Automate: Develop and implement planned, programmed actions that will achieve a desired state for an application or resource based on a condition or event.
  • Investigate: Perform a systematic examination of the security event to establish the root cause.
  • Respond: Develop and implement appropriate activities to take automated or manual actions regarding a detected security event.
  • Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a security event.

Security response automation on AWS

AWS CloudTrail, AWS Config, and Amazon EventBridge continuously record details about the resources and configuration changes in your AWS account. You can use this information to automatically detect resource changes and to react to deviations from your desired state.
 

Figure 2: Automated remediation flow

As shown in the diagram above, an automated remediation flow on AWS has three stages:

  • Monitor: Your automated monitoring tools collect information about resources and applications running in your AWS environment. For example, they might collect AWS CloudTrail information about activities performed in your AWS account, usage metrics from your Amazon EC2 instances, or flow log information about the traffic going to and from network interfaces in your Amazon Virtual Private Cloud (VPC).
  • Detect: When a monitoring tool detects a predefined condition—such as a breached threshold, anomalous activity, or configuration deviation—it raises a flag within the system. A triggering condition might be an anomalous activity detected by Amazon GuardDuty, a resource becoming out of compliance with an AWS Config Rule, or a high rate of blocked requests on an Amazon VPC security group or AWS WAF web access control list.
  • Respond: When a condition is flagged, an automated response is triggered that performs an action you’ve predefined—something intended to remediate or mitigate the flagged condition. Examples of automated response actions might include modifying a VPC security group, patching an Amazon EC2 instance, or rotating credentials.

You can use the event-driven flow described above to achieve many automated response patterns with varying degrees of complexity. Your response pattern could be as simple as invoking a single AWS Lambda function, or it could be a complex series of AWS Step Functions tasks with advanced logic. In this blog post, we’ll use two simple Lambda functions in our example solution.

How to define your response automation

Now that we’ve introduced the concept of security response automation, start thinking about security requirements within your environment that you’d like to enforce through automation. These design requirements might come from general best practices you’d like to follow, or they might be specific controls from compliance frameworks relevant for your business. Regardless, your objectives should be quantitative, not qualitative. Here are some examples of quantitative objectives:

  • Remote administrative network access to servers should be limited.
  • Server storage volumes should be encrypted.
  • AWS console logins should be protected by multi-factor authentication.

As an optional step, you can expand these objectives into user stories that define the conditions and remediation actions when there is an event. User stories are informal descriptions that briefly document a feature within a software system. User stories may be global and span across multiple applications or they may be specific to a single application. For example:

“Remote administrative network access to servers should be limited. Remote access ports include SSH TCP port 22 and RDP TCP port 3389. If open remote access ports are detected within the environment, they should be automatically closed and the owner will be notified.”

Once you’ve completed your user story, you can determine how to use automated remediation to help achieve these objectives in your AWS environment. User stories should be stored in a location that provides versioning support and can reference the associated automation code.
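To make the user story concrete, here is a minimal sketch of what a remediation Lambda function for it might look like. The event field carrying the security group ID and the SNS topic environment variable are hypothetical names, and the snippet is an illustration rather than a production-ready remediation.

import os

import boto3
from botocore.exceptions import ClientError

REMOTE_ACCESS_PORTS = [22, 3389]  # SSH and RDP


def lambda_handler(event, context):
    """Close world-open remote access ports on a flagged security group."""
    ec2 = boto3.client('ec2')
    sns = boto3.client('sns')

    # Hypothetical event field carrying the flagged security group ID
    group_id = event['detail']['securityGroupId']

    closed = []
    for port in REMOTE_ACCESS_PORTS:
        try:
            # Remove the 0.0.0.0/0 ingress rule for this port, if present
            ec2.revoke_security_group_ingress(
                GroupId=group_id,
                IpPermissions=[{
                    'IpProtocol': 'tcp',
                    'FromPort': port,
                    'ToPort': port,
                    'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
                }])
            closed.append(port)
        except ClientError as error:
            # The rule may not exist; log and continue with the next port
            print(f"Could not revoke port {port} on {group_id}: {error}")

    if closed:
        # Notify the owner that ports were closed (topic ARN is an assumption)
        sns.publish(
            TopicArn=os.environ['NOTIFICATION_TOPIC_ARN'],
            Message=f"Closed open remote access ports {closed} on {group_id}")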

You should carefully consider the effect of your remediation mechanisms in order to prevent unintended impact on your resources and applications. Remediation actions such as instance termination, credential revocation, and security group modification can adversely affect application availability. Depending on the level of risk that’s acceptable to your organization, your automated mechanism might only provide a notification which can then be manually investigated prior to remediation. Once you’ve identified an automated remediation mechanism, you can build out the required components and test them in a non-production environment.

Sample response automation walkthrough

In the following section, we’ll walk you through an automated remediation for a simulated event that indicates potential unauthorized activity—the unintended disabling of CloudTrail logging. Outside parties might want to disable logging to prevent detection and recording of their unauthorized activity. Our response is to re-enable the CloudTrail logging and immediately notify the security contact. Here’s the user story for this scenario:

“CloudTrail logging should be enabled for all AWS accounts and regions. If CloudTrail logging is disabled, it will automatically be enabled and the security operations team will be notified.”

Note: The sample response automation below references Amazon EventBridge which extends and builds upon CloudWatch Events. Amazon EventBridge uses the same Amazon CloudWatch Events API, so the event structure and rules configuration are the same. This blog post uses base functionality that is identical in both EventBridge and CloudWatch Events.

Prerequisites

In order to use our sample remediation, you will need to enable Amazon GuardDuty and AWS Security Hub in the AWS Region you have selected. Both of these services include a 30-day free trial. See the AWS Security Hub pricing page and the Amazon GuardDuty pricing page for additional details.

Important: You’ll use AWS CloudTrail to test the sample remediation. Running more than one CloudTrail trail in your AWS account will result in charges based on the number of events processed while the trail is running. Charges for additional copies of management events recorded in a Region are applied based on the published pricing plan. To minimize the charges, follow the clean-up steps that we provide later in this post to remove the sample automation and delete the trail.

Deploy the sample response automation

In this section, we’ll show you how to deploy and test the CloudTrail logging remediation sample. Amazon GuardDuty generates the finding Stealth:IAMUser/CloudTrailLoggingDisabled when CloudTrail logging is disabled, and AWS Security Hub collects findings from GuardDuty in the standardized AWS Security Finding Format (described later in this post). We recommend that you deploy this sample into a non-production AWS account.

Select the Launch Stack button below to deploy a CloudFormation template with an automation sample in the us-east-1 Region. You can also download the template and implement it in another Region. The template consists of an Amazon EventBridge rule, an AWS Lambda function and the IAM permissions necessary for both components to execute. It takes several minutes for the CloudFormation stack build to complete.

Select this image to open a link that starts building the CloudFormation stack

  1. In the CloudFormation console, choose the Select Template form, and then select Next.
  2. On the Specify Details page, provide the email address for a security contact. (For the purpose of this walkthrough, it should be an email address you have access to.) Then select Next.
  3. On the Options page, accept the defaults, then select Next.
  4. On the Review page, confirm the details, then select Create.
  5. While the stack is being created, check the inbox of the email address you provided in step 2. Look for an email message with the subject AWS Notification – Subscription Confirmation. Select the link in the body of the email to confirm your subscription to the Amazon Simple Notification Service (Amazon SNS) topic. You should see a success message similar to the screenshot below:
     
    Figure 3: SNS subscription confirmation

  6. Return to the CloudFormation console. Once the Status field for the CloudFormation stack changes to CREATE_COMPLETE (as shown in figure 4), the solution is implemented and is ready for testing.
     
    Figure 4: CREATE_COMPLETE status

Test the sample automation

You’re now ready to test the automated response by creating a test trail in CloudTrail, then trying to stop it.

  1. From the AWS Management Console, choose Services > CloudTrail.
  2. Select Trails, then select Create Trail.
  3. On the Create Trail form:
    1. Enter a value for Trail name. We use test-trail in our example below.
    2. Under Management events, select Write-only (to minimize event volume).
       
      Figure 5: Create a CloudTrail trail

    3. Under Storage location, choose an existing S3 bucket or create a new one. Note that since S3 bucket names are globally unique, you must add characters (such as a random string) to the name. For example: my-test-trail-bucket-<random-string>.
  4. On the Trails page of the CloudTrail console, verify that the new trail has started. You should see a green checkmark in the Status column, as shown in figure 6.
     
    Figure 6: Verify new trail has started

  5. You’re now ready to act like an unauthorized user trying to cover their tracks! Stop the logging for the trail you just created:
    1. Select the new trail name to display its configuration page.
    2. Toggle the Logging switch in the top-right corner to OFF.
    3. When prompted with a warning dialog box, select Continue.
    4. Verify that the Logging switch is now off, as shown below.
       
      Figure 7: Verify logging switch is off

      You have now simulated a security event by disabling logging for one of the trails in the CloudTrail service. Within the next few seconds, the near real-time automated response will detect the stopped trail, restart it, and send an email notification. You can refresh the Trails page of the CloudTrail console to verify that the trail’s status is ON again.

      Within the next several minutes, the investigatory automated response will also begin. GuardDuty will detect the action that stopped the trail and enrich the data about the source of unexpected behavior. Security Hub will then ingest that information and optionally correlate with other security events.

      Follow the steps below to monitor Security Hub for a finding of the type TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled:

  6. In the AWS Management Console, choose Services > Security Hub.
    1. Select Findings in the left pane.
    2. Select the Add filters field, then select Type.
    3. Select EQUALS, paste TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled into the field, then select Apply.
    4. Refresh your browser periodically until the finding is generated.
    Figure 8: Monitor Security Hub for your finding

While you wait on that detection, let’s dig into the components of automation.

How the sample automation works

This example incorporates two automated responses: a near real-time workflow and an investigatory workflow. The near real-time workflow provides a rapid response to an individual event, in this case the stopping of a trail. The goal is to restore the trail to a functioning state and alert security responders as quickly as possible. The investigatory workflow still includes a response to provide defense in depth and also uses services that support a more in-depth investigation of the incident.

Figure 9: Sample automation workflow

In the near real-time workflow, Amazon EventBridge monitors for the undesired activity. When a trail is stopped, AWS CloudTrail publishes an event on the EventBridge bus. An EventBridge rule detects the trail-stopping event and invokes a Lambda function to respond to the event by restarting the trail and notifying the security contact via an Amazon Simple Notification Service (SNS) topic.
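As an illustration, here is one way such a rule could be created programmatically with boto3. This sketch matches the CloudTrail StopLogging API call on the default event bus and targets a remediation Lambda function; the rule name and function ARN are placeholders, and the CloudFormation template provided with this post may define its rule differently.

import json

import boto3

events = boto3.client('events')

# Match the CloudTrail StopLogging API call delivered to the default event bus
stop_logging_pattern = {
    "source": ["aws.cloudtrail"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["cloudtrail.amazonaws.com"],
        "eventName": ["StopLogging"]
    }
}

events.put_rule(
    Name='cloudtrail-stopped-rule',                  # placeholder rule name
    EventPattern=json.dumps(stop_logging_pattern),
    State='ENABLED')

# The remediation function must also grant EventBridge permission to invoke it
# (lambda add-permission); the target ARN below is a placeholder.
events.put_targets(
    Rule='cloudtrail-stopped-rule',
    Targets=[{
        'Id': 'RestartTrailFunction',
        'Arn': 'arn:aws:lambda:us-east-1:111122223333:function:restart-trail'
    }])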

In the investigative workflow, CloudTrail logs are monitored for undesired activities. For example, if a trail is stopped, there will be a corresponding log record. GuardDuty detects this activity and retrieves additional data points about the source IP that executed the API call. Two common examples of those additional data points in GuardDuty findings include whether the API call came from an IP address on a threat list, or whether it came from a network not commonly used in your AWS account. An AWS Lambda function responds by restarting the trail and notifying the security contact. Finally, the finding is imported into AWS Security Hub for additional investigation.

AWS Security Hub imports findings from AWS security services such as GuardDuty, Amazon Macie and Amazon Inspector, plus from any third-party product integrations you’ve enabled. All findings are provided to Security Hub in AWS Security Finding Format, which eliminates the need for data conversion. Security Hub correlates these findings to help you identify related security events and determine a root cause. Security Hub also publishes its findings to Amazon EventBridge to enable further processing by other AWS services such as AWS Lambda.

Respond step deep dive

Amazon EventBridge and AWS Lambda work together to respond to a security finding. Amazon EventBridge is a service that provides real-time access to changes in data in AWS services, your own applications, and Software-as-a-Service (SaaS) applications without writing code. In this example, EventBridge identifies a Security Hub finding that requires action and invokes a Lambda function that performs remediation. As shown in figure 10, the Lambda function both notifies the security operator via SNS and restarts the stopped CloudTrail.

Figure 10: Sample “respond” workflow

To set this response up, we looked for an event to indicate that a trail had stopped or was disabled. We knew that the GuardDuty finding Stealth:IAMUser/CloudTrailLoggingDisabled is raised when CloudTrail logging is disabled. Therefore, we configured the default event bus to look for this event. You can learn more about all of the available GuardDuty findings in the user guide.

How the code works

When Security Hub publishes a finding to EventBridge, it includes full details of the incident as discovered by GuardDuty. The finding is published in JSON format. If you review the details of the sample finding, note that it has several fields helping you identify the specific events that you’re looking for. Here are some of the relevant details:


{
   …
   "source": "aws.securityhub",
   …
   "detail": {
      "findings": [{
         …
         "Types": [
            "TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled"
         ],
         …
      }]
   }
}

You can build an event pattern using these fields, which an EventBridge filtering rule can then use to identify events and to invoke the remediation Lambda function. Below is a snippet from the CloudFormation template we provided earlier that defines that event pattern for the EventBridge filtering rule:


# pattern matches the nested JSON format of a specific Security Hub finding
      EventPattern:
        source:
          - aws.securityhub
        detail-type:
          - "Security Hub Findings - Imported"
        detail:
          findings:
            Types:
              - "TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled"

Once the rule is in place, EventBridge continuously scans the event bus for this pattern. When EventBridge finds a match, it invokes the remediating Lambda function and passes the full details of the event to the function. The Lambda function then parses the JSON fields in the event so that it can act as shown in this Python code snippet:


# extract trail ARN by parsing the incoming Security Hub finding (in JSON format)
trailARN = event['detail']['findings'][0]['ProductFields']['action/awsApiCallAction/affectedResources/AWS::CloudTrail::Trail']   

# description contains useful details to be sent to security operations
description = event['detail']['findings'][0]['Description']

The code also issues a notification to security operators so they can review the findings and insights in Security Hub and other services to better understand the incident and to decide whether further manual actions are warranted. Here’s the code snippet that uses SNS to send out a note to security operators:


#Sending the notification that the AWS CloudTrail has been disabled.
snspublish = snsclient.publish(
	TargetArn = snsARN,
	Message="Automatically restarting CloudTrail logging.  Event description: \"%s\" " %description
	)

While notifications to human operators are important, the Lambda function will not wait to take action. It immediately remediates the condition by restarting the stopped trail in CloudTrail. Here’s a code snippet that restarts the trail to reenable logging:


#Enabling the AWS CloudTrail logging
try:
	client = boto3.client('cloudtrail')
	enablelogging = client.start_logging(Name=trailARN)
	logger.debug("Response on enable CloudTrail logging- %s" %enablelogging)
except ClientError as e:
	logger.error("An error occurred: %s" %e)

After the trail has been restarted, API activity is once again logged and can be audited. This can help provide relevant data for the remaining steps in the incident response process. The data is especially important for the post-incident phase, when your team analyzes lessons learned to prevent future incidents. You can also use this phase to identify additional steps to automate in your incident response.

Clean up

After you’ve completed the sample security response automation, we recommend that you remove the resources created in this walkthrough example from your account in order to minimize any charges associated with the trail in CloudTrail and data stored in S3.

Important: Deleting resources in your account can negatively impact the applications running in your AWS account. Verify that applications and AWS account security do not depend on the resources you’re about to delete.

Here are the clean-up steps:

  1. Delete the CloudFormation stack.
  2. Delete the trail you created in CloudTrail.
  3. If you created an S3 bucket for CloudTrail logs, you can also delete that S3 bucket.
  4. New accounts can try GuardDuty at no cost for 30 days. You can suspend or disable GuardDuty before the free trial period ends to avoid charges.
  5. Security Hub comes with a 30-day free trial. You can avoid charges by disabling the service before the trial period is over.

Summary

You’ve learned the basic concepts and considerations behind security response automation on AWS and how to use Amazon EventBridge, Amazon GuardDuty and AWS Security Hub to automatically re-enable AWS CloudTrail when it becomes disabled unexpectedly. As a next step, you may want to start building your own response automations and dive deeper into the AWS Security Incident Response Guide, NIST Cybersecurity Framework (CSF) or the AWS Cloud Adoption Framework (CAF) Security Perspective. You can explore additional automatic remediation solutions on the AWS Security Blog. You can find the code used in this example on GitHub.

If you have feedback about this blog post, submit it in the Comments section below. If you have questions about using this solution, start a thread in the EventBridge, GuardDuty, or Security Hub forums, or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Cameron Worrell

Cameron is a Solutions Architect with a passion for security and enterprise transformation. He joined AWS in 2015.

Alex Tomic

Alex is an AWS Enterprise Solutions Architect focused on security and compliance. He joined AWS in 2014.

Nathan Case

Nathan is a Senior Security Strategist, and joined AWS in 2016. He is always interested to see where our customers plan to go and how we can help them get there. He is also interested in intel, combined data lake sharing opportunities, and open source collaboration. In the end, Nathan loves technology and the fact that we can change the world to make it a better place.

How to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hosts

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-use-amazon-guardduty-and-aws-web-application-firewall-to-automatically-block-suspicious-hosts/

When you’re implementing security measures across your AWS resources, you should use a holistic approach that incorporates controls across multiple areas. In the Cloud Adoption Framework (CAF) Security perspective whitepaper, we define these controls across four categories.

  • Directive controls. Establish the governance, risk, and compliance models the environment will operate within.
  • Preventive controls. Protect your workloads and mitigate threats and vulnerabilities.
  • Detective controls. Provide full visibility and transparency over the operation of your deployments in AWS.
  • Responsive controls. Drive remediation of potential deviations from your security baselines.

The use of security automation is also a key principle outlined in the whitepaper. It helps reduce operational overhead and create repeatable, predictable approaches to monitoring and responding to events. You can take advantage of AWS services to build powerful solutions for the automated detection and remediation of threats against your AWS environments. For example, you can configure Amazon CloudWatch Events to invoke a Lambda action in response to suspicious or unexpected behavior in your AWS environment detected by Amazon GuardDuty. You can configure automated flows that use both detective and responsive controls and might also feed into preventative controls to help mitigate the threat in the future. Depending on the type of source event, you can automatically invoke specific actions, such as modifying access controls, terminating instances, or revoking credentials.

In this blog post, we’ll show you how to use Amazon GuardDuty to automatically update the AWS Web Application Firewall Web Access Control Lists (WebACLs) and VPC Network Access Control Lists (NACLs) in response to GuardDuty findings. After GuardDuty detects a suspicious activity, the solution updates these resources to block communication from the suspicious host while you perform additional investigation and remediation. Once communication has been blocked, further occurrences of a finding are reduced, allowing security and operations teams to focus more on higher priority tasks.

Amazon GuardDuty is a continuous security monitoring and threat detection service that incorporates threat intelligence, anomaly detection, and machine learning to help protect your AWS resources, including your AWS accounts. Amazon CloudWatch Events delivers a near-real-time stream of system events that describe changes in AWS resources. Amazon GuardDuty sends notifications based on Amazon CloudWatch Events when any change in the findings takes place. In the context of GuardDuty, such changes include newly generated findings and all subsequent occurrences of these existing findings. Using rules that you can quickly set up, you can match CloudWatch events and route them to one or more target actions. This solution routes matched events to AWS Lambda, which then performs updates to AWS Web Application Firewall (WAF) and VPC NACLs. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. It supports both managed rules as well as a powerful rule language for custom rules. A Network Access Control List (NACL) is an optional layer of security for your Amazon Virtual Private Cloud (VPC) that acts as a firewall for controlling traffic in and out of one or more subnets.

Solution overview

The solution assumes that Amazon GuardDuty is enabled in your AWS account. If it isn’t enabled, you can find more info about the free trial and pricing here, and you can follow the steps in the GuardDuty documentation to set up the service and start monitoring your account.

Figure 1 shows how the CloudFormation template creates the sample solution:

Figure 1: How the CloudFormation template works

Here’s how the solution works, as shown in the diagram:

  1. A GuardDuty finding is raised with suspected malicious activity.
  2. A CloudWatch Events rule is configured to filter for the GuardDuty finding types.
  3. A Lambda function is invoked by the CloudWatch Events rule and parses the GuardDuty finding.
  4. State data for blocked hosts is stored in an Amazon DynamoDB table. The Lambda function checks the state table for an existing host entry.
  5. The Lambda function adds the suspicious host to the AWS WAF IPSets and creates a deny entry in the VPC NACL (see the sketch following this list).
  6. A notification email is sent via Amazon Simple Notification Service (SNS).
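Here is a minimal sketch of how step 5 might add a suspicious host to a classic WAF IPSet with boto3. The IPSet ID is a placeholder, and the Lambda functions packaged with this solution may implement the update differently.

import boto3


def block_ip_in_waf(ip_address, ipset_id):
    """Add a /32 entry for the suspicious host to a classic (global) WAF IPSet."""
    # Use the 'waf-regional' client instead to update the ALB (regional) IPSet
    waf = boto3.client('waf')

    # Classic WAF requires a fresh change token for every update call
    change_token = waf.get_change_token()['ChangeToken']
    waf.update_ip_set(
        IPSetId=ipset_id,        # placeholder: the IPSet created by the template
        ChangeToken=change_token,
        Updates=[{
            'Action': 'INSERT',
            'IPSetDescriptor': {'Type': 'IPV4', 'Value': f'{ip_address}/32'}
        }])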

A second Lambda function runs on a 5-minute recurring schedule and removes entries that are past the configurable retention period from the WAF IPSets (lists that contain the blacklisted IPs or CIDRs), the VPC NACLs, and the DynamoDB table.
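Here is a simplified sketch of what that pruning function might do. The DynamoDB table schema isn’t shown in this post, so the attribute names below are assumptions for illustration, and the packaged Lambda may store and expire entries differently.

import time

import boto3

RETENTION_MINUTES = 720  # default of 12 hours, matching the Retention parameter


def prune_expired_entries(table_name, nacl_ids):
    """Remove blocked-host entries that are older than the retention period."""
    table = boto3.resource('dynamodb').Table(table_name)
    ec2 = boto3.client('ec2')

    cutoff = int(time.time()) - RETENTION_MINUTES * 60
    for item in table.scan()['Items']:  # pagination omitted for brevity
        # 'CreatedAt', 'IP', and 'RuleNumber' are hypothetical attribute names
        if int(item['CreatedAt']) > cutoff:
            continue
        for nacl_id in nacl_ids:
            # Remove the expired DENY entry from each NACL
            ec2.delete_network_acl_entry(
                NetworkAclId=nacl_id,
                RuleNumber=int(item['RuleNumber']),
                Egress=False)
        # The matching WAF IPSet descriptors would be removed here with an
        # update_ip_set call using {'Action': 'DELETE', ...} updates.
        table.delete_item(Key={'IP': item['IP']})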

GuardDuty findings referenced in this solution

This solution’s CloudWatch Event Rule pattern is configured to match the following GuardDuty Finding types:

  1. UnauthorizedAccess:EC2/SSHBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to SSH services on Linux-based systems.
  2. UnauthorizedAccess:EC2/RDPBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to RDP services on Windows-based systems.
  3. Recon:EC2/PortProbeUnprotectedPort
    This finding informs you that a port on an EC2 instance in your AWS environment isn’t blocked by a security group, access control list (ACL), or an on-host firewall (for example, Linux IPChains), and known scanners on the internet are actively probing it.
  4. Trojan:EC2/BlackholeTraffic
    This finding informs you that an EC2 instance in your AWS environment might be compromised because it’s trying to communicate with an IP address of a black hole. Black holes refer to places in the network where incoming or outgoing traffic is silently discarded without informing the source that the data didn’t reach its intended recipient.
  5. Backdoor:EC2/XORDDOS
    This finding informs you that an EC2 instance in your AWS environment is attempting to communicate with an IP address that’s associated with XOR DDoS malware. XOR DDoS is Trojan malware that hijacks Linux systems.
  6. UnauthorizedAccess:EC2/TorIPCaller
    This finding informs you that an EC2 instance in your AWS environment is receiving inbound connections from a Tor exit node. Tor is software for enabling anonymous communication. It encrypts and randomly bounces communications through relays between a series of network nodes.
  7. Trojan:EC2/DropPoint
    This finding informs you that an EC2 instance in your AWS environment is trying to communicate with an IP address of a remote host that’s known to hold credentials and other stolen data captured by malware.

When one of these GuardDuty finding types is matched by the CloudWatch Event Rule, an entry is created in the target ACLs to deny the suspicious host, and then a notification is sent to an email address by this solution’s Lambda. Blocking traffic from the suspicious host helps to mitigate the threat while you perform additional investigation and remediation. For more information, see Remediating a Compromised EC2 Instance.
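For reference, here is a sketch of the kind of event pattern such a rule uses to match those finding types, expressed with boto3. The rule name is a placeholder, and the deployed CloudFormation template defines its own equivalent rule.

import json

import boto3

finding_types = [
    "UnauthorizedAccess:EC2/SSHBruteForce",
    "UnauthorizedAccess:EC2/RDPBruteForce",
    "Recon:EC2/PortProbeUnprotectedPort",
    "Trojan:EC2/BlackholeTraffic",
    "Backdoor:EC2/XORDDOS",
    "UnauthorizedAccess:EC2/TorIPCaller",
    "Trojan:EC2/DropPoint",
]

# GuardDuty publishes findings to CloudWatch Events with this source and detail-type
pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": finding_types},
}

boto3.client('events').put_rule(
    Name='guardduty-to-acl-rule',      # placeholder rule name
    EventPattern=json.dumps(pattern),
    State='ENABLED')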

Solution deployment

This sample solution includes six main steps:

  1. Deploy the CloudFormation template.
  2. Create and run a Lambda GuardDuty finding test event.
  3. Confirm the entry in the VPC Network ACL.
  4. Confirm the entry in the AWS WAF IPSets.
  5. Confirm the SNS notification subscription.
  6. Apply the WAF Web ACLs to resources.

Step 1: Deploy the CloudFormation template

For this next step, make sure you deploy the template within the AWS account and region where you want to monitor GuardDuty findings.

  1. Select this link to launch a CloudFormation stack in your account.

    Note: The stack will launch in the N. Virginia (us-east-1) region. It takes approximately 15 minutes for the CloudFormation stack to complete. To deploy this solution into other AWS regions, first upload the solution’s Lambda deployment packages (zip files with code) to an S3 bucket in the selected region. Once you have uploaded the zip files in the target region, update the CloudFormation ArtifactsBucket and ArtifactsPrefix parameters referenced in step 3 below.

  2. In the CloudFormation console, select the Select Template form, and then select Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    Input parameter – Input parameter description
    AdminEmail – Email address to receive notifications. Must be a valid email address.
    Retention – How long to retain IP addresses in the blacklist (in minutes). Default is 12 hours.
    CloudFrontIPSetId – ID for existing WAF IPSet on CloudFront. Enter the ID here if there’s an existing WAF IPSet on CloudFront you want to use. Leave set to the default value of False if you want to create a new WebACL and IPSet.
    ALBIPSetId – ID for existing WAF IPSet on ALB. Enter if there is an existing WAF IPSet on ALB. Leave set to False for creation of new WebACL and IPSet.
    ArtifactsBucket – S3 bucket with artifact files (Lambda functions, templates, html files, etc.). Leave set to the default value for deployment into N. Virginia region.
    ArtifactsPrefix – Path in the S3 bucket containing artifact files. Leave set to the default value for deployment into N. Virginia region.

    Note: AWS WAF is not currently available in all regions. For more information about where it’s available, refer to this page.

    Figure 2 shows an example of values entered on this screen:

    Figure 2: CloudFormation parameters on the “Specify Details” page

  4. Enter values for all of the input parameters, and then select Next.
  5. On the Options page, accept the defaults, and then select Next.
  6. On the Review page, confirm the details, and then select Create.
  7. While the stack is being created, check the email inbox for the value you gave for the AdminEmail address parameter. Look for an email message with the subject “AWS Notification – Subscription Confirmation”. Select the link to confirm the subscription to the SNS topic. You should see a message similar to this:

    Figure 3: Subscription confirmation

Once the Status field for the CloudFormation stack changes to CREATE_COMPLETE, the solution is implemented and is ready for testing.

Figure 4: The “Status” displays “CREATE_COMPLETE”

Step 2: Create and run a Lambda GuardDuty finding test event

Once the CloudFormation stack has completed deployment, you can test the functionality using a Lambda test event.

  1. In the console, select Services > VPC > Subnets and locate a subnet suitable for testing the solution. On the Summary tab, copy the Subnet ID to the clipboard or to a text editor.

    Figure 5: The “Subnet ID” value on the “Summary” tab

  2. In the console, select Services > CloudFormation > GuardDutytoACL stack. In the stack Outputs tab, look for the GuardDutytoACLLambda entry, similar to Figure 6 below:

    Figure 6: The “GuardDutytoACLLambda” entry on the “Outputs” tab

  3. Select the link and you’ll be redirected to the Lambda console, with the Lambda function already open, similar to Figure 7:

    Figure 7: The Lambda function open in the Lambda console

  4. In the top right, select the Select a test event… drop-down list, and then select Configure test events.

    Figure 8: Select “Configure test events” from the drop-down list

  5. To facilitate testing, a test event file has been provided. On the Configure test event page, provide a name for Event name, and then paste the provided test event JSON in the body of the event.
  6. Update the value of subnetId key (line 34) to the value of your Subnet ID from step 2.1, and then select Create.

    Figure 9: Update the value of the “subnetId” key

  7. Select Test to invoke the Lambda with the test event. You should see a message “Execution result: succeeded” similar to below:

    Figure 10: The “Test” button and the “succeeded” message

Step 3: Confirm the entry in the VPC Network ACL (NACL)

In this step, you’ll confirm the DENY entry was created in the NACL. This solution is configured to create up to 10 entries in an ACL ranging between rule numbers 71 and 80. Since NACL rules are processed in order, it’s important that the DENY rule is placed before the ALLOW rule.

  1. In the console, select Services > VPC > Subnets and locate the subnet you provided for the test event.
  2. Select the Network ACL tab and confirm the new entry generated from the test event.
    Figure 11: Check the entry from the test event on the “Network” tab

    Note that VPC NACL entries are created in the rule number range between 71 and 80. Older entries are aged out to create a “sliding window” of blocked hosts.
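A minimal sketch of how a deny entry in that range might be created is shown below. The rule-number arithmetic assumes a simple counter kept elsewhere (for example, in the DynamoDB state table), and the packaged Lambda may calculate the number differently.

import boto3


def add_nacl_deny_entry(nacl_id, ip_address, entry_count):
    """Create an inbound DENY entry, cycling through rule numbers 71-80."""
    ec2 = boto3.client('ec2')

    # Reuse the ten reserved rule numbers as a sliding window; an existing
    # entry at this number would need to be deleted before it can be reused.
    rule_number = 71 + (entry_count % 10)

    ec2.create_network_acl_entry(
        NetworkAclId=nacl_id,
        RuleNumber=rule_number,
        Protocol='-1',               # all protocols
        RuleAction='deny',
        Egress=False,                # inbound rule
        CidrBlock=f'{ip_address}/32')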

Step 4: Confirm the entry in the AWS WAF IPSets

In this step, you’ll verify that the entry was added to the CloudFront WAF IPSet and to the ALB WAF IPSet.

  1. In the console, select Services > WAF & Shield, and then select IP addresses.
  2. For Filter, select Global (CloudFront), and then select the IPSet named GD2ACL CloudFront IPSet for Blacklisted IP addresses.

    Figure 12: Filter the list and then select “GD2ACL CloudFront IPSet for Blacklisted IP addresses”

  3. Confirm the IP address that was added to the list in the IPSet:

    Figure 13: Confirm the IP address was added

  4. In the console, select Services > WAF & Shield, and then select IP addresses.
  5. For Filter, select US East (N. Virginia), or another region in which you deployed this solution, and then select the IPSet named GD2ACL ALB IPSet for blacklisted IP addresses.
  6. Confirm the IP address added to the ALB IPSet:

    Figure 14: Make sure the IP address was added

There might be specific host addresses that you want to prevent from being added to the blacklist. You can do this within GuardDuty by using a trusted IP list. Trusted IP lists consist of IP addresses that you have whitelisted for secure communication with your AWS infrastructure and applications. GuardDuty doesn’t generate findings for IP addresses on trusted IP lists. For additional information, see Working with Trusted IP Lists and Threat Lists.
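If you want to manage a trusted IP list programmatically, a sketch like the following could register one with GuardDuty using boto3. The list name, bucket, and object key are placeholders, and the list file itself is a plain-text file of CIDR ranges that you upload to S3 beforehand.

import boto3

guardduty = boto3.client('guardduty')

# Each account and region has at most one GuardDuty detector
detector_id = guardduty.list_detectors()['DetectorIds'][0]

guardduty.create_ip_set(
    DetectorId=detector_id,
    Name='TrustedHosts',                                            # placeholder name
    Format='TXT',                                                   # plain-text list of CIDRs
    Location='https://s3.amazonaws.com/my-bucket/trusted-ips.txt',  # placeholder S3 object
    Activate=True)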

Step 5: Confirm the SNS notification subscription

In this step, you’ll view the SNS notification that was sent to the email address you set up.

    1. Review the email inbox for the value you provided for the AdminEmail parameter and look for a message with the subject line “AWS GD2ACL Alert.” The contents of the message from SNS should be similar to this:

      Figure 15: SNS message example

Step 6: Apply the WAF Web ACLs to resources

The final task is to associate the Web ACL with the CloudFront Distributions and Application Load Balancers that you want to automatically update with this solution. To learn how to do this, see Associating or Disassociating a Web ACL with a CloudFront Distribution or an Application Load Balancer.

You can also use AWS Firewall Manager to associate the Web ACLs. AWS Firewall Manager simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. With Firewall Manager, you set up your firewall rules just once. The service automatically applies your rules across your accounts and resources, even as you add new resources.

Summary

You’ve learned how to use Amazon GuardDuty to automatically update AWS Web Application Firewall (AWS WAF) and VPC Network Access Control Lists (ACLs) in response to GuardDuty findings. With just a few steps, you can use this sample solution to help mitigate threats by blocking communication with suspicious hosts. You can explore additional solutions that are possible using other GuardDuty finding types and CloudWatch Events target actions. This solution’s code is available on GitHub. Feel free to play around with the code to add more GuardDuty findings to this solution and also to build bigger and better solutions!

If you have comments about this blog post, submit them in the Comments section below. If you have questions about using this solution, start a thread in the GuardDuty, WAF, or CloudWatch forums, or contact AWS Support.

How to Configure an LDAPS Endpoint for Simple AD

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-configure-an-ldaps-endpoint-for-simple-ad/

Simple AD, which is powered by Samba 4, supports basic Active Directory (AD) authentication features such as users, groups, and the ability to join domains. Simple AD also includes an integrated Lightweight Directory Access Protocol (LDAP) server. LDAP is a standard application protocol for the access and management of directory information. You can use the BIND operation from Simple AD to authenticate LDAP client sessions. This makes LDAP a common choice for centralized authentication and authorization for services such as Secure Shell (SSH), client-based virtual private networks (VPNs), and many other applications. Authentication, the process of confirming the identity of a principal, typically involves the transmission of highly sensitive information such as user names and passwords. To protect this information in transit over untrusted networks, companies often require encryption as part of their information security strategy.

In this blog post, we show you how to configure an LDAPS (LDAP over SSL/TLS) encrypted endpoint for Simple AD so that you can extend Simple AD over untrusted networks. Our solution uses Elastic Load Balancing (ELB) to send decrypted LDAP traffic to HAProxy running on Amazon EC2, which then sends the traffic to Simple AD. ELB offers integrated certificate management, SSL/TLS termination, and the ability to use a scalable EC2 backend to process decrypted traffic. ELB also tightly integrates with Amazon Route 53, enabling you to use a custom domain for the LDAPS endpoint. The solution needs the intermediate HAProxy layer because ELB can direct traffic only to EC2 instances. To simplify testing and deployment, we have provided an AWS CloudFormation template to provision the ELB and HAProxy layers.

This post assumes that you have an understanding of concepts such as Amazon Virtual Private Cloud (VPC) and its components, including subnets, routing, Internet and network address translation (NAT) gateways, DNS, and security groups. You should also be familiar with launching EC2 instances and logging in to them with SSH. If needed, you should familiarize yourself with these concepts and review the solution overview and prerequisites in the next section before proceeding with the deployment.

Note: This solution is intended for use by clients requiring an LDAPS endpoint only. If your requirements extend beyond this, you should consider accessing the Simple AD servers directly or by using AWS Directory Service for Microsoft AD.

Solution overview

The following diagram and description explain the Simple AD LDAPS environment. The CloudFormation template creates the items designated by the bracket (internal ELB load balancer and two HAProxy nodes configured in an Auto Scaling group).

Diagram of the Simple AD LDAPS environment

Here is how the solution works, as shown in the preceding numbered diagram:

  1. The LDAP client sends an LDAPS request to ELB on TCP port 636.
  2. ELB terminates the SSL/TLS session and decrypts the traffic using a certificate. ELB sends the decrypted LDAP traffic to the EC2 instances running HAProxy on TCP port 389.
  3. The HAProxy servers forward the LDAP request to the Simple AD servers listening on TCP port 389 in a fixed Auto Scaling group configuration.
  4. The Simple AD servers send an LDAP response through the HAProxy layer to ELB. ELB encrypts the response and sends it to the client.

Note: Amazon VPC prevents a third party from intercepting traffic within the VPC. Because of this, the VPC protects the decrypted traffic between ELB and HAProxy and between HAProxy and Simple AD. The ELB encryption provides an additional layer of security for client connections and protects traffic coming from hosts outside the VPC.
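To give a concrete picture of what a client session against this endpoint looks like, here is a short sketch that performs an LDAPS bind and search using the third-party Python ldap3 library (installed with pip install ldap3). The host name and bind credentials use the example domain from this post, the certificate path assumes the self-signed certificate you will create in step 2, and the password is a placeholder; the walkthrough later in this post performs the same kind of test with the OpenLDAP command-line tools.

import ssl

from ldap3 import Connection, Server, Tls

# Trust the self-signed certificate created for the ELB listener (example path)
tls = Tls(ca_certs_file='/etc/openldap/certs/server.crt',
          validate=ssl.CERT_REQUIRED)
server = Server('ldap.corp.example.com', port=636, use_ssl=True, tls=tls)

# Bind as the directory administrator over the encrypted LDAPS connection
conn = Connection(server,
                  user='administrator@corp.example.com',
                  password='YourAdministratorPassword',   # placeholder
                  auto_bind=True)

conn.search('dc=corp,dc=example,dc=com',
            '(sAMAccountName=Administrator)',
            attributes=['cn', 'whenCreated'])
print(conn.entries)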

Prerequisites

  1. Our approach requires an Amazon VPC with two public and two private subnets. The previous diagram illustrates the environment’s VPC requirements. If you do not yet have these components in place, follow these guidelines for setting up a sample environment:
    1. Identify a region that supports Simple AD, ELB, and NAT gateways. The NAT gateways are used with an Internet gateway to allow the HAProxy instances to access the internet to perform their required configuration. You also need to identify the two Availability Zones in that region for use by Simple AD. You will supply these Availability Zones as parameters to the CloudFormation template later in this process.
    2. Create or choose an Amazon VPC in the region you chose. In order to use Route 53 to resolve the LDAPS endpoint, make sure you enable DNS support within your VPC. Create an Internet gateway and attach it to the VPC, which will be used by the NAT gateways to access the internet.
    3. Create a route table with a default route to the Internet gateway. Create two NAT gateways, one per Availability Zone in your public subnets to provide additional resiliency across the Availability Zones. Together, the routing table, the NAT gateways, and the Internet gateway enable the HAProxy instances to access the internet.
    4. Create two private routing tables, one per Availability Zone. Create two private subnets, one per Availability Zone. The dual routing tables and subnets allow for a higher level of redundancy. Add each subnet to the routing table in the same Availability Zone. Add a default route in each routing table to the NAT gateway in the same Availability Zone. The Simple AD servers use subnets that you create.
    5. The LDAP service requires a DNS domain that resolves within your VPC and from your LDAP clients. If you do not have an existing DNS domain, follow the steps to create a private hosted zone and associate it with your VPC. To avoid encryption protocol errors, you must ensure that the DNS domain name is consistent across your Route 53 zone and in the SSL/TLS certificate (see Step 2 in the “Solution deployment” section).
  2. Make sure you have completed the Simple AD Prerequisites.
  3. We will use a self-signed certificate for ELB to perform SSL/TLS decryption. Alternatively, you can use a certificate issued by your preferred certificate authority or by AWS Certificate Manager (ACM).
    Note: To prevent unauthorized connections directly to your Simple AD servers, you can modify the Simple AD security group on port 389 to block traffic from locations outside of the Simple AD VPC. You can find the security group in the EC2 console by creating a search filter for your Simple AD directory ID. It is also important to allow the Simple AD servers to communicate with each other as shown on Simple AD Prerequisites.

Solution deployment

This solution includes five main parts:

  1. Create a Simple AD directory.
  2. Create a certificate.
  3. Create the ELB and HAProxy layers by using the supplied CloudFormation template.
  4. Create a Route 53 record.
  5. Test LDAPS access using an Amazon Linux client.

1. Create a Simple AD directory

With the prerequisites completed, you will create a Simple AD directory in your private VPC subnets:

  1. In the Directory Service console navigation pane, choose Directories and then choose Set up directory.
  2. Choose Simple AD.
    Screenshot of choosing "Simple AD"
  3. Provide the following information:
    • Directory DNS – The fully qualified domain name (FQDN) of the directory, such as corp.example.com. You will use the FQDN as part of the testing procedure.
    • NetBIOS name – The short name for the directory, such as CORP.
    • Administrator password – The password for the directory administrator. The directory creation process creates an administrator account with the user name Administrator and this password. Do not lose this password because it is nonrecoverable. You also need this password for testing LDAPS access in a later step.
    • Description – An optional description for the directory.
    • Directory Size – The size of the directory.
      Screenshot of the directory details to provide
  4. Provide the following information in the VPC Details section, and then choose Next Step:
    • VPC – Specify the VPC in which to install the directory.
    • Subnets – Choose two private subnets for the directory servers. The two subnets must be in different Availability Zones. Make a note of the VPC and subnet IDs for use as CloudFormation input parameters. In the following example, the Availability Zones are us-east-1a and us-east-1c.
      Screenshot of the VPC details to provide
  5. Review the directory information and make any necessary changes. When the information is correct, choose Create Simple AD.

It takes several minutes to create the directory. From the AWS Directory Service console, refresh the screen periodically and wait until the directory Status value changes to Active before continuing. Choose your Simple AD directory and note the two IP addresses in the DNS address section. You will enter them when you run the CloudFormation template later.

Note: Full administration of your Simple AD implementation is out of scope for this blog post. See the documentation to add users, groups, or instances to your directory. Also see the previous blog post, How to Manage Identities in Simple AD Directories.

2. Create a certificate

In the previous step, you created the Simple AD directory. Next, you will generate a self-signed SSL/TLS certificate using OpenSSL. You will use the certificate with ELB to secure the LDAPS endpoint. OpenSSL is a standard, open source library that supports a wide range of cryptographic functions, including the creation and signing of x509 certificates. You then import the certificate into ACM that is integrated with ELB.

  1. You must have a system with OpenSSL installed to complete this step. Run the command, openssl version, at the command line to see if you already have OpenSSL installed. If you do not have OpenSSL, you can install it on Amazon Linux by running the command, sudo yum install openssl. If you do not have access to an Amazon Linux instance, you can create one with SSH access enabled to proceed with this step.
    $ openssl version
    OpenSSL 1.0.1k-fips 8 Jan 2015

  2. Create a private key using the openssl genrsa command.
    $ openssl genrsa 2048 > privatekey.pem
    Generating RSA private key, 2048 bit long modulus
    ......................................................................................................................................................................+++
    ..........................+++
    e is 65537 (0x10001)

  3. Generate a certificate signing request (CSR) using the openssl req command. Provide the requested information for each field. The Common Name is the FQDN for your LDAPS endpoint (for example, ldap.corp.example.com). The Common Name must use the domain name you will later register in Route 53. You will encounter certificate errors if the names do not match.
    $ openssl req -new -key privatekey.pem -out server.csr
    You are about to be asked to enter information that will be incorporated into your certificate request.

  4. Use the openssl x509 command to sign the certificate. The following example uses the private key from the previous step (privatekey.pem) and the signing request (server.csr) to create a public certificate named server.crt that is valid for 365 days. This certificate must be updated within 365 days to avoid disruption of LDAPS functionality.
    $ openssl x509 -req -sha256 -days 365 -in server.csr -signkey privatekey.pem -out server.crt
    Signature ok
    subject=/C=XX/L=Default City/O=Default Company Ltd/CN=ldap.corp.example.com
    Getting Private key

  5. You should see three files: privatekey.pem, server.crt, and server.csr.
    $ ls
    privatekey.pem server.crt server.csr

    Restrict access to the private key.

    $ chmod 600 privatekey.pem

    Keep the private key and public certificate for later use. You can discard the signing request because you are using a self-signed certificate and not using a Certificate Authority. Always store the private key in a secure location and avoid adding it to your source code.

  6. In the ACM console, choose Import a certificate.
  7. Using your favorite Linux text editor, paste the contents of your server.crt file in the Certificate body box.
  8. Using your favorite Linux text editor, paste the contents of your privatekey.pem file in the Certificate private key box. For a self-signed certificate, you can leave the Certificate chain box blank.
  9. Choose Review and import. Confirm the information and choose Import.

3. Create the ELB and HAProxy layers by using the supplied CloudFormation template

Now that you have created your Simple AD directory and SSL/TLS certificate, you are ready to use the CloudFormation template to create the ELB and HAProxy layers.

  1. Load the supplied CloudFormation template to deploy an internal ELB and two HAProxy EC2 instances into a fixed Auto Scaling group. After you load the template, provide the following input parameters. Note: You can find the parameters relating to your Simple AD from the directory details page by choosing your Simple AD in the Directory Service console.
Input parameter – Input parameter description
HAProxyInstanceSize – The EC2 instance size for HAProxy servers. The default size is t2.micro and can scale up for large Simple AD environments.
MyKeyPair – The SSH key pair for EC2 instances. If you do not have an existing key pair, you must create one.
VPCId – The target VPC for this solution. Must be the VPC where you deployed Simple AD and is available in your Simple AD directory details page.
SubnetId1 – The Simple AD primary subnet. This information is available in your Simple AD directory details page.
SubnetId2 – The Simple AD secondary subnet. This information is available in your Simple AD directory details page.
MyTrustedNetwork – Trusted network Classless Inter-Domain Routing (CIDR) to allow connections to the LDAPS endpoint. For example, use the VPC CIDR to allow clients in the VPC to connect.
SimpleADPriIP – The primary Simple AD server IP. This information is available in your Simple AD directory details page.
SimpleADSecIP – The secondary Simple AD server IP. This information is available in your Simple AD directory details page.
LDAPSCertificateARN – The Amazon Resource Name (ARN) for the SSL certificate. This information is available in the ACM console.
  2. Enter the input parameters and choose Next.
  3. On the Options page, accept the defaults and choose Next.
  4. On the Review page, confirm the details and choose Create. The stack will be created in approximately 5 minutes.
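
You can also create the stack from the AWS CLI. The following is illustrative only: the stack name, template file name, and parameter values are placeholders that you must replace with your own values, and the parameter keys are the ones listed in the table above.

    # Hypothetical example; substitute your own template file, IDs, IP addresses, and certificate ARN.
    $ aws cloudformation create-stack \
        --stack-name simple-ad-ldaps \
        --template-body file://ldaps-elb-haproxy.yaml \
        --parameters \
          ParameterKey=MyKeyPair,ParameterValue=my-key-pair \
          ParameterKey=VPCId,ParameterValue=vpc-0123456789abcdef0 \
          ParameterKey=SubnetId1,ParameterValue=subnet-11111111 \
          ParameterKey=SubnetId2,ParameterValue=subnet-22222222 \
          ParameterKey=MyTrustedNetwork,ParameterValue=10.0.0.0/16 \
          ParameterKey=SimpleADPriIP,ParameterValue=10.0.1.10 \
          ParameterKey=SimpleADSecIP,ParameterValue=10.0.2.10 \
          ParameterKey=LDAPSCertificateARN,ParameterValue=arn:aws:acm:us-east-1:111122223333:certificate/example-certificate-id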

4. Create a Route 53 record

The next step is to create a Route 53 record in your private hosted zone so that clients can resolve your LDAPS endpoint.

  1. If you do not have an existing DNS domain for use with LDAP, create a private hosted zone and associate it with your VPC. The hosted zone name should be consistent with your Simple AD (for example, corp.example.com).
  2. When the CloudFormation stack is in CREATE_COMPLETE status, locate the value of the LDAPSURL on the Outputs tab of the stack. Copy this value for use in the next step.
  3. On the Route 53 console, choose Hosted Zones and then choose the zone you used for the Common Name box for your self-signed certificate. Choose Create Record Set and enter the following information:
    1. Name – The label of the record (such as ldap).
    2. Type – Leave as A – IPv4 address.
    3. Alias – Choose Yes.
    4. Alias Target – Paste the value of the LDAPSURL on the Outputs tab of the stack.
  4. Leave the defaults for Routing Policy and Evaluate Target Health, and choose Create.
    Screenshot of finishing the creation of the Route 53 record
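
If you prefer the AWS CLI, you can create the same alias record with change-resource-record-sets. This is a sketch with placeholder values: the AliasTarget hosted zone ID is the load balancer's canonical hosted zone ID (shown by aws elb describe-load-balancers), the DNSName is the LDAPSURL value from the stack Outputs tab, and the --hosted-zone-id is the ID of your private hosted zone. Here is an example of the contents of a change batch file named ldaps-alias.json.

    {
      "Changes": [
        {
          "Action": "CREATE",
          "ResourceRecordSet": {
            "Name": "ldap.corp.example.com",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "Z00000000EXAMPLE",
              "DNSName": "internal-ldaps-elb-example.us-east-1.elb.amazonaws.com",
              "EvaluateTargetHealth": false
            }
          }
        }
      ]
    }

    # Apply the change batch to your private hosted zone (placeholder zone ID).
    $ aws route53 change-resource-record-sets \
        --hosted-zone-id Z11111111EXAMPLE \
        --change-batch file://ldaps-alias.json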

5. Test LDAPS access using an Amazon Linux client

At this point, you have configured your LDAPS endpoint and now you can test it from an Amazon Linux client.

  1. Create an Amazon Linux instance with SSH access enabled to test the solution. Launch the instance into one of the public subnets in your VPC. Make sure the IP assigned to the instance is in the trusted IP range you specified in the CloudFormation parameter MyTrustedNetwork in Step 3.b.
  2. SSH into the instance and complete the following steps to verify access.
    1. Install the openldap-clients package and any required dependencies:
      sudo yum install -y openldap-clients
    2. Add the server.crt file to the /etc/openldap/certs/ directory so that the LDAPS client will trust your SSL/TLS certificate. You can copy the file using Secure Copy (SCP) or create it using a text editor.
    3. Edit the /etc/openldap/ldap.conf file and define the environment variables BASE, URI, and TLS_CACERT.
      • The value for BASE should match the configuration of the Simple AD directory name.
      • The value for URI should match your DNS alias.
      • The value for TLS_CACERT is the path to your public certificate.

Here is an example of the contents of the file.

BASE dc=corp,dc=example,dc=com
URI ldaps://ldap.corp.example.com
TLS_CACERT /etc/openldap/certs/server.crt

To test the solution, query the directory through the LDAPS endpoint, as shown in the following command. Replace corp.example.com with your domain name and use the Administrator password that you configured with the Simple AD directory.

$ ldapsearch -D "administrator@corp.example.com" -W sAMAccountName=Administrator

You should see a response similar to the following response, which provides the directory information in LDAP Data Interchange Format (LDIF) for the administrator distinguished name (DN) from your Simple AD LDAP server.

# extended LDIF
#
# LDAPv3
# base <dc=corp,dc=example,dc=com> (default) with scope subtree
# filter: sAMAccountName=Administrator
# requesting: ALL
#

# Administrator, Users, corp.example.com
dn: CN=Administrator,CN=Users,DC=corp,DC=example,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
description: Built-in account for administering the computer/domain
instanceType: 4
whenCreated: 20170721123204.0Z
uSNCreated: 3223
name: Administrator
objectGUID:: l3h0HIiKO0a/ShL4yVK/vw==
userAccountControl: 512
…

You can now use the LDAPS endpoint for directory operations and authentication within your environment. If you would like to learn more about how to interact with your LDAPS endpoint within a Linux environment, here are a few resources to get started:

Troubleshooting

If you receive an error such as the following error when issuing the ldapsearch command, there are a few things you can do to help identify issues.

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
  • You might be able to obtain additional error details by adding the -d1 debug flag to the ldapsearch command in the previous section.
    $ ldapsearch -D "administrator@corp.example.com" -W sAMAccountName=Administrator -d1

  • Verify that the parameters in ldap.conf match your configured LDAPS URI endpoint and that all parameters can be resolved by DNS. You can use the following dig command, substituting your configured endpoint DNS name.
    $ dig ldap.corp.example.com

  • Confirm that the client instance from which you are connecting is in the CIDR range of the CloudFormation parameter, MyTrustedNetwork.
  • Confirm that the path to your public SSL/TLS certificate configured in ldap.conf as TLS_CACERT is correct. You configured this in Step 5.b.3. You can check your SSL/TLS connection with the following command, substituting your configured endpoint DNS name for the string after -connect.
    $ echo -n | openssl s_client -connect ldap.corp.example.com:636

  • Verify that your HAProxy instances have the status InService in the EC2 console: Choose Load Balancers under Load Balancing in the navigation pane, highlight your LDAPS load balancer, and then choose the Instances tab.
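  • Remember that the self-signed certificate you created in step 2 is valid for 365 days. To confirm which certificate the endpoint is currently serving and when it expires, you can use openssl, substituting your configured endpoint DNS name.
    $ echo -n | openssl s_client -connect ldap.corp.example.com:636 2>/dev/null | openssl x509 -noout -subject -dates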

Conclusion

You can use ELB and HAProxy to provide an LDAPS endpoint for Simple AD and securely transport sensitive authentication information over untrusted networks. You can explore using LDAPS to authenticate SSH users or integrate with other software solutions that support LDAP authentication. This solution’s CloudFormation template is available on GitHub.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Directory Service forum.

– Cameron and Jeff

How to Monitor Host-Based Intrusion Detection System Alerts on Amazon EC2 Instances

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-monitor-host-based-intrusion-detection-system-alerts-on-amazon-ec2-instances/

To help you secure your AWS resources, we recommend that you adopt a layered approach that includes the use of preventative and detective controls. For example, incorporating host-based controls for your Amazon EC2 instances can restrict access and provide appropriate levels of visibility into system behaviors and access patterns. These controls often include a host-based intrusion detection system (HIDS) that monitors and analyzes network traffic, log files, and file access on a host. A HIDS typically integrates with alerting and automated remediation solutions to detect and address attacks, unauthorized or suspicious activities, and general errors in your environment.

In this blog post, I show how you can use Amazon CloudWatch Logs to collect and aggregate alerts from an open-source security (OSSEC) HIDS. I use a CloudWatch Logs subscription to deliver the alerts to Amazon Elasticsearch Service (Amazon ES) for analysis and visualization with Kibana – a popular open-source visualization tool. To make it easier for you to see this solution in action, I provide a CloudFormation template to handle most of the deployment work. You can use this solution to gain improved visibility and insights across your EC2 fleet and help drive security remediation activities. For example, if specific hosts are scanning your EC2 instances and triggering OSSEC alerts, you can implement a VPC network access control list (ACL) or AWS WAF rule to block those source IP addresses or CIDR blocks.

Solution overview

The following diagram depicts a high-level overview of this post’s solution.

Diagram showing a high-level overview of this post's solution

Here is how the solution works:

  1. On the target EC2 instances, the OSSEC HIDS generates alerts that the CloudWatch Logs agent captures. The HIDS performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, real-time alerting, and active response. For more information, see Getting started with OSSEC.
  2. The CloudWatch Logs group receives the alerts as events.
  3. A CloudWatch Logs subscription is applied to the target log group to forward the events through AWS Lambda to Amazon ES.
  4. Amazon ES loads the logged alert data.
  5. Kibana visualizes the alerts in near-real time. Amazon ES provides a default installation of Kibana with every Amazon ES domain.
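
For reference, shipping the OSSEC alerts to CloudWatch Logs is a matter of pointing the CloudWatch Logs agent at OSSEC's JSON alert log. The CloudFormation template in this post configures this for you on the test instances; the fragment below is only a sketch of what such an agent configuration section might look like, assuming OSSEC's JSON alert output is enabled and using a placeholder log group name.

    [ossec-alerts]
    file = /var/ossec/logs/alerts/alerts.json
    log_group_name = <your HIDS CloudWatch Logs group>
    log_stream_name = {instance_id}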

Deployment considerations

For the purposes of this post, the primary OSSEC HIDS deployment consists of a Linux-based installation for which the alerts are generated locally within each system. Note that this solution depends on Amazon ES and Lambda in the target region for deployment. You can find the latest information about AWS service availability in the Region table. You also must identify an Amazon Virtual Private Cloud (VPC) subnet that has Internet access and DNS resolution for your EC2 instances to provision the required components properly.

To simplify the deployment process, I created a test environment AWS CloudFormation template. You can use this template to provision a test environment stack automatically into an existing Amazon VPC subnet. You will use CloudFormation to provision the core components of this solution and then configure Kibana for alert analysis. The source code for this solution is available on GitHub.

This post’s template performs the following high-level steps in the region you choose:

  1. Creates two EC2 instances running Amazon Linux with an AWS Identity and Access Management (IAM) role for CloudWatch Logs access. Note: To provide sample HIDS alert data, the two EC2 instances are configured automatically to generate simulated HIDS alerts locally.
  2. Installs and configures OSSEC, the CloudWatch Logs agent, and additional packages used for the test environment.
  3. Creates the target HIDS Amazon ES domain.
  4. Creates the target HIDS CloudWatch Logs group.
  5. Creates the Lambda function and CloudWatch Logs subscription to send HIDS alerts to Amazon ES.

After the CloudFormation stack has been deployed, you can access the Kibana instance on the Amazon ES domain to complete the final steps of the setup for the test environment, which I show later in the post.

Although out of scope for this blog post, when deploying OSSEC into your existing EC2 environment, you should determine the desired configuration, including target log files for monitoring, directories for integrity checking, and active response. This typically also requires time for testing and tuning of the system to optimize it for your environment. The OSSEC documentation is a good place to start to familiarize yourself with this process. You could take another approach to OSSEC deployment, which involves an agent installation and a separate OSSEC manager to process events centrally before exporting them to CloudWatch Logs. This deployment requires an additional server component and network communication between the agent and the manager. Note that although Windows Server is supported by OSSEC, it requires an agent-based installation and therefore requires an OSSEC manager to be present. Review OSSEC Architecture for additional information about OSSEC architecture and deployment options.
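
To make that tuning discussion concrete, the settings you would adjust live in ossec.conf. The fragment below is purely illustrative and is not the configuration used by this post's template: it shows a syscheck (integrity checking) directory list and one additional monitored log file, and the paths are assumptions you would replace with your own.

    <ossec_config>
      <syscheck>
        <!-- Directories to monitor for file integrity changes -->
        <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
      </syscheck>
      <localfile>
        <!-- An additional log file for OSSEC to analyze -->
        <log_format>syslog</log_format>
        <location>/var/log/secure</location>
      </localfile>
    </ossec_config>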

Deploy the solution

This solution’s high-level steps are:

  1. Launch the CloudFormation stack.
  2. Configure a Kibana index pattern and begin exploring alerts.
  3. Configure a Kibana HIDS dashboard and visualize alerts.

1. Launch the CloudFormation stack

You will launch your test environment by using a CloudFormation template that automates the provisioning process. For the following input parameters, you must identify a target VPC and subnet (which requires Internet access) for deployment. If the target subnet uses an Internet gateway, set the AssignPublicIP parameter to true. If the target subnet uses a NAT gateway, you can leave the default setting of AssignPublicIP as false.

First, you will need to stage the Lambda function deployment package in an S3 bucket located in the region into which you are deploying. To do this, download the zipped deployment package and upload it to your in-region bucket. For additional information about uploading objects to S3, see Uploading Object into Amazon S3.
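
For example, staging the package with the AWS CLI might look like the following. The bucket and file names are placeholders, and the bucket must be in the region where you will launch the stack.

    # Create an in-region bucket if you need one, then upload the zipped deployment package.
    $ aws s3 mb s3://my-hids-deployment-bucket --region us-east-1
    $ aws s3 cp hids-lambda-consumer.zip s3://my-hids-deployment-bucket/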

You also must provide a trusted source IP address or CIDR block for access to the environment following the creation of the stack and an EC2 key pair to associate with the instances. For information about creating an EC2 key pair, see Creating a Key Pair Using Amazon EC2. Note that the trusted IP address or CIDR block also is used to create the Amazon ES access policy automatically for Kibana access. We recommend that you use a specific IP address or CIDR range rather than using 0.0.0.0/0, which would allow all IPv4 addresses to access your instances. For more information about authorizing inbound traffic to your instances, see Authorizing Inbound Traffic for Your Linux Instances.

After you have confirmed the input parameters (see the following screenshot and table for more details), create the CloudFormation stack.

Numbered screenshot showing input parameters

Input parameter – Description
1. HIDSInstanceSize – EC2 instance size for the test server
2. ESInstanceSize – Amazon ES instance size
3. MyKeyPair – A public/private key pair that allows you to connect securely to your instance after it launches
4. MyS3Bucket – In-region S3 bucket with the zipped deployment package
5. MyS3Key – In-region S3 key for the zipped deployment package
6. VPCId – An Amazon VPC into which to deploy the solution
7. SubnetId – A subnet with outbound connectivity within the VPC you selected (requires Internet access)
8. AssignPublicIP – Set to true if your subnet is configured to connect through an Internet gateway; set to false if your subnet is configured to connect through a NAT gateway
9. MyTrustedNetwork – Your trusted source IP or CIDR block that is used to whitelist access to the EC2 instances and the Amazon ES endpoint

To finish creating the CloudFormation stack:

  1. Enter the input parameters and choose Next.
  2. On the Options page, accept the defaults and choose Next.
  3. On the Review page, confirm the details, select the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. (The stack will be created in approximately 10 minutes.)

After the stack has been created, note the HIDSESKibanaURL on the CloudFormation Outputs tab. Then, proceed to the Kibana configuration instructions in the next section.
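
If you are working from the CLI rather than the console, you can read the same output value with describe-stacks; the stack name below is a placeholder.

    # Retrieve the Kibana URL output from the deployed stack.
    $ aws cloudformation describe-stacks \
        --stack-name hids-test-environment \
        --query "Stacks[0].Outputs[?OutputKey=='HIDSESKibanaURL'].OutputValue" \
        --output text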

2. Configure a Kibana index pattern and begin exploring alerts

In this section, you perform the initial setup of Kibana. To access Kibana, find the HIDSESKibanaURL in the CloudFormation stack outputs (see the previous section) and choose it. This will bring you to the Kibana instance, which is automatically provisioned to your Amazon ES instance. The source IP you provided in the CloudFormation input parameters is used to automatically populate the Amazon ES access policy. If you receive an error similar to the following error, you must confirm that your Amazon ES access policy is correct.

{"Message":"User: anonymous is not authorized to perform: es:ESHttpGet on resource: hids-alerts"}

For additional information about securing access to your Amazon ES domain, see How to Control Access to Your Amazon Elasticsearch Service Domain.
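
An IP-restricted Amazon ES access policy generally takes the following shape. This is a generic sketch rather than the exact policy generated by the template: the domain ARN and the CIDR block are placeholders you would replace with your own values.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "AWS": "*" },
          "Action": "es:*",
          "Resource": "arn:aws:es:us-east-1:111122223333:domain/hids-alerts/*",
          "Condition": {
            "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
          }
        }
      ]
    }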

The OSSEC HIDS alerts now are being processed into Amazon ES. To use Kibana to analyze the alert data interactively, you must configure an index pattern that identifies the data you wish to analyze in Amazon ES. You can read additional information about index patterns in the Kibana documentation.

In the Index name or pattern box, type cwl-2017.*. The index pattern is generated within the Lambda function as cwl-YYYY.MM.DD, so you can use a wildcard character for the month and day to match data from 2017. From the Time-field name drop-down list, choose @timestamp, and then choose Create.

Screenshot of the "Configure an index pattern" screen

In Kibana, you should now be able to choose the Discover pane and see alerts being populated. To display near-real-time alerts, first choose your desired time range in the top right (such as Last 15 minutes), and then set the refresh rate.

Screenshot of setting the refresh rate of near-real-time alerts

Choose Auto-refresh, and then choose an interval, such as 5 seconds.

Screenshot of auto-refresh of 5 seconds

Kibana should now be configured to auto-refresh at a 5-second interval within the timeframe you configured. You should now see your alerts updating along with a count graph, as shown in the following screenshot.

Screenshot of the alerts updating with a count graph

The EC2 instances are automatically configured by CloudFormation to simulate activity to display several types of alerts, including:

  • Successful sudo to ROOT executed – The Linux sudo command was successfully executed.
  • Web server 400 error code – The server cannot process the request due to an apparent client error (such as malformed request syntax, too large size, invalid request message framing, or deceptive request routing).
  • SSH insecure connection attempt (scan) – Invalid connection attempt to the SSH listener.
  • Login session opened – Opened login session on the system.
  • Login session closed – Closed login session on the system.
  • New Yum package installed – Package installed on the system.
  • Yum package deleted – Package deleted from the system.

Let’s take a closer look at some of the alert fields, as shown in the following screenshot.

Screenshot highlighting some of the alert fields

The numbered alert fields in the preceding screenshot are defined as follows:

  1. @log_group – The source CloudWatch Logs group
  2. @log_stream – The CloudWatch Logs stream name (InstanceID)
  3. @message – The JSON payload from the source alerts.json OSSEC log
  4. @owner – The AWS account ID where the alert originated
  5. @timestamp – The time stamp applied by the consumer Lambda function
  6. full_log – The log event from the source file
  7. location – The source log file path and file name
  8. rule.comment – A brief description of the OSSEC rule that was matched
  9. rule.level – The OSSEC rule classification from 0 to 16 (see Rules Classification for more information)
  10. rule.sidid – The rule ID of the OSSEC rule that was matched
  11. srcip – The source IP address that triggered the alert; in this case, the simulated alerts contain the local IP of the server

You can enter search criteria in the Kibana query bar to explore HIDS alert data interactively. For example, you can run the following query to see all the rule.level 6 alerts for the EC2 InstanceID i-0e427a8594852eca2 where the source IP is 10.10.10.10.

rule.level: 6 AND @log_stream: "i-0e427a8594852eca2" AND srcip: 10.10.10.10

You can perform searches including simple text, Lucene query syntax, or use the full JSON-based Elasticsearch Query DSL. You can find additional information on searching your data in the Elasticsearch documentation.
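
As a sketch of what the Query DSL form of the earlier search might look like, you could send a request such as the following directly to your domain endpoint (a placeholder here). The request must come from an IP address allowed by your Amazon ES access policy, and the exact matching behavior depends on how the index mappings were created.

    $ curl -s -XGET "https://<your-amazon-es-endpoint>/cwl-2017.*/_search?pretty" \
        -H "Content-Type: application/json" \
        -d '{
          "query": {
            "bool": {
              "must": [
                { "match": { "rule.level": 6 } },
                { "match": { "@log_stream": "i-0e427a8594852eca2" } },
                { "match": { "srcip": "10.10.10.10" } }
              ]
            }
          }
        }'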

3. Configure a Kibana HIDS dashboard and visualize alerts

To analyze alert trends and patterns over time, it can be helpful to use charts and graphs to represent the alert data. I have configured a basic dashboard template that you can import into your Kibana instance.

To add the template of a sample HIDS dashboard to your Kibana instance:

  1. Save the template locally and then choose Management in the Kibana navigation pane.
  2. Choose Saved Objects, Import, and the HIDS dashboard template.
  3. Choose the eye icon to the right of the HIDS Alerts dashboard entry. This will take you to the imported dashboard.
    Screenshot of the "Edit Saved Objects" screen

After importing the Kibana dashboard template and selecting it, you will see the HIDS dashboard, as shown in the following screenshot. This sample HIDS dashboard includes Alerts Over Time, Top 20 Alert Types, Rule Level Breakdown, Top 10 Rule Source ID, and Top 10 Source IPs.

Screenshot of the HIDS dashboard

To explore the alert data in more detail, you can choose an alert type on which to filter, as shown in the following two screenshots.

Alert showing SSH insecure connection attempts

Alert showing @timestamp per 30 seconds

You can see more details about the alerts based on criteria such as source IP address or time range. For more information about using Kibana to visualize alert data, see the Kibana User Guide.

Summary

In this blog post, I showed how to use CloudWatch Logs to collect alerts in near-real time from an OSSEC HIDS and use a CloudWatch Logs subscription to pass the alerts into Amazon ES for analysis and visualization with Kibana. The dashboard deployed by this solution can help you improve the security monitoring of your EC2 fleet as part of a defense-in-depth security strategy in your AWS environment.

You can use this solution to help detect attacks, anomalous activities, and error trends across your EC2 fleet. You can also use it to help prioritize remediation efforts for your systems or help determine where to introduce additional security controls such as VPC security group rules, VPC network ACLs, or AWS WAF rules.

If you have comments about this post, add them to the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the CloudWatch or Amazon ES forum. The source code for this solution is available on GitHub. If you need OSSEC-specific support, see OSSEC Support Options.

– Cameron