Tag Archives: AWS Web Application Firewall

Using AWS security services to protect against, detect, and respond to the Log4j vulnerability

Post Syndicated from Marshall Jones original https://aws.amazon.com/blogs/security/using-aws-security-services-to-protect-against-detect-and-respond-to-the-log4j-vulnerability/

January 7, 2022: The blog post has been updated to include using Network ACL rules to block potential log4j-related outbound traffic.

January 4, 2022: The blog post has been updated to suggest using WAF rules when the correct HTTP Host Header FQDN value is not provided in the request.

December 31, 2021: We made a minor update to the second paragraph in the Amazon Route 53 Resolver DNS Firewall section.

December 29, 2021: A paragraph under the Detect section has been added to provide guidance on validating if log4j exists in an environment.

December 23, 2021: The GuardDuty section has been updated to describe new threat labels added to specific findings to give log4j context.

December 21, 2021: The post includes more info about Route 53 Resolver DNS query logging.

December 20, 2021: The post has been updated to include Amazon Route 53 Resolver DNS Firewall info.

December 17, 2021: The post has been updated to include using Athena to query VPC flow logs.

December 16, 2021: The Respond section of the post has been updated to include IMDSv2 and container mitigation info.

This blog post was first published on December 15, 2021.


Overview

In this post, we provide guidance to help customers who are responding to the recently disclosed log4j vulnerability. This covers what you can do to limit the risk of the vulnerability, how you can try to identify whether you are susceptible to the issue, and what you can do to update your infrastructure with the appropriate patches.

The log4j vulnerability (CVE-2021-44228, CVE-2021-45046) is a critical vulnerability (CVSS 3.1 base score of 10.0) in the ubiquitous logging platform Apache Log4j. This vulnerability allows an attacker to perform a remote code execution on the vulnerable platform. Version 2 of log4j, between versions 2.0-beta-9 and 2.15.0, is affected.

The vulnerability abuses the Java Naming and Directory Interface (JNDI), which a Java program uses to look up data, typically through a directory service; in the case of this vulnerability, that is commonly an LDAP directory.

Figure 1, below, highlights the log4j JNDI attack flow.

Figure 1. Log4j attack progression. Source: GovCERT.ch, the Computer Emergency Response Team (GovCERT) of the Swiss government

As an immediate response, follow this blog and use the tool designed to hotpatch a running JVM that uses any version of log4j 2.0 or later. Steve Schmidt, Chief Information Security Officer for AWS, also discussed this hotpatch.

Protect

You can use multiple AWS services to help limit your risk and exposure from the log4j vulnerability. You can build a layered control approach, or pick and choose the controls identified below, to help limit your exposure.

AWS WAF

Use AWS WAF, with AWS Managed Rules for AWS WAF, to help protect your Amazon CloudFront distribution, Amazon API Gateway REST API, Application Load Balancer, or AWS AppSync GraphQL API resources.

  • AWSManagedRulesKnownBadInputsRuleSet, especially the Log4JRCE rule, which helps inspect the request for the presence of the Log4j vulnerability. Example patterns include ${jndi:ldap://example.com/}.
  • AWSManagedRulesAnonymousIpList, especially the AnonymousIPList rule, which helps inspect for IP addresses of sources known to anonymize client information.
  • AWSManagedRulesCommonRuleSet, especially the SizeRestrictions_BODY rule, to verify that the request body size is at most 8 KB (8,192 bytes).
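
The following is a minimal boto3 sketch of attaching these three managed rule groups to a new regional web ACL. The web ACL name and the choice to deploy the rules in count mode first are illustrative assumptions; change the scope to CLOUDFRONT (and the region to us-east-1) for a CloudFront distribution.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

def managed_rule(name, priority, count_only=True):
    """Build a rule entry that references an AWS managed rule group."""
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}
        },
        # Count first to evaluate impact; switch to {"None": {}} to enforce the group's actions.
        "OverrideAction": {"Count": {}} if count_only else {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

response = wafv2.create_web_acl(
    Name="log4j-protection-web-acl",   # hypothetical name
    Scope="REGIONAL",                  # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[
        managed_rule("AWSManagedRulesKnownBadInputsRuleSet", 0),
        managed_rule("AWSManagedRulesAnonymousIpList", 1),
        managed_rule("AWSManagedRulesCommonRuleSet", 2),
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "log4j-protection-web-acl",
    },
)
print(response["Summary"]["ARN"])
```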

You should also consider implementing WAF rules that deny access if the correct HTTP Host header FQDN value is not provided in the request. This can help reduce the likelihood that scanners sweeping the internet IP address space reach your WAF-protected resources with a request that has an incorrect Host header, such as an IP address instead of an FQDN. It’s also possible to use custom Application Load Balancer listener rules to achieve this.
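
As a sketch of the Host header check described above (assuming an FQDN of www.example.com), a rule like the following could be appended to the web ACL's Rules list; the rule name and priority are illustrative.

```python
# Hypothetical rule: block requests whose Host header does not exactly match the expected FQDN.
host_header_rule = {
    "Name": "BlockIncorrectHostHeader",
    "Priority": 3,
    "Action": {"Block": {}},
    "Statement": {
        "NotStatement": {
            "Statement": {
                "ByteMatchStatement": {
                    "SearchString": b"www.example.com",  # replace with your FQDN
                    "FieldToMatch": {"SingleHeader": {"Name": "host"}},
                    "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                    "PositionalConstraint": "EXACTLY",
                }
            }
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockIncorrectHostHeader",
    },
}
```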

If you’re using AWS WAF Classic, you will need to migrate to AWS WAF or create custom regex match conditions.

Have multiple accounts? Follow these instructions to use AWS Firewall Manager to deploy AWS WAF rules centrally across your AWS organization.

Amazon Route 53 Resolver DNS Firewall

You can use Route 53 Resolver DNS Firewall, following AWS Managed Domain Lists, to help proactively protect resources with outbound public DNS resolution. We recommend associating Route 53 Resolver DNS Firewall with a rule configured to block domains on the AWSManagedDomainsMalwareDomainList, which has been updated in all supported AWS regions with domains identified as hosting malware used in conjunction with the log4j vulnerability. AWS will continue to deliver domain updates for Route 53 Resolver DNS Firewall through this list.
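
Below is a minimal boto3 sketch of creating a DNS Firewall rule group that blocks the AWSManagedDomainsMalwareDomainList and associating it with a VPC. The rule group name, rule name, priorities, and VPC ID are illustrative assumptions.

```python
import uuid
import boto3

r53 = boto3.client("route53resolver", region_name="us-east-1")

# Look up the ID of the AWS managed malware domain list.
domain_lists = r53.list_firewall_domain_lists()["FirewallDomainLists"]
malware_list_id = next(
    d["Id"] for d in domain_lists if d["Name"] == "AWSManagedDomainsMalwareDomainList"
)

# Create a rule group with a single BLOCK rule for that list.
rule_group = r53.create_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    Name="log4j-dns-firewall-rule-group",   # hypothetical name
)["FirewallRuleGroup"]

r53.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["Id"],
    FirewallDomainListId=malware_list_id,
    Priority=100,
    Action="BLOCK",
    BlockResponse="NODATA",
    Name="block-malware-domains",
)

# Associate the rule group with the VPC whose outbound DNS you want to filter.
r53.associate_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["Id"],
    VpcId="vpc-0123456789abcdef0",          # hypothetical VPC ID
    Priority=101,
    Name="log4j-dns-firewall-association",
)
```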

Also, you should consider blocking outbound port 53 to prevent the use of external untrusted DNS servers. This helps force all DNS queries through DNS Firewall and ensures DNS traffic is visible for GuardDuty inspection. Using DNS Firewall to block DNS resolution of certain country code top-level domains (ccTLDs) that your VPC resources have no legitimate reason to connect to may also help. Examples of ccTLDs you may want to block may be included in the known log4j callback domain IOCs.

We also recommend that you enable DNS query logging, which allows you to identify and audit potentially impacted resources within your VPC, by inspecting the DNS logs for the presence of blocked outbound queries due to the log4j vulnerability, or to other known malicious destinations. DNS query logging is also useful in helping identify EC2 instances vulnerable to log4j that are responding to active log4j scans, which may be originating from malicious actors or from legitimate security researchers. In either case, instances responding to these scans potentially have the log4j vulnerability and should be addressed. GreyNoise is monitoring for log4j scans and sharing the callback domains here. Some notable domains customers may want to examine log activity for, but not necessarily block, are: *interact.sh, *leakix.net, *canarytokens.com, *dnslog.cn, *.dnsbin.net, and *cyberwar.nl. It is very likely that instances resolving these domains are vulnerable to log4j.

AWS Network Firewall

Customers can use Suricata-compatible IDS/IPS rules in AWS Network Firewall to deploy network-based detection and protection. While Suricata doesn’t have a protocol detector for LDAP, it is possible to detect these LDAP calls with Suricata. Open-source Suricata rules addressing Log4j are available from Corelight, NCC Group, ET Labs, and CrowdStrike. These rules can help identify scanning, as well as post-exploitation of the log4j vulnerability. Because there is a large amount of benign scanning happening now, we recommend customers focus their time first on potential post-exploitation activities, such as outbound LDAP traffic from their VPC to untrusted internet destinations.

We also recommend customers consider implementing outbound port/protocol enforcement rules that monitor or prevent protocols like LDAP from using non-standard ports such as 53, 80, 123, and 443. Monitoring or preventing outbound usage of port 1389 may be particularly helpful in identifying systems that have been triggered by internet scanners to make command and control calls outbound. We also recommend that systems without a legitimate business need to initiate network calls out to the internet not be given that ability by default. Outbound network traffic filtering and monitoring is not only very helpful with log4j, but with identifying other classes of vulnerabilities too.
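
The following is a minimal boto3 sketch of a Suricata-compatible stateful rule group that alerts on outbound traffic to a few of the ports mentioned above. The rule content, group name, and capacity are illustrative assumptions, not the published community rule sets referenced above.

```python
import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# Illustrative Suricata rules: alert on outbound TCP to ports commonly seen in log4j callbacks.
suricata_rules = """
alert tcp $HOME_NET any -> $EXTERNAL_NET 1389 (msg:"Possible log4j LDAP callback on 1389"; sid:1000001; rev:1;)
alert tcp $HOME_NET any -> $EXTERNAL_NET [1388,1234,12344,9999,8085,1343] (msg:"Outbound TCP to log4j-associated port"; sid:1000002; rev:1;)
"""

nfw.create_rule_group(
    RuleGroupName="log4j-outbound-alerts",   # hypothetical name
    Type="STATEFUL",
    Capacity=50,
    RuleGroup={"RulesSource": {"RulesString": suricata_rules}},
    Description="Alert on outbound traffic to ports associated with log4j callbacks",
)
```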

Network Access Control Lists

Customers may be able to use network access control list (NACL) rules to block some of the known log4j-related outbound ports to help limit further compromise of successfully exploited systems. We recommend customers consider blocking ports 1389, 1388, 1234, 12344, 9999, 8085, and 1343 outbound. Because NACLs block traffic at the subnet level, careful consideration should be given to ensure any new rules do not block legitimate communications using these outbound ports across internal subnets. Blocking ports 389 and 88 outbound can also be helpful in mitigating log4j, but those ports are commonly used by legitimate applications, especially in a Windows Active Directory environment. See the VPC flow logs section below for details on how you can validate any ports being considered.
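
A minimal boto3 sketch of adding outbound deny entries to an existing network ACL for the ports listed above follows. The NACL ID and rule numbers are illustrative; confirm the rule numbers don't collide with existing entries, and validate the ports against your VPC flow logs before deploying, as described below.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

NACL_ID = "acl-0123456789abcdef0"   # hypothetical network ACL ID
PORTS = [1389, 1388, 1234, 12344, 9999, 8085, 1343]

for i, port in enumerate(PORTS):
    ec2.create_network_acl_entry(
        NetworkAclId=NACL_ID,
        RuleNumber=50 + i,           # pick numbers lower than, and not colliding with, existing allow rules
        Protocol="6",                # TCP
        RuleAction="deny",
        Egress=True,                 # outbound rule
        CidrBlock="0.0.0.0/0",
        PortRange={"From": port, "To": port},
    )
```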

Use IMDSv2

Through the early days of the log4j vulnerability, researchers have noted that, once a host has been compromised with the initial JNDI vulnerability, attackers sometimes try to harvest credentials from the host and send those out via some mechanism such as LDAP, HTTP, or DNS lookups. We recommend customers use IAM roles instead of long-term access keys, and not store sensitive information such as credentials in environment variables. Customers can also leverage AWS Secrets Manager to store and automatically rotate database credentials instead of storing long-term database credentials in a host’s environment variables. See prescriptive guidance here and here on how to implement Secrets Manager in your environment.

To help guard against such attacks in AWS when EC2 Roles may be in use — and to help keep all IMDS data private for that matter — customers should consider requiring the use of Instance MetaData Service version 2 (IMDSv2). Since IMDSv2 is enabled by default, you can require its use by disabling IMDSv1 (which is also enabled by default). With IMDSv2, requests are protected by an initial interaction in which the calling process must first obtain a session token with an HTTP PUT, and subsequent requests must contain the token in an HTTP header. This makes it much more difficult for attackers to harvest credentials or any other data from the IMDS. For more information about using IMDSv2, please refer to this blog and documentation. While all recent AWS SDKs and tools support IMDSv2, as with any potentially non-backwards compatible change, test this change on representative systems before deploying it broadly.
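
A minimal boto3 sketch of requiring IMDSv2 on an existing instance by setting HttpTokens to required (which disables IMDSv1) follows. The instance ID is illustrative; as noted above, test on representative systems first.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    HttpTokens="required",              # require session tokens (IMDSv2 only)
    HttpEndpoint="enabled",             # keep the metadata service reachable
)
```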

Detect

This post has covered how to potentially limit the ability to exploit this vulnerability. Next, we’ll shift our focus to which AWS services can help to detect whether this vulnerability exists in your environment.

Figure 2. Log4j finding in the Inspector console

Amazon Inspector

As shown in Figure 2, the Amazon Inspector team has created coverage for identifying the existence of this vulnerability in your Amazon EC2 instances and Amazon Elastic Container Registry (Amazon ECR) images. With the new Amazon Inspector, scanning is automated and continual. Continual scanning is driven by events such as new software packages, new instances, and new common vulnerabilities and exposures (CVEs) being published.

For example, once the Inspector team added support for the log4j vulnerability (CVE-2021-44228 and CVE-2021-45046), Inspector immediately began looking for this vulnerability on all supported AWS Systems Manager managed instances where Log4j was installed via OS package managers, and in Maven-compatible Amazon ECR container images where the package was present. If this vulnerability is present, findings will begin appearing without any manual action. If you are using Inspector Classic, you will need to ensure you are running an assessment against all of your Amazon EC2 instances. You can follow this documentation to ensure you are creating an assessment target for all of your Amazon EC2 instances. Here are further details on container scanning updates in Amazon ECR private registries.

GuardDuty

In addition to finding the presence of this vulnerability through Inspector, the Amazon GuardDuty team has also begun adding indicators of compromise associated with exploiting the Log4j vulnerability, and will continue to do so. GuardDuty will monitor for attempts to reach known-bad IP addresses or DNS entries, and can also find post-exploit activity through anomaly-based behavioral findings. For example, if an Amazon EC2 instance starts communicating on unusual ports, GuardDuty would detect this activity and create the finding Behavior:EC2/NetworkPortUnusual. This activity is not limited to the NetworkPortUnusual finding, though. GuardDuty has a number of different findings associated with post exploit activity, such as credential compromise, that might be seen in response to a compromised AWS resource. For a list of GuardDuty findings, please refer to this GuardDuty documentation.

To further help you identify and prioritize issues related to CVE-2021-44228 and CVE-2021-45046, the GuardDuty team has added threat labels to the finding detail for the following finding types:

Backdoor:EC2/C&CActivity.B
If the IP queried is Log4j-related, then fields of the associated finding will include the following values:

  • service.additionalInfo.threatListName = Amazon
  • service.additionalInfo.threatName = Log4j Related

Backdoor:EC2/C&CActivity.B!DNS
If the domain name queried is Log4j-related, then the fields of the associated finding will include the following values:

  • service.additionalInfo.threatListName = Amazon
  • service.additionalInfo.threatName = Log4j Related

Behavior:EC2/NetworkPortUnusual
If the EC2 instance communicated on port 389 or port 1389, then the associated finding severity will be modified to High, and the finding fields will include the following value:

  • service.additionalInfo.context = Possible Log4j callback

Figure 3. GuardDuty finding with log4j threat labels

Security Hub

Many customers today also use AWS Security Hub with Inspector and GuardDuty to aggregate alerts and enable automatic remediation and response. In the short term, we recommend that you use Security Hub to set up alerting through AWS Chatbot, Amazon Simple Notification Service, or a ticketing system for visibility when Inspector finds this vulnerability in your environment. In the long term, we recommend you use Security Hub to enable automatic remediation and response for security alerts when appropriate. Here are ideas on how to set up automatic remediation and response with Security Hub.

VPC flow logs

Customers can use Athena or CloudWatch Logs Insights queries against their VPC flow logs to help identify VPC resources associated with log4j post exploitation outbound network activity. Version 5 of VPC flow logs is particularly useful, because it includes the “flow-direction” field. We recommend customers start by paying special attention to outbound network calls using destination port 1389 since outbound usage of that port is less common in legitimate applications. Customers should also investigate outbound network calls using destination ports 1388, 1234, 12344, 9999, 8085, 1343, 389, and 88 to untrusted internet destination IP addresses. Free-tier IP reputation services, such as VirusTotal, GreyNoise, NOC.org, and ipinfo.io, can provide helpful insights related to public IP addresses found in the logged activity.

Note: If you have a Microsoft Active Directory environment in the captured VPC flow logs being queried, you might see false positives due to its use of port 389.
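
A minimal boto3/Athena sketch of the kind of query described above follows. The database, table, and column names assume a version 5 VPC flow log table already defined in Athena; adjust them to match your own table definition, and the S3 output location is illustrative.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Assumes a flow log table named "vpc_flow_logs" with version 5 fields such as flow_direction.
query = """
SELECT srcaddr, dstaddr, dstport, count(*) AS flow_count
FROM vpc_flow_logs
WHERE flow_direction = 'egress'
  AND dstport IN (1389, 1388, 1234, 12344, 9999, 8085, 1343)
GROUP BY srcaddr, dstaddr, dstport
ORDER BY flow_count DESC
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},                              # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},   # hypothetical bucket
)
```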

Validation with open-source tools

With the evolving nature of the different log4j vulnerabilities, it’s important to validate that upgrades, patches, and mitigations in your environment are indeed working to mitigate potential exploitation of the log4j vulnerability. You can use open-source tools, such as aws_public_ips, to get a list of all your current public IP addresses for an AWS Account, and then actively scan those IPs with log4j-scan using a DNS Canary Token to get notification of which systems still have the log4j vulnerability and can be exploited. We recommend that you run this scan periodically over the next few weeks to validate that any mitigations are still in place, and no new systems are vulnerable to the log4j issue.
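
If you want a quick, partial inventory of public IP addresses before running the aws_public_ips tool, the following boto3 sketch lists public IPv4 addresses attached to elastic network interfaces in one Region. It is a simplified illustration, not a replacement for the tool, which covers additional services.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

public_ips = set()
paginator = ec2.get_paginator("describe_network_interfaces")
for page in paginator.paginate():
    for eni in page["NetworkInterfaces"]:
        association = eni.get("Association", {})
        if "PublicIp" in association:
            public_ips.add(association["PublicIp"])

print("\n".join(sorted(public_ips)))
```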

Respond

The first two sections have discussed ways to help prevent potential exploitation attempts, and how to detect the presence of the vulnerability and potential exploitation attempts. In this section, we will focus on steps that you can take to mitigate this vulnerability. As we noted in the overview, the immediate response recommended is to follow this blog and use the tool designed to hotpatch a running JVM that uses any version of log4j 2.0 or later. Steve Schmidt, Chief Information Security Officer for AWS, also discussed this hotpatch.

Figure 4. Systems Manager Patch Manager patch baseline approving critical patches immediately

AWS Patch Manager

If you use AWS Systems Manager Patch Manager, and you have critical patches set to install immediately in your patch baseline, your EC2 instances will already have the patch. It is important to note that you’re not done at this point. Next, you will need to update the class path wherever the library is used in your application code, to ensure you are using the most up-to-date version. You can use AWS Patch Manager to patch managed nodes in a hybrid environment. See here for further implementation details.

Container mitigation

To install the hotpatch noted in the overview onto EKS cluster worker nodes, AWS has developed an RPM that performs a JVM-level hotpatch which disables JNDI lookups from the log4j2 library. The Apache Log4j2 node agent is an open-source project built by the Kubernetes team at AWS. To learn more about how to install this node agent, please visit this GitHub page.

Once vulnerable ECR container images are identified, they will need to be updated to use the patched log4j version. Downstream, you will need to ensure that any containers built with a vulnerable ECR container image are updated to use the new image as soon as possible. This can vary depending on the service you are using to deploy these images. For example, if you are using Amazon Elastic Container Service (Amazon ECS), you might want to update the service to force a new deployment, which will pull down the image using the new log4j version. Check the documentation that supports the method you use to deploy containers.
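
For the Amazon ECS example above, forcing a new deployment so the service pulls the rebuilt image can be done with a call like the following; the cluster and service names are illustrative.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.update_service(
    cluster="my-cluster",        # hypothetical cluster name
    service="my-java-service",   # hypothetical service name
    forceNewDeployment=True,     # pull the newly pushed image without changing the task definition
)
```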

If you’re running Java-based applications on Windows containers, follow Microsoft’s guidance here.

We recommend you vend new application credentials and revoke existing credentials immediately after patching.

Mitigation strategies if you can’t upgrade

In case you either can’t upgrade to a patched version, which disables access to JNDI by default, or you are still determining your strategy for how you are going to patch your environment, you can mitigate this vulnerability by changing your log4j configuration. To implement this mitigation in releases >=2.10, remove the JndiLookup class from the classpath: zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class.
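
The zip command above edits a jar in place. If you need to apply the same mitigation across many jars from a script, the following Python sketch rewrites each matching jar without the JndiLookup class; the search path is illustrative, and you should verify the resulting jars before restarting applications.

```python
import glob
import shutil
import zipfile

TARGET = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

# Hypothetical location of application libraries; adjust the glob to your environment.
for jar_path in glob.glob("/opt/app/lib/log4j-core-*.jar"):
    tmp_path = jar_path + ".tmp"
    with zipfile.ZipFile(jar_path) as src, zipfile.ZipFile(tmp_path, "w") as dst:
        for item in src.infolist():
            if item.filename != TARGET:
                # Copy every entry except the JndiLookup class into the new jar.
                dst.writestr(item, src.read(item.filename))
    shutil.move(tmp_path, jar_path)
    print(f"Removed JndiLookup.class from {jar_path}")
```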

For a more comprehensive list about mitigation steps for specific versions, refer to the Apache website.

Conclusion

In this blog post, we outlined key AWS security services that enable you to adopt a layered approach to help protect against, detect, and respond to your risk from the log4j vulnerability. We urge you to continue to monitor our security bulletins; we will continue updating our bulletins with our remediation efforts for our side of the shared-responsibility model.

Given the criticality of this vulnerability, we urge you to pay close attention to the vulnerability, and appropriately prioritize implementing the controls highlighted in this blog.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Marshall Jones

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Syed Shareef

Syed is a Senior Security Solutions Architect at AWS. He works with large financial institutions to help them achieve their business goals with AWS, whilst being compliant with regulatory and security requirements.

Deploying defense in depth using AWS Managed Rules for AWS WAF (part 2)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/deploying-defense-in-depth-using-aws-managed-rules-for-aws-waf-part-2/

In this post, I show you how to use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

In part 1 of this post, I described the technologies and methods that you can use to build and manage defense in depth for your network. In part 2, I will show you how to use those tools to build your defense in depth using AWS Managed Rules as the starting point, and how to use them for optimal effectiveness.

Managing policies for multiple environments can be done with minimal administrative overhead, and can now be part of a deployment pipeline in which you programmatically enforce policies for broad edge network policy enforcement and protect production workloads, without compromising development speed or safety.

Building robust security policy enforcement relies on a layered approach and the same applies to securing your web applications. Having edge policies, application policies, and even private or internal policy enforcement layers adds to the visibility of communication requests as well as unified policy enforcement.

Using a layered AWS WAF deployment, such as the one deployed by the procedure that follows, gives you greater flexibility in the number of rules you can use and the option to standardize edge policies and production policies. This lets you test and develop new applications without compromising the production environments.

In the following example, the application load balancer is in us-east-1. To create a web ACL for Amazon CloudFront you need to deploy the stack in us-east-1. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template can create both web ACLs in this scenario.

Note: If you’re using CloudFront and hosting the origin in us-east-1, you only need to maintain one stack. If your origin is in another region, you need to deploy a stack in us-east-1 for CloudFront web ACLs and another in the region where your application load balancer is. That scenario isn’t covered in the following procedure. None of the underlying infrastructure is deployed by the example AWS CloudFormation templates provided; only the AWS WAF configurations are deployed.

Solution overview

The following diagram illustrates the traffic flow, where traffic comes in via CloudFront and is served to the backend load balancers. Both CloudFront and the load balancers support AWS WAF, which is where dedicated web security policies can be enforced to build defense-in-depth, multi-layered policy enforcement.
 

Figure 1: Defense in depth deployment on AWS WAF

Creating AWS Managed Rule web ACLs

During this process we create two web ACLs that are designed for policy enforcement for two dedicated layers. The process won’t deploy the required infrastructure, such as the CloudFront distribution or application load balancers. This example template deploys a single stack in us-east-1 where the CloudFront origin load balancer is located.
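
If you prefer to deploy the template programmatically instead of through the console steps that follow, a minimal boto3 sketch looks like the following; the stack name is illustrative, and you can add a Parameters list to override the template defaults (such as the web ACL names and rate limits) shown in the console walkthrough.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

with open("Amazon-CloudFront-Application-Load-Balancer-AMR.yml") as template_file:
    template_body = template_file.read()

cfn.create_stack(
    StackName="waf-defense-in-depth",   # hypothetical stack name
    TemplateBody=template_body,
    # Parameters=[{"ParameterKey": "...", "ParameterValue": "..."}],  # optional overrides
)

# Wait until the stack reaches CREATE_COMPLETE.
cfn.get_waiter("stack_create_complete").wait(StackName="waf-defense-in-depth")
```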

To create AWS Managed Rule web ACLs

  1. Download the Amazon-CloudFront-Application-Load-Balancer-AMR.yml template.
  2. Open the AWS Management Console and select the region where the origin application load balancer is deployed. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template that you downloaded deploys both web ACLs for CloudFront and the application load balancer.
     
    Figure 2: Select a region from the console

  3. Under Find Services enter AWS CloudFormation and select Enter.
     
    Figure 3: Find and select AWS CloudFormation

  4. Select Create stack.
     
    Figure 4: Create stack

  5. Select a template file for the stack.
    1. In the Create stack window, select Template is ready and Upload a template file.
    2. Under Upload a template file, select Choose file and select the Amazon-CloudFront-Application-Load-Balancer-AMR.yml example AWS CloudFormation template you downloaded earlier.
    3. Choose Next.

    Figure 5: Prepare and choose a template

  6. Add stack details.
    1. Enter a name for the stack in Stack name.
    2. Enter a name for the Edge Network AWS WAF WebACL and for the Public Layer AWS WAF WebACL.
    3. Set a rate-limit for HTTP GET requests in HTTP Get Flood Protection (this rate is applied per IP address over a 5 minute period).
    4. Set a rate limit for HTTP POST requests in HTTP Post Flood Protection.
    5. Use the Login URL to apply the limit to a targeted login page. If you want to rate-limit all HTTP POST requests, leave the login URL section blank.

    Figure 6: Set stack details

  7. By default, all the rules within the rule sets are in action override (count mode). This does not include the rate-based rules. If you want to deploy selected rules in block mode, remove them from the pre-populated list by highlighting and deleting them. It’s best practice to evaluate firewall rules in count mode before changing them to block mode. Choose Next to move to the next step.
     
    Figure 7: Default managed rules options

  8. Here you can add tags to apply to the resources in the stack that these rules will be deployed to. Tagging is a recommended best practice because it enables you to add metadata to resources during creation. For more information on tagging, please see the Tagging AWS resources documentation. Then choose Next. On the following page, choose Create stack.
     
    Figure 8: Add tags

  9. Wait until the stack has been deployed. When deployment is complete, the status of the stack will change to CREATE_COMPLETE.
     
    Figure 9: Stack deployment status

Associating the web ACLs to resources

During this process we associate the two newly created web ACLs with the corresponding infrastructure resources. In this example, these are the CloudFront distribution and its origin load balancer, which should have been created beforehand.
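
If you prefer to script the association rather than use the console steps below, a minimal boto3 sketch for the application load balancer looks like the following; the web ACL name and load balancer ARN are illustrative. Note that the CloudFront association is configured on the distribution itself rather than through this API.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Find the regional web ACL created by the stack (name is illustrative).
acls = wafv2.list_web_acls(Scope="REGIONAL")["WebACLs"]
web_acl_arn = next(a["ARN"] for a in acls if a["Name"] == "Public-Application-Layer-WebACL")

# Associate it with the origin application load balancer (ARN is illustrative).
wafv2.associate_web_acl(
    WebACLArn=web_acl_arn,
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-origin-alb/0123456789abcdef",
)
```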

To associate the web ACLs to resources

  1. In the console search for and select WAF & Shield.
     
    Figure 10: Select WAF & Shield

  2. Select Web ACLs from the list on the left.
     
    Figure 11: Select Web ACLs

  3. Select Global (CloudFront) from the drop down list at the top of the page. Choose the Edge-Network-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs).
     
    Figure 12: Select the web ACL

  4. Next select Associated AWS and then choose Add AWS resources.
     
    Figure 13: Add AWS resources

  5. Select the CloudFront distribution you want to protect. Choose Add.
     
    Figure 14: Select the CloudFront distribution to protect

  6. Repeat the association process for the application load balancer that serves as the CloudFront distribution origin. Select the region where the load balancer is deployed, US East (N. Virginia) (us-east-1) in this example, from the drop-down list at the top of the page. Choose the Public-Application-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs), and then add the application load balancer as an associated resource as in steps 3 and 4 above.
     
    Figure 15: Application layer Web ACL association

Conclusion

Using AWS WAF to manage a multi-layer web application security enforcement policy, you are able to build a defense-in-depth stack for each specific web application. The configuration will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications. With AWS Managed Rules, customers can make use of prebuilt rule sets that can easily be deployed to create a layered defense that fits into their web application deployment pipelines. For customers that would like to centrally manage and control WAF in their AWS Organization, consider AWS Firewall Manager.

The AWS CloudFormation templates used in this procedure are in this GitHub repository.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Daniel Cisco Swart

Daniel worked personally on AWS Managed Rules over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with security competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS’s top security partners.

Defense in depth using AWS Managed Rules for AWS WAF (part 1)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/defense-in-depth-using-aws-managed-rules-for-aws-waf-part-1/

In this post, I discuss how you can use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

The post is in two parts. This first part describes AWS Managed Rules for AWS WAF and how it can be used to provide defense in depth. The second part shows how to apply AWS Managed Rules for WAF.

AWS Managed Rules for AWS WAF is a service that provides groups of rules created by Amazon Web Services (AWS) or by an AWS technology partner. By using AWS Managed Rules, you can reduce the administrative overhead of configuring rules for AWS WAF. You still need a comprehensive strategy for web application policy enforcement to help you make the best use of AWS Managed Rules for your web applications.

By using a layered policy enforcement strategy, you can create policy enforcement that’s specific to each part of your applications. This helps you avoid having to maintain and manage monolithic AWS WAF configurations for each of your applications. When you can separate policies for the edge network and for the application layer network, replicating separate policies across larger workloads becomes modular. This makes your application security more agile and lets you protect public-facing web applications without writing new rules or including rules that aren’t relevant to your web application.

Policy enforcement becomes even less of an administrative burden when you use AWS Firewall Manager to enforce policies across all accounts. This helps ensure organizations have robust policy enforcement measures across multiple accounts, with increased application layer visibility.

The new AWS WAF JSON document-style configuration enables traditional code review processes. You can now easily manage AWS WAF configurations on multiple layers of your web applications. This has also enabled partners to create more dynamic and robust rules that they can deliver on AWS WAF, which ultimately helps those customers manage their web application security policies.

AWS WAF enhancements

AWS WAF uses web ACL capacity units (WCU) to calculate and control the operating resources that are used to run your rules, rule groups, and web ACLs.

You can use JSON key-value pair document-based configuration to more easily integrate AWS WAF into the development practices of your organization. Using document-style configuration removes the need to make multiple API calls to create objects in the correct order before you can create and deploy a web ACL to protect your web applications.

This method lets you implement firewall changes with normal development and operations best practices, because the configuration is infrastructure as code. This enables version control and code review before deploying updates to your production environment.

Solution overview

The following diagram illustrates the layers and functions of a defense-in-depth solution. The text that follows describes each layer.
 

Figure 1: Solution overview diagram

Edge network layer policy enforcement

The edge network is the first layer of policy enforcement and should be used for broad security policy enforcement. This is the ideal place for rules such as AWS Managed Rules Core rule set (CRS), geographical location blocks, IP reputational lists, anonymous IP lists, and basic rate limits enforcement. By limiting known bad traffic at the edge network, the CRS limits the exposure of the application layer to known bad IP address ranges, malicious requests, bad bots, and request floods. This provides broad protection to the inner application layer against malicious activity, which can be applied regardless of the web application being served at the application layer.

Amazon CloudFront, combined with the distributed denial of service (DDoS) mitigation capabilities of AWS Shield and supported by AWS WAF, forms your outer layer of web application security enforcement.

It’s a common misconception that CloudFront is only a content delivery platform, but it also has robust transparent reverse proxy capabilities. CloudFront can help protect your environment from a broad range of web application risks. For example, you can use CloudFront to ensure that HTTP requests conform to standards on the far outer layer of your web application environment while serving content closer to the user.

Application layer policy enforcement

The next level of enforcement should be an application load balancer in a public subnet with another web ACL at the CloudFront origin. This policy enforcement layer is where you create a regional web ACL for the CloudFront origin. In addition, this layer is where you apply application-specific rules. For example, if you have a web application that uses a LAMP stack, it would be best to use AWS Managed Rules for SQL Injection, Linux, and PHP as an enforcement layer.

Note: IP-based enforcement is not effective on this part of the environment. Consider making use of an origin custom header on the CloudFront distribution, and then use this custom header to create a BLOCK rule within this web ACL, placed first in your web ACL rule list, to deny any request without the origin custom header. This rule needs to be created manually and will not be configured by the supplied templates.
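
A minimal sketch of the manually created rule described in this note, expressed as a wafv2 rule entry in boto3 form, follows; the header name and secret value are illustrative assumptions and must match the origin custom header you configure on the CloudFront distribution.

```python
# Hypothetical first rule for the application layer web ACL: block any request that
# does not carry the secret origin custom header added by CloudFront.
origin_verify_rule = {
    "Name": "BlockRequestsWithoutOriginHeader",
    "Priority": 0,
    "Action": {"Block": {}},
    "Statement": {
        "NotStatement": {
            "Statement": {
                "ByteMatchStatement": {
                    "SearchString": b"example-shared-secret",                       # illustrative secret value
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},  # illustrative header name
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "EXACTLY",
                }
            }
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockRequestsWithoutOriginHeader",
    },
}
```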

(Optional) Third-party web application firewall layer policy enforcement

AWS WAF enforces policies on inbound requests and doesn’t have outbound inspection capabilities. If you need to enforce policies based on outbound responses, you can use Amazon Machine Image (AMI) based web application firewalls, which are available via the AWS Marketplace.

An instance-based web application firewall is used here because most of the heavy lifting of computational expenditure is done on the AWS WAF enforcement layers. The third-party layer is where you can enforce policies that require requests to be stateful.

Using an AMI from AWS Marketplace also gives you access to capabilities such as higher visibility, threat intelligence, and robust firewall rules. This adds an additional layer of security enhancement to your environment.

(Optional) Private layer policy enforcement

When working with a traditional three-tier web architecture, you can add an additional layer of enforcement on the private layer, which can be used for the web front ends. This stage is where you would deploy an application load balancer in a private subnet serving your web front ends. This load balancer is there for any computationally expensive regex-based rule enforcement that you don’t want to enforce on the instance-based WAF. This also gives you another layer of visibility before requests reach the web front ends themselves. This example can be seen in Figure 2 below as a reference.

Use case examples

The AWS CloudFormation templates supplied can be deployed in a modular fashion. If the application load balancer is located in the us-east-1 region, you can deploy a single template called Amazon-CloudFront-Application-Load-Balancer-AMR.yml.

If the application load balancer isn’t located in us-east-1, you can use the Amazon-CloudFront-EdgeLayer-AMR.yml template to deploy the stack in us-east-1 to support the web ACL on CloudFront, and then deploy ApplicationLayer-Load-Balancer-AMR.yml in the region where the origin application load balancer was deployed, for its web ACL.

All CloudFormation templates are available on the GitHub project page, and a summary of each can be found in the main readme.md file.

Note: All the individual rules in each rule set are set to ACTION OVERRIDE for initial deployment. If any of the rule actions in the group are set to block or allow, this override changes the behavior so that matching rules are only counted. You may change the setting to NO ACTION OVERRIDE after a period of evaluation to avoid disrupting production workloads with potential false positives.

Edge network and application load balancer origin using AWS Managed Rules for AWS WAF

When considering some of the web application best practices on AWS for resiliency and security, the recommendation is to use CloudFront where possible, because it can terminate TLS/SSL connections and serve cached content close to the end user. CloudFront has advanced mitigation capabilities such as SYN cookies and a massively distributed network separate from the traditional Amazon Elastic Compute Cloud (Amazon EC2) networking space. CloudFront also supports AWS WAF rate limits, IP blacklists, and broad security policies, which can be enforced at the edge network layer.

In the example Amazon-CloudFront-Application-Load-Balancer-AMR.yml template, we place a rate-limit for HTTP GET and HTTP POST methods. This is dependent upon expected traffic request rates. You can review Amazon CloudWatch metrics for your CloudFront distribution or application load balancer to determine the baseline for your rate limit based on the maximum expected requests per minute.

The rate limit is adjustable within the parameter options at deployment of the AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml. The HTTP POST rate limit also helps to slow down credential stuffing attacks—a form of brute force attack—on login pages. The ApplicationLayer-Load-Balancer-AMR.yml template used in part 2 of this post also deploys the Amazon IP reputation list to drop IP addresses based on Amazon internal threat intelligence.
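
A sketch of what the HTTP POST rate limit scoped to a login page can look like as a wafv2 rate-based rule follows; the limit, login path, and rule name are illustrative and correspond to the template parameters described above, not the template's exact contents.

```python
# Hypothetical rate-based rule: block source IPs that send more than 100 POST requests
# to the login path within the WAF rate-limit evaluation window (5 minutes).
post_flood_rule = {
    "Name": "HTTPPostFloodProtection",
    "Priority": 1,
    "Action": {"Block": {}},
    "Statement": {
        "RateBasedStatement": {
            "Limit": 100,                 # requests per 5 minutes, per source IP
            "AggregateKeyType": "IP",
            "ScopeDownStatement": {
                "AndStatement": {
                    "Statements": [
                        {
                            "ByteMatchStatement": {
                                "SearchString": b"POST",
                                "FieldToMatch": {"Method": {}},
                                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                                "PositionalConstraint": "EXACTLY",
                            }
                        },
                        {
                            "ByteMatchStatement": {
                                "SearchString": b"/login",    # illustrative login URL
                                "FieldToMatch": {"UriPath": {}},
                                "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                                "PositionalConstraint": "STARTS_WITH",
                            }
                        },
                    ]
                }
            },
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "HTTPPostFloodProtection",
    },
}
```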

We also use the AWS Managed Rules CommonRuleSet, which blocks cross-site scripting (XSS) attacks, requests with no user-agent, requests with known bad user-agents, large queries, posts, cookies, and URLs, and known LFI/RFI attacks.

Note: The size constraint rules aren’t recommended for protecting APIs or web applications with large HTTP POSTs or long cookies. Evaluate the possible effects of size constraint rules thoroughly before setting them to block requests.

There is also an AWS Managed Rule for known bad inputs which is based on threat intelligence gathered by the AWS Threat Research Team. Finally, there is an admin protection rule set that drops requests to known management login pages. It’s not advised that web applications have front door access to admin controls.

At the origin, it’s a good idea to use an application load balancer that also supports AWS WAF. This is where you want to apply application-specific web policies. For example, this is where you would apply rules to protect against a SQL injection attack if your web application uses a SQL database.

In the example AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml, for the origin application load balancer, we use AWS Managed Rules for SQL injections, Linux rule set, Unix rule set, PHP rule set, and the WordPress rule set to cover most eventualities customers could be using on their web applications.

For the example solution in part 2 of this post, if the origin application load balancer is in us-east-1, you can use Amazon-CloudFront-Application-Load-Balancer-AMR.yml, which will deploy both web ACLs.

If the origin is not in us-east-1, you can use two example templates which are Amazon-CloudFront-EdgeLayer-AMR.yml for the edge network and ApplicationLayer-Load-Balancer-AMR.yml in the origin region.

Using AWS Managed WAF Rules on public and private application load balancers

Some customers have reasons not to use CloudFront and will use two application load balancers: one load balancer for the public-facing environment serving the web front ends, and an internal load balancer for the application backends.

The following figure shows a deployment that uses two load balancers. A public load balancer works with the edge network WAF to connect to a web front end in a private subnet and an internal load balancer connects to the backend application.
 

Figure 2: Diagram of stacked load balancers

In this use case, we can still use the same structure of edge network and application layer enforcement, now using only load balancers. Deploying web applications with a three-tier approach, there will be an external-facing and an internal application load balancer where you can deploy the same style of policy enforcement, just on load balancers.

Note: To deploy something similar to this example, you can use the template EdgeLayerALB-PrivateLayerALB-AMR.yml in the relevant regions where the load balancers have been deployed.

Alarms and logging

After deploying these AWS CloudFormation templates, you should consider setting CloudWatch alarms on certain metrics for the HTTP GET and HTTP POST flood rules, as well as the reputation and anonymous IP lists. Customers who are comfortable with development may also opt to use Lambda responders, triggered by CloudWatch Events, to update a rule from COUNT to BLOCK. Enabling full logging for each web ACL will also give you higher visibility into each request and make potential investigations easier.
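
A minimal boto3 sketch of enabling full logging for a web ACL and alarming on its blocked-request metric follows; the web ACL, log destination, rule name, and alarm threshold are illustrative, and the CloudWatch Logs log group must already exist with the required aws-waf-logs- name prefix.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

WEB_ACL_ARN = "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/Public-Application-Layer-WebACL/EXAMPLE-ID"  # illustrative
LOG_GROUP_ARN = "arn:aws:logs:us-east-1:111122223333:log-group:aws-waf-logs-public-layer"  # name must start with aws-waf-logs-

# Enable full logging for the web ACL.
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": WEB_ACL_ARN,
        "LogDestinationConfigs": [LOG_GROUP_ARN],
    }
)

# Alarm when blocked requests spike for the HTTP POST flood rule (rule name is illustrative).
cloudwatch.put_metric_alarm(
    AlarmName="waf-post-flood-blocked-requests",
    Namespace="AWS/WAFV2",
    MetricName="BlockedRequests",
    Dimensions=[
        {"Name": "WebACL", "Value": "Public-Application-Layer-WebACL"},
        {"Name": "Rule", "Value": "HTTPPostFloodProtection"},
        {"Name": "Region", "Value": "us-east-1"},
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)
```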

Conclusion

The new enhancements of AWS WAF make it easier to manage a multi-layer web application security enforcement policy, allowing you to maintain and deploy web application firewall configurations across different deployment stages and different types of applications. By making use of partner or AWS Managed Rules, administrative overhead can be significantly reduced, and with AWS Firewall Manager, customers can enforce these policies across all of an organization’s accounts. Part 2 of this post shows one example of how this can be done.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Daniel Cisco Swart

Daniel worked personally on AWS Managed Rules over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with security competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS’s top security partners.