Tag Archives: AWS WAF

The three most important AWS WAF rate-based rules

Post Syndicated from Artem Lovan original https://aws.amazon.com/blogs/security/three-most-important-aws-waf-rate-based-rules/

In this post, we explain what the three most important AWS WAF rate-based rules are for proactively protecting your web applications against common HTTP flood events, and how to implement these rules. We share what the Shield Response Team (SRT) has learned from helping customers respond to HTTP floods and show how all AWS WAF customers can benefit from these learnings.

When you have business-critical applications that are internet-facing, you need to protect them from risks such as distributed denial of service (DDoS) attacks. AWS Shield Advanced is a managed DDoS protection service that safeguards applications that are running behind Amazon Web Services (AWS) internet-facing resources. The backend origin of your application can exist anywhere, including on premises, and Shield Advanced can protect it. Shield Advanced provides DDoS protection for Layers 3–7. It also includes 24/7 access to the SRT to help you quickly respond to sophisticated unauthorized activity scenarios that might be unique to your application. To learn more about which resource types you can associate with AWS WAF, see AWS WAF.

Increasingly, the SRT has been assisting customers in protecting against Layer 7 HTTP flood occurrences that negatively impact application availability or performance by overloading the application with an unusually high number of HTTP requests. In many cases, these malicious events can be automatically mitigated by using AWS WAF. In addition, AWS WAF has an easy-to-configure native rate-based rule capability, which detects source IP addresses that make large numbers of HTTP requests within a 5-minute time span, and automatically blocks requests from the offending source IP until the rate of requests falls below a set threshold. In this post, we show how you can pull insights from the AWS WAF logs to determine what your rate-based rule threshold should be.

The three most important AWS WAF rate-based rules are:

  • A blanket rate-based rule to protect your application from large HTTP floods.
  • A rate-based rule to protect specific URIs at more restrictive rates than the blanket rate-based rule.
  • A rate-based rule to protect your application against known malicious source IPs.

Solution overview

AWS WAF is a web application firewall that helps protect your web applications against common web exploits that might affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over which web traffic reaches your applications. If you already know the request rates for your application, you have all the necessary information to start creating your AWS WAF rate-based rules. To learn more about how to create rules, see Creating a rule and adding conditions. However, if you don’t have this data and want to learn how to get started, this solution helps you determine appropriate rates for your applications, and how to create AWS WAF rate-based rules.

Figure 1 shows how incoming request information is captured so that the operations team can use it to determine rate-based rules.

Figure 1: The workflow to collect and query logs and apply rate-based rules

Let’s go through the flow to better understand what’s happening at each step:

  1. An application user makes requests to the application.
  2. AWS WAF captures information about the incoming requests and sends this to Amazon Kinesis Data Firehose.
  3. Kinesis Data Firehose delivers the logs to an Amazon Simple Storage Service (Amazon S3) bucket, where they will be stored.
  4. The operations team uses Amazon Athena to analyze the logs with SQL queries.
  5. Athena queries the logs in the S3 bucket and shows the query results.
  6. The operations team uses the query results to determine the appropriate AWS WAF rate-based rule.

The three rate-based rules in detail

Each of these rules helps protect web applications from unauthorized activity, and each focuses on a specific aspect of protection. The rules complement one another, so when they’re combined they offer greater protection for your web application. We’ll look at each rule to understand what it does.

Blanket rate-based rule

A blanket rate-based rule is designed to prevent any single source IP address from negatively impacting the availability of a website. For example, if the threshold for the rate-based rule is set to 2,000, the rule will block all IPs that make more than 2,000 requests in a rolling 5-minute period. This is the most basic rate-based rule, and one of the most valuable for AWS WAF customers to implement. The SRT often helps customers who are actively under a DDoS attack to quickly implement this rule. In many past HTTP flood cases, if this rule had been proactively in place, the customer would have been protected and wouldn’t have needed to reach out to the SRT for assistance. The blanket rate-based rule would have automatically blocked the attempt without any human intervention.

URI-specific rate-based rule

Some application URI endpoints typically receive a high request volume, but for others it would be unusual and suspicious to see a high request count. For example, a large number of requests to an application’s login page in a 5-minute period is suspicious and indicates a potential brute force or credential-stuffing attack against the application. A URI-specific rule can limit a single source IP address to as few as 100 connections to the login page per 5-minute period, while still allowing a much higher request volume to the rest of the application. Some applications naturally have computationally expensive URIs that, when called, require considerably more resources to process the request. An example of this could be a database query or search function. If a bad actor targets these computationally expensive URIs, this can quickly lead to application performance or availability issues. If you assign a URI-specific rate-based rule to these portions of your site, you can configure a much lower threshold than the blanket rate-based rule. It’s beyond the scope of this blog post, but some customers use Application Load Balancer access logs and the target_processing_time information to determine precisely which portions of the site are the slowest to respond and might represent a computationally expensive call. These customers then put additional rate-based rule protections on calls that are made to these URIs.

IP reputation rate-based rule

Many of the DDoS events the SRT assists customers with include HTTP floods that originate from known malicious source IPs. The AWS WAF Security Automations solution provides AWS WAF customers with a subscription to four open-source threat intelligence lists. Rate-based rules with low thresholds can be applied to requests coming from these suspect sources. Some customers feel comfortable completely blocking web requests from these IPs, but at the very least, requests from these IPs should be rate-limited to protect the application from these well-known malicious sources.

It’s also common to see HTTP floods originate from IP addresses within certain countries. You can use AWS WAF geographic match rules to assign lower rate-based rule thresholds to requests that originate from certain countries, or from countries that don’t contain your web application’s primary user base. For example, suppose your application primarily serves users in the United States. In that case, it could be beneficial to create a rate-based rule with a low threshold for requests that come from any country other than the United States. HTTP floods are also commonly seen originating from IP addresses classified as cloud hosting provider IPs. You can use the HostingProviderIPList rule in the AWS managed Anonymous IP list rule group to label these requests, and then assign a lower rate-based rule threshold to them as well.
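
To make this concrete, the following is a minimal sketch, written as a Python (boto3-style) rule definition, of a rate-based rule that applies a low threshold only to requests originating outside the United States. The rule name, priority, and limit are illustrative assumptions, not recommendations; adjust them to your own application and traffic profile.

# Sketch of a rate-based rule that applies a low threshold only to requests that
# originate outside the United States. The name, priority, and limit are illustrative.
geo_rate_rule = {
    "Name": "NonUSRateLimit",
    "Priority": 3,
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "NonUSRateLimit",
    },
    "Statement": {
        "RateBasedStatement": {
            "Limit": 500,
            "AggregateKeyType": "IP",
            "ScopeDownStatement": {
                "NotStatement": {
                    "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}}
                }
            },
        }
    },
}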

Prerequisites

Before you implement the solution, verify that:

  • AWS WAF is deployed in your AWS account and is associated with an Amazon CloudFront distribution or an Application Load Balancer.
  • Your AWS WAF default action is set to Block. When you create and configure a web ACL, you set the web ACL default action, which determines how AWS WAF handles web requests that don’t match any rules in the web ACL. To learn more about default action for a web ACL, see Deciding on the default action for a web ACL.
  • AWS WAF logging is configured and logs are being stored in an S3 bucket.

    Note: You can follow these instructions to configure delivery of AWS WAF logs to your S3 bucket, and you can also use AWS Firewall Manager to configure centralized AWS WAF logging in a multi-account environment.

Set up Athena to analyze AWS WAF logs

Amazon Athena is an interactive query service that you can use to analyze data in Amazon S3 by using standard SQL. For this solution, you’ll use Athena to connect to the S3 bucket where AWS WAF logs are stored and query the AWS WAF logs. The first step is to open the Athena console and create a database.

Note: The Athena database and table creation is a one-time configuration process. You can then come back and run the queries and see the query results based on your latest AWS WAF log data.

To create an Athena database, you’ll use a data definition language (DDL) statement. Paste the following query in the Athena query editor, replacing values as described here:

  • Replace <your-bucket-name> with the S3 bucket name that holds your AWS WAF logs.
  • For <bucket-prefix-if-exist>, if AWS WAF logs are stored in an S3 bucket prefix, replace with your prefix name. Otherwise, remove this part from the query, including the slash “/” at the end.
CREATE DATABASE IF NOT EXISTS wafrulesdb
  COMMENT 'AWS WAF logs'
  LOCATION 's3://<your-bucket-name>/<bucket-prefix-if-exist>/';

Choose Run query to run the query and create the database. Successful completion will be indicated by the query result, as shown below.

Results
Query successful. 

Next, you’ll create a table inside the database. Paste the following query in the Athena query editor, replacing values as described here:

  • Replace <your-bucket-name> with the S3 bucket name that holds your AWS WAF logs.
  • For <bucket-prefix-if-exist>, if AWS WAF logs are stored in an S3 bucket prefix, replace with your prefix name. Otherwise, you can remove this part from the query, including the slash “/” at the end.
  • For has_encrypted_data, if your AWS WAF log data is encrypted at rest, change the value to true, otherwise false is the correct value.
CREATE EXTERNAL TABLE IF NOT EXISTS wafrulesdb.waftable (
  `terminatingRuleId` string,
  `httpSourceName` string,
  `action` string,
  `httpSourceId` string,
  `terminatingRuleType` string,
  `webaclId` string,
  `timestamp` bigint,
  `formatVersion` int,
  `ruleGroupList` array<string>,
  `httpRequest` struct<`headers`:array<struct<name:string,value:string>>,clientIp:string,args:string,requestId:string,httpVersion:string,httpMethod:string,country:string,uri:string>,
  `rateBasedRuleList` string,
  `nonTerminatingMatchingRules` string,
  `terminatingRuleMatchDetails` string 
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
) LOCATION 's3://<your-bucket-name>/<bucket-prefix-if-exist>/'
TBLPROPERTIES ('has_encrypted_data'='false');

Run the query in the Athena console. After the query completes, Athena registers the waftable table, which makes the data in it available for queries.

Run SQL queries to identify rate-based rule thresholds

Now that you have a table in Athena, know where the data is located, and have the correct schema, you can run SQL queries for each of the rate-based rules and see the query results.
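
If you prefer to run the queries in this section programmatically instead of in the Athena console, a minimal boto3 sketch like the following can help. The output location is a placeholder bucket that you must replace, and error handling is intentionally kept brief.

import time

import boto3

athena = boto3.client("athena")

# Hypothetical S3 output location for Athena query results; replace with your own bucket.
OUTPUT_LOCATION = "s3://<your-athena-results-bucket>/waf-rule-analysis/"

def run_waf_query(sql):
    """Run a query against the wafrulesdb database and return the result rows."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "wafrulesdb"},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError("Athena query ended in state " + state)

    # The first returned row contains the column headers.
    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]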

Blanket rate-based rule for all application endpoints

You’ll start with a SQL query that identifies the blanket rule. The critical factor in determining the blanket rule is to run the query against AWS WAF log data that represents a healthy, high request volume. The following query defines a time window of 6 hours in the evening, expressed as 2020-12-01 16:00:00 and 2020-12-01 22:00:00. Time windows can span a few hours or several days; however, this time window must be a good representation of your traffic volume, which you will use as the basis to identify the threshold. For example, if your application is busier during certain periods, you should evaluate the log data for that time. In the example shown here, we limit the query results to the top 100 IPs. You can adjust the limit to your needs by updating the LIMIT value.

SELECT
  httprequest.clientip,
  COUNT(*) AS "count"
FROM wafrulesdb.waftable
WHERE from_unixtime(timestamp/1000) BETWEEN TIMESTAMP '2020-12-01 16:00:00' AND TIMESTAMP '2020-12-01 22:00:00'
GROUP BY httprequest.clientip, FLOOR("timestamp"/(1000*60*5))
ORDER BY count DESC
LIMIT 100; 

Update the time window to your needs and run the query in the Athena console. The results will show the top requesting IPs in any 5-minute period between two dates, as illustrated in Figure 2.

Figure 2: The top requesting IP in any 5-minute period between dates

You can visualize the results data to see a holistic view of the request count per IP. The chart in Figure 3 illustrates the SQL query results.

Figure 3: Chart: Top requesting IP in any 5-minute period between dates

The results are sorted to show the IPs with the highest request volume in each 5-minute period. This means that the same IP can appear multiple times, once for each 5-minute interval in which it was among the top requesters. In our example, looking at the results, an excellent first blanket rule would limit the request volume to about 7,000 requests within a 5-minute time period. You can either create the AWS WAF rule by using the following JSON and the JSON rule editor, or by using the AWS WAF visual rule editor and following these instructions. If you’re using the following JSON, make sure to replace the Limit value with the value that you identified by running the SQL query earlier.

{
  "Name": "BlanketRule",
  "Priority": 2,
  "Action": {
    "Block": {}
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "BlanketRule"
  },
  "Statement": {
    "RateBasedStatement": {
      "Limit": 7000,
      "AggregateKeyType": "IP"
    }
  }
}
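
If you manage your web ACL with the AWS SDK rather than the console or the JSON rule editor, you can append the same rule programmatically. The following boto3 sketch assumes a hypothetical web ACL name, ID, and scope; it reads the current configuration and lock token, then writes the rule back alongside the existing rules.

import boto3

wafv2 = boto3.client("wafv2")

# Hypothetical web ACL identifiers; look these up with list_web_acls or in the AWS WAF console.
WEB_ACL_NAME = "my-web-acl"
WEB_ACL_ID = "<web-acl-id>"
SCOPE = "REGIONAL"  # Use "CLOUDFRONT" if the web ACL protects a CloudFront distribution.

blanket_rule = {
    "Name": "BlanketRule",
    "Priority": 2,
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlanketRule",
    },
    "Statement": {
        "RateBasedStatement": {
            "Limit": 7000,  # Replace with the threshold identified by your Athena query.
            "AggregateKeyType": "IP",
        }
    },
}

# Fetch the current web ACL configuration and lock token, append the rule, and write it back.
current = wafv2.get_web_acl(Name=WEB_ACL_NAME, Scope=SCOPE, Id=WEB_ACL_ID)
web_acl = current["WebACL"]

wafv2.update_web_acl(
    Name=WEB_ACL_NAME,
    Scope=SCOPE,
    Id=WEB_ACL_ID,
    DefaultAction=web_acl["DefaultAction"],
    Rules=web_acl["Rules"] + [blanket_rule],
    VisibilityConfig=web_acl["VisibilityConfig"],
    LockToken=current["LockToken"],
)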

Sometimes a client connects to an application through an HTTP proxy or a content delivery network (CDN), which obscures the client origin IP. It’s important to identify the true client IP instead of the IP of the proxy or CDN, because blocking source IPs can cause a wider unwanted impact. You can use many tools to help you identify whether the source IP belongs to a CDN. In this case, you would need to query and filter on the X-Forwarded-For, True-Client-IP, or other custom headers. CDN providers typically publish which headers they add to requests, but X-Forwarded-For and True-Client-IP are common. The following query shows how you can reference these headers, illustrated with the X-Forwarded-For header, to write rate-based rules. You can replace X-Forwarded-For with the header you expect to hold the client IP.

SELECT
  header.value,
  COUNT(*) AS "count"
FROM wafrulesdb.waftable, UNNEST(httprequest.headers) as t(header)
WHERE
    from_unixtime(timestamp/1000) BETWEEN TIMESTAMP '2020-12-01 16:00:00' AND TIMESTAMP '2020-12-01 22:00:00'
  AND
    header.name = 'X-Forwarded-For'
GROUP BY header.value, FLOOR("timestamp"/(1000*60*5))
ORDER BY count DESC
LIMIT 100;
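
If your clients reach the application through a CDN or proxy, AWS WAF rate-based statements can also aggregate on the client IP carried in a forwarded-IP header instead of the connecting IP. The following is a hedged sketch of such a statement, written as a Python rule definition; the header name, fallback behavior, and limit are assumptions that you should adapt to your setup.

# Sketch of a rate-based rule that aggregates on the client IP carried in a forwarded-IP
# header rather than on the connecting (proxy or CDN) IP. Values are illustrative.
forwarded_ip_rule = {
    "Name": "BlanketRuleForwardedIP",
    "Priority": 3,
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlanketRuleForwardedIP",
    },
    "Statement": {
        "RateBasedStatement": {
            "Limit": 7000,
            "AggregateKeyType": "FORWARDED_IP",
            "ForwardedIPConfig": {
                "HeaderName": "X-Forwarded-For",
                # If the header is missing or malformed, treat the request as not matching.
                "FallbackBehavior": "NO_MATCH",
            },
        }
    },
}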

URI-based rule for specific application endpoints

Suppose that you want to further limit requests to the login page on your website. To do this, you could add the following string match condition to a rate-based rule:

  • The part of the request to filter on is URI
  • The Match Type is Starts with
  • A Value to match is /login (this needs to be whatever identifies the login page in the URI portion of the web request)

Next, you have to identify the typical request volume to the /login URI for the application. The following SQL query does exactly that.

SELECT
  httprequest.clientip,
  httprequest.uri,
  COUNT(*) AS "count"
FROM wafrulesdb.waftable
WHERE 
  from_unixtime(timestamp/1000) BETWEEN TIMESTAMP '2020-12-01 16:00:00' AND TIMESTAMP '2020-12-01 22:00:00'
AND
  httprequest.uri = '/login'
GROUP BY httprequest.clientip, httprequest.uri, FLOOR("timestamp"/(1000*60*5))
ORDER BY count DESC
LIMIT 100;

Replace the time window 2020-12-01 16:00:00 and 2020-12-01 22:00:00 and the httprequest.uri value, if applicable, and run the query in the Athena console. The results show the highest requesting IP and /login URI for every 5-minute period between dates, as illustrated in Figure 4.

Figure 4: The highest requesting IP and /login URI for every 5-minute period between dates

Figure 5 illustrates a chart based on the query results for the highest requesting IP and /login URI for every 5-minute period between dates.

Figure 5: Chart: The highest requesting IP and /login URI for every 5-minute period between dates

Based on the SQL query results, you would specify a rate limit of 150 requests per 5 minutes. Adding this rate-based rule to a web ACL will limit requests to your login page per IP address without affecting the rest of your site. Once again, you can either create the AWS WAF rule by using the following JSON and the JSON rule editor, or by using the AWS WAF visual rule editor and following these instructions. If you’re using the following JSON, make sure to replace the Limit value with the value that you identified by running the SQL query earlier.

{
  "Name": "UriBasedRule",
  "Priority": 1,
  "Action": {
    "Block": {}
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "UriBasedRule"
  },
  "Statement": {
    "RateBasedStatement": {
      "Limit": 150,
      "AggregateKeyType": "IP",
      "ScopeDownStatement": {
        "ByteMatchStatement": {
          "FieldToMatch": {
            "UriPath": {}
          },
          "PositionalConstraint": "STARTS_WITH",
          "SearchString": "/login",
          "TextTransformations": [
            {
              "Type": "NONE",
              "Priority": 0
            }
          ]
        }
      }
    }
  }
}

AWS WAF rules with a lower value for Priority are evaluated before rules with a higher value. For the AWS WAF rules to work as expected (first evaluating the more specific URI-based rule, and only after that the more general blanket rule), the URI-based rule must have a lower Priority value than the blanket rule. You can do that by updating the JSON (the preceding examples already set Priority 1 for the URI-based rule and Priority 2 for the blanket rule), or by using the AWS WAF visual rule editor. The expected AWS WAF rule priority should be as illustrated in Figure 6.

Figure 6: AWS WAF rules with priority for UriBasedRule

If you want to know the request volume across all application URIs, the following SQL will accomplish that.

SELECT
  httprequest.clientip,
  httprequest.uri,
  COUNT(*) AS "count"
FROM wafrulesdb.waftable
WHERE from_unixtime(timestamp/1000) BETWEEN TIMESTAMP '2020-12-01 16:00:00' AND TIMESTAMP '2020-12-01 22:00:00'
GROUP BY httprequest.clientip, httprequest.uri, FLOOR("timestamp"/(1000*60*5))
ORDER BY count DESC
LIMIT 100;

Figure 7 shows a chart of what the SQL query results might look like.

Figure 7: The highest requesting IP and URI for every 5-minute period between dates

IP reputation rule groups to block bots or other threats

You can use IP reputation rules to block requests based on their source. AWS WAF offers a wide selection of managed rule groups, and the Amazon IP reputation list is one that helps reduce your exposure to bot traffic and exploitation attempts.

To add the Amazon IP reputation list rule to your web ACL

  1. Open the AWS WAF console and navigate to the managed rule groups view.

    Figure 8: The managed rule group view in AWS WAF

  2. Expand AWS managed rule groups, and for Amazon IP reputation list, choose Add to web ACL.

    Figure 9: Add the Amazon IP reputation list to the web ACL

  3. Scroll to the bottom of the page and choose Add rule.
  4. At this point, you should see the Set rule priority view. Move the Amazon managed rule up so that it’s evaluated first (that is, it has the lowest priority value). If a request originates from a bot, you want to deny the request as early as possible, and you achieve exactly that by evaluating the Amazon IP reputation list rule first. Your final AWS WAF rule order should be as shown in Figure 10.

    Figure 10: Final AWS WAF rules ordered by priority

Considerations for rate-based rules

It’s important to note that the more specific AWS WAF rules should be evaluated first (that is, they should have a lower priority value), because you want these rules to limit the request volume before the more general rules apply. In our example, the rules strategy is first based on a specific URI, and then on a blanket rule that limits requests across the whole application.

The rate-based rules that we discussed here provide a solid foundation to help you protect your internet-facing applications from common basic HTTP request floods. However, the solution in this blog post shouldn’t be seen as a one-time setup but rather as an iterative activity.

You should determine a healthy time frame to rerun the Amazon Athena queries and identify new rate-based rule thresholds that align with the application’s growth and increasing request volume. Incorporating this iterative review into your existing processes, such as your software development life cycle, is a great way to make sure it happens regularly. Each AWS WAF rule can publish Amazon CloudWatch metrics, which you can use to trigger alarms before thresholds are crossed. You can use the alarms to create tickets for your operations teams based on the thresholds you set, prompting them to review whether a DDoS attack is being thwarted or legitimate traffic is being dropped.
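
As one way to set this up, the following boto3 sketch creates a CloudWatch alarm on the BlockedRequests metric that the blanket rule publishes. The web ACL name, Region, threshold, and SNS topic ARN are placeholders, and the dimension names assume the AWS/WAFV2 namespace that AWS WAF uses for regional web ACLs.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the number of requests the blanket rule blocks in a 5-minute window.
# Web ACL name, Region, threshold, and SNS topic ARN are placeholders; the dimension
# names assume the AWS/WAFV2 metrics namespace for a regional web ACL.
cloudwatch.put_metric_alarm(
    AlarmName="waf-blanket-rule-blocked-requests",
    Namespace="AWS/WAFV2",
    MetricName="BlockedRequests",
    Dimensions=[
        {"Name": "WebACL", "Value": "my-web-acl"},
        {"Name": "Rule", "Value": "BlanketRule"},
        {"Name": "Region", "Value": "us-east-1"},
    ],
    Statistic="Sum",
    Period=300,                # Matches the 5-minute window that rate-based rules use.
    EvaluationPeriods=1,
    Threshold=1000,            # Review the rule once it blocks more than 1,000 requests.
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:waf-alerts"],  # Hypothetical topic ARN.
)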

After you determine your baseline request volume, add a buffer to allow for growth. Rate-based rules should have a reasonable buffer to account for near-future application growth. For instance, when an Athena query result shows a request volume of 500 requests, a rate-based rule with a limit of 1,000 requests gives a buffer for an additional 500 requests to account for application growth.

Summary

In this post, we introduced you to the three most important AWS WAF rate-based rules to protect your web applications from common HTTP flood events. We also covered how to implement these rate-based rules and how to determine an appropriate request threshold for your application by using AWS WAF logs and Amazon Athena queries. To learn more about best practices that help you protect your websites and web applications against various attack vectors by using AWS WAF, see our whitepaper, Guidelines for Implementing AWS WAF.

You can learn more about AWS WAF in other AWS WAF–related Security Blog posts.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Artem Lovan

Artem is a Senior Solutions Architect based in New York. He helps customers architect and optimize applications on AWS. He has been involved in IT at many levels, including infrastructure, networking, security, DevOps, and software development.

Author

Jesse Lepich

Jesse is a Senior Security Solutions Architect at AWS based in Lake St. Louis, Missouri, focused on helping customers implement native AWS security services. Outside of cloud security, his interests include relaxing with family, barefoot waterskiing, snowboarding/snow skiing, surfing, boating/sailing, and mountain climbing.

Automatically update AWS WAF IP sets with AWS IP ranges

Post Syndicated from Fola Bolodeoku original https://aws.amazon.com/blogs/security/automatically-update-aws-waf-ip-sets-with-aws-ip-ranges/

Note: This blog post describes how to automatically update AWS WAF IP sets with the most recent AWS IP ranges for AWS services. This related blog post describes how to perform a similar update for Amazon CloudFront IP ranges that are used in VPC Security Groups.

You can use AWS Managed Rules for AWS WAF to quickly create baseline protections for your web applications, including setting up lists of IP addresses to be blocked. In some cases, you might need to create an IP set in AWS WAF with the IP address ranges of Amazon Web Services (AWS) services that you use, so that traffic from these services is allowed. In this blog post, we provide a solution that automatically updates an AWS WAF IP set with the IP address ranges of the AWS services Amazon CloudFront, Amazon Route 53 health checks, and Amazon EC2 (and also the services that share the same IP address ranges, such as AWS Lambda, Amazon CloudWatch, and so on). These services are present in the AWS Managed Rules Anonymous IP list, and blocking them may cause inadvertent service impairment for applications that expect traffic from the services.

As an application owner, you can improve your security posture by using the Anonymous IP list in your AWS WAF web access control lists (web ACLs) to block source IP addresses from specific hosting providers and anonymization services, such as VPNs, proxies, and Tor nodes. Due to the generic nature of these rules, when you use the Anonymous IP list, you might want to exclude certain IPs from the list of IPs to be blocked, in order to allow web traffic from those sources. For example, you can allow traffic that originates from the AWS network.

Alternatively, you might want to permit only IP addresses from certain AWS services in a web ACL. This is a common requirement when you protect an Application Load Balancer by restricting all incoming traffic to CloudFront IP ranges. Creating your own custom list to allow expected traffic from the AWS network requires some effort, because you need to periodically update the list by using the IP ranges that we provide. With the solution we present here, you don’t have to manually manage the exclusion list. When the new AWS IP ranges are published, this solution will automatically fetch and update the list.

Note: This solution only works with AWS WAF, and will not work with AWS WAF Classic.

Solution overview

Figure 1 shows the solution architecture.

Figure 1: Automatic update process for service IPs

AWS sends Amazon Simple Notification Service (Amazon SNS) notifications to subscribers of the AmazonIPSpaceChanged SNS topic when updates are made to the public IP addresses for AWS services. This solution uses an AWS CloudFormation template to deploy an AWS Lambda function that is triggered by these SNS notifications, along with AWS WAF IP sets for the IPv4 and IPv6 address ranges that you can then reference in your web ACL.

The solution workflow is as follows:

  1. In the CloudFormation template, you select the services that you want the AWS WAF IP set to be updated with.
  2. The template deploys the required AWS resources with the configuration that specifies what services to fetch from an AWS public IP address update.
  3. You manually invoke the AWS Lambda function once to populate the AWS WAF IP sets with the selected IP ranges from the AWS IP address list.
  4. When the AWS IP ranges are updated, an Amazon SNS notification is sent to subscribers of the SNS topic.
  5. The SNS notification triggers the AWS Lambda function.
  6. The Lambda function fetches the selected IP ranges and updates the IP sets for IPv4 addresses and IPv6 addresses (a simplified sketch of this logic follows this list).
  7. The application owner adds a custom AWS WAF web ACL rule that uses the IP sets to allow traffic from the AWS services that you’ve selected. This way, the web ACL references AWS WAF IP sets that are always up to date, with no further action required on your part.
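
The following is a simplified sketch of the kind of logic that the solution’s Lambda function performs; it is not the project’s actual update_aws_waf_ipset.py code. The service list and IP set identifiers are hypothetical, and only the IPv4 set is shown for brevity.

import json
import urllib.request

import boto3

wafv2 = boto3.client("wafv2")

# Hypothetical configuration; the real solution reads its settings from environment variables.
SERVICES = ["CLOUDFRONT", "ROUTE53_HEALTHCHECKS"]
IPV4_SET = {"Name": "my-stack-IPv4Set", "Id": "<ipv4-set-id>", "Scope": "REGIONAL"}

def handler(event, context):
    # The SNS message carries the URL of the updated ip-ranges.json document.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    with urllib.request.urlopen(message["url"]) as response:
        ranges = json.loads(response.read())

    # Collect the IPv4 CIDR blocks that are published for the selected services.
    ipv4_cidrs = sorted(
        {prefix["ip_prefix"] for prefix in ranges["prefixes"] if prefix["service"] in SERVICES}
    )

    # Replace the contents of the IP set; update_ip_set requires the current lock token.
    current = wafv2.get_ip_set(**IPV4_SET)
    wafv2.update_ip_set(
        **IPV4_SET,
        Addresses=ipv4_cidrs,
        LockToken=current["LockToken"],
    )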

Solution prerequisites

The solution is automatically created when you deploy the AWS CloudFormation template that is available on the solution’s GitHub page. There are three resources that you must have in place before you deploy the template:

  • The Python code that will be used as the Lambda function.
    • Download the update_aws_waf_ipset.py Python code from the project’s AWS Lambda directory in GitHub. This function is responsible for keeping your AWS WAF IP sets updated with the most recent set of IPs in use by the AWS services of your choice.
  • An Amazon Simple Storage Service (Amazon S3) bucket that you will use to store the compressed Python code.
    • Compress the file to a .zip file and upload it to an S3 bucket in the same AWS Region where you will deploy the template. For instructions on how to create an S3 bucket, see Creating a bucket.
  • An AWS WAF web ACL to filter requests that come in from trusted sources. The web ACL uses the IP sets that the solution creates and updates with the necessary IP addresses.

Deploy the AWS CloudFormation template

The CloudFormation template deploys the required resources for this solution in your account. The following resources are deployed:

  • Two AWS WAF IP sets, IPv4Set and IPv6Set, that are used to store the IPv4 and IPv6 IP addresses of the services you’re interested in allowing. These IP sets are visible in the AWS WAF console in the same Region where the template is deployed.
    • Note: The IP address 192.0.2.0/24 that appears in the template is a placeholder for the IP addresses that will be populated by the solution, and it is used for documentation purposes only.
  • The update_aws_waf_ipset.py Python code is used in an AWS Lambda function called UpdateWAFIPSet. This is the function that reads which services the solution should collect IPs from, and which IP sets should be populated. If you don’t change those parameters, the function will use default IP set suffixes. By default, the solution selects ROUTE53_HEALTHCHECKS and CLOUDFRONT as the services for which to download IPs. You can update the list of services as needed by referring to the AWS IP ranges JSON document for a list of service names and IP ranges.
  • A Lambda execution role with permissions restricted to least privilege required.
  • The Lambda function is automatically subscribed to the AmazonIPSpaceChanged SNS topic, which is responsible for monitoring changes in the list of AWS IPs.
  • A Lambda permission resource to allow the previously created SNS topic to invoke the template’s Lambda function.

Solution deployment through the console

You can download the AWS CloudFormation template, called template.yml, from the solution’s GitHub page.

After you’ve downloaded the template, access the CloudFormation console to create the stack. See the CloudFormation User Guide for instructions on selecting a downloaded template in the CloudFormation console to deploy a stack.

Note: The Region that you use when you deploy the template is where resources will be created.

On the Specify stack details page, you can enter the stack name, which will be the name used as a reference for resources created by the template, as well as six other stack parameters, shown in Figure 2.

Figure 2: Template parameters

The parameters are as follows:

  • EC2REGIONS – This is the Region that the solution will use as a reference when it updates its list of IPs. Select all for all Regions, but you can also specify a Region of interest.
  • IPV4SetNameSuffix – The solution will create an AWS WAF IPv4 IP set with the stack name as its name, but you can also add a suffix of your choice to the name.
  • IPV6SetNameSuffix – Like the AWS WAF IPv4 IP set, the IPv6 IP set can also have a suffix of your choice.
  • LambdaCodeS3Bucket – As mentioned in the Prerequisites section, you need to have previously uploaded the Lambda function Python code to an Amazon S3 bucket in the same Region where you’re deploying the stack. Enter the bucket name here, for example, mybucket.
  • LambdaCodeS3Object – Enter the name of the .zip file of the compressed Lambda function in the S3 bucket, for example, myfunction.zip.
  • SERVICES – Enter the list of AWS services for which you want the IP addresses populated in the AWS WAF IP sets. By default, this solution uses ROUTE53_HEALTHCHECKS and CLOUDFRONT, but you can change this parameter and add any service name, according to the list in the AWS IP ranges JSON.

After you deploy the template, its status will change to CREATE_COMPLETE.

Solution deployment through the AWS CLI

You can also deploy the solution template through the AWS Command Line Interface (AWS CLI). On the solution’s GitHub page, in the Setup section, follow the instructions for deploying the solution by using AWS CLI commands.

Note: To use the AWS CLI, you must have set it up in your environment. To set up the AWS CLI, follow the instructions in the AWS CLI installation documentation.

Invoke the Lambda function for the first time

After you successfully deploy the CloudFormation stack, you must invoke the Lambda function once so that the AWS WAF IP sets are populated with the AWS service IPs. This Lambda invocation is only required once; after this initial call, the solution handles future updates on your behalf.

To invoke this Lambda call through the AWS Management Console, open the Lambda console, select the Lambda function that was created by the template, and use the following event to create a test event. See Invoke the Lambda function in the AWS Lambda Developer Guide for step-by-step guidance on how to run a test event.

{
  "Records": [
    {
      "EventVersion": "1.0",
      "EventSubscriptionArn": "arn:aws:sns:EXAMPLE",
      "EventSource": "aws:sns",
      "Sns": {
        "SignatureVersion": "1",
        "Timestamp": "1970-01-01T00:00:00.000Z",
        "Signature": "EXAMPLE",
        "SigningCertUrl": "EXAMPLE",
        "MessageId": "12345678-1234-1234-1234-123456789012",
        "Message": "{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", \"synctoken\": \"0123456789\", \"md5\": \"test-hash\", \"url\": \"https://ip-ranges.amazonaws.com/ip-ranges.json\"}",
        "Type": "Notification",
        "UnsubscribeUrl": "EXAMPLE",
        "TopicArn": "arn:aws:sns:EXAMPLE",
        "Subject": "TestInvoke"
      }
    }
  ]
}

If the invocation succeeds, the newly created AWS WAF IP sets will contain the updated list of IPs for the services you’re working with.

You can also invoke the Lambda function through the AWS CLI by using the following command, where test_event.json contains the test event shown earlier.

aws lambda invoke \
  --function-name $CFN_STACK_NAME-UpdateWAFIPSets \
  --region $REGION \
  --payload file://lambda/test_event.json lambda_return.json

You can use the documentation for invoking a Lambda function in the AWS CLI to explore this command and its parameters.

After a successful invocation, the AWS CLI returns status code 200, indicating that the invocation happened as expected. At this point, the AWS WAF IP sets are updated.

Use the solution IP sets in your AWS WAF web ACL

Now that the AWS WAF IPv4 and IPv6 IP sets are populated, you can obtain the IP lists either by using the AWS WAF console or by calling the GetIPSet API through the AWS CLI command get-ip-set.
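
The following boto3 sketch shows the equivalent GetIPSet call; the IP set name, scope, and ID are placeholders that you can look up with list-ip-sets or in the AWS WAF console.

import boto3

wafv2 = boto3.client("wafv2")

# Hypothetical identifiers; find them with the list-ip-sets command or in the AWS WAF console.
response = wafv2.get_ip_set(
    Name="my-stack-IPv4Set",
    Scope="REGIONAL",
    Id="<ipv4-set-id>",
)

# Print the CIDR ranges that the solution populated.
for cidr in response["IPSet"]["Addresses"]:
    print(cidr)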

To use AWS WAF IP sets in your web ACL, see Creating and managing an IP set in the AWS WAF Developer Guide. You can use these IP sets in the same web ACL or rule group that contains the AWS Managed Rules Anonymous IP list and is associated with the AWS resource that AWS WAF is protecting. AWS WAF evaluation order and the solution’s positioning within the web ACL are discussed in a later section.

To associate your web ACL with an AWS resource, see Associating or disassociating a web ACL with an AWS resource.

Validate the solution

To validate the solution, let’s consider a scenario where you would like to allow requests from CloudFront to come through, while blocking any other anonymous and hosting provider sources. In this scenario, consider the following requests that are filtered by AWS WAF.

In the first one, a customer has the AWSManagedRulesAnonymousIpList rule group, and a request coming from an Amazon EC2 instance IP is blocked.

{
    "timestamp": 1619175030566,
    "formatVersion": 1,
    "webaclId": "arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/managedRuleValidation/11fd1e32-ae25-45f8-811f-3c1485f76ceb",
    "terminatingRuleId": "AWS-AWSManagedRulesAnonymousIpList",
    "terminatingRuleType": "MANAGED_RULE_GROUP",
    "action": "BLOCK",
    (...)
    "ruleGroupList": [
        {
            "ruleGroupId": "AWS#AWSManagedRulesAnonymousIpList",
            "terminatingRule": {
                "ruleId": "HostingProviderIPList",
                "action": "BLOCK",
                "ruleMatchDetails": null
            },
            (...)
    ],
    (...)
    "httpRequest": {
        "clientIp": "203.0.113.176",
        (...)
    }
}

In the second request, this time coming in from CloudFront, you can see that AWS WAF didn’t block the request.

{
    "timestamp": 1619175149405,
    "formatVersion": 1,
    "webaclId": "arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/managedRuleValidation/11fd1e32-ae25-45f8-811f-3c1485f76ceb",
    "terminatingRuleId": "Default_Action",
    "terminatingRuleType": "REGULAR",
    "action": "ALLOW",
    "terminatingRuleMatchDetails": [],
    (...)
    "httpRequest": {
       "clientIp": "130.176.96.86",
       (...)
    }
}

To achieve this result, you need to edit AWSManagedRulesAnonymousIpList and add a scope-down statement so that the rule set only blocks requests that aren’t sent from sources within this solution’s IPv4 and IPv6 IP sets.

To create a scope-down statement for AWSManagedRulesAnonymousIpList

  1. In the AWS WAF console, access your web ACL.
  2. Open the Rules tab.
  3. Select AWSManagedRulesAnonymousIpList rule set, and then choose Edit.
  4. Choose the arrow next to Scope-down statement – optional. You will see two options, Rule visual editor and Rule JSON editor.
  5. Choose Rule JSON editor and enter the following JSON, replacing <IPv4-IPSET-ARN> and <IPv6-IPSET-ARN> with the respective IP sets’ Amazon Resource Names (ARNs).

Note: You can use the AWS WAF ListIPSets action or the list-ip-sets CLI command to obtain the IP set Amazon Resource Names (ARNs) and enter that information in the provided JSON.

{
  "NotStatement": {
    "Statement": {
      "OrStatement": {
        "Statements": [
          {
            "IPSetReferenceStatement": {
              "ARN": "<IPv4-IPSET-ARN>"
            }
          },
          {
            "IPSetReferenceStatement": {
              "ARN": "<IPv6-IPSET-ARN>"
            }
          }
        ]
      }
    }
  }
}
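
If you’d rather look up the IP set ARNs programmatically than copy them from the console, a short boto3 sketch like this one lists them; the scope value depends on where the solution deployed the IP sets.

import boto3

wafv2 = boto3.client("wafv2")

# Use the scope where the solution deployed the IP sets: "REGIONAL" or "CLOUDFRONT".
ip_sets = wafv2.list_ip_sets(Scope="REGIONAL")["IPSets"]

# Print each IP set's name and ARN so you can paste the ARNs into the scope-down statement.
for ip_set in ip_sets:
    print(ip_set["Name"], ip_set["ARN"])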

After making this change, your rule editing page will look like the following.

Figure 3: AWSManagedRulesAnonymousIpList scope-down statement

When you set the rule priority, consider giving the AWSManagedRulesAnonymousIpList rule group a lower priority value than other rules within the web ACL, so that the rule group is evaluated before rules that are configured with terminating actions (that is, Allow and Block actions). With the scope-down statement in place, requests from the IP addresses within the IP sets are excluded from the rule group’s evaluation and pass on to the next rule for further evaluation, while every other request is still evaluated against the Anonymous IP list rules. Figure 4 shows an example of the suggested priority.

Figure 4: Example with suggested use of AWS WAF web ACL priority

Summary

This blog post provides you with a solution that is capable of automatically updating AWS WAF IP sets with the list of current IP ranges for one or more AWS services. You can use this solution in various ways, such as to allow requests from Amazon CloudFront when you’re using the AWS Managed Rules Anonymous IP List.

For best practices on AWS WAF implementation, see Guidelines for Implementing AWS WAF. For further reading on AWS WAF, see the AWS WAF Developer Guide.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Fola Bolodeoku

Fola is a Security Engineer on the AWS Shield Team, where he focuses on helping customers improve their application security posture against DDoS, and other application threats. When he is not working, he enjoys spending time on road trips in the Western Cape, and beyond.

Author

Mario Pinho

Mário is a Security Engineer at AWS. He has a background in network engineering and consulting, and feels at his best when breaking apart complex topics and processes into their simpler components. In his free time, he pretends to be an artist by playing piano and doing landscape photography.

Author

Davidson Junior

Davidson is a Cloud Support Engineer at AWS in Cape Town, South Africa. He is a subject matter expert in AWS WAF, and is focused on helping customers troubleshoot and protect their network in the cloud. Outside of work, he enjoys listening to music, outdoor photography, and hiking the Western Cape.

Building well-architected serverless applications: Implementing application workload security – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-implementing-application-workload-security-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Security question SEC3: How do you implement application security in your workload?

Review and automate security practices at the application code level, and enforce security code review as part of development workflow. By implementing security at the application code level, you can protect against emerging security threats and reduce the attack surface from malicious code, including third-party dependencies.

Required practice: Review security awareness documents frequently

Stay up to date with both AWS and industry security best practices to understand and evolve protection of your workloads. Having a clear understanding of common threats helps you to mitigate them when developing your workloads.

The AWS Security Blog provides security-specific AWS content. The Open Web Application Security Project (OWASP) Top 10 is a guide for security practitioners to understand the most common application attacks and risks. The OWASP Top 10 Serverless Interpretation provides information specific to serverless applications.

Review and subscribe to vulnerability and security bulletins

Regularly review news feeds from multiple sources that are relevant to the technologies used in your workload. Subscribe to notification services to be informed of critical threats in near-real time.

The Common Vulnerabilities and Exposures (CVE) program identifies, defines, and catalogs publicly disclosed cybersecurity vulnerabilities. You can search the CVE list directly, for example, for "Python".

CVE Python search

The US National Vulnerability Database (NVD) allows you to search by vulnerability type, severity, and impact. You can also perform advanced searching by vendor name, product name, and version numbers. GitHub also integrates with CVE, which allows for advanced searching within the CVEproject/cvelist repository.

AWS Security Bulletins are a notification system for security and privacy events related to AWS services. Subscribe to the security bulletin RSS feed to keep up to date with AWS security announcements.

The US Cybersecurity and Infrastructure Security Agency (CISA) provides alerts about current security issues, vulnerabilities, and exploits. You can receive email alerts or subscribe to the RSS feed.

AWS Partner Network (APN) member Palo Alto Networks provides the “Serverless architectures Security Top 10” list. This is a security awareness and education guide to use while designing, developing, and testing serverless applications to help minimize security risks.

Good practice: Automatically review a workload’s code dependencies/libraries

Regularly reviewing application and code dependencies is a good industry security practice. This helps detect and prevent non-certified application code and ensures that third-party application dependencies operate as intended.

Implement security mechanisms to verify application code and dependencies before using them

Combine automated and manual security code reviews to examine application code and its dependencies to ensure they operate as intended. Automated tools can help identify overly complex application code, and common security vulnerability exposures that are already cataloged.

Manual security code reviews, in addition to automated tools, help ensure that application code works as intended. Manual reviews can include business contextual information and integrations that automated tools may not capture.

Before adding any code dependencies to your workload, take time to review and certify each dependency to ensure that you are adding secure code. Use third-party services to review your code dependencies on every commit automatically.

OWASP provides a code review guide and a dependency check tool that attempts to detect publicly disclosed vulnerabilities within a project’s dependencies. The tool has a command line interface, a Maven plugin, an Ant task, and a Jenkins plugin.

GitHub has a number of security features for hosted repositories to inspect and manage code dependencies.

The dependency graph allows you to explore the packages that your repository depends on. Dependabot alerts show information about dependencies that are known to contain security vulnerabilities. You can choose whether to have pull requests generated automatically to update these dependencies. Code scanning alerts automatically scan code files to detect security vulnerabilities and coding errors.

You can enable these features by navigating to the Settings tab, and selecting Security & analysis.

GitHub configure security and analysis features

Once Dependabot analyzes the repository, you can view the dependency graph from the Insights tab. In the serverless airline example used in this series, you can view the Loyalty service package.json dependencies.

Serverless airline loyalty dependencies

Dependabot alerts for security vulnerabilities are visible in the Security tab. You can review alerts and see information about how to resolve them.

Dependabot alert

Once Dependabot alerts are enabled for a repository, you can also view the alerts when pushing code to the repository from the terminal.

Dependabot terminal alert

If you enable security updates, Dependabot can automatically create pull requests to update dependencies.

Dependabot pull requests

AWS Partner Network (APN) member Snyk has an integration with AWS Lambda to manage the security of your function code. Snyk determines what code and dependencies are currently deployed for Node.js, Ruby, and Java projects. It tests dependencies against their vulnerability database.

If you build your functions using container images, you can use Amazon Elastic Container Registry’s (ECR) image scanning feature. You can manually scan your images, or scan them on each push to your repository.

Elastic Container Registry image scanning example results

Best practice: Validate inbound events

Sanitize inbound events and validate them against a predefined schema. This helps prevent errors and increases your workload’s security posture by catching malformed events or events intentionally crafted to be malicious. The OWASP Input validation cheat sheet includes guidance for providing input validation security functionality in your applications.

Validate incoming HTTP requests against a schema

Implicitly trusting data from clients could lead to malformed data being processed. Use data type validators or web application frameworks to ensure data correctness. These validations should include regular expressions, value ranges, data structure, and data normalization.

You can configure Amazon API Gateway to perform basic validation of an API request before proceeding with the integration request to add another layer of security. This ensures that the HTTP request matches the desired format. Any HTTP request that does not pass validation is rejected, returning a 400 error response to the caller.

The Serverless Security Workshop has a module on API Gateway input validation based on the fictional Wild Rydes unicorn ride-hailing service. The example shows a REST API endpoint where partner companies of Wild Rydes can submit unicorn customizations, such as branded capes, to advertise their company. The API endpoint should ensure that the request body follows specific patterns. These include checking that the ImageURL is a valid URL, and that the ID for Cape is a numeric value.

In API Gateway, a model defines the data structure of a payload, using the JSON schema draft 4. The model ensures that you receive the parameters in the format you expect. You can check them against regular expressions. The CustomizationPost model specifies that the ImageURL and Cape schemas should contain the following valid patterns:

    "imageUrl": {
      "type": "string",
      "title": "The Imageurl Schema",
      "pattern": "^https?:\/\/[[email protected]:%_+.~#?&//=]+$"
    },
    "sock": {
      "type": "string",
      "title": " The Cape Schema ",
      "pattern": "^[0-9]*$"
    },
    …

The model is applied to the POST method on the /customizations resource as part of the Method Request. The Request Validator is set to Validate body and the CustomizationPost model is set for the Request Body.

API Gateway request validator

When testing the POST /customizations API with valid parameters using the following input:

{  
   "name":"Cherry-themed unicorn",
   "imageUrl":"https://en.wikipedia.org/wiki/Cherry#/media/File:Cherry_Stella444.jpg",
   "sock": "1",
   "horn": "2",
   "glasses": "3",
   "cape": "4"
}

The result is a valid response:

{"customUnicornId":<the-id-of-the-customization>}

Testing the POST /customizations API with invalid parameters shows the input validation process.

The ImageUrl is not a valid URL:

 {  
    "name":"Cherry-themed unicorn",
    "imageUrl":"htt://en.wikipedia.org/wiki/Cherry#/media/File:Cherry_Stella444.jpg",
    "sock": "1" ,
    "horn": "2" ,
    "glasses": "3",
    "cape": "4"
 }

The Cape parameter is not a number; instead, it contains a SQL injection attempt.

 {  
    "name":"Orange-themed unicorn",
    "imageUrl":"https://en.wikipedia.org/wiki/Orange_(fruit)#/media/File:Orange-Whole-%26-Split.jpg",
    "sock": "1",
    "horn": "2",
    "glasses": "3",
    "cape":"2); INSERT INTO Cape (NAME,PRICE) VALUES ('Bad color', 10000.00"
 }

These return a 400 Bad Request response from API Gateway before invoking the Lambda function:

{"message": "Invalid request body"}

To gain further protection, consider adding an AWS Web Application Firewall (AWS WAF) web access control list (web ACL) to your API endpoint. The workshop includes an AWS WAF module that explores three AWS WAF rules:

  • Restrict the maximum size of request body
  • SQL injection condition as part of the request URI
  • Rate-based rule to prevent an overwhelming number of requests

AWS WAF ACL

AWS WAF also includes support for custom responses and request header insertion to improve the user experience and security posture of your applications.

For more API Gateway security information, see the security overview whitepaper.

Also add further input validation logic to your Lambda function code itself. For examples, see “Input Validation for Serverless”.
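
As a sketch of what that additional validation might look like inside a Python Lambda handler, the following example re-checks the imageUrl and cape fields with the same kinds of patterns that the API Gateway model enforces. The field names and patterns are assumptions based on the workshop example, not the workshop’s actual function code.

import json
import re

# Hypothetical validation inside the Lambda handler, mirroring the API Gateway model.
URL_PATTERN = re.compile(r"^https?://[-a-zA-Z0-9@:%_+.~#?&/=]+$")
NUMERIC_PATTERN = re.compile(r"^[0-9]*$")

def handler(event, context):
    body = json.loads(event.get("body") or "{}")

    # Re-check the fields even though API Gateway already validated the request shape;
    # this defense in depth protects you if the API configuration drifts.
    if not URL_PATTERN.match(body.get("imageUrl", "")):
        return {"statusCode": 400, "body": json.dumps({"message": "Invalid imageUrl"})}
    if not NUMERIC_PATTERN.match(str(body.get("cape", ""))):
        return {"statusCode": 400, "body": json.dumps({"message": "Invalid cape"})}

    # ...continue processing the validated customization request...
    return {"statusCode": 200, "body": json.dumps({"message": "OK"})}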

Conclusion

Implementing application security in your workload involves reviewing and automating security practices at the application code level. By implementing code security, you can protect against emerging security threats. You can improve the security posture by checking for malicious code, including third-party dependencies.

In this post, I cover reviewing security awareness documentation such as the CVE database. I show how to use GitHub security features to inspect and manage code dependencies. I then show how to validate inbound events using API Gateway request validation.

This well-architected question will be continued in a future post, where I will look at securely storing, auditing, and rotating secrets that are used in your application code.

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Managing application security boundaries – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-managing-application-security-boundaries-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Security question SEC2: How do you manage your serverless application’s security boundaries?

Defining and securing your serverless application’s boundaries ensures isolation for, within, and between components.

Required practice: Evaluate and define resource policies

Resource policies are AWS Identity and Access Management (IAM) statements. They are attached to resources such as an Amazon S3 bucket, or an Amazon API Gateway REST API resource or method. The policies define what identities have fine-grained access to the resource. To see which services support resource-based policies, see “AWS Services That Work with IAM”. For more information on how resource policies and identity policies are evaluated, see “Identity-Based Policies and Resource-Based Policies”.

Understand and determine which resource policies are necessary

Resource policies can protect a component by restricting inbound access to managed services. Use resource policies to restrict access to your component based on a number of identities, such as the source IP address/range, function event source, version, alias, or queues. Resource policies are evaluated and enforced at the IAM level before each AWS service applies its own authorization mechanisms, when available. For example, IAM resource policies for API Gateway REST APIs can deny access to an API before an AWS Lambda authorizer is called.

If you use multiple AWS accounts, you can use AWS Organizations to manage and govern individual member accounts centrally. Certain resource policies can be applied at the organization level, providing guardrails for what actions AWS accounts within the organization root or OU can do. For more information, see "Understanding how AWS Organization Service Control Policies work".

Review your existing policies and how they’re configured, paying close attention to how permissive individual policies are. Your resource policies should only permit necessary callers.

Implement resource policies to prevent unauthorized access

For Lambda, use resource-based policies to control which AWS IAM identities and event sources can invoke a specific version or alias of your function. Resource-based policies can also be used to control access to Lambda layers. You can combine resource policies with Lambda event sources. For example, if API Gateway invokes Lambda, you can restrict the policy to the API Gateway ID, HTTP method, and path of the request.
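
For example, such a permission could be added with a call like the following sketch. The function name, alias, API ID, account ID, and path are placeholders.

    import boto3

    lambda_client = boto3.client("lambda")

    # Allow only the POST /order method of one API Gateway REST API to invoke
    # the "live" alias of the function. All identifiers here are placeholders.
    lambda_client.add_permission(
        FunctionName="example-function",
        Qualifier="live",  # restrict the permission to a specific alias
        StatementId="AllowApiGatewayPostOrder",
        Action="lambda:InvokeFunction",
        Principal="apigateway.amazonaws.com",
        SourceArn="arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/*/POST/order",
    )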

In the serverless airline example used in this series, the IngestLoyalty service uses a Lambda function that subscribes to an Amazon Simple Notification Service (Amazon SNS) topic. The Lambda function resource policy allows SNS to invoke the Lambda function.

Lambda resource policy document

Lambda resource policy document

API Gateway resource-based policies can restrict API access to specific Amazon Virtual Private Cloud (VPC), VPC endpoint, source IP address/range, AWS account, or AWS IAM users.

Amazon Simple Queue Service (SQS) resource-based policies provide fine-grained access to certain AWS services and AWS IAM identities (users, roles, accounts). Amazon SNS resource-based policies restrict authenticated and non-authenticated actions to topics.

Amazon DynamoDB resource-based policies provide fine-grained access to tables and indexes. Amazon EventBridge resource-based policies restrict AWS identities to send and receive events including to specific event buses.

For Amazon S3, use bucket policies to grant permission to your Amazon S3 resources.
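
As a simple illustrative sketch, the following attaches a bucket policy that grants read-only access to a single application role; the bucket name, account ID, and role name are placeholders.

    import json
    import boto3

    s3 = boto3.client("s3")

    # Grant read-only access to one application role; all names are placeholders.
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAppRoleReadOnly",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-reader-role"},
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::example-app-bucket/*",
            }
        ],
    }

    s3.put_bucket_policy(Bucket="example-app-bucket", Policy=json.dumps(bucket_policy))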

The AWS re:Invent session Best practices for growing a serverless application includes further suggestions on enforcing security best practices.

Best practices for growing a serverless application

Best practices for growing a serverless application

Good practice: Control network traffic at all layers

Apply controls for both inbound and outbound traffic, including data loss prevention. Define requirements that help you protect your networks and protect against exfiltration.

Use networking controls to enforce access patterns

API Gateway and AWS AppSync have support for AWS Web Application Firewall (AWS WAF), which helps protect web applications and APIs from attacks. AWS WAF enables you to configure a set of rules called a web access control list (web ACL). These allow you to block or count web requests based on customizable web security rules and conditions that you define. These can include specified IP address ranges, CIDR blocks, specific countries, or Regions. You can also block requests that contain malicious SQL code, or requests that contain malicious script. For more information, see How AWS WAF Works.

A private API endpoint is an API Gateway interface VPC endpoint that can only be accessed from your Amazon Virtual Private Cloud (Amazon VPC). This is an elastic network interface that you create in a VPC. Traffic to your private API uses secure connections and does not leave the Amazon network; it is isolated from the public internet. For more information, see “Creating a private API in Amazon API Gateway”.

To restrict access to your private API to specific VPCs and VPC endpoints, you must add conditions to your API’s resource policy. For example policies, see the documentation.
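
The documentation linked above has the authoritative examples; as an illustrative sketch, a resource policy along the following lines denies invocation unless the request arrives through a specific VPC endpoint. The API ID and VPC endpoint ID are placeholders, and attaching the policy through a patch operation is just one possible approach.

    import json
    import boto3

    apigateway = boto3.client("apigateway")

    # Deny invocation unless the call arrives through one VPC endpoint.
    # The API ID and VPC endpoint ID are placeholders.
    resource_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": "execute-api:/*",
            },
            {
                "Effect": "Deny",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": "execute-api:/*",
                "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}},
            },
        ],
    }

    # One way to attach the policy is a patch operation on the REST API.
    apigateway.update_rest_api(
        restApiId="a1b2c3d4e5",
        patchOperations=[{"op": "replace", "path": "/policy", "value": json.dumps(resource_policy)}],
    )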

By default, Lambda runs your functions in a secure Lambda-owned VPC that is not connected to your account’s default VPC. Functions can access anything available on the public internet. This includes other AWS services, HTTPS endpoints for APIs, or services and endpoints outside AWS. The function cannot directly connect to your private resources inside of your VPC.

You can configure a Lambda function to connect to private subnets in a VPC in your account. When a Lambda function is configured to use a VPC, the Lambda function still runs inside the Lambda service VPC. The function then sends all network traffic through your VPC and abides by your VPC’s network controls. When you deploy functions to a VPC, consider how network access is configured to restrict access to your resources.

AWS Lambda service VPC with VPC-to-VPC NAT to customer VPC

AWS Lambda service VPC with VPC-to-VPC NAT to customer VPC

When you connect a function to a VPC in your account, the function cannot access the internet, unless the VPC provides access. To give your function access to the internet, route outbound traffic to a NAT gateway in a public subnet. The NAT gateway has a public IP address and can connect to the internet through the VPC’s internet gateway. For more information, see “How do I give internet access to my Lambda function in a VPC?”. Connecting a function to a public subnet doesn’t give it internet access or a public IP address.

You can control the VPC settings for your Lambda functions using AWS IAM condition keys. For example, you can require that all functions in your organization are connected to a VPC. You can also specify the subnets and security groups that the function’s users can and can’t use.
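
For instance, an identity-based policy using the lambda:VpcIds condition key can deny creating or updating functions that are not connected to any VPC. This is a hedged sketch; the policy name is a placeholder.

    import json
    import boto3

    iam = boto3.client("iam")

    # Deny creating or updating functions that are not configured with a VPC.
    # The policy name is a placeholder.
    vpc_enforcement_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "EnforceVpcFunction",
                "Effect": "Deny",
                "Action": ["lambda:CreateFunction", "lambda:UpdateFunctionConfiguration"],
                "Resource": "*",
                "Condition": {"Null": {"lambda:VpcIds": "true"}},
            }
        ],
    }

    iam.create_policy(
        PolicyName="RequireLambdaVpcExample",
        PolicyDocument=json.dumps(vpc_enforcement_policy),
    )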

Unsolicited inbound traffic to a Lambda function isn’t permitted by default. There is no direct network access to the execution environment where your functions run. When connected to a VPC, function outbound traffic comes from your own network address space.

You can use security groups, which act as a virtual firewall to control outbound traffic for functions connected to a VPC. Use security groups to permit your Lambda function to communicate with other AWS resources. For example, a security group can allow the function to connect to an Amazon ElastiCache cluster.

To filter or block access to certain locations, use VPC routing tables to configure routing to different networking appliances. Use network ACLs to block access to CIDR IP ranges or ports, if necessary. For more information about the differences between security groups and network ACLs, see “Compare security groups and network ACLs.”

In addition to API Gateway private endpoints, several AWS services offer VPC endpoints, including Lambda. You can use VPC endpoints to connect to AWS services from within a VPC without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Using tools to audit your traffic

When you configure a Lambda function to use a VPC, or use private API endpoints, you can use VPC Flow Logs to audit your traffic. VPC Flow Logs allow you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or S3 to see where traffic is being sent at a granular level. Here are some flow log record examples. For more information, see “Learn from your VPC Flow Logs”.

Block network access when required

In addition to security groups and network ACLs, third-party tools allow you to disable outgoing VPC internet traffic. These can also be configured to allow traffic to AWS services or allow-listed services.

Conclusion

Managing your serverless application’s security boundaries ensures isolation for, within, and between components. In this post, I cover how to evaluate and define resource policies, showing what policies are available for various serverless services. I show some of the features of AWS WAF to protect APIs. Then I review how to control network traffic at all layers. I explain how Lambda functions connect to VPCs, and how to use private APIs and VPC endpoints. I walk through how to audit your traffic.

This well-architected question will be continued where I look at using temporary credentials between resources and components. I cover why smaller, single purpose functions are better from a security perspective, and how to audit permissions. I show how to use AWS Serverless Application Model (AWS SAM) to create per-function IAM roles.

For more serverless learning resources, visit https://serverlessland.com.

Customize requests and responses with AWS WAF

Post Syndicated from Kaustubh Phatak original https://aws.amazon.com/blogs/security/customize-requests-and-responses-with-aws-waf/

In March 2021, AWS introduced support for custom responses and request header insertion with AWS WAF. This blog post will demonstrate how you can use these new features to customize your AWS WAF solution to improve the user experience and security posture of your applications.

HTTP response codes are standard responses sent by a server in response to a client request. When AWS WAF blocks a request, the default response code sent back to the client is HTTP 403 (Forbidden). The HTTP 403 response code is associated with a default error page built by the web server engine. This page is typically generic and not user-friendly. With the Custom Response feature, AWS WAF now allows you to modify the status code from HTTP 403 to HTTP 2xx, 3xx, 4xx, and 5xx, and to return a custom body when the request is blocked by AWS WAF. The custom responses unique to AWS WAF also allow you to differentiate between blocked requests generated by AWS WAF and those generated by your server.

When inspected HTTP requests are allowed by AWS WAF, the request is passed through to the associated resource. You now have the ability to insert custom HTTP request headers for each rule in your web access control list (web ACL) whose action is set to allow or count, and you can create additional logic in your application by tagging these requests with the headers.

We will be outlining three different use cases to show how you can use these AWS WAF features.

Use case 1: Custom response code

In this example, you will use the custom response code feature to redirect a viewer request to a different webpage. You use HTTP 3xx response codes to redirect the incoming request, and use the HTTP header Location to specify the website URL for redirection. Figure 1 shows an overview of this workflow.

Figure 1: Overview of using custom response code to redirect the request

Figure 1: Overview of using custom response code to redirect the request

Figure 1 illustrates the following steps:

  1. AWS WAF has a rate-based rule to allow 100 requests every 5 minutes.
  2. A user sends multiple requests and breaches the AWS WAF rate-based rule’s threshold.
  3. AWS WAF blocks any further requests from the user.
  4. The AWS WAF custom response code feature modifies the response code from HTTP 403 to HTTP 302 – Temporary Redirect with a Location header specifying the redirected URL.

Configure the AWS WAF web ACL and rule for custom response code

To create an Application Load Balancer and associate it with AWS WAF

  1. Follow the steps to configure a load balancer and a listener to create an internet-facing load balancer in the N. Virginia AWS Region.
  2. After the load balancer is created, open the AWS WAF console.
  3. In the navigation pane, choose Web ACLs, and then choose Create web ACL in the US East (N. Virginia) Region.
  4. For Name, enter the name that you want to use to identify this web ACL.
  5. For Resource type, choose the Application Load Balancer that you created in Step 1 and choose Add.
  6. Choose Next.
  7. Choose Add rules and then choose Add my own rules and rule groups.
  8. For Name, enter the name that you want to use to identify this rule.
  9. For Rule type, choose Rate-based rule.
  10. For Rate limit, enter 100.
  11. Under Actions, keep the default action of Block and enable Custom response.
  12. Enter the response code as 302.
  13. Under Response headers, add a new custom header with Key as Location and Value as example.com
  14. Choose Add rule.
  15. Continue to choose Next to reach the summary page, and then choose Create new web ACL.

After the web ACL is created, you should see the web ACL configuration as shown in Figure 2.

Figure 2: Custom Response - Web ACL configuration

Figure 2: Custom Response – Web ACL configuration

Now, the setup is complete. You have a web ACL with a rate-based rule configured to redirect blocked requests to a different URL. To verify that the setup is working as expected, you can enable and analyze the AWS WAF logs for a test user that is sending more than 100 requests in a period of 5 minutes.

In Figure 3, you can see the custom response code of 302 being sent to the test user instance.

Figure 3: Verifying the AWS WAF logs for custom response

Figure 3: Verifying the AWS WAF logs for custom response

In the example in Figure 3, we tested our configuration by having a user send more than 100 requests from a PC to trigger a block. To verify the Location header, we analyzed the network traffic by using the developer tools of the browser. As you can see in Figure 4, the response includes the custom header Location with the configured redirect URL.

Figure 4: Verifying response in the browser tools for custom response

Figure 4: Verifying response in the browser tools for custom response
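
If you manage your web ACLs programmatically, the rule from the console steps above can be expressed roughly as in the following sketch and passed in the Rules list of a wafv2 CreateWebACL or UpdateWebACL call. The rule name, metric name, and redirect URL are placeholders.

    # Rate-based rule that blocks with an HTTP 302 redirect once a client IP
    # exceeds 100 requests in 5 minutes. Names and the redirect URL are placeholders.
    rate_limit_redirect_rule = {
        "Name": "RateLimitRedirect",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 100, "AggregateKeyType": "IP"}},
        "Action": {
            "Block": {
                "CustomResponse": {
                    "ResponseCode": 302,
                    "ResponseHeaders": [{"Name": "Location", "Value": "https://example.com"}],
                }
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitRedirect",
        },
    }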

Use case 2: Custom error page

In this example, you will use the AWS WAF custom error page to route the request to a different error page, rather than the default web server error pages. As you can see in Figure 5, the workflow is similar to use case 1.

Figure 5: Overview of using custom error page to redirect the request

Figure 5: Overview of using custom error page to redirect the request

Figure 5 shows the following steps:

  1. AWS WAF has a rate-based rule to allow 100 requests every 5 minutes.
  2. A user sends multiple requests and breaches the AWS WAF rate-based rule’s threshold.
  3. AWS WAF blocks any further requests from the user.
  4. The AWS WAF custom response code feature modifies the response code to HTTP 307 – Temporary Redirect and responds with a custom error page with the message Too Many Requests.

To configure the AWS WAF web ACL and rule for custom error page

  1. In the AWS WAF console, in the navigation pane, choose Web ACLs, and then choose the web ACL that you created in use case 1.
  2. On the Rules tab, choose Add rules and then choose Add my own rules and rule groups.
  3. For Name, enter the name that you want to use to identify this rule.
  4. For Rule type, choose Rate-based rule.
  5. For Rate limit, enter 100.
  6. Under Actions, keep the default action of Block and enable Custom response.
  7. For the response code, enter 307.
  8. For Choose how you would like to specify the response body, select Create a custom response body.
  9. A pop-up box will open. Enter a name for the Response body object name.
  10. For Content type, you can select JSON, HTML, or Plain Text. In this example, we select Plain Text.
  11. For Response body, enter any sample text. In this example, we enter This is a sample custom error page. Then choose Save.
  12. Choose Add Rule.
  13. For Set rule priority, move your new rule to the top so that this rule is processed first.

Figure 6 shows a summary of the rate based-rule created for use case 2.

Figure 6: Custom error page - Web ACL configuration

Figure 6: Custom error page – Web ACL configuration

Now, the setup is complete. You have a web ACL with a rate-based rule configured to redirect blocked requests to a different URL. To verify that the setup is working as expected, you can analyze the AWS WAF logs for a test user that is sending more than 100 requests in a period of 5 minutes. Figure 7 shows the custom response code of 307 being sent to our example test user instance.

Figure 7: Verifying responseCodeSent in the AWS WAF logs

Figure 7: Verifying responseCodeSent in the AWS WAF logs

When you access the load balancer URL from your browser, you should see the custom error page similar to Figure 8.

Figure 8: Verifying response using the browser

Figure 8: Verifying response using the browser
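
Programmatically, the custom error page from this use case is defined at the web ACL level and referenced from the rule’s block action, roughly as in the following sketch. The body key, rule name, and metric name are placeholders.

    # Custom response bodies are defined once on the web ACL and referenced by key.
    custom_response_bodies = {
        "too-many-requests": {
            "ContentType": "TEXT_PLAIN",
            "Content": "This is a sample custom error page",
        }
    }

    # Rate-based rule that blocks with HTTP 307 and returns the custom body above.
    rate_limit_error_page_rule = {
        "Name": "RateLimitErrorPage",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 100, "AggregateKeyType": "IP"}},
        "Action": {
            "Block": {
                "CustomResponse": {
                    "ResponseCode": 307,
                    "CustomResponseBodyKey": "too-many-requests",
                }
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitErrorPage",
        },
    }

Both structures would be supplied to the CreateWebACL or UpdateWebACL call, as the CustomResponseBodies and Rules parameters respectively.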

Use case 3: Header insertion for request tagging

This example demonstrates the AWS WAF header insertion capability to route the request based on geolocation. You will use the header country-check to notify the Application Load Balancer to route the request to a different target group, by using the Application Load Balancer advanced routing feature.

Figure 9: Overview of using request header insertion to tag the request to be processed downstream

Figure 9: Overview of using request header insertion to tag the request to be processed downstream

Figure 9 shows the following steps:

  1. A user sends a request to the Application Load Balancer that has AWS WAF attached.
  2. AWS WAF applies a geographic location rule that conditionally allows requests from unexpected countries in Count mode.
  3. AWS WAF adds a custom HTTP request header to tag this request.
  4. An Application Load Balancer listener rule is configured to route requests based on this header.
  5. The request tagged by AWS WAF with the custom header is routed to a separate target group.

To add a geographical location rule for request header insertion

  1. In the AWS WAF console, in the navigation pane, choose Web ACLs, and then choose the web ACL that you created in use case 1.
  2. On the Rules tab, choose Add rules and then choose Add my own rules and rule groups.
  3. For Name, enter the name that you want to use to identify this rule.
  4. For Rule type, choose Regular rule.
  5. For If a request, select doesn’t match the statement (NOT).
  6. For Inspect, select Originates from a country in.
  7. In this example, normal traffic originates from United States; so under Country codes, select United States – US.
  8. For IP address to use to determine the country of origin, choose Source IP Address.
  9. For Action, choose Count. This will allow requests to be logged and tagged while processing other rules that follow.
  10. Expand Custom request, choose Add new custom header. For Key, choose country-check and for Value, choose true.

    Note: custom request headers are prefixed with x-amzn-waf-

  11. Choose Save rule.
  12. For Set rule priority, move your new rule to the top so that this rule is processed first.
  13. Choose Save.

 

Figure 10: Header insertion - Web ACL configuration

Figure 10: Header insertion – Web ACL configuration

For this use case, you set up a geographical location rule to check for requests that originate from countries outside of the normal traffic flow of your application (in this example, the United States). You do not want to block the requests right away, but instead tag the requests triggered by this AWS WAF rule for further validation downstream by the application logic. To route the tagged requests differently, you use the ALB advanced request routing feature to route AWS WAF tagged traffic to a different target group.

You can verify the header inserted by the rule by enabling AWS WAF full logs and looking at the requestHeadersInserted log field, as shown in Figure 11.

Figure 11: Verifying the AWS WAF logs for header insertion

Figure 11: Verifying the AWS WAF logs for header insertion
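
Expressed as a rule definition, the geographic match and header insertion look roughly like the following sketch; the header name and country code mirror the console steps above, and the rule and metric names are placeholders.

    # Count requests that do NOT originate from the US and tag them with a custom
    # header. AWS WAF prefixes inserted headers with x-amzn-waf-, so the header
    # arrives at the origin as x-amzn-waf-country-check.
    country_check_rule = {
        "Name": "CountryCheck",
        "Priority": 0,
        "Statement": {
            "NotStatement": {"Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}}}
        },
        "Action": {
            "Count": {
                "CustomRequestHandling": {
                    "InsertHeaders": [{"Name": "country-check", "Value": "true"}]
                }
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "CountryCheck",
        },
    }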

Conclusion

AWS WAF provides the ability to create a custom response for blocked requests by changing the status code and response body. The header insertion capability allows you to tag requests allowed by AWS WAF for your application to perform another action.

In this post, we showed you three basic use cases to demonstrate how you can create a better user experience by redirecting users to another location instead of responding with a denied page. We showed you how you can create custom AWS WAF rules that tag requests so that your application logic can see that a request has been inspected and make decisions based on this information.

If you’re new to AWS WAF, see Getting started with AWS WAF.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kaustubh Phatak

Kaustubh is a Solutions Architect at AWS. He’s passionate about helping customers build scalable, secure, and cost-effective applications to achieve business outcomes. Outside work, Kaustubh likes to play cricket and spend time with his wife and kid.

Author

EJ Chen

EJ is an Edge Specialist Solutions Architect at AWS.

AWS Shield threat landscape review: 2020 year-in-review

Post Syndicated from Mario Pinho original https://aws.amazon.com/blogs/security/aws-shield-threat-landscape-review-2020-year-in-review/

AWS Shield is a managed service that protects applications that are running on Amazon Web Services (AWS) against external threats, such as bots and distributed denial of service (DDoS) attacks. Shield detects network and web application-layer volumetric events that may indicate a DDoS attack, web content scraping, or other unauthorized non-human traffic that is interacting with AWS resources.

In this blog post, I’ll show you some of the volumetric event trends from network traffic and web request patterns that we observed in 2020 as more workloads moved to the cloud. It includes insights that are broadly applicable to cloud applications and insights that are specific to gaming applications. I will also share tips and best practices that you can follow to protect the availability of the applications that you run on AWS.

DDoS trends as more developers rely on the cloud

In 2020, we saw an increase in developers building applications on AWS and protecting their availability with AWS Shield Advanced, which includes AWS WAF at no additional cost. The DDoS threat vectors we observed were similar to the ones that were observed in 2019, but they occurred with greater frequency. Between February 2020 and April 2020, we observed a 72% increase in the monthly number of events that were detected by Shield.

TCP SYN floods and UDP reflection attacks, which attempt to reflect and amplify packets off legitimate services running on the internet, were among the most common infrastructure-layer events detected by AWS Shield in 2020. (In this blog post, we’ll use the term infrastructure layer to refer to Layers 3 and 4 of the OSI model.) These tactics attempt to affect the availability of an application by overwhelming its ability to process packets or establish new connections on behalf of legitimate users. One of the oldest UDP reflection vectors, DNS reflection, remains the most common, at 15.5% of all infrastructure-layer events detected by Shield. TCP SYN floods were the second most common at 13.8%. This is unsurprising, because web applications commonly rely upon both DNS and TCP traffic. Bad actors can find a consistent supply of systems on the internet that can be used as reflectors, due to the properties of these protocols, or system misconfiguration.

Bad actors may use application-layer requests, in isolation or together with infrastructure-layer attacks, in their attempt to affect the availability of an application. The most common application-layer attack observed by Shield in 2020 was the web request flood, an observation that is consistent with prior years. This vector gives a bad actor more leverage, meaning that they can have a greater effect with less traffic and effort. Instead of having to exhaust the capacity of a network path, device, or other lower-level component, they only need to send more web requests than the application is able to handle. This attack vector was a significant cause of increased volumetric events detected by Shield in the first half of 2020. For more information about events detected by Shield during 2020, see Figure 1.
 

Figure 1: Monthly number of volumetric events detected by AWS Shield in 2020

Figure 1: Monthly number of volumetric events detected by AWS Shield in 2020

A closer look at web application-layer attacks

The request volume of web application-layer events that are detected by AWS Shield has increased, an indication that bad actors are making greater investments in tactics that are more challenging to detect and mitigate than infrastructure-layer events. Shield continuously monitors DDoS activity and alerts customers if there is an elevated threat at any point in time. In 2020, Shield reported elevated threats on 53 days, 33 of which were caused by high-volume web request floods. There were 55 events with a volume of greater than 500,000 requests per second (RPS), some of which reached millions of RPS. The RPS of the 99th percentile (P99) of the volume of web request floods detected by Shield nearly doubled between the first and second halves of the year. (The 99th percentile is the request volume in RPS, below which 99% of request floods were observed.) For more information about the volume of web request floods detected by Shield in 2020, see Figure 2.
 

Figure 2: Quarterly P90 and P99 volume of web request floods detected by AWS Shield in 2020

Figure 2: Quarterly P90 and P99 volume of web request floods detected by AWS Shield in 2020

It’s important to protect web applications against DDoS attacks of any size. The more common request floods are relatively small, but smaller attacks can affect an application if it isn’t architected for DDoS resiliency. You can follow these best practices to help protect your web application against request floods and other DDoS attacks:

  • Protect internet-facing resources with AWS Shield Advanced. You can use AWS Shield Advanced to protect your applications that are running on AWS against most common, frequently occurring network and transport layer DDoS attacks. When you add protected resources in AWS Shield Advanced, network volumetric attacks against those resources are detected and mitigated more quickly. You also receive visibility into security events by using the AWS Shield console, API, or Amazon CloudWatch metrics. If you need assistance during an active event, you can quickly engage with AWS Shield experts or escalate to the AWS Shield Response Team (SRT).
  • Access greater network and request capacity with Amazon CloudFront and Amazon Route 53. You can use these services to serve static and dynamic web content, as well as DNS answers, by using the global network of AWS edge locations. This provides you with greater capacity to help mitigate large volumetric attacks. Applications that are fronted by Amazon CloudFront and Amazon Route 53 also benefit from inline mitigation that continually inspects all traffic and mitigates most infrastructure-layer DDoS attempts in less than one second. CloudFront and the AWS Shield DDoS mitigation systems use SYN cookies to verify new connections, which protects against SYN floods and other traffic floods that aren’t valid for the application. (A SYN cookie is a technique by which the Shield infrastructure encodes connection setup information into the SYN response (SYN-ACK packet) in such a way that the TCP connection resources are only consumed for legitimate clients who complete the TCP handshake.)
  • Use AWS WAF and rate-based rules to mitigate application-layer attacks. AWS Shield Advanced provides you with protection against infrastructure-layer attacks that can be mitigated with network-based DDoS mitigation systems. When you add Shield Advanced protection to CloudFront or Application Load Balancer (ALB) for serving web content, you receive AWS WAF at no additional cost. AWS Managed Rules for AWS WAF makes it easy to select and apply pre-configured rules, depending on your specific requirements. You also receive web request flood detection and can mitigate security events by configuring rate-based rules to match and temporarily block IP addresses that are sending traffic above a rate that you define. For larger applications, or applications that span multiple AWS accounts, you can use AWS Firewall Manager to deploy and manage rules across all of your resources.

Considerations unique to gaming use cases

On AWS, you can build and protect any kind of application. Internet-facing applications are more likely to receive DDoS attacks, particularly if a bad actor is motivated to disrupt the normal function of the application. We looked across AWS Shield data and found that one type of application stood out as the most likely to be targeted by DDoS attacks: gaming servers. Gaming servers host matches between players on their personal computers or gaming consoles. 16% of infrastructure-layer events detected by Shield in 2020 targeted gaming applications. The application might be targeted simply out of malice, or to gain an advantage in the game. Between Q1 2020 and Q2 2020, we observed a 46% increase in the frequency of events that were detected on behalf of gaming applications. This increase aligns with the increased use of residential internet networks during the same time.

There are unique considerations for protecting a gaming application against DDoS attacks. Many gaming applications rely upon UDP traffic, which makes it infeasible to block UDP as a countermeasure against the most common DDoS attacks, like UDP reflection attacks or UDP floods. You can nevertheless protect your gaming application and the experience of your players by using Elastic IP addresses and protecting these resources with AWS Shield Advanced. Shield Advanced has the ability to perform deep packet inspection of all traffic, even at extremely high PPS rates. Using that powerful tool, the AWS Shield Response Team (SRT) can work with you to understand your application and build a custom mitigation that allows only valid player traffic.

Reacting to extortion attempts

From August 2020 through November 2020, we saw a revival of DDoS extortion attempts, a tactic that is now more than six years old. Each extortion attempt reported by customers to the AWS SRT had familiar characteristics. A malicious actor would target an application that wasn’t running on AWS as a proof of concept and then threaten a larger, follow-on attack if a ransom wasn’t paid. Although it’s very uncommon for the follow-on attack to actually occur, application owners take these threats seriously and use the opportunity to assess their own protection and operational readiness. In approximately 90% of AWS support cases related to these attempts, the SRT assisted the application owners directly with their preparation. We also assisted Shield Advanced customers who weren’t directly targeted by extortion attempts but were aware of other extortion campaigns.

One question that we frequently hear is how AWS can help developers monitor their applications and take quick action if a possible DDoS attack is detected. When you protect your resources with AWS Shield Advanced, you have the option to associate an Amazon Route 53 health check. The status of the health check is used to improve the decisions that are made by the Shield detection system. If you have Shield Advanced proactive engagement enabled, the SRT is automatically engaged any time a Shield event corresponds to an unhealthy Route 53 health check that is associated to your protected resource. Based on the contact information provided in the Shield console, an SRT engineer will contact you to coordinate a response to the detected event. If you’re running a web application, you can choose to delegate access to your Shield Advanced and AWS WAF APIs to the SRT and provide the team with copies of your AWS WAF logs. During an escalation, an SRT engineer will evaluate your logs for DDoS signatures and robotic patterns and assist in building effective mitigations.
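
If you automate your Shield Advanced configuration, the health check association can be made with a call along these lines; the protection ID and health check ID are placeholders.

    import boto3

    shield = boto3.client("shield")

    # Associate a Route 53 health check with an existing Shield Advanced protection.
    # The protection ID and health check ID are placeholders.
    shield.associate_health_check(
        ProtectionId="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
        HealthCheckArn="arn:aws:route53:::healthcheck/11111111-2222-3333-4444-555555555555",
    )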

Summary

In this blog post, I shared some of the trends that were observed by AWS Shield in 2020, as well as steps that you can take to protect the availability of your applications against DDoS attacks. If you’d like to learn more about DDoS protection on AWS and configuring AWS Shield Advanced, check out the following resources:

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Shield forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mário Pinho

Mário is a Security Engineer at AWS. He has a background in network engineering and consulting, and feels at his best when breaking apart complex topics and processes into their simpler components. In his free time, he pretends to be an artist by playing piano and doing landscape photography.

Automatically update security groups for Amazon CloudFront IP ranges using AWS Lambda

Post Syndicated from Yeshwanth Kottu original https://aws.amazon.com/blogs/security/automatically-update-security-groups-for-amazon-cloudfront-ip-ranges-using-aws-lambda/

Amazon CloudFront is a content delivery network that can help you increase the performance of your web applications and significantly lower the latency of delivering content to your customers. For CloudFront to access an origin (the source of the content behind CloudFront), the origin has to be publicly available and reachable. Anyone with the origin domain name or IP address could request content directly and bypass CloudFront. In this blog post, I describe an automated solution that uses security groups to permit only CloudFront to access the origin.

Amazon Simple Storage Service (Amazon S3) origins provide a feature called Origin Access Identity, which blocks public access to selected buckets, making them accessible only through CloudFront. When you use CloudFront to secure your web applications, it’s important to ensure that only CloudFront can access your origin (such as Amazon Elastic Cloud Compute (Amazon EC2) or Application Load Balancer (ALB)) and any direct access to origin is restricted. This blog post shows you how to create an AWS Lambda function to automatically update Amazon Virtual Private Cloud (Amazon VPC) security groups with CloudFront service IP ranges to permit only CloudFront to access the origin.

AWS publishes the IP ranges in JSON format for CloudFront and other AWS services. If your origin is an Elastic Load Balancer or an Amazon EC2 instance, you can use VPC security groups to allow only CloudFront IP ranges to access your applications. The IP ranges in the list are separated by service and Region, and you must specify only the IP ranges that correspond to CloudFront.

The IP ranges that AWS publishes change frequently, and without an automated solution, you would need to retrieve this document frequently to understand the current IP ranges for CloudFront. Frequent polling is inefficient because there is no notice of when the IP ranges change, and if these IP ranges aren’t modified immediately, your clients might see 504 errors when they access CloudFront. Additionally, there are numerous IP ranges for each service, so performing the change manually isn’t an efficient way of updating these ranges. This means you need infrastructure to support the task. However, in that case you end up with another host to manage, complete with the typical patching, deployment, and monitoring. As you can see, a small task could quickly become more complicated than the problem you intended to solve.

An Amazon Simple Notification Service (Amazon SNS) message is sent to a topic whenever the AWS IP ranges change, enabling you to build an event-driven, serverless solution that updates the IP ranges for your security groups as needed, by using a Lambda function that is triggered in response to the SNS notification.

Here are the steps we are going to take to implement the solution:

  1. Create your resources
    1. Create an IAM policy and execution role for the Lambda function
    2. Create your Lambda function
  2. Test your Lambda function
  3. Configure your Lambda function’s trigger

Create your resources

The first thing you need to do is create a Lambda function execution role and policy. The Lambda function uses the execution role to access or create AWS resources. This Lambda function is triggered by an SNS notification whenever there’s a change in the IP ranges document. Based on the number of IP ranges present for CloudFront and the number of ports (for example, 80 and 443) that you want to allow on the origin, this Lambda function creates the required security groups. These security groups will allow only traffic from CloudFront to your ELB load balancers or EC2 instances.

Create an IAM policy and execution role for the Lambda function

When you create a Lambda function, it’s important to understand and properly define the security context for the Lambda function. Using AWS Identity and Access Management (IAM), you can create the Lambda execution role that determines the AWS service calls that the function is authorized to complete. (Learn more about the Lambda permissions model.)

To create the IAM policy for your role

  1. Log in to the IAM console with the user account that you will use to manage the Lambda function. This account must have administrator permissions.
  2. In the navigation pane, choose Policies.
  3. In the content pane, choose Create policy.
  4. Choose the JSON tab and copy the text from the following JSON policy document. Paste this text into the JSON text box.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "CloudWatchPermissions",
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource": "arn:aws:logs:*:*:*"
        },
        {
          "Sid": "EC2Permissions",
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeSecurityGroups",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress",
            "ec2:CreateSecurityGroup",
            "ec2:DescribeVpcs",
            "ec2:CreateTags",
            "ec2:ModifyNetworkInterfaceAttribute",
            "ec2:DescribeNetworkInterfaces"
          ],
          "Resource": "*"
        }
      ]
    }
    

  5. When you’re finished, choose Review policy.
  6. On the Review page, enter a name for the policy (for example, LambdaExecRolePolicy-UpdateSecurityGroupsForCloudFront). Review the policy Summary to see the permissions granted by your policy, and then choose Create policy to save your work.

To understand what this policy allows, let’s look closely at both statements in the policy. The first statement allows the Lambda function to create and write to CloudWatch Logs, which is vital for debugging and monitoring our function. The second statement allows the function to get information about existing security groups, get existing VPC information, create security groups, and authorize and revoke ingress permissions. It’s an important best practice that your IAM policies be as granular as possible, to support the principle of least privilege.

Now that you’ve created your policy, you can create the Lambda execution role that will use the policy.

To create the Lambda execution role

  1. In the navigation pane of the IAM console, choose Roles, and then choose Create role.
  2. For Select type of trusted entity, choose AWS service.
  3. Choose the service that you want to allow to assume this role. In this case, choose Lambda.
  4. Choose Next: Permissions.
  5. Search for the policy name that you created earlier and select the check box next to the policy.
  6. Choose Next: Tags.
  7. (Optional) Add metadata to the role by attaching tags as key-value pairs. For more information about using tags in IAM, see Tagging IAM Users and Roles.
  8. Choose Next: Review.
  9. For Role name, enter a name for your role (for example, LambdaExecRole-UpdateSecurityGroupsForCloudFront).
  10. (Optional) For Role description, enter a description for the new role.
  11. Review the role, and then choose Create role.

Create your Lambda function

Now, create your Lambda function and configure the role that you created earlier as the execution role for this function.

To create the Lambda function

  1. Go to the Lambda console in the N. Virginia Region and choose Create function. On the next page, choose Author from scratch. (I’ll be providing the code for your Lambda function, but for other functions, the Use a blueprint option can be a great way to get started.)
  2. Give your Lambda function a name (for example, UpdateSecurityGroupsForCloudFront) and description, and select Python 3.8 from the Runtime menu.
  3. Choose or create an execution role: Select the execution role you created earlier by selecting the option Use an Existing Role.
  4. After confirming that your settings are correct, choose Create function.
  5. Paste the Lambda function code from here.
  6. Select Save.

Additionally, in the Basic Settings of the Lambda function, increase the timeout to 10 seconds.

To set the timeout value in the Lambda console

  1. In the Lambda console, choose the function you just created.
  2. Under Basic settings, choose Edit.
  3. For Timeout, select 10s.
  4. Choose Save.

By default, the Lambda function has these settings:

  • The Lambda function is configured to create security groups in the default VPC.
  • CloudFront IP ranges are updated as inbound rules on port 80.
  • The created security groups are tagged with the name prefix AUTOUPDATE.
  • Debug logging is turned off.
  • The service for which IP ranges are extracted is set to CloudFront.
  • The SDK client in the Lambda function is set to us-east-1 (N. Virginia).

If you want to customize these settings, set the following environment variables for the Lambda function. For more details, see Using AWS Lambda environment variables.

  • To create security groups in a specific VPC, set Key to VPC_ID and Value to vpc-id.
  • To create security group rules for a different port or multiple ports, set Key to PORTS and Value to portnumber or portnumber,portnumber. The solution in this example supports a total of two ports; one can be used for HTTP and another for HTTPS.
  • To customize the prefix name tag of your security groups, set Key to PREFIX_NAME and Value to custom-name.
  • To enable debug logging to CloudWatch, set Key to DEBUG and Value to true.
  • To extract IP ranges for a service other than CloudFront, set Key to SERVICE and Value to servicename.
  • To configure the Region for the SDK client used in the Lambda function, set Key to REGION and Value to regionname. If the CloudFront origin is in a Region other than N. Virginia, the security groups must be created in that Region.

To set environment variables in the Lambda console

  1. In the Lambda console, choose the function you created.
  2. Under Environment variables, choose Edit.
  3. Choose Add environment variable.
  4. Enter a key and value.
  5. Choose Save.
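
The linked function code is the authoritative implementation; as an illustration of how these settings might be consumed, a function could read them with defaults like the following sketch.

    import os

    # Illustrative defaults that mirror the behavior described above; the actual
    # values come from the Lambda environment variables, when set.
    VPC_ID = os.environ.get("VPC_ID", "")  # empty string -> use the default VPC
    PORTS = [int(port) for port in os.environ.get("PORTS", "80").split(",")]
    PREFIX_NAME = os.environ.get("PREFIX_NAME", "AUTOUPDATE")
    DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
    SERVICE = os.environ.get("SERVICE", "CLOUDFRONT")
    REGION = os.environ.get("REGION", "us-east-1")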

Test your Lambda function

Now that you’ve created your function, it’s time to test it and initialize your security group.

To create your test event for the Lambda function

  1. In the Lambda console, on the Functions page, choose your function. In the drop-down menu next to Actions, choose Configure test events.
  2. Enter an Event Name (e.g. TriggerSNS)
  3. Enter the following as your sample event, which represents an SNS notification, and then select Create.
    {
        "Records": [
            {
                "EventVersion": "1.0",
                "EventSubscriptionArn": "arn:aws:sns:EXAMPLE",
                "EventSource": "aws:sns",
                "Sns": {
                    "SignatureVersion": "1",
                    "Timestamp": "1970-01-01T00:00:00.000Z",
                    "Signature": "EXAMPLE",
                    "SigningCertUrl": "EXAMPLE",
                    "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
                    "Message": "{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", \"synctoken\": \"0123456789\", \"md5\": \"7fd59f5c7f5cf643036cbd4443ad3e4b\", \"url\": \"https://ip-ranges.amazonaws.com/ip-ranges.json\"}",
                    "Type": "Notification",
                    "UnsubscribeUrl": "EXAMPLE",
                    "TopicArn": "arn:aws:sns:EXAMPLE",
                    "Subject": "TestInvoke"
                }
            }
        ]
    }
    

  4. After you’ve added the test event, select Save and then select Test. Your Lambda function is then invoked, and you should see log output at the bottom of the console in the Execution Result section, similar to the following.
    Updating from https://ip-ranges.amazonaws.com/ip-ranges.json
    MD5 Mismatch: got 2e967e943cf98ae998efeec05d4f351c expected 7fd59f5c7f5cf643036cbd4443ad3e4b: Exception
    Traceback (most recent call last):
      File "/var/task/lambda_function.py", line 29, in lambda_handler
        ip_ranges = json.loads(get_ip_groups_json(message['url'], message['md5']))
      File "/var/task/lambda_function.py", line 50, in get_ip_groups_json
        raise Exception('MD5 Missmatch: got ' + hash + ' expected ' + expected_hash)
    Exception: MD5 Mismatch: got 2e967e943cf98ae998efeec05d4f351c expected 7fd59f5c7f5cf643036cbd4443ad3e4b
    

  5. Edit the sample event again, and this time change the md5 value in the sample event to be the first MD5 hash provided in the log output. In this example, you would update the md5 value in the sample event configured earlier with the hash value seen in the error, 2e967e943cf98ae998efeec05d4f351c. The Lambda code executes successfully only when the original hash of the IP ranges document and the hash received from the event trigger match. After you modify the hash value from the error message received earlier, the test event matches the hash of the IP ranges document.
  6. Select Save and test. This invokes your Lambda function.

After the function is invoked the second time with the updated md5 hash, the Lambda function should execute without any errors. You should be able to see the new security groups created and the IP ranges of CloudFront updated in the rules in the EC2 console, as shown in Figure 1.
 

Figure 1: EC2 console showing the security groups created

Figure 1: EC2 console showing the security groups created

In the initial successful run of this function, it created the total number of security groups required to update all the IP ranges of CloudFront for the ports mentioned. The function creates security groups based on the maximum number of rules that can be added to individual security groups. The new security groups can be identified from the EC2 console by the name AUTOUPDATE_random if you used the default configuration, or a custom name if you provided a PREFIX_NAME.

You can now attach these security groups to your Elastic Load Balancer or EC2 instances. If your log output is different from what is described here, the output should help you identify the issue.

Configure your Lambda function’s trigger

After you’ve validated that your function is executing properly, it’s time to connect it to the SNS topic for IP changes. To do this, use the AWS Command Line Interface (CLI). Enter the following command, making sure to replace Lambda ARN with the Amazon Resource Name (ARN) of your Lambda function. You can find this ARN at the top right when viewing the configuration of your Lambda function.

aws sns subscribe --topic-arn "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged" --region us-east-1 --protocol lambda --notification-endpoint "Lambda ARN"

You should receive the ARN of your Lambda function’s SNS subscription.

Now add a permission that allows the Lambda function to be invoked by the SNS topic. The following command also adds the Lambda trigger.

aws lambda add-permission --function-name "Lambda ARN" --statement-id lambda-sns-trigger --region us-east-1 --action lambda:InvokeFunction --principal sns.amazonaws.com --source-arn "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"

When AWS changes any of the IP ranges in the document, an SNS notification is sent and your Lambda function will be triggered. This Lambda function verifies the modified ranges in the document and efficiently updates the IP ranges on the existing security groups. Additionally, the function dynamically scales and creates additional security groups if the number of IP ranges for CloudFront increases in the future. Any newly created security groups are automatically attached to the network interface where the previous security groups are attached, in order to avoid service interruption.

Summary

As you followed this blog post, you created a Lambda function to create security groups and update their rules dynamically whenever AWS publishes new IP ranges for its services. This solution has several advantages:

  • The solution isn’t designed as a periodic poll, so it only runs when it needs to.
  • It’s automatic, so you don’t need to update security groups manually, which lowers the operational cost.
  • It’s simple, because you have no extra infrastructure to maintain as the solution is completely serverless.
  • It’s cost effective, because the Lambda function runs only when triggered by the AmazonIpSpaceChanged SNS topic and runs for only a few seconds, so this solution costs only pennies to operate.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon CloudFront forum. If you have any other use cases for using Lambda functions to dynamically update security groups, or even other networking configurations such as VPC route tables or ACLs, we’d love to hear about them!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Yeshwanth Kottu

Yeshwanth is a Systems Development Engineer at AWS in Cupertino, CA. With a focus on CloudFront and Lambda@Edge, he enjoys helping customers tackle challenges through cloud-scale architectures. Yeshwanth has an MS in Computer Systems Networking and Telecommunications from Northeastern University. Outside of work, he enjoys travelling, visiting national parks, and playing cricket.

Centrally manage AWS WAF (API v2) and AWS Managed Rules at scale with Firewall Manager

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/centrally-manage-aws-waf-api-v2-and-aws-managed-rules-at-scale-with-firewall-manager/

Since AWS Firewall Manager was introduced in 2018, it has evolved with many more features and today also supports the newest version of AWS WAF, as well as the latest AWS WAF APIs (AWS WAFV2), and AWS Managed Rules for AWS WAF. (Note that the original AWS WAF APIs are still available and supported under the name AWS WAF Classic. Firewall Manager already supported AWS WAF Classic and continues to do so.) In this blog, we walk you through the steps of setting up Firewall Manager policies for AWS WAF and highlight some of the options available. We also walk through how to use the recently launched feature that enables centralized logging for your AWS WAF policies. In addition, we share an AWS CloudFormation template that you can use to set up Firewall Manager policies, AWS WAF rule groups, and the related AWS WAF rules (both custom and managed rules). Before we get into the content of the blog, here’s some background information you should know.

AWS WAF rules define how to inspect web requests and what to do when a web request matches the inspection criteria. Firewall Manager simplifies the administration of AWS WAF rules at scale across multiple accounts.

Background

Web ACL capacity units

The web ACL capacity unit (WCU) is a new concept that we introduced to AWS WAF. WCU is a measurement that is used to calculate and control the operating resources that are used to run your rules with your web access control list (web ACL).

AWS Managed Rules

AWS Managed Rules (AMRs) are a set of AWS WAF rules curated and maintained by the AWS Threat Research Team. With just a few clicks, these rules can help protect your web applications from new and emerging risks, so you don’t need to spend time researching and writing your own rules. The core rule set covers some of the common security risks described in the OWASP Top 10 publication. AMRs also include IP reputation lists based on Amazon threat intelligence that can help reduce your exposure to bot traffic or exploitation attempts. You can add multiple AMRs to your web ACL or write hundreds of your own rules. Additional rule sets from AWS Marketplace sellers like Cyber Security Cloud and Fortinet are also available to use. If you subscribe to rules from an AWS Marketplace seller, you will be charged the managed rules price set by the seller.

Rule groups

A rule group is a group of AWS WAF rules. In the new AWS WAF, a rule group is defined under AWS WAF, and you can add rule groups as a reusable set of rules under a web ACL. With the addition of AMRs, customers can select from AWS Managed Rule groups in addition to Partner Managed and Custom Configured rule groups. So, you now have the option to create Firewall Manager policies by using either AWS WAF Classic rule groups or new AWS WAF rule groups.

Firewall Manager policy

A Firewall Manager policy is an entity that contains a set of rule groups that can be associated to and applied to your resources. In this policy, you also specify the resource types you want to protect in specific or all accounts. Based on the policy, Firewall Manager creates a Firewall Manager web ACL in the corresponding accounts. In addition, individual application teams can add more rules or rule groups to this web ACL.

Firewall Manager prerequisites

Firewall Manager has the following prerequisites:

  • AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. If you’re new to Organizations, use these steps to enable AWS Organizations in your account. For more information, see Creating an organization and Enabling all features in your organization.
  • A Firewall Manager administrator account: You must designate one of the AWS accounts in your organization as the administrator for Firewall Manager. This gives the account permission to deploy AWS WAF rules across the organization. On the web console of the AWS account that you want to designate as the Firewall Manager administrator, go to the Firewall Manager service, and on the Settings tab, choose Set administrator account to set that account as the administrator, as shown in figure 1.
     
    Figure 1: Firewall Manager administrator account

    Figure 1: Firewall Manager administrator account

  • AWS Config: You must enable AWS Config for all the accounts in your organization so that Firewall Manager can detect newly created resources. To enable AWS Config for all the accounts in your organization, you can use the Enable AWS Config template on the StackSets Sample Templates page. For more information, see Getting Started with AWS Config.

Walkthrough: Set up Firewall Manager policies

In the following steps, we walk you through the steps of creating and applying a Firewall Manager policy across AWS accounts, explaining the implications of the options you can choose. This policy uses AWS WAF rules that include AMRs such as the core rule set, Amazon’s IP reputation list and SQL injection.

To create and apply Firewall Manager policies

  1. In the AWS Management Console, navigate to the Firewall Manager service, choose Security Policies, and then choose the appropriate Region.
  2. Choose the Create Policy button. Under Policy type, you can see four options to choose from, as shown in figure 2. For this walkthrough, choose AWS WAF, which is the most recent version of AWS WAF, and then choose Next.
     
    Figure 2: Choosing the policy type

    Figure 2: Choosing the policy type

  3. You can now see options to add two sets of rule groups, first rule groups and last rule groups, as shown in figure 3.
    • First rule groups: When the web ACL inspects a web request, these are the set of rule groups that are prioritized to be evaluated at the very beginning. Note that these rules could be either custom-built rules or managed AWS WAF rules offered by AWS or other sellers.
    • Last rule groups: If the web request makes it through the first rule set and the AWS WAF rules defined for this web ACL (which will be in the format FMManagedWebACLV2PartialRuleName-policyXXXX), then it is evaluated against this last set of rule groups, before resulting in the action defined in this rule set.
    • Default web ACL action: This option specifies whether you want the web request evaluated by the rule to be allowed or blocked.
    Figure 3: Updating Rule groups

    Figure 3: Updating Rule groups

  4. See the AWS Managed Rules rule groups list to understand the rules that are defined in the managed rule set. You can choose to override the default action set and instead apply the “count” setting to monitor the traffic for the rule group. This helps customers monitor for false positives before deciding to allow or reject certain web requests.
     
    You can apply this count mode to specific rules of the rule set by selecting the rule, choosing the Edit button, as shown in figure 4, and then setting the toggle for individual rules.
     
    Figure 4: Edit rule group to override the default action set

    Figure 4: Edit rule group to override the default action set

    Figure 5 shows the Override rules action setting toggled on and off for various rules. The AWS core rule set contains rules that protect against commonly occurring vulnerabilities described in OWASP publications. The two rules "SizeRestrictions_QUERYSTRING" and "SizeRestrictions_BODY" are set to count (monitoring) mode, which helps verify that the URI query string length and request body size stay within the standard boundaries for web applications.
     

    Figure 5: Example of setting a managed rule to override the rules action

    Figure 5: Example of setting a managed rule to override the rules action

  5. On the same page, under Policy action, you can set the policy action either to identify and monitor the resources that don't comply with the policy rules, without auto-remediation, or to auto-remediate any non-compliant resources. This option is shown in figure 6.
     
    Auto-remediation: This is where you can get creative with Firewall Manager policies based on your requirements. For example, you can create policies for specific departments by using AWS tags, or for all applications deployed in specific accounts, where you want to enforce certain AWS WAF rules to protect all the resources. Notice in figure 6 that you can choose to auto-remediate. This means that the AWS WAF rules are not only applied to all the resource types that you select to protect, but also that if someone tampers with the Firewall Manager web ACL by deleting the rule group, Firewall Manager re-creates the rule group within a few minutes. In the background, Firewall Manager integrates tightly with AWS Config to monitor the resources across the other accounts in the organization. Similarly, in the same account, you could create another policy for a different department or team where you don't enforce the AWS WAF rules but instead let the application teams in those departments create their own web ACLs.
     
    Figure 6: Setting the default web ACL and auto-remediate actions

    Figure 6: Setting the default web ACL and auto-remediate actions

    In figure 6, you also see an option to replace the existing web ACLs. You may have created custom AWS WAF rules and applied them to resources by using a custom web ACL. With this option, you can choose either to leave resources that are already protected by another custom web ACL (not a Firewall Manager–created web ACL) unchanged, or to disassociate those resources and protect them with the new web ACLs created by this policy.

  6. In this last step, under Policy scope, you decide which types of AWS resources the policy applies to, whether it applies to all accounts in the organization or only to some of them, and whether to filter based on tags. This option is shown in figure 7. In the example below, the policy is restricted to two accounts and their corresponding OUs, with the tag DeptName: PCI limiting the scope.
     
    Figure 7: Defining policy scope for specific accounts

    Figure 7: Defining policy scope for specific accounts

After these changes are applied, and with AWS Config enabled in the child accounts that are included in the policy, AWS WAF web ACLs are created in the background with names in the format "FMManagedWebACLV2<rulegroupname>XXXXXXXXXXXXX". All resources that meet the conditions called out in the policy are automatically protected.

Validation: You can now log in to one of the child accounts to verify that the resources were created. In our example, all the Application Load Balancers with the tag DeptName: PCI are associated with this web ACL. You can also add more AWS WAF rules to this web ACL. For a scripted check, see the sketch below figure 8.
 

Figure 8: Reviewing and managing web ACLs in participating child accounts

Figure 8: Reviewing and managing web ACLs in participating child accounts
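
If you prefer to verify programmatically rather than in the console, the following is a minimal boto3 sketch that you could run with credentials for a child account. The Region and the APPLICATION_LOAD_BALANCER resource type reflect this example's scope and are assumptions you may need to adjust.

# Hypothetical verification script: run with credentials for a child account
# to confirm that Firewall Manager created the managed web ACL there.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # Region is an assumption

# Firewall Manager-created web ACL names start with "FMManagedWebACLV2"
response = wafv2.list_web_acls(Scope="REGIONAL")
for acl in response["WebACLs"]:
    if acl["Name"].startswith("FMManagedWebACLV2"):
        print(f"Found Firewall Manager web ACL: {acl['Name']}")
        # List the Application Load Balancers associated with this web ACL
        resources = wafv2.list_resources_for_web_acl(
            WebACLArn=acl["ARN"], ResourceType="APPLICATION_LOAD_BALANCER"
        )
        for arn in resources["ResourceArns"]:
            print(f"  Protected resource: {arn}")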

Firewall Manager logging capability

As part of the AWS WAF capability, you should make sure that logging is enabled by using the recently announced feature for centralized logging of your AWS WAF policies. With this logging feature, you get detailed information about traffic within your organization.

The process is similar to enabling logging for AWS WAF and consists of two steps. In the first step, an Amazon Kinesis Data Firehose delivery stream captures logs from your policy's web ACLs and delivers them to a configured storage destination through the HTTPS endpoint. In the second step, you enable logging in the Firewall Manager policy and select the delivery stream you created in the first step.

Step 1: Set up a new Kinesis Data Firehose delivery stream

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Elasticsearch Service (ES), Splunk, and Amazon Redshift. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.

To set up the delivery stream

  1. In the AWS Management Console, using your Firewall Manager administrator account, open the Amazon Kinesis Data Firehose service and choose the button to create a new delivery stream.
    1. For Delivery stream name, enter a name for your new stream that starts with aws-waf-logs- as shown in figure 9. AWS WAF filters all streams starting with the keyword aws-waf-logs when it displays the delivery streams. Note the name of your stream since you’ll need it again later in the walkthrough.
    2. For Source, choose Direct PUT, since AWS WAF logs will be the source in this walkthrough.
    3. We recommend that you also enable server-side encryption. You can choose to use either AWS-owned keys or customer-managed keys. In this example, we have chosen AWS-owned customer master keys (CMKs). If you have your own CMKs, select the second option, customer-managed keys, and pick the corresponding key from the dropdown list.
       
      Figure 9: Setting the source for the Kinesis Data Firehose delivery stream

      Figure 9: Setting the source for the Kinesis Data Firehose delivery stream

  2. Next, you have the option to enable AWS Lambda if you need to transform your data before transferring it to your destination. (You can learn more about data transformation in the Amazon Kinesis Data Firehose documentation.) In this walkthrough, there are no transformations that need to be performed, so for Data transformation, choose Disabled. You also have the option to convert the JSON object to Apache Parquet or Apache ORC format for better query performance. In this example, for Record format conversion, choose Disabled. Figure 10 shows these options.
     
    Figure 10: Setting data transformation and format conversion

    Figure 10: Setting data transformation and format conversion

  3. On the Select destination screen, for Destination, choose Amazon S3, as shown in figure 11.
    1. For the S3 destination, you can either enter the name of an existing S3 bucket or create a new S3 bucket. Note the name of the S3 bucket, since you’ll need the bucket name in a later step in this walkthrough.
    2. (Optional) Set the S3 prefix and error bucket for logs.
       
      Figure 11: Selecting the destination

      Figure 11: Selecting the destination

    3. On the Configure Settings screen, shown in figure 12, choose other S3 options, such as encryption and compression, and creation of an IAM role for granting access to the Firehose delivery stream.
      1. You can leave the default S3 buffer values. We also recommend enabling compression and encryption.
      2. Either choose the option to create a new IAM role, or choose an existing role if you’ve already created one.
      Figure 12: Setting S3 options and IAM role

      Figure 12: Setting S3 options and IAM role

  4. In the next screen, review all the options you selected and choose the Create Delivery Stream button to create a Kinesis Data Firehose delivery stream.
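
If you prefer to script this step instead of using the console, the following boto3 sketch creates an equivalent delivery stream. The bucket ARN, role ARN, and Region are placeholders (assumptions), and the buffering and compression settings simply mirror the recommendations above.

# A minimal boto3 sketch of the same delivery-stream setup.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # Region is an assumption

firehose.create_delivery_stream(
    DeliveryStreamName="aws-waf-logs-firewall-manager",  # name must start with aws-waf-logs-
    DeliveryStreamType="DirectPut",                      # AWS WAF logs are the source
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",  # placeholder
        "BucketARN": "arn:aws:s3:::your-waf-logs-bucket",                     # placeholder
        "CompressionFormat": "GZIP",  # compression recommended, as noted above
        "BufferingHints": {"IntervalInSeconds": 300, "SizeInMBs": 5},
    },
)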

Step 2: Enable logging for an AWS WAF policy

In this step, you configure the Firewall Manager policy to direct log ingestion to the Kinesis Data Firehose delivery stream that you created in the previous step.

To enable logging for an AWS WAF policy

  1. On the AWS Management Console, search for AWS Firewall Manager and in the navigation pane, choose Security Policies.
  2. Choose the AWS WAF policy that you want to enable logging for, and on the Policy details tab, in the Policy rules section, choose Edit. For Logging configuration status, choose Enabled.
  3. Choose the Kinesis Data Firehose delivery stream that you created for your logging. You must choose a delivery stream whose name begins with “aws-waf-logs-”.
     
    Figure 13: Enable Firewall Manager logs

    Figure 13: Enable Firewall Manager logs

  4. Review your settings, and then choose Save to save your changes to the policy.

    Note: Firewall Manager supports this option for the latest version of AWS WAF, but not for AWS WAF Classic.

With these two steps, you now have logging enabled for your Firewall Manager policies.

Deploy Firewall Manager policy with a CloudFormation template

In this section, we provide you with an example CloudFormation template that deploys a Firewall Manager policy with a rule group that consists of both an AWS Managed Rules rule set and a custom AWS WAF rule. As a part of the custom AWS WAF rule, this template creates an IP set in which you specify the list of IP addresses from which you want to block traffic. It also creates a rule with an AND statement that blocks cross-site scripting requests that originate from the IP addresses that you specified. You will also notice that this policy is applied to specific accounts that are entered in the parameters as comma-delimited values. Out of the many available AWS Managed Rules, we used two rule groups: AWSManagedRulesCommonRuleSet and AWSManagedRulesSQLiRuleSet. For the former, we set the override action to count, which means the requests won't be blocked but will be counted for further investigation. The latter contains rules to block request patterns associated with exploitation of SQL databases, such as SQL injection attacks. This can help prevent remote injection of unauthorized queries.

As a best practice, before using a rule group in production, test it in a non-production environment, with the action override set to count. Evaluate the rule group using Amazon CloudWatch metrics combined with AWS WAF sampled requests or AWS WAF logs. When you’re satisfied that the rule group does what you want, remove the override on the group.
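
If you want to script this evaluation, here is a minimal sketch that pulls a sample of recent requests so you can review which ones the counted rules matched. The web ACL ARN is a placeholder, and the metric name is assumed to match the rule group defined in the template below.

# A hedged sketch of evaluating a rule group running in count mode.
import datetime
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

now = datetime.datetime.utcnow()
sample = wafv2.get_sampled_requests(
    WebAclArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/example/1111",  # placeholder
    RuleMetricName="RuleGroupXssFMS",  # assumed metric name; see the template below
    Scope="REGIONAL",
    TimeWindow={"StartTime": now - datetime.timedelta(hours=1), "EndTime": now},
    MaxItems=100,
)
for request in sample["SampledRequests"]:
    # Requests matched by counted rules show Action == "COUNT" rather than "BLOCK"
    print(request["Action"], request["Request"]["URI"], request["Request"]["ClientIP"])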

To deploy the CloudFormation template

  1. Copy the following template code and save it in a file named deploy-firewall-manager-policy.yaml.
    Description: Create IPSet, RuleGroup and Firewall Manager Policy. Firewall Policy contains two AWS managed rule groups and one custom rule group.
    Parameters:
      BlockIpAddressCIDR:
        Type: CommaDelimitedList
        Description: "Enter IP Address range by using CIDR notation separated by comma to block incoming traffic originating from them. For eg: To specify the IPV4 address 192.0.2.44 type 192.0.2.44/32 or 10.0.2.0/24"
      AWSAccountIds:
        Type: CommaDelimitedList
        Description: "Enter AWS Account IDs separated by comma in which you want to apply Firewall Manager policy"
    
    Resources:
      WAFIPSetFMS:
          Type: 'AWS::WAFv2::IPSet'
          Properties:
            Description: Block ranges of IP addresses using this IP Set
            Name: WAFIPSetFMS
            Scope: REGIONAL
            IPAddressVersion: IPV4	
            Addresses: !Ref BlockIpAddressCIDR
      RuleGroupXssFMS:
        Type: 'AWS::WAFv2::RuleGroup'
        DependsOn: WAFIPSetFMS
        Properties: 
          Capacity: 500
          Description: AWS WAF Rule Group to block web requests that contain cross site scripting injection attacks and originate from specific IP ranges.
          Name: RuleGroupXssFMS
          Scope: REGIONAL
          VisibilityConfig:
              SampledRequestsEnabled: true
              CloudWatchMetricsEnabled: true
              MetricName: RuleGroupXssFMS
          Rules: 
            - Name: xssException
              Priority: 0
              Action:
                Block: {}
              VisibilityConfig:
                  SampledRequestsEnabled: true
                  CloudWatchMetricsEnabled: true
                  MetricName: xssException
              Statement:
                AndStatement:
                  Statements:
                  - XssMatchStatement:
                      FieldToMatch:
                        Body: {}
                      TextTransformations:
                      - Type: HTML_ENTITY_DECODE
                        Priority: 0
                      - Type: LOWERCASE
                        Priority: 1
                  - IPSetReferenceStatement:
                      Arn: !GetAtt WAFIPSetFMS.Arn
      PolicyWAFv2:
        Type: AWS::FMS::Policy
        Properties:
          ExcludeResourceTags: false
          PolicyName: WAF-Policy
          IncludeMap: 
            ACCOUNT: !Ref AWSAccountIds
          RemediationEnabled: true
          ResourceType: AWS::ElasticLoadBalancingV2::LoadBalancer 
          SecurityServicePolicyData: 
            Type: WAFV2
            ManagedServiceData: !Sub '{"type":"WAFV2", 
                                      "preProcessRuleGroups":[{ 
                                      "ruleGroupType":"RuleGroup",
                                      "ruleGroupArn":"${RuleGroupXssFMS.Arn}",
                                      "overrideAction":{"type":"NONE"}},{
                                      "managedRuleGroupIdentifier":{
                                      "managedRuleGroupName":"AWSManagedRulesCommonRuleSet", 
                                      "vendorName":"AWS"},
                                      "overrideAction":{"type":"COUNT"}, 
                                      "excludeRules":[],"ruleGroupType":"ManagedRuleGroup"},{
                                      "managedRuleGroupIdentifier":{
                                      "managedRuleGroupName":"AWSManagedRulesSQLiRuleSet", 
                                      "vendorName":"AWS"},
                                      "overrideAction":{"type":"NONE"}, 
                                      "excludeRules":[],"ruleGroupType":"ManagedRuleGroup"}],
                                      "postProcessRuleGroups":[],
                                      "defaultAction":{"type":"BLOCK"}}'
    
    

  2. Execute the following AWS CLI command to deploy the stack. If you haven't configured the AWS CLI, refer to this quickstart.
    aws cloudformation create-stack --stack-name firewall-manager-policy-stack \
          --template-body file://deploy-firewall-manager-policy.yaml \
          --parameters \
          ParameterKey=BlockIpAddressCIDR,ParameterValue=<Enter-BlockIpAddressCIDR> \
          ParameterKey=AWSAccountIds,ParameterValue=<Enter-AWSAccountIds>
    

Pricing

As of this writing, the price for each AWS WAF protection policy is $100.00 per Region. To get an overall idea of pricing, we recommend that you review the AWS Firewall Manager pricing guide, which covers a few scenarios.

AWS WAF charges based on the number of web access control lists (web ACLs) that you create, the number of rules that you add per web ACL, and the number of web requests that you receive. There are no upfront commitments. Learn more about AWS WAF pricing.

There is no additional charge for using AWS Managed Rules for AWS WAF. When you subscribe to a Managed Rule Group provided by an AWS Marketplace seller, you will be charged additional fees based on the price set by the seller.

AWS Shield Advanced customers can use Firewall Manager to apply AWS Shield Advanced and AWS WAF protections across their entire organization at no additional cost.

Conclusion

This blog post describes how you can create Firewall Manager policies with the new version of AWS WAF rules, by using the web console or AWS CloudFormation. You can also create these rules by using the command line interface (CLI), or programmatically with the SDK and other similar scripting tools. Using both AWS WAF and Firewall Manager, you can create a deployment strategy to safeguard all your accounts centrally at the organization level, and also choose to automatically remediate the AWS WAF rules if anything is changed after deployment.

For further reading, see the AWS Firewall Manager Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Umesh Kumar Ramesh

Umesh is a Senior Cloud Infrastructure Architect with AWS who delivers proof-of-concept projects and topical workshops, and leads implementation projects. He holds a bachelor’s degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, he enjoys watching documentaries, biking, practicing meditation and discussing spirituality.

Author

Mahek Pavagadhi

Mahek is a Cloud Infrastructure Architect at AWS in San Francisco, CA. She has a master’s degree in Software Engineering with a major in Cloud Computing. She is passionate about cloud services and building solutions with it. Outside of work, she is an avid traveler who loves to explore local cafeterias.

Field Notes: How to Identify and Block Fake Crawler Bots Using AWS WAF

Post Syndicated from Vijay Menon original https://aws.amazon.com/blogs/architecture/field-notes-how-to-identify-and-block-fake-crawler-bots-using-aws-waf/

In this blog post, we focus on how to identify fake bots using these AWS services: AWS WAF, Amazon Kinesis Data Firehose, Amazon S3, and AWS Lambda. We use fake Google and Bing bots to demonstrate, but the principles can be applied to other popular crawlers, such as Slurp Bot from Yahoo, DuckDuckBot from DuckDuckGo, and the Alexa crawler from the Alexa internet ranking service.

For industries like media, online retail, news, and social websites, content is critical and often sets companies apart from their competitors. These companies put significant effort into making their content as visible and accessible as possible. To do that, they rely on crawler bots, so that legitimate users searching for content can find it easily. Crawler bots are useful for indexing site pages, making the content more searchable, and improving rankings.

However, this capability can be misused, so it is important to distinguish between genuine crawler bots and fake ones that are doing more than just indexing your site. Properly identifying good and bad actors lets you stop the bad ones at scale without impacting the good ones. This helps drive more traffic, more visitors, and more revenue from your websites.

Identifying bots

There are two primary sources of information required to identify a fake bot:

  • HTTP Header User-Agent: Fake bots try to present themselves as real bots, for example as Google or Bing, by using the same user agent string used by Google or Bing.
  • IP Address: You can look at the source IP address of the incoming request and determine whether it belongs to a search engine provider network such as Google or Bing. You do this by performing a reverse and then a forward DNS lookup and comparing the results, as sketched in the example after this list. These methods are well documented by the search engine providers Google and Bing.
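
The following is a minimal Python sketch of that reverse-then-forward lookup check. The domain suffixes and the sample IP address are illustrative assumptions; consult the provider documentation for the authoritative lists.

# A hedged sketch of verifying a crawler's client IP through DNS.
import socket

def is_genuine_crawler(client_ip,
                       allowed_domains=("googlebot.com", "google.com", "search.msn.com")):
    try:
        # Reverse lookup: IP address -> host name
        host_name, _, _ = socket.gethostbyaddr(client_ip)
        # Forward lookup: host name -> IP addresses, which must include the original IP
        _, _, forward_ips = socket.gethostbyname_ex(host_name)
    except (socket.herror, socket.gaierror):
        return False
    domain_ok = host_name.endswith(tuple("." + d for d in allowed_domains))
    return client_ip in forward_ips and domain_ok

# Illustrative only; the address below is a placeholder, not a guaranteed Googlebot host.
print(is_genuine_crawler("66.249.66.1"))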

Solution Overview

The solution leverages the capabilities of AWS WAF. Our demonstration application is a static website hosted on Amazon S3 and fronted by Amazon CloudFront, which means we can grant access to the S3 bucket only to CloudFront by using an origin access identity.

The logs are streamed in near real time using Amazon Kinesis Data Firehose, inspected using a Lambda function to help identify fake bots before storing the logs on Amazon S3. The Lambda function does two things:

  1. Inspect the traffic by using the rules to identify bad or fake bots. In this case, it uses forward and reverse DNS lookup results for the client IP address of requests whose User-Agent string resembles Googlebot or Bingbot.
  2. Once a fake bot is identified, the Lambda function updates the AWS WAF IP set to permanently block requests coming from that IP address, as sketched in the example after this list.
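
For illustration, the following is a hedged boto3 sketch of the kind of IP set update the function performs. The IP set ID is a placeholder, and the actual logic ships in the lambda_function.py file that you download later in this walkthrough.

# A minimal sketch of adding a fake bot's address to the blocking IP set.
import boto3

# Scope CLOUDFRONT requires the us-east-1 endpoint
wafv2 = boto3.client("wafv2", region_name="us-east-1")

def block_fake_bot(ip_address, ip_set_name="BlockFakeBotIPSet", ip_set_id="REPLACE_WITH_IP_SET_ID"):
    # Read the current addresses and the lock token required for updates
    current = wafv2.get_ip_set(Name=ip_set_name, Scope="CLOUDFRONT", Id=ip_set_id)
    addresses = set(current["IPSet"]["Addresses"])
    addresses.add(f"{ip_address}/32")
    wafv2.update_ip_set(
        Name=ip_set_name,
        Scope="CLOUDFRONT",
        Id=ip_set_id,
        Addresses=list(addresses),
        LockToken=current["LockToken"],
    )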

Note: For the sake of this demonstration, we are using a static website hosted on Amazon S3 with CloudFront. This requires the AWS WAF web ACL and the IP set used by AWS WAF to have the scope CLOUDFRONT. You can modify the solution to use the REGIONAL scope if you choose to protect web properties behind an Application Load Balancer or API Gateway.

Solution Architecture

WAF Solution Architecture

Prerequisites

Readers of this blog post should be familiar with HTTP and with the AWS services used in this solution.

For this walkthrough, you should have the following:

  • An AWS account.
  • AWS Command Line Interface (AWS CLI): You need the AWS CLI installed and configured on the workstation from which you will run the steps below.
  • Credentials configured in the AWS CLI with the IAM permissions required to create and modify the resources mentioned in this post.
  • A deployment in the us-east-1 Region, with us-east-1 as your AWS CLI default Region. If us-east-1 is not the default Region, reference the Region explicitly when executing AWS CLI commands by using the --region us-east-1 switch.
  • An Amazon S3 bucket in the us-east-1 Region.

Walkthrough

1.     Create an Amazon S3 static website and put it behind an Amazon CloudFront distribution. You can follow these steps. Note down the CloudFront distribution ID, because we will use it in subsequent steps.

2.     Download and unzip the file containing the CloudFormation template and Lambda function code from here to a folder on your local workstation. You will need to run all of the subsequent commands from this folder.

3.     Zip the Lambda function lambda_function.py and upload it to an Amazon S3 bucket of your choice in us-east-1. Note the bucket name, because it will be used in subsequent steps. This blog uses waf-logs-fake-bots-us-east-1 as the S3 bucket name for reference.

$ zip lambda_function.py.zip lambda_function.py
$ aws s3 cp lambda_function.py.zip s3://waf-logs-fake-bots-us-east-1/

4.     Create the resources required for this blog post by deploying the AWS CloudFormation template with the following command:

aws cloudformation create-stack \
--stack-name FakeBotBlockBlog \
--template-body file://BotBlog.yml \
--parameters ParameterKey=KinesisBufferIntervalSeconds,ParameterValue=900 ParameterKey=KinesisBufferSizeMB,ParameterValue=3 ParameterKey=IPSetName,ParameterValue=BlockFakeBotIPSet ParameterKey=IPSetScope,ParameterValue=CLOUDFRONT ParameterKey=S3BucketWithDeploymentPackage,ParameterValue=waf-logs-fake-bots-us-east-1 ParameterKey=DeploymentPackageZippedFilename,ParameterValue=lambda_function.py.zip \
--capabilities CAPABILITY_IAM \
--region us-east-1

You need to provide the following information, and you can change the parameters based on your specific needs:

a.     KinesisBufferIntervalSeconds and KinesisBufferSizeMB. These define when Kinesis Data Firehose ships the logs to Amazon S3. The defaults are 900 seconds and 3 MB, respectively, and delivery occurs when either threshold is met first.

b.     IPSetName. Name of the IP Set which will be used to record the client IP address of fake bots. Default value is BlockFakeBotIPSet.

c.     IPSetScope. Scope of the IP set. I am using CLOUDFRONT and associating it with the CloudFront distribution created in step 1. You can choose REGIONAL instead, in which case the web ACL association will need to be with an ALB or an API Gateway.

d.     S3BucketWithDeploymentPackage. Name of S3 bucket used in step 3. The blog assumes waf-logs-fake-bots-us-east-1.

e.     DeploymentPackageZippedFilename. The filename of the zipped Lambda function deployment package. The blog assumes lambda_function.py.zip is available in the Amazon S3 bucket and uses this value for the parameter.

Some stack templates might include resources that can affect permissions in your AWS account, for example, by creating a new AWS Identity and Access Management (IAM) role. For those stacks, you must explicitly acknowledge this by specifying the CAPABILITY_IAM or CAPABILITY_NAMED_IAM value for the --capabilities parameter.

Stack creation takes approximately 5-7 minutes. Check the status of the stack by executing the following command every few minutes. When creation is finished, you should see a StackStatus value of CREATE_COMPLETE.

Example:

aws cloudformation describe-stacks --stack-name FakeBotBlockBlog | grep StackStatus

The CloudFormation template will create the following resources:

  • IP Set for AWS WAF
  • WebACL with rules to block the client IP addresses of fake bots, and an AWS-managed common rule set.
  • Lambda function to help detect fake bots and modify the AWS WAF IP Set to block them
  • Kinesis Firehose delivery stream, which will use the above Lambda function for processing
  • IAM roles with required permissions for the Lambda function and Kinesis Firehose
  • S3 bucket for AWS WAF logs

5.     Enable logging for the WebACL by using the AWS CLI. For this, you need the ARNs of the WebACL and the Kinesis Data Firehose delivery stream. You can find that information in the outputs of the CloudFormation stack created in step 4 by using the following AWS CLI command:

aws cloudformation describe-stacks --stack-name FakeBotBlockBlog

Note the two ARNs and run the following command, replacing (1) the ResourceArn value with the WebACL ARN and (2) the LogDestinationConfigs value with the Kinesis Data Firehose delivery stream ARN.

Example:

aws wafv2 put-logging-configuration --logging-configuration ResourceArn=arn:aws:wafv2:us-east-1:123456789012:global/webacl/FakeBotWebACL/259ea98f-24ba-4acd-8803-3e7d02e8d482,LogDestinationConfigs=arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-FakeBotBlockBlog --region us-east-1

6.     Associate the CloudFront distribution with this WebACL: Sign in to the AWS Management Console and open the AWS WAF and Shield console at https://console.aws.amazon.com/wafv2/homev2/web-acls?region=global.

  • Click on the WebACL created earlier in this procedure
  • Navigate to ‘Associated AWS resources’ tab and select Add AWS resources
  • In the subsequent screen, select the CloudFront distribution created in step 1
  • Select Add

Note: If you are using an ALB or API Gateway for your web property, you need to use a REGIONAL WebACL and IP set. Review the procedure to associate an ALB or API Gateway with the WebACL.

You can monitor WebACL performance from the Overview section of the WebACL in the AWS WAF and Shield console.

Testing

To test, you need to generate traffic that triggers the Lambda function to detect and block fake bot requests. The web traffic can be generated with curl from your local machine or from an EC2 instance with internet access. Manually set the user agent to resemble Googlebot by running the following command from a shell:

Replace http://www.awsdemodesign.com/ with the URL of your CloudFront distribution you created in step 1 of the walkthrough.

for i in {1..1000}; do curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://www.awsdemodesign.com/; done

Initially you will see an HTTP/1.1 200 OK response. This traffic triggers the Lambda function, which modifies the IP set to include your public IP address so that it is blocked. You can verify this by inspecting the IP set in the AWS WAF and Shield console.

  • Sign in to the AWS Management Console and open the AWS WAF and Shield console
  • Choose the IP set created earlier in this blog. On the subsequent screen, you can see your public IP address in the list of IP addresses.

If you run the curl command again, you will see that the response now is HTTP/1.1 403 Forbidden.

Clean Up

  • Disassociate the CloudFront Distribution from WebACL
  • Delete the S3 bucket and CloudFront Distribution created in Step 1
  • Empty and delete the S3 bucket created by the CloudFormation stack for AWS WAF logs.
  • Delete CloudFormation stack created in Step 4.

Conclusion

In this blog post, we demonstrated how you can set up and inspect incoming web traffic using AWS Lambda, AWS WAF native logging capabilities, and Kinesis Data Firehose to help detect and block bad or fake bots at scale. Furthermore, the solution outlined in this post provides a framework that can be extended to identify similar unwanted traffic impersonating other good bots.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

 

How to enhance Amazon CloudFront origin security with AWS WAF and AWS Secrets Manager

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-enhance-amazon-cloudfront-origin-security-with-aws-waf-and-aws-secrets-manager/

Whether your web applications provide static or dynamic content, you can improve their performance, availability, and security by using Amazon CloudFront as your content delivery network (CDN). CloudFront is a web service that speeds up distribution of your web content through a worldwide network of data centers called edge locations. CloudFront ensures that end-user requests are served by the closest edge location. As a result, viewer requests travel a short distance, improving performance for your viewers. When you deliver web content through a CDN such as CloudFront, a best practice is to prevent viewer requests from bypassing the CDN and accessing your origin content directly. In this blog post, you’ll see how to use CloudFront custom headers, AWS WAF, and AWS Secrets Manager to restrict viewer requests from accessing your CloudFront origin resources directly.

You can configure CloudFront to add custom HTTP headers to the requests that it sends to your origin. HTTP header fields are components of the header section of request and response messages in the Hypertext Transfer Protocol (HTTP). These custom headers enable you to send and gather information from your origin that isn’t included in typical viewer requests. You can use custom headers to control access to content. By configuring your origin to respond to requests only when they include a custom header that was added by CloudFront, you prevent users from bypassing CloudFront and accessing your origin content directly. In addition to offloading traffic from your origin servers, this also helps enforce web traffic being processed at CloudFront edge locations according to your AWS WAF rules prior to being forwarded to your origin.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. It supports managed rules as well as a powerful rule language for custom rules. AWS WAF is tightly integrated with CloudFront and the Application Load Balancer (ALB). AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. This service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.

Solution overview

This blog post includes a sample solution you can deploy to see how its components integrate to implement the origin access restriction. The sample solution includes a web server deployed on Amazon Elastic Compute Cloud (Amazon EC2) Linux instances running in an AWS Auto Scaling group. Elastic Load Balancing distributes the incoming application traffic across the EC2 instances by using an ALB. The ALB is associated with an AWS WAF web access control list (web ACL), which is used to validate the incoming origin requests. Finally, a CloudFront distribution is deployed with an AWS WAF web ACL and configured to point to the origin ALB.

Although the sample solution is designed for deployment with CloudFront with an AWS WAF–associated ALB as its origin, the same approach could be used for origins that use Amazon API Gateway. A custom origin is any origin that is not an Amazon Simple Storage Service (Amazon S3) bucket, with one exception. An S3 bucket that is configured with static website hosting is a custom origin. You can refer to the CloudFront Developer Guide for more information on securing content that CloudFront delivers from S3 origins.

This solution is intended to enhance security for CloudFront custom origins that support AWS WAF, such as ALB, and is not a substitute for authentication and authorization mechanisms within your web applications. In this solution, Secrets Manager is used to control, audit, monitor, and rotate a random string used within your CloudFront and AWS WAF configurations. Although most of these lifecycle attributes could be set manually, Secrets Manager makes it easier.

Figure 1 shows how the provided AWS CloudFormation template creates the sample solution.
 

Figure 1: How the CloudFormation template works

Figure 1: How the CloudFormation template works

Here’s how the solution works, as shown in the diagram:

  1. A viewer accesses your website or application and requests one or more files, such as an image file and an HTML file.
  2. DNS routes the request to the CloudFront edge location that can best serve the request—typically the nearest CloudFront edge location in terms of latency.
  3. At the edge location, AWS WAF inspects the incoming request according to configured web ACL rules.
  4. At the edge location, CloudFront checks its cache for the requested content. If the content is in the cache, CloudFront returns it to the user. If the content isn’t in the cache, CloudFront adds the custom header, X-Origin-Verify, with the value of the secret from Secrets Manager, and forwards the request to the origin.
  5. At the origin Application Load Balancer (ALB), AWS WAF inspects the incoming request header, X-Origin-Verify, and allows the request if the string value is valid. If the header isn’t valid, AWS WAF blocks the request.
  6. At the configured interval, Secrets Manager automatically rotates the custom header value and updates the origin AWS WAF and CloudFront configurations.

Solution deployment

This sample solution includes seven main steps:

  1. Deploy the CloudFormation template.
  2. Confirm successful viewer access to the CloudFront URL.
  3. Confirm that direct viewer access to the origin URL is blocked by AWS WAF.
  4. Review the CloudFront origin custom header configuration.
  5. Review the AWS WAF web ACL header validation rule.
  6. Review the Secrets Manager configuration.
  7. Review the Secrets Manager AWS Lambda rotation function.

Step 1: Deploy the CloudFormation template

The stack will launch in the N. Virginia (us-east-1) Region. It takes approximately 10 minutes for the CloudFormation stack to complete.

Note: The sample solution requires deployment in the N. Virginia (us-east-1) Region. Although out of scope for this blog post, an additional sample template is available in this solution’s GitHub repository for testing this solution with an existing CloudFront distribution and regional AWS WAF web ACL. Refer to the AWS regional service support information for more details on regional service availability.

To launch the CloudFormation stack

  1. Choose the following Launch Stack icon to launch a CloudFormation stack in your account in the N. Virginia Region.
     
    Select the Launch Stack button to launch the template
  2. In the CloudFormation console, leave the configured values, and then choose Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    Input parameter: Description
    EC2InstanceSize: The instance size for the EC2 web servers.
    HeaderName: The HTTP header name for the secret string.
    WAFRulePriority: The rule number to use for the regional AWS WAF web ACL. 0 is recommended, because rules are evaluated in order based on the value of priority.
    RotateInterval: The rotation interval, in days, for the origin secret value. Full rotation requires two intervals.
    ArtifactsBucket: The S3 bucket with artifact files (Lambda functions, templates, HTML files, and so on). Keep the default value.
    ArtifactsPrefix: The path for the S3 bucket that contains artifact files. Keep the default value.

    Figure 2 shows an example of values entered under Parameters.
     

    Figure 2: Input parameters for the CloudFormation stack

    Figure 2: Input parameters for the CloudFormation stack

  4. Enter values for all of the input parameters, and then choose Next.
  5. On the Options page, keep the defaults, and then choose Next.
  6. On the Review page, confirm the details, acknowledge the statements under Capabilities and transforms as shown in Figure 3, and then choose Create stack.
     
    Figure 3: CloudFormation Capabilities and Transforms acknowledgments

    Figure 3: CloudFormation Capabilities and Transforms acknowledgments

Step 2: Confirm access to the website through CloudFront

Next, confirm that website access through CloudFront is functioning as intended. After the CloudFormation stack completes deployment, you can access the test website using the domain name that was automatically assigned to the distribution.

To confirm viewer access to the website through CloudFront

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the cfEndpoint entry, similar to that shown in Figure 4.
     
    Figure 4: CloudFormation cfEndpoint stack output

    Figure 4: CloudFormation cfEndpoint stack output

  2. The cfEndpoint is the URL for the site, and it is automatically assigned by CloudFront. Choose the cfEndpoint link to open the test page, as shown in Figure 5.
     
    Figure 5: CloudFormation cfEndpoint test page

    Figure 5: CloudFormation cfEndpoint test page

In this step, you’ve confirmed that website accessibility through CloudFront is functioning as intended.

Step 3: Confirm that direct viewer access to the origin URL is blocked by AWS WAF

In this step, you confirm that direct access to the test website is blocked by the regional AWS WAF web ACL.

To test direct access to the origin URL

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the albEndpoint entry.
  2. Choose the albEndpoint link to go to the test site URL that was automatically assigned to the ALB. Choosing this link will result in a 403 Forbidden response. When AWS WAF blocks a web request based on the conditions that you specify, it returns HTTP status code 403 (Forbidden).

In this step, you’ve confirmed that website accessibility directly to the origin ALB is blocked by the regional AWS WAF web ACL.

Step 4: Review the CloudFront origin custom header configuration

Now that you’ve confirmed that the test website can only be accessed through CloudFront, you can review the detailed CloudFront, WAF, and Secrets Manager configurations that enable this restriction.

To review the custom header configuration

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the cfDistro entry.
  2. Choose the cfDistro link to go to this distribution’s configuration in the CloudFront console. On the Origin Groups tab, under Origins, select the origin as shown in Figure 6.
     
    Figure 6: CloudFront Origins and Origin Groups settings

    Figure 6: CloudFront Origins and Origin Groups settings

  3. Choose Edit to go to the Origin Settings section, scroll to the bottom and review the Origin Custom Headers as shown in Figure 7.
     
    Figure 7: CloudFront Origin Custom Headers settings

    Figure 7: CloudFront Origin Custom Headers settings

    You can see that the custom header, X-Origin-Verify, has been configured using Secrets Manager with a random 32-character alpha-numeric value. This custom header will be added to web requests that are forwarded from CloudFront to your origin. As you learned in steps 2 and 3, requests without this header are blocked by AWS WAF at the origin ALB. In the next two steps, you will dive deeper into how this works.
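
If you want to read this header value programmatically, the following is a minimal boto3 sketch; the secret name is a placeholder, and the HEADERVALUE key is the one described in step 6 of this walkthrough.

# A hedged sketch of retrieving the current X-Origin-Verify value from Secrets Manager.
import json
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")
secret = secrets.get_secret_value(SecretId="REPLACE_WITH_ORIGIN_VERIFY_SECRET_NAME")  # placeholder
header_value = json.loads(secret["SecretString"])["HEADERVALUE"]
print(f"X-Origin-Verify: {header_value}")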

Step 5: Review the AWS WAF web ACL header validation rule

In this step, you review the AWS WAF rule configuration that validates the CloudFront custom header X-Origin-Verify.

To review the header validation rule

  1. In the CloudFormation console, select Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the wafWebACLR entry.
  2. Choose the wafWebACLR link to go to the origin ALB web ACL configuration in the WAF and Shield console. On the Overview tab, you can view the Requests per 5 minute period chart and the Sampled requests list, which shows requests from the last three hours that the ALB has forwarded to AWS WAF for inspection. The sample of requests includes detailed data about each request, such as the originating IP address and Uniform Resource Identifier (URI). You also can view which rule the request matched, and whether the rule Action is configured to ALLOW, BLOCK, or COUNT requests. You can enable AWS WAF logging to get detailed information about traffic that’s analyzed by your web ACL. You send logs from your web ACL to an Amazon Kinesis Data Firehose with a configured storage destination such as Amazon S3. Information that’s contained in the logs includes the time that AWS WAF received the request from your AWS resource, detailed information about the request, and the action for the rule that each request matched.
  3. Choose the Rules tab to review the rules for this web ACL, as shown in Figure 8.
     
    Figure 8: AWS WAF web ACL rules

    Figure 8: AWS WAF web ACL rules

    On the Rules tab, you can see that the CFOriginVerifyXOriginVerify rule has been configured with the Allow action, while the Default web ACL action is Block. This means that any incoming requests that don’t match the conditions in this rule will be blocked.

    In every AWS WAF rule group and every web ACL, rules define how to inspect web requests and what to do when a web request matches the inspection criteria. Each rule requires one top-level statement, which might contain nested statements at any depth, depending on the rule and statement type. You can learn more about AWS WAF rule statements in the AWS WAF Developer Guide, AWS Online Tech Talks, and samples on GitHub.

  4. Choose the CFOriginVerifyXOriginVerify rule, and then choose Edit to bring up the Rule Builder tool. In the Rule Builder, you can see that a rule has been created with two Rule Statements similar to those in Figure 9.
     
    Figure 9: AWS WAF web ACL rule statement

    Figure 9: AWS WAF web ACL rule statement

    In the Rule Builder configuration for Statement 1, you can see that the request Header is being inspected for the x-origin-verify Header field name (HTTP header field names are case insensitive), and the String to match value is set to the value you reviewed in step 4. In the Rule Builder, you can also see a logical OR with an additional rule statement, Statement 2. You will notice that the configuration for Statement 2 is the same as Statement 1, except that the String to match value is different. You will learn about this in detail in step 7, but Statement 2 helps to ensure that valid web requests are processed by your origin servers when Secrets Manager automatically rotates the value of the X-Origin-Verify header. The effect of this rule configuration is that inspected web requests will be allowed if they match either of the two statements.

    In addition to the visual web ACL representation you just reviewed in the WAF rule visual editor, every web ACL also has a JSON format representation that you can edit by using the WAF rule JSON editor. You can retrieve the complete configuration for a web ACL in JSON format, modify it as needed, and then provide it back to AWS WAF through the console, API, or command line interface (CLI). A sketch of what this header-validation statement looks like in that structured form follows this step.

    This step demonstrated how your request was allowed to access the test website in step 2 and why your request was blocked in step 3.
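
For reference, here is a hedged sketch, in boto3/Python form, of what an OR of two header-match statements like the one you just reviewed looks like. The search strings are placeholders for the two secret values, and this is not the solution's exact configuration.

# A simplified sketch of the OR rule statement that validates the custom header.
# In the JSON editor the SearchString appears as a string; boto3 expects bytes.
x_origin_verify_statement = {
    "OrStatement": {
        "Statements": [
            {
                "ByteMatchStatement": {
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
                    "PositionalConstraint": "EXACTLY",
                    "SearchString": b"current-secret-value",   # placeholder (AWSCURRENT)
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                }
            },
            {
                "ByteMatchStatement": {
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
                    "PositionalConstraint": "EXACTLY",
                    "SearchString": b"previous-secret-value",  # placeholder (AWSPENDING)
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                }
            },
        ]
    }
}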

Step 6: Review Secrets Manager configuration

Now that you’re familiar with the CloudFront and AWS WAF configurations, you will learn how Secrets Manager creates and rotates the secret used for the X-Origin-Verify header field value. Secrets Manager uses an AWS Lambda function to perform the actual rotation of the secret used for the value and update the associated AWS WAF web ACL and CloudFront distribution.

To review the Secrets Manager configuration

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the OriginVerifySecret entry.
  2. Choose the OriginVerifySecret link to go to the configuration for the secret in the Secrets Manager console. Scroll down to the section titled Secret value, and then choose Retrieve secret value to display the Secret key/value as shown in Figure 10.
     
    Figure 10: Secrets Manager retrieve value

    Figure 10: Secrets Manager retrieve value

    When you retrieve the secret, Secrets Manager programmatically decrypts the secret and displays it in the console. You can see that the secret is stored as a key-value pair, where the secret key is HEADERVALUE, and the secret value is the string used in the CloudFront and WAF configurations you reviewed in steps 3 and 4.

  3. While you’re in the Secrets Manager console, review the Rotation configuration section, as shown in Figure 11.
     
    Figure 11: Secrets Manager rotation configuration

    Figure 11: Secrets Manager rotation configuration

    You can see that rotation was enabled for this secret at an interval of one day. This configuration also includes a Lambda rotation function. Secrets Manager uses a Lambda function to perform the actual rotation of a secret. If you use your secret for one of the supported Amazon Relational Database Service (Amazon RDS) databases, then Secrets Manager provides the Lambda function for you. If you use your secret for another service, then you must provide the code for the Lambda function, as we’ve done in this solution.

Step 7: Review the Secrets Manager Lambda rotation function

In this step, you review the Secrets Manager Lambda rotation function.

To review the Secrets Manager Lambda rotation function

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. In the stack Outputs tab, look for the OriginSecretRotateFunction entry.
  2. Choose the OriginSecretRotateFunction link to go to the Lambda function that is configured for this secret. The code used for this secrets rotation function is based on the AWS Secrets Manager Rotation Template. Choose the Monitoring tab and review the Invocations graph as shown in Figure 12.
     
    Figure 12: Monitoring tab for the Lambda rotation function

    Figure 12: Monitoring tab for the Lambda rotation function

    Shortly after the CloudFormation stack creation completes, you should see several invocations in the Invocations graph. When a configured rotation schedule or a manual process triggers rotation, Secrets Manager calls the Lambda function several times, each time with different parameters. The Lambda function performs several tasks throughout the process of rotating a secret. This includes the following steps: createSecret, setSecret, testSecret, and finishSecret. Secrets Manager uses staging labels, a simple text string, to enable you to identify different versions of a secret during rotation. This includes the following staging labels: AWSPENDING, AWSCURRENT, and AWSPREVIOUS, which are covered in the following step.

  3. To learn more about the rotation steps configured for this solution, choose View logs in CloudWatch on the Monitoring tab.
    1. On the Log streams tab, select the top entry in the list.
    2. Enter Event in the Filter events field, and then choose the arrows to expand the details for each event as shown in Figure 13.
       
      Figure 13: CloudWatch event logs for the Lambda rotation function

      Figure 13: CloudWatch event logs for the Lambda rotation function

The four rotation steps annotated in Figure 13 work as follows:

Note: This section provides an overview of the rotation process for this solution. For more detailed information about the Lambda rotation function, see the Secrets Manager User Guide.

  1. The createSecret step: In this step, the Lambda function generates a new version of the secret. The rotation Lambda function calls the GetRandomPassword method to generate a new random string, and then labels the new version of the secret with the staging label AWSPENDING to mark it as the in-process version of the secret.
  2. The setSecret step: In this step, the rotation function retrieves the version of the secret labeled AWSPENDING from Secrets Manager and updates the web ACL rule for the AWS WAF associated with the origin ALB. The two rule statements you reviewed in step 5 of this blog post are updated with the AWSPENDING and AWSCURRENT values. The rotation function also updates the value for the Origin Custom Header X-Origin-Verify. When the rotation function updates your distribution configuration, CloudFront starts to propagate the changes to all edge locations. Maintaining both the AWSPENDING and AWSCURRENT secret values helps to ensure that web requests forwarded to your origin by CloudFront are not blocked. Therefore, once a secret value is created, two rotation intervals are required for it to be removed from the configuration.
  3. The testSecret step: This step of the Lambda function verifies the AWSPENDING version of the secret by using it to access the origin ALB endpoint with the X-Origin-Verify header. Both AWSPENDING and AWSCURRENT X-Origin-Verify header values are tested to confirm a “200 OK” response from the origin ALB endpoint.
  4. The finishSecret step: In the last step, the Lambda function moves the label AWSCURRENT from the current version to this new version of the secret. The old version receives the AWSPREVIOUS staging label, and is available for recovery as the last known good version of the secret, if needed. The old version with the AWSPREVIOUS staging label no longer has any staging labels attached, so Secrets Manager considers the old version deprecated and subject to deletion.

When the finishSecret step has successfully completed, Secrets Manager schedules the next rotation by adding the rotation interval (number of days) to the completion date. This automated process causes the values used for the validation headers to be updated at the configured interval. Although out of scope for this blog post, you should monitor your secrets to ensure usage of your secrets and log any changes to them. This helps you to make sure that any unexpected usage or change can be investigated, and unwanted changes can be rolled back.
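
To make these steps concrete, the following is a simplified, hypothetical skeleton of such a rotation handler. The real function in this solution also updates the AWS WAF web ACL and the CloudFront origin custom header during setSecret, which is elided here.

# A minimal, assumed skeleton of a Secrets Manager rotation handler.
import boto3

secrets = boto3.client("secretsmanager")

def lambda_handler(event, context):
    arn, token, step = event["SecretId"], event["ClientRequestToken"], event["Step"]

    if step == "createSecret":
        # Generate a new random string and stage it as AWSPENDING
        new_value = secrets.get_random_password(
            PasswordLength=32, ExcludePunctuation=True
        )["RandomPassword"]
        secrets.put_secret_value(
            SecretId=arn,
            ClientRequestToken=token,
            SecretString='{"HEADERVALUE": "%s"}' % new_value,
            VersionStages=["AWSPENDING"],
        )
    elif step == "setSecret":
        pass  # update the WAF rule statements and the CloudFront custom header here
    elif step == "testSecret":
        pass  # call the origin ALB with both header values and expect 200 OK
    elif step == "finishSecret":
        # Move AWSCURRENT to the new version; the old version becomes AWSPREVIOUS
        metadata = secrets.describe_secret(SecretId=arn)
        current_version = next(
            v for v, stages in metadata["VersionIdsToStages"].items() if "AWSCURRENT" in stages
        )
        secrets.update_secret_version_stage(
            SecretId=arn,
            VersionStage="AWSCURRENT",
            MoveToVersionId=token,
            RemoveFromVersionId=current_version,
        )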

Summary

You’ve learned how to use Amazon CloudFront, AWS WAF and AWS Secrets Manager to prevent web requests from directly accessing your CloudFront origin resources. You can use this solution to improve security for CloudFront custom origins that support AWS WAF, such as ALB, Amazon API Gateway, and AWS AppSync.

When using this solution, you will incur AWS WAF usage charges for both the ALB and CloudFront associated AWS WAF web ACLs. You might wish to consider subscribing to AWS Shield Advanced, which provides higher levels of protection against distributed denial of service (DDoS) attacks and includes AWS WAF and AWS Firewall Manager at no additional cost for usage on resources protected by AWS Shield Advanced. You can also learn more about pricing for CloudFront, AWS WAF, Secrets Manager, and AWS Shield Advanced.

You can review more options for restricting access to content with CloudFront, additional AWS WAF security automations, or managed rules for AWS WAF. You can explore solutions for using AWS IP address ranges to enhance CloudFront origin security. You might also wish to learn more about Secrets Manager best practices. The code for this solution is available on GitHub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about using this solution, you can start a thread in the CloudFront, WAF, or Secrets Manager forums, review or open an issue in this solution’s GitHub repository, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Cameron Worrell

Cameron Worrell

Cameron is a Solutions Architect with a passion for security and enterprise transformation. He joined AWS in 2015.

Automate AWS Firewall Manager onboarding using AWS Centralized WAF and VPC Security Group Management solution

Post Syndicated from Satheesh Kumar original https://aws.amazon.com/blogs/security/automate-aws-firewall-manager-onboarding-using-aws-centralized-waf-and-vpc-security-group-management-solution/

Many customers, especially large enterprises, run workloads across multiple AWS accounts and in multiple AWS Regions. The AWS Firewall Manager service, launched in April 2018, enables customers to centrally configure and manage AWS WAF rules, audit Amazon VPC security group rules across accounts and applications in AWS Organizations, and protect resources against DDoS attacks.

In this blog post, we show you how to onboard your accounts and your AWS Organizations into the AWS Firewall Manager service and start centrally managing the security policies of your AWS Organizations member accounts. We also show you examples of how to perform operations on the Firewall Manager policies after you’ve deployed the solution, so you can adjust your security posture over time.

As more and more customers began using Firewall Manager for centralized management, they gave us feedback on how it could be improved. We heard that the process of defining policies and configuring rule sets can be challenging and time consuming, especially in a multi-account, multi-region scenario. We built the AWS Centralized WAF and VPC Security Group Management solution to make it easier and faster.

Solution overview

The AWS Centralized WAF and VPC Security Group Management solution fully automates the deployment of Firewall Manager with a set of opinionated defaults for the policies. We call it “opinionated,” because there’s no set of security rules that is right for absolutely every customer. We’re providing an example opinion, but you might have a different opinion, given your unique circumstance. Our examples block traffic that you might have a reason to allow. We install the following policies when you deploy this solution:

  • AWS WAF global policy for Amazon CloudFront distributions and regional policy for Application Load Balancer and Amazon API Gateway: Both the AWS WAF policies include the following AWS Managed Rules for AWS WAF. You can further customize these rules to suit your WAF requirements.
    • AWSManagedRulesCommonRuleSet
    • AWSManagedRulesAdminProtectionRuleSet
    • AWSManagedRulesKnownBadInputsRuleSet
    • AWSManagedRulesSQLiRuleSet
  • Amazon VPC security group usage audit and content audit policy: These policies flag security groups that are unused or redundant. Automatic remediation is turned off by default, but you can turn it on by customizing the solution.
  • AWS Shield Advanced policy: If your account has enabled Shield Advanced, then Shield Advanced protection is enabled for CloudFront, Application Load Balancer, and Elastic IPs.

The core of the solution is the AWS CloudFormation template aws-centralized-waf-and-security-group-management. This template deploys the components shown in Figure 1:

Figure 1: Main solutions template - aws-centralized-waf-and-vpc-security-group-management.template

Figure 1: Main solutions template – aws-centralized-waf-and-vpc-security-group-management.template

  1. After deployment, you can update the three AWS Systems Manager parameters (/FMS/OUs, /FMS/Regions, and /FMS/Tags) with appropriate values to control the scope and applicability of the Firewall Manager security policies; a sample parameter update is sketched after this list.
  2. On update of the values in the Systems Manager parameters, the Amazon EventBridge rule captures the parameter update event.
  3. Amazon EventBridge triggers an AWS Lambda function to deploy the Firewall Manager policies.
  4. The AWS Lambda function will deploy the Firewall manager security policies across the OUs and regions specified in step 1.
  5. The lambda function updates the Amazon DynamoDB table with Firewall manager policies metadata.
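As an illustration of step 2, the EventBridge rule matches Systems Manager Parameter Store change events for the three parameters with a pattern similar to the following sketch. This is provided for illustration only; the exact pattern deployed by the solution's template may differ.

{
  "source": ["aws.ssm"],
  "detail-type": ["Parameter Store Change"],
  "detail": {
    "name": ["/FMS/OUs", "/FMS/Regions", "/FMS/Tags"],
    "operation": ["Create", "Update", "Delete"]
  }
}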

Configuring prerequisites automatically

There are a few important prerequisites that must be configured before you deploy this solution: your account must belong to an organization in AWS Organizations with all features enabled, a Firewall Manager administrator account must be designated, and AWS Config must be enabled for the member accounts. We have built a template called aws-fms-prereq that checks for these prerequisites and configures any that are missing.

When you run the aws-fms-prereq template, it validates and completes the prerequisites listed above.

Note: If the Firewall Manager prerequisites described above are already met, you can skip this step and go directly to the next step: Deploy the solution template.

If you run this prerequisite template in the AWS Organizations primary account that is also your Firewall Manager admin account, the solution template will be deployed automatically. If you do this, you can skip the step of deploying the solution template and jump straight to Manage your Firewall Manager security policies.

Figure 2 shows how the prerequisite template creates a Lambda function. That function validates and installs the prerequisites, and deploys AWS CloudFormation stack sets to enable AWS Config across all member accounts in the organization.

Figure 2: Solution prerequisites

Installing prerequisites

If the Firewall Manager prerequisites are already met, skip this step and go directly to the next step, Deploy the solution template.

  1. Install the AWS CLI

    Note: If you already have the AWS Command Line Interface v2 installed on your workstation, you can skip this step.

    The template deployment can be done using the AWS Management Console or the AWS CLI. This procedure uses the AWS CLI to do the deployment. To get started, install the AWS CLI and configure it with credentials that have the required IAM permissions to create resources.

  2. Deploy the prerequisite template. If this is the first time you're using Firewall Manager, download the aws-fms-prereq.template, and run the following AWS CLI command in the primary account of your AWS Organization to check for and complete the prerequisites. Replace the variable <your_FW_account_ID> with the account ID of your Firewall Manager admin account. This deployment typically takes 2–3 minutes, but can take longer if there are a large number of member accounts in your organization. If you want AWS Config to be enabled in your member accounts, set the EnableConfig parameter to Yes; if AWS Config is already enabled, set it to No.
    aws cloudformation create-stack \
    --stack-name fms-prereq-stack \
    --template-body file://aws-fms-prereq.template \
    --parameters ParameterKey=FMSAdmin,ParameterValue=<your_FW_account_ID> ParameterKey=EnableConfig,ParameterValue=Yes
    

Deploy the solution template

Deploy the solution using aws-centralized-waf-and-vpc-security-group-management.template—an AWS CloudFormation template—in your Firewall Manager admin account that was created by the prerequisite template.

This template deploys the AWS Centralized WAF and VPC Security Group Management solution shown in Figure 1, with all the resources and integrations.

The following AWS CLI command will deploy the solution template:

aws cloudformation create-stack \
--stack-name fms-central-policy-mgmt-stack \
--template-body file://aws-centralized-waf-and-vpc-security-group-management.template \
--capabilities CAPABILITY_IAM

The command prints very basic output that identifies the stack by its Amazon Resource Name (ARN), as shown below.

{
 "StackId": "arn:aws:cloudformation:us-east-1:<your_FW_admin_account_ID>:stack/fms-central-policy-mgmt-stack/<stack ARN>"
}

You can run the following CLI command to check the status of the stack deployment. Once you see that “StackStatus” is set to “CREATE_COMPLETE” in the output, you can proceed to the next step.

aws cloudformation describe-stacks \
--stack-name fms-central-policy-mgmt-stack
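
Alternatively, if you prefer a command that blocks until the deployment finishes instead of polling, you can use the CloudFormation wait command:

aws cloudformation wait stack-create-complete \
--stack-name fms-central-policy-mgmt-stack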

Manage your Firewall Manager security policies

Once the solution is deployed, you control which Firewall Manager policies are applied to your organization's member accounts by updating the values stored in Systems Manager Parameter Store. These changes are picked up by the EventBridge rule, which triggers the Lambda function to automatically create, delete, or modify the policies as required.

Note: All of the following commands must be executed in the Firewall Manager admin account, which might not be the root account of your AWS Organization.

  1. Add OUs to the scope of Firewall Manager policies. To begin, define the OUs that the Firewall Manager policies should apply to, and store the list of OU IDs in a parameter named /FMS/OUs. The following AWS CLI command stores a comma-separated list of OU IDs in that parameter.
    aws ssm put-parameter \
     --name "/FMS/OUs" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_OU_IDs>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
    "Version": 2,
     "Tier": "Standard"
    }
    

  2. Add AWS Regions to the scope of Firewall Manager policies. Next, add the AWS Regions where you want these policies to be applied. In the AWS CLI command that follows, you can customize the Region list depending on where you run your workloads. If, for example, you want to use us-west-2, us-east-1, and eu-west-1, provide us-west-2,us-east-1,eu-west-1 as the value in the command below.
    aws ssm put-parameter \
     --name "/FMS/Regions" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_regions>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
    "Version": 2,
     "Tier": "Standard"
    }
    

  3. (Optional) Add resource tags. Resource tags are a way to apply these Firewall Manager policies to some resources, but not others. Imagine that you only want to apply these policies to resources that have the tag Environment set to the value Prod. The following command will do that:
    aws ssm put-parameter \
     --name "/FMS/Tags" \
     --type "String" \
     --value "{\"ResourceTags\":[{\"Key\":\"Environment\",\"Value\":\"Prod\"}],\"ExcludeResourceTags\":false}" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
     "Version": 2,
     "Tier": "Standard"
    }
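
After you've set the parameters, you can read back their stored values at any time to confirm them. The following command simply retrieves the three parameters used by this solution:

aws ssm get-parameters \
--names "/FMS/OUs" "/FMS/Regions" "/FMS/Tags"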
    

Test the solution

Test the solution by creating a global CloudFront distribution and an Amazon VPC security group in one of your member accounts, and configure these resources to be noncompliant with the policies enforced by Firewall Manager.

Deploy test resources in one of your member accounts

Run the following AWS CLI command to deploy the demo template in one of the member accounts to create a sample CloudFront distribution and an Amazon VPC security group that aren’t compliant with the default Firewall Manager policies.

aws cloudformation create-stack \
  --stack-name demo-stack \
  --template-body file://aws-fms-demo.template \
  --capabilities CAPABILITY_IAM

Check for web ACL and security group audit results

To check the web ACL resource association in the member account, follow these steps:

  1. Log in to the management console of the member account where you deployed the demo template and go to the WAF & Shield service page.
  2. You will see a web ACL with the prefix FMManagedWebACLV2FMS-WAF-01; select that web ACL.
  3. Go to the Associated AWS resources tab.
  4. You should see the newly created CloudFront distribution listed there.

To check the compliance status of the newly created security group, follow these steps:

  1. Log in to the management console of the Firewall Manager admin account and go to the AWS Firewall Manager service page.
  2. In the Security policies section, find and select the FMS-SecGroup-02 policy.
  3. You will see that the member account (where the demo template was deployed) is marked as noncompliant. Select the member account number to see the newly created security group along with the reason for the noncompliant finding.

Clean-up test resources in member account

Follow these steps to clean up the test resources that the demo template created in your member account:

  1. Log in to the member account's management console and go to the CloudFront service page.
  2. Select the CloudFront distribution. (Its origin will have the stack name as a prefix.)
  3. Go to the Behaviors tab, select the behavior, and choose Edit.
  4. Remove the Lambda function association and save your changes.
  5. Run this CLI command to delete the stack that you had created earlier:
    aws cloudformation delete-stack \
      --stack-name demo-stack
    

Customizing the solution’s source code

This solution deploys Firewall Manager policies with some opinionated default rules defined in the manifest.json file. They probably aren’t all that you will need. You could customize these policies to fit your own needs, but it takes a bit of development skill. Also, the steps are too long to include here. If, for example, you want to add another AWS WAF rule group to the default AWS WAF security policy, you’ll need to edit the manifest and redeploy the solution. See how to do this customization and several others.

(Optional) Clean-up resources

Once you’ve tried the solution, if you want to clean up the stack you can do so in two steps:

  1. Go to the Firewall Manager admin account management console, navigate to Parameter Store, and update the /FMS/OUs parameter with the value delete. This ensures that the Firewall Manager policies and their related resources are deleted.
  2. Run the following AWS CLI command in the Firewall Manager admin account to delete the solution stack you deployed earlier:
    aws cloudformation delete-stack --stack-name fms-central-policy-mgmt-stack
    

Conclusion

The AWS Centralized WAF and VPC Security Group Management solution addresses the feedback we heard from you, our customers. You told us the process of defining policies and configuring rule sets is challenging and time consuming, so we built this solution to make it easier and faster. In this post, we showed you how to deploy the solution following a two-step process using AWS CloudFormation. We also showed you ways that you can use the solution to easily manage your Firewall Manager policies by updating values in Systems Manager Parameter Store. Updating the values automates the deployment based on your choice of OUs, Regions, and resources. Finally, we showed you how to further customize the solution to suit your needs.

This post is just an introduction to what is possible and the overall objective. Before you start using this solution for anything important, we recommend that you review the solution implementation guide. It contains step-by-step directions and more example use cases. The guide also includes security recommendations and some cost estimates for the various supported scenarios.

To learn more about other AWS solutions, visit the AWS Solutions Library.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Satheesh Kumar

Satheesh is a Senior Solutions Architect based out of Bangalore, India. He helps large enterprise customers build solutions using AWS services.

Author

Ramanan Kannan

Ramanan is a Senior Solutions Architect and works with large enterprise customers in the financial domain. He is based out of Chennai, India.

Automatically updating AWS WAF Rule in real time using Amazon EventBridge

Post Syndicated from Adam Cerini original https://aws.amazon.com/blogs/security/automatically-updating-aws-waf-rule-in-real-time-using-amazon-eventbridge/

In this post, I demonstrate a method for collecting and sharing threat intelligence between Amazon Web Services (AWS) accounts by using AWS WAF, Amazon Kinesis Data Analytics, and Amazon EventBridge. AWS WAF helps protect against common web exploits and gives you control over which traffic can reach your application.

Exploitation attempts blocked by AWS WAF provide a data source on potential attackers that can be shared proactively across AWS accounts. This solution can be an effective way to block traffic known to be malicious across accounts and public endpoints. AWS WAF managed rules provide an easy way to mitigate and record the details of common web exploit attempts. This solution uses the Admin protection managed rule for demonstration purposes.

In this post you will see references to the Sender account and the Receiver account. There is only one receiver in this example, but the receiving process can be duplicated multiple times across multiple accounts. This post walks through how to set up the solution, and an AWS CloudFormation template is provided so you can easily test the solution in your own AWS account. Figure 1 illustrates how this architecture fits together at a high level.
 

Figure 1: Architecture diagram showing the activity flow of traffic blocked on the Sender AWS WAF

Prerequisites

You should be familiar with the services used by this solution: AWS WAF, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, AWS Lambda, and Amazon EventBridge.

Extracting threat intelligence

AWS WAF delivers its logs through a Kinesis Data Firehose delivery stream. This allows you not only to log to a destination S3 bucket, but also to act on the stream in real time using a Kinesis Data Analytics application. The following SQL code demonstrates how to extract any unique IP addresses that have been blocked by AWS WAF. While this example returns all blocked IPs, more complex SQL could be used for a more granular result. The full list of log fields is included in the documentation.


CREATE OR REPLACE STREAM "wafstream" (
 "clientIp" VARCHAR(16),
 "action" VARCHAR(8),
 "time_stamp" TIMESTAMP
 );

CREATE OR REPLACE PUMP "WAFPUMP" as
INSERT INTO "wafstream" (
"clientIp",
"action",
"time_stamp"
) 

Select STREAM DISTINCT "clientIp", "action", FLOOR(WAF_001.ROWTIME TO MINUTE) as "time_stamp"
FROM "WAF_001"
WHERE "action" = 'BLOCK';

Proactively blocking unwanted traffic

After extracting the IP addresses involved in the abnormal traffic, you will want to proactively block those IPs on your other web-facing resources. You can accomplish this in a scalable way using Amazon EventBridge. After the Kinesis Data Analytics application extracts the IP address, it uses an AWS Lambda function to call PutEvents on an EventBridge event bus. Each event carries a source value, which is matched against the event pattern of an event bus rule to determine when the rule triggers. This example uses a simple pattern, which acts on any event with a source of “custom.waflogs” as shown in Figure 2. A more complex pattern could be used for finer-grained control of when a rule triggers.
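To make the flow concrete, the following AWS CLI call publishes an event shaped like the ones the Lambda function sends. The detail fields and IP address are placeholders for illustration; the exact payload produced by the solution's function may differ.

aws events put-events \
--entries '[{"Source":"custom.waflogs","DetailType":"WAF Block","Detail":"{\"clientIp\":\"198.51.100.7\",\"action\":\"BLOCK\"}","EventBusName":"default"}]'

The corresponding event pattern on the rule only needs to match the source:

{
  "source": ["custom.waflogs"]
}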
 

Figure 2: EventBridge Rule creation

Once the event reaches the event bus, the rule forwards the event to an event bus in the Receiver account, where a second rule triggers a Lambda function that updates an AWS WAF IP set. A web ACL rule blocks all traffic originating from any IP address contained in the IP set.
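The following sketch shows, as AWS CLI calls, what the Receiver account's Lambda function effectively does. The IP set name WAFBlockIPset comes from this solution; the ID, lock token, and addresses are placeholders, and the REGIONAL scope assumes the Receiver web ACL protects regional resources (use CLOUDFRONT from us-east-1 if it protects a CloudFront distribution). Note that update-ip-set replaces the entire address list, so the current addresses and the required lock token are retrieved first.

aws wafv2 get-ip-set \
--name WAFBlockIPset \
--scope REGIONAL \
--id <ip_set_id>

aws wafv2 update-ip-set \
--name WAFBlockIPset \
--scope REGIONAL \
--id <ip_set_id> \
--lock-token <lock_token_from_get_ip_set> \
--addresses "198.51.100.7/32" "203.0.113.12/32"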

Test the solution by using AWS CloudFormation

Now that you’ve walked through the design of this solution, you can follow these instructions to test it in your own AWS account by using CloudFormation stacks.

To deploy using CloudFormation

  1. Launch the stack to provision resources in the Receiver account.
  2. Provide the account ID of the Sender account. This will correctly configure the permissions for the EventBridge event bus.
  3. Wait for the stack to complete, and then capture the event bus ARN from the output tab.

    This stack creates the following resources:

    • An AWS WAF v2 web ACL
    • An IPSet which will be used to contain the IP addresses to block
    • An AWS WAF rule that will block IP addresses contained in the IPSet
    • A Lambda function to update the IPSet
    • An IAM policy and execution role for the Lambda function
    • An event bus
    • An event bus rule that will trigger the Lambda function
  4. Switch to the Sender account. This should be the account whose ID you provided in step 2 of this procedure.
  5. Launch the Sender stack and provide the ARN of the event bus that was captured in step 3. This stack will provision the following resources in your account:
    • A virtual private cloud (VPC) with public and private subnets
    • Route tables for the VPC resources
    • An Application Load Balancer (ALB) with a fixed response rule
    • A security group that allows ingress traffic on port 80 to the ALB
    • A web ACL with the AWS Managed Rule for Admin Protection enabled
    • An S3 bucket for AWS WAF logs
    • A Kinesis Data Firehose delivery stream
    • A Kinesis Data Analytics application
    • An EventBridge event bus
    • An event bus rule
    • A Lambda function to send information to the Receiver account event bus
    • A custom CloudFormation resource which enables WAF logging and starts the Kinesis Application
    • An IAM policy and execution role that allows a Lambda function to put events into the event bus
    • An IAM policy and role to allow the custom CloudFormation resource to enable WAF logging and start the Kinesis Application
    • An IAM policy and role that allows the Kinesis Firehose to put logs into S3
    • An IAM policy that allows the WAF Web ACL to put records into the Firehose
    • An IAM policy and role that allows the Kinesis Application to invoke a Lambda function and log to CloudWatch
    • An IAM policy and role that allows the “Sender” account to put events in the “Receiver” event bus

After the CloudFormation stack completes, test the solution by checking the Outputs tab for the DNS name of the Application Load Balancer and running the following command:

curl http://<ALB_DNS_name>/admin

You should be able to check the Receiver account’s AWS WAF IPSet named WAFBlockIPset and find your IP.
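If you want to verify the update from the command line rather than the console, the following calls list the IP sets in the Receiver account and show the contents of WAFBlockIPset. The ID is a placeholder you can copy from the list output, and the REGIONAL scope is again an assumption based on the Receiver web ACL type.

aws wafv2 list-ip-sets --scope REGIONAL

aws wafv2 get-ip-set \
--name WAFBlockIPset \
--scope REGIONAL \
--id <ip_set_id>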

Conclusion

This example is intentionally simple to clearly demonstrate how each component works. You can take these principles and apply them to your own environment. Layering the AWS Managed Rules with your own custom rules is the best way to get started with AWS WAF. This example shows how you can use automation to update your WAF rules without needing to rely on humans. A more complete solution would source log data from each web ACL and update an active IP set in each account to protect all resources. As seen in Figure 3, a more complete implementation would send all logs in a Region to a centralized Kinesis Data Firehose to be processed by the Kinesis Data Analytics application; EventBridge would then update a local IP set as well as forward the event to other accounts' event buses for processing.
 

Figure 3: Updating across accounts

You can also add additional targets to the event bus to do things such as send notifications through an Amazon Simple Notification Service topic, or run additional automation. To learn more about AWS WAF web ACLs, visit the AWS WAF Developer Guide. Using Amazon EventBridge opens up the possibility of sending events to partner integrations. APN Partners such as PagerDuty or Zendesk can enrich this solution by taking actions such as automatically opening a ticket or starting an incident. To learn more about the power of Amazon EventBridge, see the EventBridge User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Adam Cerini

Adam is a Senior Solutions Architect with Amazon Web Services. He focuses on helping Public Sector customers architect scalable, secure, and cost effective systems. Adam holds 5 AWS certifications including AWS Certified Solutions Architect – Professional and AWS Certified Security – Specialist.

Troubleshooting Amazon API Gateway with enhanced observability variables

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/troubleshooting-amazon-api-gateway-with-enhanced-observability-variables/

Amazon API Gateway is often used for managing access to serverless applications. Additionally, it can help developers reduce code and increase security with features like AWS WAF integration and authorizers at the API level.

Because more is handled by API Gateway, developers tell us they would like to see more data points on the individual parts of the request. This data helps developers understand each phase of the API request and how it affects the request as a whole. In response to this request, the API Gateway team has added new enhanced observability variables to the API Gateway access logs. With these new variables, developers can troubleshoot on a more granular level to quickly isolate and resolve request errors and latency issues.

The phases of an API request

API Gateway divides requests into phases, reflected by the variables that have been added. Depending upon the features configured for the application, an API request goes through multiple phases. The phases appear in a specific order as follows:

Phases of an API request

  • WAF: the WAF phase only appears when an AWS WAF web access control list (ACL) is configured for enhanced security. During this phase, WAF rules are evaluated and a decision is made on whether to continue or cancel the request.
  • Authenticate: the authenticate phase is only present when AWS Identity and Access Management (IAM) authorizers are used. During this phase, the credentials of the signed request are verified. Access is granted or denied based on the client’s right to assume the access role.
  • Authorizer: the authorizer phase is only present when a Lambda, JWT, or Amazon Cognito authorizer is used. During this phase, the authorizer logic is processed to verify the user’s right to access the resource.
  • Authorize: the authorize phase is only present when a Lambda or IAM authorizer is used. During this phase, the results from the authenticate and authorizer phase are evaluated and applied.
  • Integration: during this phase, the backend integration processes the request.

Each phase can add latency to the request, return a status, or raise an error. To capture this data, API Gateway now provides enhanced observability variables based on each phase. The variables are named according to the phase they occur in and follow the naming structure $context.phase.property. Therefore, you can get data about WAF latency by using $context.waf.latency.

Some existing variables have also been given aliases to match this naming schema. For example, $context.integrationErrorMessage has a new alias of $context.integration.error. The resulting list of variables is as follows:

Phases and variables for API Gateway requests

API Gateway provides status, latency, and error data for each phase. In the authorizer and integration phases, there are additional variables you can use in logs. The $context.phase.requestId variable provides the request ID from that service, and $context.phase.integrationStatus provides the status code returned by the integrated service.

For example, when using an AWS Lambda function as the integration, API Gateway receives two status codes. The first, $context.integration.integrationStatus, is the status of the Lambda service itself. This is usually 200, unless there is a service or permissions error. The second, $context.integration.status, is the status of the Lambda function and reports on the success or failure of the code.

A full list of access log variables is in the documentation for REST APIs, WebSocket APIs, and HTTP APIs.

A troubleshooting example

In this example, an application is built using an API Gateway REST API with a Lambda function for the backend integration. The application uses an IAM authorizer to require AWS account credentials for application access. The application also uses an AWS WAF ACL to rate limit requests to 100 requests per IP, per five minutes. The demo application and deployment instructions can be found in the Sessions With SAM repository.

Because the application involves an AWS WAF and IAM authorizer for security, the request passes through four phases: waf, authenticate, authorize, and integration. The access log format is configured to capture all the data regarding these phases:

{
  "requestId":"$context.requestId",
  "waf-error":"$context.waf.error",
  "waf-status":"$context.waf.status",
  "waf-latency":"$context.waf.latency",
  "waf-response":"$context.wafResponseCode",
  "authenticate-error":"$context.authenticate.error",
  "authenticate-status":"$context.authenticate.status",
  "authenticate-latency":"$context.authenticate.latency",
  "authorize-error":"$context.authorize.error",
  "authorize-status":"$context.authorize.status",
  "authorize-latency":"$context.authorize.latency",
  "integration-error":"$context.integration.error",
  "integration-status":"$context.integration.status",
  "integration-latency":"$context.integration.latency",
  "integration-requestId":"$context.integration.requestId",
  "integration-integrationStatus":"$context.integration.integrationStatus",
  "response-latency":"$context.responseLatency",
  "status":"$context.status"
}
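One way to apply an access log format like this to an existing REST API stage is with the update-stage patch operations sketched below. This is a simplified example with only a few of the variables; the REST API ID, stage name, and log group ARN are placeholders.

aws apigateway update-stage \
--rest-api-id <rest_api_id> \
--stage-name prod \
--patch-operations file://access-log-patch.json

Where access-log-patch.json contains:

[
  {"op": "replace", "path": "/accessLogSettings/destinationArn", "value": "<cloudwatch_log_group_arn>"},
  {"op": "replace", "path": "/accessLogSettings/format", "value": "{\"requestId\":\"$context.requestId\",\"waf-latency\":\"$context.waf.latency\",\"integration-latency\":\"$context.integration.latency\",\"status\":\"$context.status\"}"}
]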

Once the application is deployed, use Postman to test the API with a sigV4 request.

Configuring Postman authorization

To show troubleshooting with the new enhanced observability variables, the first request sent through contains invalid credentials. The user receives a 403 Forbidden error.

Client response view with invalid tokens

The access log for this request is:

{
    "requestId": "70aa9606-26be-4396-991c-405a3671fd9a",
    "waf-error": "-",
    "waf-status": "200",
    "waf-latency": "8",
    "waf-response": "WAF_ALLOW",
    "authenticate-error": "-",
    "authenticate-status": "403",
    "authenticate-latency": "17",
    "authorize-error": "-",
    "authorize-status": "-",
    "authorize-latency": "-",
    "integration-error": "-",
    "integration-status": "-",
    "integration-latency": "-",
    "integration-requestId": "-",
    "integration-integrationStatus": "-",
    "response-latency": "48",
    "status": "403"
}

The request passed through the waf phase first. Since this is the first request and the rate limit has not been exceeded, the request is passed on to the next phase, authenticate. During the authenticate phase, the user’s credentials are verified. In this case, the credentials are invalid and the request is rejected with a 403 response before invoking the downstream phases.

To correct this, the next request uses valid credentials, but those credentials do not have access to invoke the API. Again, the user receives a 403 Forbidden error.

Client response view with unauthorized tokens

The access log for this request is:

{
  "requestId": "c16d9edc-037d-4f42-adf3-eaadf358db2d",
  "waf-error": "-",
  "waf-status": "200",
  "waf-latency": "7",
  "waf-response": "WAF_ALLOW",
  "authenticate-error": "-",
  "authenticate-status": "200",
  "authenticate-latency": "8",
  "authorize-error": "The client is not authorized to perform this operation.",
  "authorize-status": "403",
  "authorize-latency": "0",
  "integration-error": "-",
  "integration-status": "-",
  "integration-latency": "-",
  "integration-requestId": "-",
  "integration-integrationStatus": "-",
  "response-latency": "52",
  "status": "403"
}

This time, the access logs show that the authenticate phase returns a 200. This indicates that the user credentials are valid for this account. However, the authorize phase returns a 403 and states, “The client is not authorized to perform this operation”. Again, the request is rejected with a 403 response before invoking downstream phases.

The last request for the API contains valid credentials for a user that has rights to invoke this API. This time the user receives a 200 OK response and the requested data.

Client response view with valid request

The log for this request is:

{
  "requestId": "ac726ce5-91dd-4f1d-8f34-fcc4ae0bd622",
  "waf-error": "-",
  "waf-status": "200",
  "waf-latency": "7",
  "waf-response": "WAF_ALLOW",
  "authenticate-error": "-",
  "authenticate-status": "200",
  "authenticate-latency": "1",
  "authorize-error": "-",
  "authorize-status": "200",
  "authorize-latency": "0",
  "integration-error": "-",
  "integration-status": "200",
  "integration-latency": "16",
  "integration-requestId": "8dc58335-fa13-4d48-8f99-2b1c97f41a3e",
  "integration-integrationStatus": "200",
  "response-latency": "48",
  "status": "200"
}

This log contains a 200 status code from each of the phases and returns a 200 response to the user. Additionally, each of the phases reports latency. This request had a total of 48 ms of latency. The latency breaks down according to the following:

Request latency breakdown

Developers can use this information to identify the cause of latency within the API request and adjust accordingly. While the latency of some phases, such as authenticate or authorize, is largely outside of your control, optimizing the integration phase of this request could remove a large chunk of the latency involved.

Conclusion

This post covers the enhanced observability variables, the phases they occur in, and the order of those phases. With these new variables, developers can quickly isolate the problem and focus on resolving issues.

When configured with the proper access logging variables, API Gateway access logs can provide a detailed story of API performance. They can help developers to continually optimize that performance. To learn how to configure logging in API Gateway with AWS SAM, see the demonstration app for this blog.

#ServerlessForEveryone

Deploying defense in depth using AWS Managed Rules for AWS WAF (part 2)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/deploying-defense-in-depth-using-aws-managed-rules-for-aws-waf-part-2/

In this post, I show you how to use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

In part 1 of this post I described the technologies and methods that you can use to build and manage defense in depth for your network. In part 2, I show you how to use those tools, with AWS Managed Rules as the starting point, to build your defense in depth for optimal effectiveness.

Managing policies for multiple environments can be done with minimal administrative overhead, and can now be part of a deployment pipeline in which you programmatically enforce broad edge-network policies and protect production workloads without compromising development speed or safety.

Building robust security policy enforcement relies on a layered approach and the same applies to securing your web applications. Having edge policies, application policies, and even private or internal policy enforcement layers adds to the visibility of communication requests as well as unified policy enforcement.

Using a layered AWS WAF deployment, such as is deployed by the procedure that follows, gives you greater flexibility in the number of rules you can use and the option to standardize edge policies and production policies. This lets you test and develop new applications without compromising the production environments.

In the following example, the application load balancer is in us-east-1. To create a web ACL for Amazon CloudFront you need to deploy the stack in us-east-1. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template can create both web ACLs in this scenario.

Note: If you’re using CloudFront and hosting the origin in us-east-1, you only need to maintain one stack. If your origin is in another Region, you need to deploy a stack in us-east-1 for the CloudFront web ACL and another in the Region where your application load balancer is. That scenario isn’t covered in the following procedure. None of the underlying infrastructure is deployed by the example AWS CloudFormation templates provided; only the AWS WAF configurations are deployed using the example templates.

Solution overview

The following diagram illustrates the traffic flow where traffic comes in via CloudFront and is served to the backend load balancers. Both CloudFront and the load balancers support AWS WAF, which is where dedicated web security policies can be enforced to build out defense-in-depth, multi-layered policy enforcement.
 

Figure 1: Defense in depth deployment on AWS WAF

Creating AWS Managed Rule web ACLs

During this process we create two web ACLs that are designed for policy enforcement for two dedicated layers. The process won’t deploy the required infrastructure, such as the CloudFront distribution or application load balancers. This example template deploys a single stack in us-east-1 where the CloudFront origin load balancer is located.

To create AWS Managed Rule web ACLs

  1. Download the Amazon-CloudFront-Application-Load-Balancer-AMR.yml template.
  2. Open the AWS Management Console and select the region where the origin application load balancer is deployed. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template that you downloaded deploys both web ACLs for CloudFront and the application load balancer.
     
    Figure 2: Select a region from the console

  3. Under Find Services, enter AWS CloudFormation and press Enter.
     
    Figure 3: Find and select AWS CloudFormation

  4. Select Create stack.
     
    Figure 4: Create stack

  5. Select a template file for the stack.
    1. In the Create stack window, select Template is ready and Upload a template file.
    2. Under Upload a template file, select Choose file and select the Amazon-CloudFront-Application-Load-Balancer-AMR.yml example AWS CloudFormation template you downloaded earlier.
    3. Choose Next.
    Figure 5: Prepare and choose a template

  6. Add stack details.
    1. Enter a name for the stack in Stack name.
    2. Enter a name for the Edge Network AWS WAF WebACL and for the Public Layer AWS WAF WebACL.
    3. Set a rate-limit for HTTP GET requests in HTTP Get Flood Protection (this rate is applied per IP address over a 5 minute period).
    4. Set a rate limit for HTTP POST requests in HTTP Post Flood Protection.
    5. Use the Login URL to apply the limit to a targeted login page. If you want to rate-limit all HTTP POST requests, leave the login URL section blank.
    Figure 6: Set stack details

  7. By default, all the rules within the rule sets are in action override (count mode). This does not include the rate-based rules. If you want to deploy selected rules in block mode, remove them from the pre-populated list by highlighting and deleting them. It's best practice to evaluate firewall rules in count mode before changing them to block mode. Choose Next to move to the next step.
     
    Figure 7: Default managed rules options

  8. Here you can add tags to apply to the resources in the stack that these rules will be deployed to. Tagging is a recommended best practice as it enables you to add metadata information to resources during the creation. For more information on tagging please see the Tagging AWS resources documentation. Then choose Next. On the following page choose Create stack.
     
    Figure 8: Add tags

  9. Wait until the stack has been deployed. When deployment is complete, the status of the stack will change to CREATE_COMPLETE.
     
    Figure 9: Stack deployment status

Associating the web ACLs to resources

During this process we associate the two newly created web ACLs to the corresponding infrastructure resources. In this example, it would be the CloudFront distribution and its origin load balancer which should have been created prior.

To associate the web ACLs to resources

  1. In the console search for and select WAF & Shield.
     
    Figure 10: Select WAF & Shield

  2. Select Web ACLs from the list on the left.
     
    Figure 11: Select Web ACLs

  3. Select Global (CloudFront) from the drop down list at the top of the page. Choose the Edge-Network-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs).
     
    Figure 12: Select the web ACL

  4. Next select the Associated AWS resources tab, and then choose Add AWS resources.
     
    Figure 13: Add AWS resources

  5. Select the CloudFront distribution you want to protect. Choose Add.
     
    Figure 14: Select the CloudFront distribution to protect

  6. Select the Region the application load balancer is deployed in (us-east-1 in this example) and repeat the association process from steps 2 through 4, this time for the application load balancer that serves as the CloudFront distribution's origin. Select US East (N. Virginia) from the drop-down list at the top of the page, choose the Public-Application-Layer-WebACL that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs), and associate it with the application load balancer. (A CLI alternative for this regional association is shown after these steps.)
     
    Figure 15: Application layer Web ACL association
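If you prefer to script the regional association instead of using the console, the wafv2 CLI can associate the web ACL with the application load balancer. The ARNs below are placeholders; for CloudFront, the association is done on the distribution itself (as in steps 3 through 5) rather than with this command.

aws wafv2 associate-web-acl \
--region us-east-1 \
--web-acl-arn <public_application_layer_web_acl_arn> \
--resource-arn <application_load_balancer_arn>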

Conclusion

Using AWS WAF to manage a multi-layer web application security enforcement policy, you are able to build a defense-in-depth stack for each specific web application. The configuration helps you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications. With AWS Managed Rules, you can make use of prebuilt rule sets that can easily be deployed to create a layered defense that fits into your web application deployment pipelines. If you would like to centrally manage and control AWS WAF across your AWS Organization, consider AWS Firewall Manager.

The AWS CloudFormation templates used in this procedure are in this GitHub repository.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Daniel Cisco Swart

AWS Managed Rules is something Daniel worked on personally over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with security competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS's top security partners.

Defense in depth using AWS Managed Rules for AWS WAF (part 1)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/defense-in-depth-using-aws-managed-rules-for-aws-waf-part-1/

In this post, I discuss how you can use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

The post is in two parts. This first part describes AWS Managed Rules for AWS WAF and how it can be used to provide defense in depth. The second part shows how to apply AWS Managed Rules for WAF.

AWS Managed Rules for AWS WAF provides groups of rules created by Amazon Web Services (AWS) or by AWS technology partners. By using AWS Managed Rules, you can reduce the administrative overhead of configuring rules for AWS WAF. You still need a comprehensive strategy for web application policy enforcement to help you make the best use of AWS Managed Rules for your web applications.

By using a layered policy enforcement strategy, you can create policy enforcement that’s specific to each part of your applications. This helps you avoid having to maintain and manage monolithic AWS WAF configurations for each of your applications. When you can separate policies for the edge network and for the application layer network, replicating separate policies across larger workloads becomes modular. This makes your application security more agile and lets you protect public-facing web applications without writing new rules or including rules that aren’t relevant to your web application.

Policy enforcement becomes even less of an administrative burden when you use AWS Firewall Manager to enforce policies across all accounts. This helps ensure organizations have robust policy enforcement measures across multiple accounts, with increased application layer visibility.

The new AWS WAF JSON document-style configuration enables traditional code review processes. You can now easily manage AWS WAF configurations on multiple layers of your web applications. This has also enabled partners to create more dynamic and robust rules that they can deliver on AWS WAF, which ultimately helps those customers manage their web application security policies.

AWS WAF enhancements

AWS WAF uses web ACL capacity units (WCU) to calculate and control the operating resources that are used to run your rules, rule groups, and web ACLs.

You can use JSON key-value pair document-based configuration to more easily integrate AWS WAF into the development practices of your organization. Using document-style configuration removes the need to use multiple API calls to create objects in the correct order before you can create and deploy a web ACL to protect your web applications.

Using this method lets firewall changes be implemented with normal development and operations best practices because it will be infrastructure as code. This enables version control and code review before deploying updates to your production environment.

Solution overview

The following diagram illustrates the layers and functions of a defense-in-depth solution. The text that follows describes each layer.
 

Figure 1: Solution overview diagram

Edge network layer policy enforcement

The edge network is the first layer of policy enforcement and should be used for broad security policy enforcement. This is the ideal place for rules such as the AWS Managed Rules Core rule set (CRS), geographical location blocks, IP reputation lists, anonymous IP lists, and basic rate-limit enforcement. By limiting known bad traffic at the edge network, these rules reduce the exposure of the application layer to known bad IP address ranges, malicious requests, bad bots, and request floods. This provides broad protection to the inner application layer against malicious activity, which can be applied regardless of the web application being served at the application layer.
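For example, referencing the CRS in a web ACL takes a single managed rule group statement. The following is a sketch in AWS WAF JSON with the rule group initially set to count; the priority and metric name are assumptions for illustration.

{
  "Name": "AWS-AWSManagedRulesCommonRuleSet",
  "Priority": 1,
  "OverrideAction": { "Count": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "AWSManagedRulesCommonRuleSet"
  },
  "Statement": {
    "ManagedRuleGroupStatement": {
      "VendorName": "AWS",
      "Name": "AWSManagedRulesCommonRuleSet"
    }
  }
}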

Combining Amazon CloudFront with the distributed denial of service (DDoS) mitigation capabilities of AWS Shield and the policy enforcement of AWS WAF gives you a robust outer layer of web application security.

It’s a common misconception that CloudFront is only a content delivery platform, but it also has robust transparent reverse proxy capabilities. CloudFront can help protect your environment from a broad range of web application risks. For example, you can use CloudFront to ensure that HTTP requests conform to standards on the far outer layer of your web application environment while serving content closer to the user.

Application layer policy enforcement

The next level of enforcement should be an application load balancer in a public subnet with another web ACL at the CloudFront origin. This policy enforcement layer is where you create a regional web ACL for the CloudFront origin. In addition, this layer is where you apply application-specific rules. For example, if you have a web application that uses a LAMP stack, it would be best to use AWS Managed Rules for SQL Injection, Linux, and PHP as an enforcement layer.

Note: IP-based enforcement is not effective on this part of the environment. Consider making use of an origin custom header on the CloudFront distribution, and then use this custom header to create a BLOCK rule within this web ACL, as the first rule in your web ACL list, that denies any request without the origin custom header. This rule needs to be created manually and will not be configured by the supplied templates.
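As a sketch of what such a rule could look like in AWS WAF JSON, consider the following. The header name, secret value, and priority are assumptions for illustration and are not part of the supplied templates.

{
  "Name": "BlockMissingOriginCustomHeader",
  "Priority": 0,
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "BlockMissingOriginCustomHeader"
  },
  "Statement": {
    "NotStatement": {
      "Statement": {
        "ByteMatchStatement": {
          "SearchString": "<random-secret-value>",
          "FieldToMatch": { "SingleHeader": { "Name": "x-origin-verify" } },
          "PositionalConstraint": "EXACTLY",
          "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ]
        }
      }
    }
  }
}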

(Optional) Third-party web application firewall layer policy enforcement

AWS WAF enforces policies on inbound requests and doesn’t have outbound inspection capabilities. If you need to enforce policies based on outbound responses, you can use Amazon Machine Image (AMI) based web application firewalls, which are available via the AWS Marketplace.

An instance-based web application firewall is appropriate here because most of the computationally expensive enforcement is already done on the AWS WAF layers. The third-party layer is where you can enforce policies that require requests to be stateful.

Using an AMI from AWS Marketplace also gives you access to capabilities such as higher visibility, threat intelligence, and robust firewall rules. This adds an additional layer of security enhancement to your environment.

(Optional) Private layer policy enforcement

When working with a traditional three-tier web architecture, you can add an additional layer of enforcement on the private layer, which can be used for the web front ends. This is where you would deploy an application load balancer in a private subnet serving your web front ends. This load balancer is there for any computationally expensive regex-based rule enforcement that you don't want to enforce on the instance-based WAF. This also gives you another layer of visibility before requests reach the web front ends themselves. This example can be seen in Figure 2 below as a reference.

Use case examples

The AWS CloudFormation templates supplied can be deployed in a modular fashion. If the application load balancer is located in the us-east-1 region, you can deploy a single template called Amazon-CloudFront-Application-Load-Balancer-AMR.yml.

If the application load balancer isn’t located in us-east-1, you can use the Amazon-CloudFront-EdgeLayer-AMR.yml template to deploy the stack in us-east-1 to support the web ACL on CloudFront and then deploy ApplicationLayer-Load-Balancer-AMR.yml in the region the original application load balancer was deployed for its web ACL.

All CloudFormation templates are available on the GitHub project page and a summary of each can be found in the main readme.md file.

Note: All the individual rules in each rule set are set to ACTION OVERRIDE for initial deployment. If any of the rule actions in the group are set to block or allow, this override changes the behavior so that matching rules are only counted. You may change the setting to NO ACTION OVERRIDE after a period of evaluation to avoid disrupting production workloads with potential false positives.

Edge network and application load balancer origin using AWS Managed Rules for AWS WAF

When considering some of the web application best practices on AWS for resiliency and security, the recommendation is to use CloudFront where possible, because it can terminate TLS/SSL connections and serve cached content close to the end user. CloudFront has advanced mitigation capabilities such as SYN cookies and a massively distributed network separate from the traditional Amazon Elastic Compute Cloud (Amazon EC2) networking space. CloudFront also supports AWS WAF rate limits, IP block lists, and broad security policies, which can be enforced at the edge network layer.

In the example Amazon-CloudFront-Application-Load-Balancer-AMR.yml template, we place a rate limit on HTTP GET and HTTP POST methods. The appropriate limit depends on your expected traffic request rates. You can review Amazon CloudWatch metrics for your CloudFront distribution or application load balancer to determine a baseline for your rate limit, keeping in mind that AWS WAF rate-based rules evaluate request counts per IP address over a 5-minute period.

The rate limit is adjustable within the parameter options at deployment of the AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml. The HTTP POST rate limit also helps to slow down credential stuffing attacks—a form of brute force attack—on login pages. The ApplicationLayer-Load-Balancer-AMR.yml template used in part 2 of this post also deploys the Amazon IP reputation list to drop IP addresses based on Amazon internal threat intelligence.
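For reference, a rate-based rule scoped to a login path looks similar to the following in AWS WAF JSON. The limit, path, and rule name here are placeholders for illustration; the values actually deployed come from the template parameters described above, and the deployed rule may differ.

{
  "Name": "HTTPPostFloodProtection",
  "Priority": 1,
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "HTTPPostFloodProtection"
  },
  "Statement": {
    "RateBasedStatement": {
      "Limit": 1000,
      "AggregateKeyType": "IP",
      "ScopeDownStatement": {
        "ByteMatchStatement": {
          "SearchString": "/login",
          "FieldToMatch": { "UriPath": {} },
          "PositionalConstraint": "STARTS_WITH",
          "TextTransformations": [ { "Priority": 0, "Type": "LOWERCASE" } ]
        }
      }
    }
  }
}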

We also use the AWS Managed Rules CommonRuleSet, which blocks cross-site scripting (XSS) attacks, requests with no user-agent, requests with known bad user-agents, large queries, posts, cookies, and URLs, and known LFI/RFI attacks.

Note: The size constraint rules aren’t recommended for protecting APIs or web applications with large HTTP POSTs or long cookies. Evaluate the possible effects of size constraint rules thoroughly before setting them to block requests.

There is also an AWS Managed Rule for known bad inputs which is based on threat intelligence gathered by the AWS Threat Research Team. Finally, there is an admin protection rule set that drops requests to known management login pages. It’s not advised that web applications have front door access to admin controls.

At the origin, it’s a good idea to use an application load balancer that also supports AWS WAF. This is where you want to apply application-specific web policies. For example, this is where you would apply rules to protect against a SQL injection attack if your web application uses a SQL database.

In the example AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml, for the origin application load balancer, we use AWS Managed Rules for SQL injections, Linux rule set, Unix rule set, PHP rule set, and the WordPress rule set to cover most eventualities customers could be using on their web applications.

For the example solution in part 2 of this post, if the origin application load balancer is in us-east-1, you can use Amazon-CloudFront-Application-Load-Balancer-AMR.yml, which will deploy both web ACLs.

If the origin is not in us-east-1, you can use two example templates which are Amazon-CloudFront-EdgeLayer-AMR.yml for the edge network and ApplicationLayer-Load-Balancer-AMR.yml in the origin region.

Using AWS Managed WAF Rules on public and private application load balancers

Some customers have reasons not to use CloudFront and will instead use two application load balancers: one load balancer for the public-facing web front ends and an internal load balancer for the application backends.

The following figure shows a deployment that uses two load balancers. A public load balancer works with the edge network WAF to connect to a web front end in a private subnet and an internal load balancer connects to the backend application.
 

Figure 2: Diagram of stacked load balancers

In this use case, we can still use the same structure of an edge network layer and an application layer, now implemented only with load balancers. With a three-tier web application approach, there is an external-facing application load balancer and an internal application load balancer, and you can deploy the same style of policy enforcement on both.

Note: To deploy something similar to this example, you can use the template EdgeLayerALB-PrivateLayerALB-AMR.yml in the relevant regions where the load balancers have been deployed.

Alarms and logging

After deploying these AWS CloudFormation templates, you should consider setting CloudWatch alarms on certain metrics for the HTTP GET and HTTP POST flood rules as well as the reputation and anonymous IP lists. Customers who are comfortable with development may also opt to use Lambda responders triggered by CloudWatch Events to automatically change a rule from COUNT to BLOCK. Also, enabling full logging for each web ACL gives you higher visibility into each request and makes potential investigations easier.

Conclusion

Using the new enhancements of AWS WAF makes it easier to manage a multi-layer web application security enforcement policy, and to maintain and deploy web application firewall configurations across deployment stages as well as across different types of applications. By making use of partner or AWS Managed Rules, administrative overhead can be significantly reduced, and with AWS Firewall Manager, customers can enforce these policies across all of an organization's accounts. Part 2 of this post shows one example of how this can be done.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Daniel Cisco Swart

AWS Managed Rules is something Daniel worked on personally over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with security competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS's top security partners.

Migrating your rules from AWS WAF Classic to the new AWS WAF

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/migrating-rules-from-aws-waf-classic-to-new-aws-waf/

In November 2019, Amazon launched a new version of AWS Web Application Firewall (WAF) that offers a richer and easier to use set of features. In this post, we show you some of the changes and how to migrate from AWS WAF Classic to the new AWS WAF.

AWS Managed Rules for AWS WAF is one of the more powerful new capabilities in AWS WAF. It helps you protect your applications without needing to create or manage rules directly in the service. The new release includes other enhancements and a brand new set of APIs for AWS WAF.

Before you start, we recommend that you review How AWS WAF works as a refresher. If you’re already familiar with AWS WAF, please feel free to skip ahead. If you’re new to AWS WAF and want to know the best practice for deploying AWS WAF, we recommend reading the Guidelines for Implementing AWS WAF whitepaper.

What’s changed in AWS WAF

Here’s a summary of what’s changed in AWS WAF:

  • AWS Managed Rules for AWS WAF – a new capability that provides protection against common web threats and includes the Amazon IP reputation list and an anonymous IP list for blocking bots and traffic that originate from VPNs, proxies, and Tor networks.
  • New API (wafv2) – allows you to configure all of your AWS WAF resources using a single set of APIs instead of two (waf and waf-regional).
  • Simplified service limits – gives you more rules per web ACL and lets you define longer regex patterns. Limits per condition have been eliminated and replaced with web ACL capacity units (WCU).
  • Document-based rule writing – allows you to write and express rules in JSON format directly to your web ACLs. You’re no longer required to use individual APIs to create different conditions and associate them to a rule, greatly simplifying your code and making it more maintainable.
  • Rule nesting and full logical operation support – lets you write rules that contain multiple conditions, including OR statements. You can also nest logical operations, creating statements such as [A AND NOT(B OR C)]. (See the JSON sketch after this list.)
  • Variable CIDR range support for IP set – gives you more flexibility in defining the IP ranges you want to block. The new AWS WAF supports IPv4 ranges from /1 to /32 and IPv6 ranges from /1 to /128.
  • Chainable text transformation – allows you to perform multiple text transformations before executing a rule against incoming traffic.
  • Revamped console experience – features a visual rule builder and more intuitive design.
  • AWS CloudFormation support for all condition types – rules written in JSON can easily be converted into YAML format for use in AWS CloudFormation templates.
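To give a feel for document-based rule writing and nesting, the following is a hypothetical rule in the new AWS WAF JSON format. It blocks requests whose URI path starts with /admin unless they originate from an allowed country or a trusted IP set; the rule name, country code, and IP set ARN are placeholder examples, not output of the migration wizard.

{
  "Name": "DemoNestedLogicRule",
  "Priority": 0,
  "Statement": {
    "AndStatement": {
      "Statements": [
        {
          "ByteMatchStatement": {
            "SearchString": "/admin",
            "FieldToMatch": { "UriPath": {} },
            "TextTransformations": [ { "Priority": 0, "Type": "LOWERCASE" } ],
            "PositionalConstraint": "STARTS_WITH"
          }
        },
        {
          "NotStatement": {
            "Statement": {
              "OrStatement": {
                "Statements": [
                  { "GeoMatchStatement": { "CountryCodes": [ "US" ] } },
                  { "IPSetReferenceStatement": { "ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/ipset/trusted-ips/EXAMPLE-ID" } }
                ]
              }
            }
          }
        }
      ]
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "DemoNestedLogicRule"
  }
}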

Although there were many changes introduced, the concepts and terminology that you’re already familiar with have stayed the same. The previous APIs have been renamed to AWS WAF Classic. It’s important to stress that resources created under AWS WAF Classic aren’t compatible with the new AWS WAF.

About web ACL capacity units

Web ACL capacity units (WCUs) are a new concept that we introduced to AWS WAF in November 2019. WCU is a measurement that’s used to calculate and control the operating resources that are needed to run the rules associated with your web ACLs. WCU helps you visualize and plan how many rules you can add to a web ACL. The number of WCUs used by a web ACL depends on which rule statements you add. The maximum WCUs for each web ACL is 1,500, which is sufficient for most use cases. We recommend that you take some time to review how WCUs work and understand how each type of rule statement consumes WCUs before continuing with your migration.

Planning your migration to the new AWS WAF

We recently announced a new API and a wizard that will help you migrate from AWS WAF Classic to the new AWS WAF. At a high level, the wizard parses the web ACL under AWS WAF Classic and generates a CloudFormation template that, once deployed, creates an equivalent web ACL under the new AWS WAF. In this section, we explain how you can use the wizard to plan your migration.

Things to know before you get started

The migration wizard will first examine your existing web ACL. It will examine and record for conversion any rules associated to the web ACL, as well as any IP sets, regex pattern sets, string match filters, and account-owned rule groups. Executing the wizard will not delete or modify your existing web ACL configuration, or any resource associated with that web ACL. Once finished, it will generate an AWS CloudFormation template within your S3 bucket that represents an equivalent web ACL with all rules, sets, filters, and groups converted for use in the new AWS WAF. You’ll need to manually deploy the template in order to recreate the web ACL in the new AWS WAF.

Please note the following limitations:

  • Only the AWS WAF Classic resources that are under the same account will be migrated over.
  • If you migrate multiple web ACLs that reference shared resources (such as IP sets or regex pattern sets), they will be duplicated under the new AWS WAF.
  • Conditions associated with rate-based rules won’t be carried over. Once migration is complete, you can manually recreate the rules and the conditions.
  • Managed rules from AWS Marketplace won’t be carried over. Some sellers have equivalent managed rules that you can subscribe to in the new AWS WAF.
  • The web ACL associations won’t be carried over. This was done on purpose so that migration doesn’t affect your production environment. Once you verify that everything has been migrated over correctly, you can re-associate the web ACLs to your resources.
  • Logging for the web ACLs will be disabled by default. You can re-enable the logging once you are ready to switch over.
  • Any CloudWatch alarms that you may have will not be carried over. You will need to set up the alarms again once the web ACL has been recreated.

While you can use the migration wizard to migrate AWS WAF Security Automations, we don’t recommend doing so because it won’t convert any Lambda functions that are used behind the scenes. Instead, redeploy your automations using the new solution (version 3.0 and higher), which has been updated to be compatible with the new AWS WAF.

About AWS Firewall Manager and migration wizard

The current version of the migration API and wizard doesn’t migrate rule groups managed by AWS Firewall Manager. When you use the wizard on web ACLs managed by Firewall Manager, the associated rule groups won’t be carried over. Instead of using the wizard to migrate web ACLs managed by Firewall Manager, you’ll want to recreate the rule groups in the new AWS WAF and replace the existing policy with a new policy.

Note: In the past, the rule group was a concept that existed under Firewall Manager. However, with the latest change, we have moved the rule group under AWS WAF. The functionality remains the same.

Migrate to the new AWS WAF

Use the new migration wizard, which generates an AWS CloudFormation template, to migrate your web ACLs from AWS WAF Classic to the new AWS WAF. Deploying the template creates the new version of the AWS WAF rules and corresponding entities.

  1. From the new AWS WAF console, navigate to AWS WAF Classic by choosing Switch to AWS WAF Classic. There will be a message box at the top of the window. Select the migration wizard link in the message box to start the migration process.
     
    Figure 1: Start the migration wizard

  2. Select the web ACL you want to migrate.
     
    Figure 2: Select a web ACL to migrate

  3. Specify a new S3 bucket for the migration wizard to store the AWS CloudFormation template that it generates. The S3 bucket name needs to start with the prefix aws-waf-migration-. For example, name it aws-waf-migration-helloworld. Store the template in the region you will be deploying it to. For example, if you have a web ACL that is in us-west-2, you would create the S3 bucket in us-west-2, and deploy the stack to us-west-2.

    Select Auto apply the bucket policy required for migration to have the wizard configure the permissions the API needs to access your S3 bucket.

    Choose how you want any rules that can’t be migrated to be handled. Select either Exclude rules that can’t be migrated or Stop the migration process if a rule can’t be migrated. The ability of the wizard to migrate rules is affected by the WCU limit mentioned earlier.
     

    Figure 3: Configure migration options

    Note: If you prefer, you can manually apply the following bucket policy to your S3 bucket to configure the necessary permissions before you start the migration. If you do this, select Use the bucket policy that comes with the S3 bucket. Don't forget to replace <BUCKET_NAME> and <CUSTOMER_ACCOUNT_ID> with your information.

    For all AWS Regions (waf-regional):

    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "apiv2migration.waf-regional.amazonaws.com"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<BUCKET_NAME>/AWSWAF/<CUSTOMER_ACCOUNT_ID>/*"
            }
        ]
    }
    

    For Amazon CloudFront (waf):

    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "apiv2migration.waf.amazonaws.com"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<BUCKET_NAME>/AWSWAF/<CUSTOMER_ACCOUNT_ID>/*"
            }
        ]
    }
    

  4. Verify the configuration, then choose Start creating CloudFormation template to begin the migration. Creating the AWS CloudFormation template will take about a minute depending on the complexity of your web ACL.
     
    Figure 4: Create CloudFormation template

  5. Once completed, you have an option to review the generated template file and make modifications (for example, you can add more rules) should you wish to do so. To continue, choose Create CloudFormation stack.
     
    Figure 5: Review template

  6. Use the AWS CloudFormation console to deploy the new template created by the migration wizard. Under Prepare template, select Template is ready. Select Amazon S3 URL as the Template source. Before you deploy, we recommend that you download and review the template to ensure the resources have been migrated as expected.
     
    Figure 6: Create stack

  7. Choose Next and step through the wizard to deploy the stack. Once created successfully, you can review the new web ACL and associate it to resources.
     
    Figure 7: Complete the migration

Post-migration considerations

After you verify that migration has been completed correctly, you might want to revisit your configuration to take advantage of some of the new AWS WAF features.

For instance, consider adding AWS Managed Rules to your web ACLs to improve the security of your application. AWS Managed Rules feature three different types of rule groups:

  • Baseline
  • Use-case specific
  • IP reputation list

The baseline rule groups provide general protection against a variety of common threats, such as stopping known bad inputs from making their way into your application and preventing admin page access. The use-case specific rule groups provide incremental protection for many different use cases and environments, and the IP reputation lists provide threat intelligence based on the client's source IP.
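For example, referencing the baseline Common Rule Set in a web ACL is expressed in JSON roughly as follows; the priority and metric name here are illustrative.

{
  "Name": "AWS-AWSManagedRulesCommonRuleSet",
  "Priority": 0,
  "Statement": {
    "ManagedRuleGroupStatement": {
      "VendorName": "AWS",
      "Name": "AWSManagedRulesCommonRuleSet"
    }
  },
  "OverrideAction": { "None": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "AWSManagedRulesCommonRuleSetMetric"
  }
}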

You should also consider revisiting some of the old rules and optimizing them by rewriting the rules or removing any outdated rules. For example, if you deployed the AWS CloudFormation template from our OWASP Top 10 Web Application Vulnerabilities whitepaper to create rules, you should consider replacing them with AWS Managed Rules. While the concepts found within the whitepaper are still applicable and might assist you in writing your own rules, the rules created by the template have been superseded by AWS Managed Rules.

For information about writing your own rules in the new AWS WAF using JSON, please watch the Protecting Your Web Application Using AWS Managed Rules for AWS WAF webinar. In addition, you can refer to the sample JSON/YAML to help you get started.

Revisit your CloudWatch metrics after the migration and set up alarms as necessary. The alarms aren’t carried over by the migration API and your metric names might have changed as well. You’ll also need to re-enable logging configuration and any field redaction you might have had in the previous web ACL.

Finally, use this time to work with your application team and check your security posture. Find out what fields are parsed frequently by the application and add rules to sanitize the input accordingly. Check for edge cases and add rules to catch these cases if the application’s business logic fails to process them. You should also coordinate with your application team when you make the switch, as there might be a brief disruption when you change the resource association to the new web ACL.

Conclusion

We hope that you found this post helpful in planning your migration from AWS WAF Classic to the new AWS WAF. We encourage you to explore the new AWS WAF experience as it features new enhancements, such as AWS Managed Rules, and offers much more flexibility in creating your own rules.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Umesh Kumar Ramesh

Umesh is a Sr. Cloud Infrastructure Architect with Amazon Web Services. He delivers proof-of-concept projects and topical workshops, and leads implementation projects for AWS customers. He holds a Bachelor's degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, Umesh enjoys watching documentaries, biking, and practicing meditation.

Author

Kevin Lee

Kevin is a Sr. Product Manager at Amazon Web Services, currently overseeing AWS WAF.

Building well-architected serverless applications: Controlling serverless API access – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-controlling-serverless-api-access-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Security question SEC1: How do you control access to your serverless API?

This post continues part 1 of this security question. Previously, I cover the different mechanisms for authentication and authorization available for Amazon API Gateway and AWS AppSync. I explain the different approaches for public or private endpoints and show how to use AWS Identity and Access Management (IAM) to control access to internal or private API consumers.

Required practice: Use appropriate endpoint type and mechanisms to secure access to your API

I continue to show how to implement security mechanisms appropriate for your API endpoint.

Using AWS Amplify CLI to add a GraphQL API

After adding authentication in part 1, I use the AWS Amplify CLI to add a GraphQL AWS AppSync API with the following command:

amplify add api

When prompted, I specify an Amazon Cognito user pool for authorization.

Amplify add Amazon Cognito user pool for authorization

To deploy the AWS AppSync API configuration to the AWS Cloud, I enter:

amplify push

Once the deployment is complete, I view the GraphQL API from within the AWS AppSync console and navigate to Settings. I see that the AWS AppSync API uses the authorization configuration added in part 1 with amplify add auth. This uses the Amazon Cognito user pool to store the user sign-up information.

View AWS AppSync authorization settings with Amazon Cognito

For a more detailed walkthrough using Amplify CLI to add an AWS AppSync API for the serverless airline, see the build video.

Viewing JWT tokens

When I create a new account from the serverless airline web frontend, Amazon Cognito creates a user within the user pool. It handles the 3-stage sign-up process for new users. This includes account creation, confirmation, and user sign-in.

Serverless airline Amazon Cognito based sign-in process

Once the account is created, I browse to the Amazon Cognito console and choose Manage User Pools. I navigate to Users and groups under General settings and view my user account.

View User Account

When I sign in to the serverless airline web app, I authenticate with Amazon Cognito, and the client receives user pool tokens. The client then calls the AWS AppSync API, which authorizes access using the tokens, connects to data sources, and resolves the queries.

Amazon Cognito tokens used by AWS AppSync

During the sign-in process, I can use the browser developer tools to view the three JWT tokens Amazon Cognito generates and returns to the client. These are the accessToken, idToken, and refreshToken.

View tokens with browser developer tools

I copy the .idToken value and use the decoder at https://jwt.io/ to view the contents.

JSON web token decoded

The decoded token contains claims about my identity. Claims are pieces of information asserted about my identity. In this example, these include my Amazon Cognito username, email address, and other sign-up fields specified in the user pool. The client can use this identity information inside the application.
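For reference, a decoded ID token payload looks roughly like the following abbreviated example. The values shown are placeholders, and the exact set of claims depends on the user pool configuration.

{
  "sub": "11111111-2222-3333-4444-555555555555",
  "aud": "example-app-client-id",
  "token_use": "id",
  "cognito:username": "example-user",
  "email": "user@example.com",
  "iss": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE",
  "auth_time": 1590000000,
  "iat": 1590000000,
  "exp": 1590003600
}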

The ID token expires one hour after I authenticate. The client uses the Amazon Cognito issued refreshToken to retrieve new ID and access tokens. By default, the refresh token expires after 30 days, but can be set to any value between 1 and 3650 days. When using the mobile SDKs for iOS and Android, retrieving new ID and access tokens is done automatically with a valid refresh token.

For more information, see “Using Tokens with User Pools”.

Accessing AWS services

An Amazon Cognito user pool is a managed user directory to provide access for a user to an application. Amazon Cognito has a feature called identity pools (federated identities), which allow you to create unique identities for your users. These can be from user pools, or other external identity providers.

These unique identities are used to get temporary AWS credentials to directly access other AWS services, or external services via API Gateway. The Amplify client libraries automatically expire, rotate, and refresh the temporary credentials.

Identity pools have identities that are either authenticated or unauthenticated. Unauthenticated identities typically belong to guest users. Authenticated identities belong to authenticated users who have received a token by a login provider, such as a user pool. The Amazon Cognito issued user pool tokens are exchanged for AWS access credentials from an identity pool.
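This exchange can also be scripted with the AWS SDK. The following is a minimal boto3 sketch; the Region, identity pool ID, user pool ID, and ID token are placeholders.

import boto3

REGION = "us-east-1"                            # placeholder Region
IDENTITY_POOL_ID = "us-east-1:aaaa-bbbb-cccc"   # placeholder identity pool ID
USER_POOL_ID = "us-east-1_EXAMPLE"              # placeholder user pool ID
id_token = "<idToken issued by the user pool at sign-in>"

cognito_identity = boto3.client("cognito-identity", region_name=REGION)
logins = {f"cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}": id_token}

# Map the user pool identity to an identity pool identity...
identity_id = cognito_identity.get_id(
    IdentityPoolId=IDENTITY_POOL_ID, Logins=logins
)["IdentityId"]

# ...then exchange the JWT for temporary AWS credentials.
creds = cognito_identity.get_credentials_for_identity(
    IdentityId=identity_id, Logins=logins
)["Credentials"]

print(creds["AccessKeyId"], creds["Expiration"])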

JWT tokens from an Amazon Cognito user pool exchanged for AWS credentials from an Amazon Cognito identity pool

API keys

For public content and unauthenticated access, both Amazon API Gateway and AWS AppSync provide API keys that can be used to track usage. API keys should not be used as a primary authorization method for production applications. Instead, use these for rate limiting and throttling. Unauthenticated APIs require stricter throttling than authenticated APIs.

API Gateway usage plans specify who can access API stages and methods, and also how much and how fast they can access them. API keys are then associated with the usage plans to identify API clients and meter access for each key. Throttling and quota limits are enforced on individual keys.

Throttling limits determine how many requests per second are allowed for a usage plan. This is useful to prevent a client from overwhelming a downstream resource. There are two API Gateway values to control this, the throttle rate and throttle burst, which use the token bucket algorithm. The algorithm is based on an analogy of filling and emptying a bucket of tokens representing the number of available requests that can be processed. The bucket in the algorithm has a fixed size based on the throttle burst and is filled at the token rate. Each API request removes a token from the bucket. The throttle rate then determines how many requests are allowed per second. The throttle burst determines how many concurrent requests are allowed and is shared across all APIs per Region in an account.
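The following short Python sketch is not API Gateway's implementation, just an illustration of the token bucket logic, using the same rate of one request per second and burst of two that I configure later in this example.

import time

class TokenBucket:
    """Illustrative token bucket: capacity = burst limit, refill rate = throttle rate."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens added per second (throttle rate)
        self.capacity = burst       # maximum tokens (throttle burst)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1        # each request consumes one token
            return True
        return False                # bucket empty: the request is throttled (HTTP 429)

bucket = TokenBucket(rate=1, burst=2)
print([bucket.allow() for _ in range(4)])   # for a quick burst: [True, True, False, False]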

Token bucket algorithm

Quota limits allow you to set a maximum number of requests for an API key within a fixed time period. When billing for usage, this also allows you to enforce a limit when a client pays by monthly volume.

API keys are passed using the x-api-key header. API Gateway rejects requests without them.

For example, within the serverless airline, the loyalty service uses an AWS Lambda function to fetch loyalty points and next tier progress via an API Gateway REST API /loyalty/{customerId}/get resource.

I can use this API to simulate the effect of usage plans with API keys.

  1. I navigate to the airline-loyalty API /loyalty/{customerId}/get resource in API Gateway console.
  2. I change the API Key Required value to be true.
  3. Setting API Key Required on API Gateway method

  4. I choose Deploy API from the Actions menu.
  5. I create a usage plan in the Usage Plans section of the API Gateway Console.
  6. I choose Create and enter a name for the usage plan.
  7. I select Enable throttling and set the rate to one request per second and the burst to two requests. These are artificially low numbers to simulate the effect.
  8. I select Enable quota and set the limit to 10 requests per day.
  9. Create API Gateway usage plan

  10. I click Next.
  11. I associate an API Stage by choosing Add API Stage, and selecting the airline Loyalty API and Prod Stage.
  12. Associate usage plan to API Gateway stage

  13. I click Next, and choose Create API Key and add to Usage Plan
  14. Create API key and add to usage plan.

  15. I name the API Key and ensure it is set to Auto Generate.
  16. Name API Key

  17. I choose Save then Done to associate the API key with the usage plan.
API key associated with usage plan

I test the API authentication, in addition to the throttles and limits using Postman.

I issue a GET request against the API Gateway URL using a customerId from the airline Airline-LoyaltyData Amazon DynamoDB table. I don’t specify any authorization or API key.

Postman unauthenticated GET request

I receive a Missing Authentication Token reply, which I expect as the API uses IAM authentication and I haven’t authenticated.

I then configure authentication details within the Authorization tab, using an AWS Signature. I enter my AWS user account’s AccessKey and SecretKey, which has an associated IAM identity policy to access the API.

Postman authenticated GET request without access key

I receive a Forbidden reply. I have successfully authenticated, but the API Gateway method rejects the request as it requires an API key, which I have not provided.

I retrieve and copy my previously created API key from the API Gateway console API Keys section, and display it by choosing Show.

Retrieve API key.

I then configure an x-api-key header in the Postman Headers section and paste the API key value.

Having authenticated and specifying the required API key, I receive a response from the API with the loyalty points value.

Postman successful authenticated GET request with access key

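The same test can be scripted outside of Postman. Here is a rough Python sketch that signs the request with SigV4 for IAM authorization and adds the API key header; the URL, Region, and customer ID are placeholders, and it assumes botocore and requests are installed with AWS credentials configured in the environment.

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

url = "https://abc123.execute-api.us-west-2.amazonaws.com/Prod/loyalty/example-customer-id/get"  # placeholder
region = "us-west-2"            # placeholder
api_key = "<your API key>"      # from the API Gateway console

# Sign the request with SigV4 for the execute-api service (IAM authorization)...
credentials = boto3.Session().get_credentials()
request = AWSRequest(method="GET", url=url, headers={"x-api-key": api_key})
SigV4Auth(credentials, "execute-api", region).add_auth(request)

# ...then send it; both the signature and the API key are required here.
response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)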

I then call the API with a number of quick successive requests.

When I exceed the throttle rate limit of one request per second, and the throttle burst limit of two requests, I receive:

{"message": "Too Many Requests"}

When I then exceed the quota of 10 requests per day, I receive:

{"message": "Limit Exceeded"}

I view the API key usage within the API Gateway console Usage Plan section.

I select the usage plan, choose the API Keys section, then choose Usage. I see how many requests I have made.

View API key usage

If necessary, I can also grant a temporary rate extension for this key.

For more information on using API Keys for unauthenticated access for AWS AppSync, see the documentation.

API Gateway also has support for AWS Web Application Firewall (AWS WAF), which helps protect web applications and APIs from attacks. It is another mechanism to apply rate-based rules to prevent public API consumers from exceeding a configurable request threshold. AWS WAF rules are evaluated before other access control features, such as resource policies, IAM policies, Lambda authorizers, and Amazon Cognito authorizers. For more information, see "Using AWS WAF with Amazon API Gateway".

AWS AppSync APIs have built-in DDoS protection to protect all GraphQL API endpoints from attacks.

Improvement plan summary:

  1. Determine your API consumer and choose an API endpoint type.
  2. Implement security mechanisms appropriate to your API endpoint.

Conclusion

Controlling serverless application API access using authentication and authorization mechanisms can help protect against unauthorized access and prevent unnecessary use of resources.

In this post, I cover using Amplify CLI to add a GraphQL API with an Amazon Cognito user pool handling authentication. I explain how to view JSON Web Token (JWT) claims, and how to use identity pools to grant temporary access to AWS services. I also show how to use API keys and API Gateway usage plans for rate limiting and throttling requests.

This well-architected question will be continued in an upcoming post, where I look at segregating authenticated users into logical groups. I will first show how to use Amazon Cognito user pool groups to separate users with an Amazon Cognito authorizer to control access to an API Gateway method. I will also show how to pass JWTs to a Lambda function to perform authorization within a function. I will then explain how to segregate users by using custom scopes defined through an Amazon Cognito resource server.

Deploy a dashboard for AWS WAF with minimal effort

Post Syndicated from Tomasz Stachlewski original https://aws.amazon.com/blogs/security/deploy-dashboard-for-aws-waf-minimal-effort/

In this post, I'll show you how to deploy a solution in your Amazon Web Services (AWS) account that will provide a fully automated dashboard for the AWS Web Application Firewall (AWS WAF) service. The solution uses logs generated and collected by AWS WAF, and displays them in the user-friendly dashboard shown in Figure 1.

Figure 1: User-friendly dashboard for AWS Web Application Firewall

The dashboard provides multiple out-of-the-box graphs that you can reference, filter, and adjust. The example in Figure 1 shows data from a sample web page that I created, where you can see:

  • Executed AWS WAF rules
  • Number of all requests
  • Number of blocked requests
  • Allowed versus blocked requests
  • Countries by number of requests
  • HTTP methods
  • HTTP versions
  • Unique IP count
  • Request count
  • Top 10 IP addresses
  • Top 10 countries
  • Top 10 user-agents
  • Top 10 hosts
  • Top 10 web ACLs

The dashboard is created using Kibana, which provides flexibility by enabling you to add new diagrams and visualizations.

AWS WAF is a web application firewall. It helps protect your web applications or APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. In just a few steps, you can deploy AWS WAF to your Application Load Balancer, Amazon CloudFront distribution, or Amazon API Gateway stages. I’ll show you how you can use it to get more insights into what’s happening at the AWS WAF layer. AWS WAF provides two versions of the service: AWS WAF (version 2) and AWS WAF classic. We recommend using version 2 of AWS WAF to stay up to date with the latest features as AWS WAF classic is no longer being updated. The solution that I describe in this blog post works with both AWS WAF versions.

The solution is swift to deploy: the dashboard can be ready to use in less than an hour. The solution is built with multiple AWS services such as Amazon Elasticsearch Service (Amazon ES), AWS Lambda, Amazon Kinesis Data Firehose, Amazon Cognito, Amazon EventBridge, and more. However, you don't need to know those services in detail to build and use the dashboard. I prepared a CloudFormation template that you can deploy in the AWS Console to set up the whole solution automatically in your AWS account. You can also find the whole solution in our AWS GitHub repository. It's open source, so you can use and edit it to meet your needs.

The architecture of the solution can be broken down into 7 steps, which are outlined in Figure 2.

Figure 2: Interaction points while architecting the dashboard

The interaction points are as follows:

  1. One of the features of the AWS WAF service is AWS WAF logs. The logs capture information about blocked and allowed requests. These logs are forwarded to the Kinesis Data Firehose service.
  2. The Kinesis Data Firehose buffer receives the information and then sends it to Amazon ES, the core of the solution.
  3. Some information, like the names of AWS WAF web ACLs, isn't provided in the AWS WAF logs. To make the whole solution more user friendly, I used EventBridge, which is called whenever a user changes their AWS WAF configuration.
  4. EventBridge calls a Lambda function when new rules are created.
  5. The Lambda function retrieves information about all existing rules and updates the mapping between rule IDs and rule names in the Amazon ES cluster.
  6. To make the whole solution more secure, I use the Amazon Cognito service to store the credentials of authorized dashboard users.
  7. The user enters their credentials to access the dashboard in Kibana, which runs on the Amazon ES cluster.

Now, let’s deploy the solution and see how it works.

Step 1: Deploy solution using CloudFormation template

Click Launch Stack to launch a CloudFormation stack in your account and deploy the solution.

You'll be redirected to the CloudFormation service in the N. Virginia (us-east-1) Region, which is the default region for deploying this solution for an AWS WAF web ACL associated with a CloudFront distribution. You can change the region if you want. This template will spin up multiple cloud resources, including but not limited to:

  • Amazon ES cluster with Kibana for storing data and displaying dashboard
  • Amazon Cognito user pool with a registry of users who have access to dashboards
  • Kinesis Data Firehose for streaming logs to Amazon ES

In the wizard, you’ll be asked to modify or provide four different parameters. They are:

  • DataNodeEBSVolumeSize: Storage size of Amazon ES cluster which will be created. You can leave the default value.
  • ElasticSearchDomainName: Name of your Amazon ES cluster domain. You can leave the default value.
  • NodeType: Type of the instance which will be used to create Amazon ES cluster. You don’t need to change it if you don’t want to, but you can if necessary to accommodate your needs.
  • UserEmail: You must update this parameter. It is the email address that will receive the password to log in to Kibana.

Step 2: Wait

Launching the stack, which I named aws-waf-dashboard for this example, will take 20–30 minutes. You can take a break and wait until the status of the stack changes to CREATE_COMPLETE.

Figure 3: Completed launch of the CloudFormation template

Step 3: Validate that Kibana and dashboards work

Check your email. You should have received an email with the required password to log in to the Kibana dashboard. Make a note of it. Now return to the CloudFormation service and select the aws-waf-dashboard stack. On the Outputs tab, there should be one parameter with a link to your dashboard in the Value column.

Figure 4: Output of the CloudFormation template

Select the link and log in to Kibana. Provide the email address that you set up in Step 1 and the password that was sent to it. You might be prompted to update the password.

In Kibana, select the Dashboard tab, as shown in Figure 5, and then select WAFDashboard in the table. This will call up the AWS WAF dashboard. It should still be empty because it hasn’t been connected with AWS WAF yet.

Figure 5: Empty Kibana dashboard

Step 4: Connect AWS WAF logs

Now it’s time to enable AWS WAF logs on the web ACL for which you want to create a dashboard and connect them to this solution. Open AWS WAF, select the AWS WAF dropdown option, select Web ACLs, and then select your desired web ACL. In this example, I used a previously created web ACL called MyPageWAF, as shown in Figure 6.

Figure 6: WAF & Shield

If you haven't enabled AWS WAF logs yet, you need to do so now in order to continue. To do this, select Logging and metrics in your web ACL, and then Enable logging, as shown in Figure 7.

Figure 7: Enable AWS WAF logs

Select the drop-down list under Amazon Kinesis Data Firehose Delivery Stream and then select the Kinesis Data Firehose delivery stream that was created by the CloudFormation template. Its name starts with aws-waf-logs. Save your changes.

Figure 8: Select the Kinesis Firehose

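If you prefer to enable logging from a script instead of the console, a minimal boto3 sketch for an AWS WAF (wafv2) web ACL looks like the following. Both ARNs are placeholders, and for AWS WAF Classic you would use the waf or waf-regional client instead.

import boto3

web_acl_arn = "arn:aws:wafv2:us-east-1:123456789012:global/webacl/MyPageWAF/EXAMPLE-ID"          # placeholder
firehose_arn = "arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-dashboard"   # placeholder

# CloudFront-scoped web ACLs are managed through the us-east-1 endpoint.
wafv2 = boto3.client("wafv2", region_name="us-east-1")
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": web_acl_arn,
        "LogDestinationConfigs": [firehose_arn],   # the delivery stream name must start with aws-waf-logs-
    }
)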

Step 5: Final validation

Your AWS WAF logs will be sent from the AWS WAF service through Kinesis Data Firehose directly to an Amazon ES cluster and will be available to you using Kibana dashboards. After a couple of minutes, you should start seeing data on your dashboard similar to the screenshot in Figure 1.

And that’s all! As you can see, in just a few steps we built and deployed a solution, which we can use to examine our AWS WAF configuration and see what kind of requests are being made and if they’re blocked or allowed.

Sample Scenario

Let’s go through a sample scenario to see one way you can use this solution. I built a small website for my dog and configured CloudFront to accelerate it and to make it more secure.

Figure 9: Java the Dog's homepage

Next, I configured an AWS WAF web ACL and attached it to my CloudFront distribution, which is the entry point of my website. In my AWS WAF web ACL, I didn’t add any rules, but allowed all requests. This will allow me to log all requests and understand who is visiting my website. Then I configured an AWS WAF dashboard by following the steps in this blog.

My imaginary website is mainly dedicated to three countries—USA, Germany and Japan—where French Bulldogs are very popular. I noticed that I got quite a lot of users from India, which was unexpected. In Figure 10, the AWS WAF dashboard includes data from all four countries and tells me there have been over 11,000 requests for my website.

Figure 10: Kibana dashboard with requests from USA, Japan, Germany, and India

To understand the data better, I filtered on requests coming only from India, which is shown in Figure 11:

Figure 11: Website requests coming from India only

The dashboard shows that I got more than 700 requests from India in the previous hour. This could have been a great success for my website, but unfortunately, all the requests were coming from a single IP address. Additionally, most of them have a suspicious user-agent header: "secret-hacker-agent." This information is provided in the Visualize tab in Kibana, shown in Figure 12.

Figure 12: Visualize tab of Kibana dashboard

This doesn’t look good, so I decided to block those requests using AWS WAF.

So, the question now is what to block exactly? I can block all requests coming from India, but this isn't the best idea because there might be other Indian fans of French Bulldogs. I can block this single IP address, but the hacker might use a different IP to continue hitting my website. Finally, I decided to create an AWS WAF rule that inspects the user-agent header. If the user-agent header contains "secret-hacker-agent," the rule blocks the request.
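Expressed in AWS WAF (wafv2) JSON, a rule along these lines looks roughly like the following; the rule name, priority, and metric name are illustrative.

{
  "Name": "BlockSecretHackerAgent",
  "Priority": 1,
  "Statement": {
    "ByteMatchStatement": {
      "SearchString": "secret-hacker-agent",
      "FieldToMatch": { "SingleHeader": { "Name": "user-agent" } },
      "TextTransformations": [ { "Priority": 0, "Type": "LOWERCASE" } ],
      "PositionalConstraint": "CONTAINS"
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "BlockSecretHackerAgent"
  }
}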

Within a couple of minutes of configuring my AWS WAF rule, I noticed that I was still getting requests from India, but this time, requests with the suspicious user-agent header were blocked! As shown in Figure 13, there were around 2,700 requests, but about 2,000 of them were blocked.

Figure 13: Blocked suspicious requests

In reality, I was attacking my own website as secret-hacker-agent for the sake of the example. You can see in the following command line screenshot that my request (using wget) with the suspicious user-agent header was blocked (receiving a “403 Forbidden” message). When I use a different header (“good-agent”), my request passes the AWS WAF rule successfully.

Figure 14: Command line screenshot of the 'wget' request

Summary

In this post we’ve detailed how to deploy a dashboard for AWS WAF in a few steps, and how to use it to troubleshoot and block a web application attack. Now it’s your turn to deploy this solution for your own application. Please share your feedback about the solution and the dashboard. You can submit comments in the Comments section below or on the project’s GitHub page.

This post was inspired by a blog post created by my friend Tom Adamski, who also described how to use Kibana and Amazon ES to visualize AWS WAF logs, and with help of Achraf Souk, who contributed his specialist knowledge in AWS edge services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tomasz Stachlewski

Tomasz is a Senior Solution Architecture Manager at AWS, where he helps companies of all sizes (from startups to enterprises) in their Cloud journey. He is a big believer in innovative technology such as serverless architecture, which allows organizations to accelerate their digital transformation.

AWS Shield Threat Landscape report is now available

Post Syndicated from Mario Pinho original https://aws.amazon.com/blogs/security/aws-shield-threat-landscape-report-now-available/

AWS Shield is a managed threat protection service that safeguards applications running on AWS against exploitation of application vulnerabilities, bad bots, and Distributed Denial of Service (DDoS) attacks. The AWS Shield Threat Landscape Report (TLR) provides you with a summary of threats detected by AWS Shield. This report is curated by the AWS Threat Response Team (TRT), who continually monitors and assesses the threat landscape to build protections on behalf of AWS customers. This includes rules and mitigations for services like AWS Managed Rules for AWS WAF and AWS Shield Advanced. You can use this information to expand your knowledge of external threats and improve the security of your applications running on AWS.

Here are some of our findings from the most recent report, which covers Q1 2020:

Volumetric Threat Analysis

AWS Shield detects network and web application-layer volumetric events that may indicate a DDoS attack, web content scraping, account takeover bots, or other unauthorized, non-human traffic. In Q1 2020, we observed significant increases in the frequency and volume of network volumetric threats, including a CLDAP reflection attack with a peak volume of 2.3 Tbps.

You can find a summary of the volumetric events detected in Q1 2020, compared to the same quarter in 2019, in the following table:

Metric                      | Same Quarter, Prior Year (Q1 2019) | Most Recent Quarter (Q1 2020) | Change
Total number of events      | 253,231                            | 310,954                       | +23%
Largest bit rate (Tbps)     | 0.8                                | 2.3                           | +188%
Largest packet rate (Mpps)  | 260.1                              | 293.1                         | +13%
Largest request rate (rps)  | 1,000,414                          | 694,201                       | -31%
Days of elevated threat*    | 1                                  | 3                             | +200%

* Days of elevated threat indicates the number of days during which the volume or frequency of events was unusually high.

Malware Threat Analysis

AWS operates a threat intelligence platform that monitors Internet traffic and evaluates potentially suspicious interactions. We observed significant increases in both the total number of events and the number of unique suspects, relative to the prior quarter. The most common interactions observed in Q1 2020 were Remote Code Execution (RCE) attempts on Apache Hadoop YARN applications, where the suspect attempts to exploit the API of a Hadoop cluster's resource management system and execute code without authorization. In March 2020, these interactions accounted for 31% of all events detected by the threat intelligence platform.

You can find a summary of the events detected by the threat intelligence platform in Q1 2020, compared to the prior quarter, in the following table:

Metric                            | Prior Quarter (Q4 2019) | Most Recent Quarter (Q1 2020) | Change
Total number of events (billion)  | 0.7                     | 1.1                           | +57%
Unique suspects (million)         | 1.2                     | 1.6                           | +33%

For more information about the threats detected by AWS Shield in Q1 2020 and steps that you can take to protect your applications running on AWS, download the AWS Shield Threat Landscape Report.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Shield forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mario Pinho

Mario Pinho is a Security Engineer at AWS. He has a background in network engineering and consulting, and feels at his best when breaking apart complex topics and processes into their simpler components. In his free time he pretends to be an artist by playing piano and doing landscape photography.

Enable automatic logging of web ACLs by using AWS Config

Post Syndicated from Mike George original https://aws.amazon.com/blogs/security/enable-automatic-logging-of-web-acls-by-using-aws-config/

In this blog post, I will show you how to use AWS Config, with its auto-remediation functionality, to ensure that all web ACLs have logging enabled. The AWS CloudFormation template included in this blog post will facilitate this solution and will get you started managing web ACL logging at scale.

AWS Firewall Manager can automatically deploy an AWS Web Application Firewall (WAF) rule to protect your applications when your organization creates new Application Load Balancers, API Gateways, and CloudFront distributions. However, you still have to enable logging for web ACLs on an individual basis. Information contained in web ACL logs includes the time that AWS WAF received the request for your AWS resource, detailed information about the request, and the action for the rule that each request matched. This data can be extremely important for compliance and auditing needs, debugging, or forensic research.

Web ACL logging is a best practice, and is a business requirement within many organizations. Rather than leaving logging as a manual step in a deployment process, I will show you how to use automated mechanisms to enable logging, so that your business can meet its security and compliance requirements.

Prerequisites

The solution in this blog post assumes that you are already using AWS Web Application Firewall (AWS WAF) and AWS Firewall Manager to manage your firewall rules at scale. In addition to those services, the solution uses AWS Config, AWS Systems Manager, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, AWS CloudFormation, and AWS IAM.

Using AWS Config to ensure automatic logging

AWS Config is a service that enables you to evaluate the configurations of the AWS resources in your account. AWS Config continuously monitors and records resource configuration changes. AWS Config can alert you and perform actions when resources get added, removed, or change state. AWS Config has a set of built-in rules that it can evaluate your AWS resources against, or you can build your own AWS Config rules.

In fact, when you enable AWS Firewall Manager to automatically apply AWS WAF rules to your Application Load Balancers, API Gateways, or CloudFront distributions, AWS Firewall Manager creates AWS Config rules behind the scenes. These AWS Config rules are designed so that the correct web ACLs are automatically applied whenever new Application Load Balancers, API Gateways, or CloudFront distributions are created. Enterprises use AWS Config rules to ensure consistent compliance with their internal organizational policies. You can use AWS Config to ensure that your AWS WAF rules have logging enabled.

When creating custom AWS Config rules, you associate each custom rule with an AWS Lambda function, which contains the logic that evaluates whether your AWS resource complies with the rule. You can configure the custom AWS Config rule to invoke the Lambda function in response to a configuration change, or to run periodically. After the Lambda function executes, it evaluates whether your resource complies with your rule, and it then sends the results back to AWS Config. If the resource violates the conditions of the rule, then AWS Config flags the resource as noncompliant. For more information, see How AWS Config Works in the AWS Developer Documentation.

You can also perform auto-remediation on non-compliant resources by using the built-in remediation functionality in AWS Config. When AWS Config detects a noncompliant resource, it can invoke an automation function that is defined as a Systems Manager Automation document. Systems Manager has a number of pre-built Automation documents that can do things such as create an Amazon Machine Image (AMI), create a Jira issue, and create a ServiceNow incident. For the full list of built-in Automation documents, see Systems Manager Automation Document Details Reference.

You can also create your own Automation documents to support business cases not covered by the built-in Systems Manager Automation documents. Systems Manager Automation documents can run scripts, call AWS API functions, call custom Lambda functions, or execute a CloudFormation stack, and more.

Overview of the solution

The following is a high-level overview diagram of the solution described in this post, when an AWS WAF web ACL has a configuration change:
 

Figure 1: High-level solution overview

When an AWS WAF web ACL has a configuration change, the following steps will occur:

  1. The creation of the AWS WAF web ACL generates a ConfigurationItemChangeNotification, which is sent to AWS Config (step 1).
  2. AWS Config in turn sends the notification on to an AWS Lambda function (step 2), which determines if the web ACL in question is “compliant”. In this case, compliant means that the web ACL has logging configured.
  3. Lambda queries the web ACL (step 3) to determine if logging is enabled.
  4. The Lambda query results are then reported back to AWS Config (step 4).
  5. If logging is not enabled, the web ACL is seen as noncompliant and AWS Config kicks off an auto-remediation step (step 5) by executing a Systems Manager Automation document.
  6. The Automation document calls a Lambda function (step 6).
  7. The Lambda function attempts to enable logging on the web ACL (step 7).
  8. If logging is successfully enabled, then the web ACL automatically sends logs through a Kinesis Data Firehose delivery stream (step 8).
  9. The Kinesis Data Firehose delivery stream stores the data in an S3 bucket (step 9).
  10. After the Lambda function has completed enabling logging functionality, it reports back to Systems Manager (step 10).
  11. Systems Manager reports back to AWS Config (step 11).
  12. At this point, the web ACL compliance status still hasn’t been updated. AWS Config still believes the web ACL is noncompliant, so AWS Config calls the Lambda function (step 2) to determine if the compliance status has changed.
  13. Lambda checks the web ACL again (step 3), determines that it is compliant, and returns the results to AWS Config (step 4).

Because AWS Config stores the compliance history of the web ACL configuration, compliance team members will be able to go into AWS Config and see the history of the web ACL, as shown in the following screenshot. You will be able to see that the configuration state was noncompliant when the web ACL was created, and that it became compliant after logging was enabled.
 

Figure 2: Web ACL compliance history in AWS Config

Using the CloudFormation template

To automatically enable logging on all web ACLs, I created a CloudFormation template for you to use to set up all the necessary components. The CloudFormation template creates the following:

  • An S3 bucket to store the logs.
  • A Kinesis Data Firehose delivery stream.
  • An AWS Config rule.
  • A Systems Manager Automation document.
  • Two Lambda functions. The first Lambda function is used by AWS Config to evaluate whether the web ACL has logging enabled. The second Lambda function is used by the Systems Manager Automation document to automatically enable logging.
  • AWS IAM policies and roles to ensure that everything works correctly.

I designed this CloudFormation template to be executed in an AWS account that already has AWS Firewall Manager enabled; however, it will not prevent you from running it in an AWS account that does not have it enabled. Accounts without AWS Firewall Manager won't benefit from the central configuration and management that AWS Firewall Manager provides. However, this stack will still allow you to ensure that existing or new web ACLs have logging enabled.

To deploy the template

  1. Copy the CloudFormation template file that follows these instructions, and save it to your computer.
  2. Sign in to the AWS account where you want to deploy this stack.
  3. Choose Services, choose CloudFormation, and then choose Stacks.
  4. In the upper right, choose Create stack, and then choose With new resources (standard).
  5. In the Specify template section, choose Upload a template file, and then select Choose file.
  6. Navigate to the file that you saved in step 1. Choose Next.
  7. In the Stack name field, enter a stack name that is meaningful to you. Choose Next, and choose Next again.
  8. Select the checkbox that says I acknowledge that AWS CloudFormation might create IAM resources and choose the Create stack button.

CloudFormation template file


#
# This CloudFormation template enables auto-logging of web ACLs through the use of 
# AWS Config and Systems Manager Automation documents.
#
# This solution creates an S3 bucket, a Kinesis Data Firehose, an AWS Config rule, 
# a Systems Manager Automation document, and two Lambda functions to evaluate and 
# remediate when web ACLs are not configured for logging.
#

Outputs:
  S3BucketName:
    Value: !Ref S3Bucket
  FirehoseName:
    Value: !Ref Firehose

Resources:
  S3Bucket:
    Type: AWS::S3::Bucket

  Firehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamName:
        !Join
          - ''
          - - aws-waf-logs-
            - !Ref AWS::StackName
      ExtendedS3DestinationConfiguration:
        RoleARN: !GetAtt DeliveryRole.Arn
        BucketARN: !GetAtt S3Bucket.Arn
        BufferingHints:
          IntervalInSeconds: 300
          SizeInMBs: 5
        CompressionFormat: UNCOMPRESSED

  DeliveryRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: ''
            Effect: Allow
            Principal:
              Service: firehose.amazonaws.com
            Action: 'sts:AssumeRole'
            Condition:
              StringEquals:
                'sts:ExternalId': !Ref 'AWS::AccountId'


  DeliveryPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: 'firehose_delivery_policy'
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - 's3:AbortMultipartUpload'
              - 's3:GetBucketLocation'
              - 's3:GetObject'
              - 's3:ListBucket'
              - 's3:ListBucketMultipartUploads'
              - 's3:PutObject'
            Resource:
              - !GetAtt S3Bucket.Arn
              - !Join
                - ''
                - - !GetAtt S3Bucket.Arn
                  - '*'
      Roles:
        - !Ref DeliveryRole

  AutomationDoc:
    Type: "AWS::SSM::Document"
    Properties:
      Content:
        schemaVersion: "0.3"
        description: "Adds logging to non-compliant WebACLs"
        assumeRole: "{{ AutomationAssumeRole }}"
        parameters:
          WebACLId:
            type: "String"
            description: "(Required) The WebACLId of the WebACL"
          AutomationAssumeRole:
            type: "String"
            description: "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf"
        mainSteps:
          - name: performRemediation
            action: aws:invokeLambdaFunction
            inputs:
              FunctionName: !GetAtt WafLambda.Arn
              Payload: '{"webAclName":"{{ WebACLId }}"}'
      DocumentType: Automation


  WafLambda:
    Type: AWS::Lambda::Function
    Properties:
      # The AmazonSSMAutomationRole role expects the Lambda function name to begin with Automation*
      FunctionName: !Sub Automation-${AWS::StackName}-EnableWafLogging
      Code:
        ZipFile:
          !Sub |
            #CODE GOES HERE
            import boto3
            import json
            import os

            #
            # This Lambda function ensures that all WAF web ACLs have logging enabled.
            #
            # Trigger Type: SSM Automation
            # Scope of Automation: AWS::WAF::WebACL & AWS::WAFRegional::WebACL
            #

            FIREHOSE_ARN = os.environ['FIREHOSE_ARN']
            CONFIG_RULE_NAME = os.environ['CONFIG_RULE_NAME']

            def evaluate_compliance(webAclName):
              hasConfig = False

              #Setting up variables
              client = ''
              response = ''
              wafArn = ''

              #Check if this is a WAFv2. The ResourceId passed in is already the ARN
              if webAclName.find('arn:aws:wafv2:') >= 0:
                wafArn = webAclName
                client = boto3.client('wafv2')
              else:

                isWebAcl = True
                #Test if this is AWS::WAF::WebACL
                try:
                  print('Testing for WAF::WebACL')
                  client = boto3.client('waf')
                  response = client.get_web_acl(WebACLId=webAclName)
                except:
                  isWebAcl = False
                  pass

                if not isWebAcl:
                  #Test if this is AWS::WAFRegional::WebACL
                  try:
                    print('Testing for WAFRegional::WebACL')
                    client = boto3.client('waf-regional')
                    response = client.get_web_acl(WebACLId=webAclName)
                  except:
                    pass

                wafArn = response['WebACL']['WebACLArn']

              try:
                response = client.get_logging_configuration(ResourceArn=wafArn)
                hasConfig = True
              except:
                print('Attempting to fix non-compliance')
                print('WAF ARN: ' + wafArn)
                response = client.put_logging_configuration(LoggingConfiguration={'ResourceArn': wafArn,'LogDestinationConfigs': [ FIREHOSE_ARN ]})

            def regen_compliance():
              try:
                print("Attempting to re-run AWS Config rule to update compliance status")
                client = boto3.client('config')
                response = client.start_config_rules_evaluation(ConfigRuleNames=[CONFIG_RULE_NAME])
              except:
                pass

            def handler(event, context):
              aclName = event['webAclName']
              evaluate_compliance(aclName)

              regen_compliance()

      Handler: "index.handler"
      Environment:
        Variables:
          FIREHOSE_ARN: !GetAtt Firehose.Arn
          CONFIG_RULE_NAME: !Ref ConfigRule
      Runtime: python3.7
      Timeout: 30
      Role: !GetAtt LambdaExecutionRole.Arn

  ConfigRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName:
        !Join
          - ''
          - - Enable-WebACL-Logging-
            - !Ref AWS::StackName
      Description: 'Ensures that all new web ACLs have logging enabled'
      Scope:
        ComplianceResourceTypes:
          - AWS::WAF::WebACL
          - AWS::WAFv2::WebACL
          - AWS::WAFRegional::WebACL
      Source:
        Owner: "CUSTOM_LAMBDA"
        SourceDetails:
        - EventSource: "aws.config"
          MessageType: ConfigurationItemChangeNotification
        - EventSource: "aws.config"
          MessageType: OversizedConfigurationItemChangeNotification
        SourceIdentifier: !GetAtt Lambda.Arn
    DependsOn: PermissionToCallLambda

  WebACLRemediation:
    Type: "AWS::Config::RemediationConfiguration"
    Properties:
      # AutomationAssumeRole, MaximumAutomaticAttempts and RetryAttemptSeconds are Required if Automatic is true
      Automatic: true
      ConfigRuleName: !Ref ConfigRule
      MaximumAutomaticAttempts: 1
      Parameters:
        AutomationAssumeRole:
          StaticValue:
            Values:
              - !GetAtt AutoRemediationIamRole.Arn
        WebACLId:
          ResourceValue:
            Value: RESOURCE_ID
      RetryAttemptSeconds: 60
      TargetId: !Ref AutomationDoc
      TargetType: SSM_DOCUMENT


  AutoRemediationIamRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ssm.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole'
      Policies: []

  PermissionToCallRemediationLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt WafLambda.Arn
      Action: "lambda:InvokeFunction"
      Principal: "ssm.amazonaws.com"

  PermissionToCallLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt Lambda.Arn
      Action: "lambda:InvokeFunction"
      Principal: "config.amazonaws.com"

  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile:
          !Sub |
            import boto3
            import json
            #
            # This Lambda function determines if WAF web ACLs have logging enabled
            #
            # Trigger Type: Config: Change Triggered
            # Scope of Changes: AWS::WAF::WebACL, AWS::WAFv2::WebACL & AWS::WAFRegional::WebACL
            #

            def is_applicable(config_item, event):
              status = config_item['configurationItemStatus']
              event_left_scope = event['eventLeftScope']
              test = ((status in ['OK', 'ResourceDiscovered']) and
                not event_left_scope)
              return test

            def evaluate_compliance(config_item):
              wafArn = config_item['ARN']
              hasConfig = False

              client = ''
              if (config_item['resourceType'] == 'AWS::WAF::WebACL'):
                client = boto3.client('waf')
              elif (config_item['resourceType'] == 'AWS::WAFRegional::WebACL'):
                client = boto3.client('waf-regional')
              elif (config_item['resourceType'] == 'AWS::WAFv2::WebACL'):
                client = boto3.client('wafv2')

              try:
                response = client.get_logging_configuration(ResourceArn=wafArn)
                hasConfig = True
              except Exception:
                # No logging configuration found; the web ACL is noncompliant
                pass

              if not hasConfig:
                return 'NON_COMPLIANT'
              else:
                return 'COMPLIANT'

            def handler(event, context):
              invoking_event = json.loads(event['invokingEvent'])
              compliance_value = 'NOT_APPLICABLE'

              if is_applicable(invoking_event['configurationItem'], event):
                compliance_value = evaluate_compliance(invoking_event['configurationItem'])

              config = boto3.client('config')
              response = config.put_evaluations(
                Evaluations=[
                  {
                  'ComplianceResourceType': invoking_event['configurationItem']['resourceType'],
                  'ComplianceResourceId': invoking_event['configurationItem']['resourceId'],
                  'ComplianceType': compliance_value,
                  'OrderingTimestamp': invoking_event['configurationItem']['configurationItemCaptureTime']
                  },
                ],
                ResultToken=event['resultToken'])
      Handler: "index.handler"
      Runtime: python3.7
      Timeout: 30
      Role: !GetAtt LambdaExecutionRole.Arn

  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: lambda-logging
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - logs:*
            Resource: arn:aws:logs:*:*:*
      - PolicyName: waf-config
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - waf:PutLoggingConfiguration
            - waf:GetLoggingConfiguration
            - waf:GetWebACL
            - wafv2:PutLoggingConfiguration
            - wafv2:GetLoggingConfiguration
            - wafv2:GetWebACL
            - waf-regional:PutLoggingConfiguration
            - waf-regional:GetLoggingConfiguration
            - waf-regional:GetWebACL
            Resource:
            - arn:aws:waf::*:*
            - arn:aws:wafv2:*:*:*/*/*
            - arn:aws:waf-regional:*:*:*
      - PolicyName: config-evaluate
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - config:PutEvaluations
            - config:StartConfigRulesEvaluation
            Resource: '*'
      - PolicyName: allow-lambda-servicelinkedrole
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - iam:CreateServiceLinkedRole
            Resource: arn:aws:iam::*:role/aws-service-role/*

How the CloudFormation template works

To enable logging on a web ACL, AWS WAF requires a Kinesis Data Firehose delivery stream whose name starts with aws-waf-logs-. You typically configure the delivery stream to deliver data to an S3 bucket. This CloudFormation template creates a delivery stream that meets the naming requirement and is configured to deliver data to an S3 bucket. The delivery stream is named aws-waf-logs-StackName, where StackName is the name you provided when you created this CloudFormation stack.
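
If you create the delivery stream outside of CloudFormation, the same naming requirement applies. The following boto3 sketch illustrates it; the stack suffix, role ARN, and bucket ARN are hypothetical placeholders, not values from the template.

import boto3

firehose = boto3.client('firehose')

# AWS WAF only accepts delivery streams whose names start with aws-waf-logs-
firehose.create_delivery_stream(
    DeliveryStreamName='aws-waf-logs-my-stack',  # placeholder suffix
    DeliveryStreamType='DirectPut',
    ExtendedS3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::111122223333:role/firehose-delivery-role',  # placeholder
        'BucketARN': 'arn:aws:s3:::my-waf-logs-bucket',                      # placeholder
    },
)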

The CloudFormation template also creates an AWS Config rule with the name Enable-WebACL-Logging-StackName. This AWS Config rule is configured to monitor resources of type AWS::WAF::WebACL (web ACLs associated with CloudFront distributions), AWS::WAFRegional::WebACL (web ACLs associated with API Gateway APIs or Application Load Balancers), and AWS::WAFv2::WebACL, which is the latest version of the AWS WAF API. When AWS Config detects a change to one of your web ACLs (for example, an AWS WAF rule being added to a web ACL on an Application Load Balancer), the event is sent to a Lambda function for evaluation against your rule.

The Lambda function is where the heavy lifting is performed. When the Lambda function is invoked, control passes to the handler method, which calls the evaluate_compliance method. That method uses the Boto3 Python library to try to retrieve the logging configuration of the web ACL in question: if the call succeeds, logging is enabled; if it fails, logging is not enabled. The Lambda function then reports a status of COMPLIANT (logging is enabled) or NON_COMPLIANT (logging is not enabled) back to AWS Config.

This AWS Config rule is also configured to auto-remediate noncompliant web ACLs. When a noncompliant web ACL is identified, AWS Config runs a Systems Manager Automation document, which calls a second Lambda function to enable logging. This Lambda function is configured with an environment variable called FIREHOSE_ARN, the ARN of the Kinesis Data Firehose delivery stream that is created as part of this CloudFormation stack. If this function cannot retrieve a logging configuration for the web ACL, it creates a new one that uses that delivery stream. The function then attempts to trigger a re-evaluation of the rule in AWS Config.

When you view the details of this rule within the AWS Config console, you’ll see all web ACLs listed under the Resource ID column. The Resource compliance status column will show as Compliant, meaning that these web ACLs comply with your AWS Config rule. Because the AWS Config rule enforces logging on web ACLs, you can be confident that logging is properly enabled.
 

Figure 3: Compliance status of web ACLs in AWS Config

The remaining parts of the CloudFormation template ensure that the system has sufficient permissions to work correctly. The Kinesis Data Firehose delivery stream uses an IAM role with a policy that grants it permission to write to your S3 bucket. The AWS Config rule is granted permission to call the first Lambda function, and Systems Manager is granted permission to call the second Lambda function. Finally, the Lambda functions are assigned an IAM role with permissions to read and modify the logging configurations of the web ACLs, and to update AWS Config with the results of those actions.

The CloudFormation template in this post provides a simple solution for automatically enabling logging on all web ACLs within an AWS Region. If your organization wants additional operational control, you can extend this example template to verify that all web ACLs use the same logging configuration. You could do this by modifying the Lambda functions to check that each web ACL both has a logging configuration and uses the Kinesis Data Firehose delivery stream defined in the template. If a logging configuration exists but points to the wrong delivery stream, a Lambda function can delete that configuration and re-create it with the correct delivery stream, as sketched below.
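
A minimal sketch of that check, assuming the expected delivery stream ARN is available through the FIREHOSE_ARN environment variable used elsewhere in this template; the helper name is hypothetical and not part of the original template.

import os
import boto3

EXPECTED_FIREHOSE_ARN = os.environ['FIREHOSE_ARN']

def enforce_expected_destination(waf_client, waf_arn):
    # Verify that logging is enabled and points at the expected Firehose
    # delivery stream; otherwise replace the logging configuration.
    try:
        config = waf_client.get_logging_configuration(ResourceArn=waf_arn)
        destinations = config['LoggingConfiguration']['LogDestinationConfigs']
    except Exception:
        destinations = []

    if destinations != [EXPECTED_FIREHOSE_ARN]:
        if destinations:
            # Remove the configuration that points at the wrong delivery stream
            waf_client.delete_logging_configuration(ResourceArn=waf_arn)
        waf_client.put_logging_configuration(
            LoggingConfiguration={
                'ResourceArn': waf_arn,
                'LogDestinationConfigs': [EXPECTED_FIREHOSE_ARN],
            }
        )

You could call this helper from the remediation Lambda function after it determines the correct client (waf, waf-regional, or wafv2) for the web ACL.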

While the solution described in this blog post uses custom AWS Config rules and Automation documents to enable logging on web ACLs, the approach can be generalized to other contexts and other resource types. For example, you can use the same approach to help ensure that your Amazon Elastic Compute Cloud (Amazon EC2) instances comply with your internal IT security policies, as illustrated in the sketch that follows.
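
As an illustration, the following hedged sketch evaluates whether an EC2 instance carries a required tag; the tag key and the compliance logic are examples of an internal policy, not part of the template in this post.

import boto3

REQUIRED_TAG_KEY = 'CostCenter'  # example policy; adjust to your own standards

def evaluate_ec2_compliance(configuration_item):
    # Same pattern as the web ACL rule: inspect the resource, return a compliance value
    if configuration_item['resourceType'] != 'AWS::EC2::Instance':
        return 'NOT_APPLICABLE'
    ec2 = boto3.client('ec2')
    response = ec2.describe_tags(
        Filters=[
            {'Name': 'resource-id', 'Values': [configuration_item['resourceId']]},
            {'Name': 'key', 'Values': [REQUIRED_TAG_KEY]},
        ]
    )
    return 'COMPLIANT' if response['Tags'] else 'NON_COMPLIANT'

The result would be reported to AWS Config with put_evaluations, in the same way as the web ACL rule in this post.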

Cost considerations

For customers who already use AWS WAF and AWS Firewall Manager, this solution introduces additional costs for the use of AWS Config, Amazon Kinesis Data Firehose, and Amazon S3.

With AWS Config, you pay per configuration item recorded in your AWS account per AWS Region, and per active rule evaluation recorded. For more information, see AWS Config pricing.

With AWS Systems Manager, you pay for the number of Automation steps initiated and for the duration of each step, billed per second. I expect that my usage for this solution would fall under the free tier, but your usage may vary. For more information, see AWS Systems Manager pricing.

With AWS Lambda, you pay for the number of requests and their duration. Because I don't expect many Lambda requests in this solution, I expect that my usage would fall under the free tier, but your usage may vary. For more information, see AWS Lambda pricing.

With Amazon Kinesis Data Firehose, you pay only for the volume of data you ingest into the service. For more information, see Amazon Kinesis Data Firehose pricing.

For customers who want managed distributed denial of service (DDoS) protection, AWS Shield Advanced may be a good solution. Additionally, AWS Shield Advanced customers get AWS WAF and AWS Firewall Manager at no additional cost for usage on their resources that are protected by AWS Shield Advanced. For more information, see AWS Shield Pricing.

Conclusion

AWS Firewall Manager is a powerful solution for managing web ACLs at scale. By using a custom AWS Config rule (the same underlying technology used by AWS Firewall Manager), you can create a scalable way to verify that all your web ACLs within an AWS Region have logging enabled. The CloudFormation template included in this blog post gives your organization a good starting point for managing web ACL logging at scale.

Find out more:

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mike George

Mike is a Senior Solutions Architect based out of Salt Lake City, Utah. He enjoys helping customers solve their technology problems. His interests include software engineering, security, and AI/ML.