Tag Archives: AWS WAF

How to enhance Amazon CloudFront origin security with AWS WAF and AWS Secrets Manager

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-enhance-amazon-cloudfront-origin-security-with-aws-waf-and-aws-secrets-manager/

Whether your web applications provide static or dynamic content, you can improve their performance, availability, and security by using Amazon CloudFront as your content delivery network (CDN). CloudFront is a web service that speeds up distribution of your web content through a worldwide network of data centers called edge locations. CloudFront ensures that end-user requests are served by the closest edge location. As a result, viewer requests travel a short distance, improving performance for your viewers. When you deliver web content through a CDN such as CloudFront, a best practice is to prevent viewer requests from bypassing the CDN and accessing your origin content directly. In this blog post, you’ll see how to use CloudFront custom headers, AWS WAF, and AWS Secrets Manager to restrict viewer requests from accessing your CloudFront origin resources directly.

You can configure CloudFront to add custom HTTP headers to the requests that it sends to your origin. HTTP header fields are components of the header section of request and response messages in the Hypertext Transfer Protocol (HTTP). These custom headers enable you to send and gather information from your origin that isn’t included in typical viewer requests. You can use custom headers to control access to content. By configuring your origin to respond to requests only when they include a custom header that was added by CloudFront, you prevent users from bypassing CloudFront and accessing your origin content directly. In addition to offloading traffic from your origin servers, this also helps ensure that web traffic is processed by your AWS WAF rules at CloudFront edge locations before it is forwarded to your origin.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. It supports managed rules as well as a powerful rule language for custom rules. AWS WAF is tightly integrated with CloudFront and the Application Load Balancer (ALB). AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. This service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.

Solution overview

This blog post includes a sample solution you can deploy to see how its components integrate to implement the origin access restriction. The sample solution includes a web server deployed on Amazon Elastic Compute Cloud (Amazon EC2) Linux instances running in an AWS Auto Scaling group. Elastic Load Balancing distributes the incoming application traffic across the EC2 instances by using an ALB. The ALB is associated with an AWS WAF web access control list (web ACL), which is used to validate the incoming origin requests. Finally, a CloudFront distribution is deployed with an AWS WAF web ACL and configured to point to the origin ALB.

Although the sample solution is designed for deployment with CloudFront with an AWS WAF–associated ALB as its origin, the same approach could be used for origins that use Amazon API Gateway. A custom origin is any origin that is not an Amazon Simple Storage Service (Amazon S3) bucket, with one exception. An S3 bucket that is configured with static website hosting is a custom origin. You can refer to the CloudFront Developer Guide for more information on securing content that CloudFront delivers from S3 origins.

This solution is intended to enhance security for CloudFront custom origins that support AWS WAF, such as ALB, and is not a substitute for authentication and authorization mechanisms within your web applications. In this solution, Secrets Manager is used to control, audit, monitor, and rotate a random string used within your CloudFront and AWS WAF configurations. Although most of these lifecycle attributes could be set manually, Secrets Manager makes it easier.

Figure 1 shows how the provided AWS CloudFormation template creates the sample solution.
 

Figure 1: How the CloudFormation template works

Here’s how the solution works, as shown in the diagram:

  1. A viewer accesses your website or application and requests one or more files, such as an image file and an HTML file.
  2. DNS routes the request to the CloudFront edge location that can best serve the request—typically the nearest CloudFront edge location in terms of latency.
  3. At the edge location, AWS WAF inspects the incoming request according to configured web ACL rules.
  4. At the edge location, CloudFront checks its cache for the requested content. If the content is in the cache, CloudFront returns it to the user. If the content isn’t in the cache, CloudFront adds the custom header, X-Origin-Verify, with the value of the secret from Secrets Manager, and forwards the request to the origin.
  5. At the origin Application Load Balancer (ALB), AWS WAF inspects the incoming request header, X-Origin-Verify, and allows the request if the string value is valid. If the header isn’t valid, AWS WAF blocks the request.
  6. At the configured interval, Secrets Manager automatically rotates the custom header value and updates the origin AWS WAF and CloudFront configurations.

Solution deployment

This sample solution includes seven main steps:

  1. Deploy the CloudFormation template.
  2. Confirm successful viewer access to the CloudFront URL.
  3. Confirm that direct viewer access to the origin URL is blocked by AWS WAF.
  4. Review the CloudFront origin custom header configuration.
  5. Review the AWS WAF web ACL header validation rule.
  6. Review the Secrets Manager configuration.
  7. Review the Secrets Manager AWS Lambda rotation function.

Step 1: Deploy the CloudFormation template

The stack will launch in the N. Virginia (us-east-1) Region. It takes approximately 10 minutes for the CloudFormation stack to complete.

Note: The sample solution requires deployment in the N. Virginia (us-east-1) Region. Although out of scope for this blog post, an additional sample template is available in this solution’s GitHub repository for testing this solution with an existing CloudFront distribution and regional AWS WAF web ACL. Refer to the AWS regional service support information for more details on regional service availability.

To launch the CloudFormation stack

  1. Choose the following Launch Stack icon to launch a CloudFormation stack in your account in the N. Virginia Region.
     
    Select the Launch Stack button to launch the template
  2. In the CloudFormation console, leave the configured values, and then choose Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    EC2InstanceSize – The instance size for EC2 web servers.
    HeaderName – The HTTP header name for the secret string.
    WAFRulePriority – The rule number to use for the regional AWS WAF web ACL. 0 is recommended, because rules are evaluated in order based on the value of priority.
    RotateInterval – The rotation interval, in days, for the origin secret value. Full rotation requires two intervals.
    ArtifactsBucket – The S3 bucket with artifact files (Lambda functions, templates, HTML files, and so on). Keep the default value.
    ArtifactsPrefix – The path for the S3 bucket that contains artifact files. Keep the default value.

    Figure 2 shows an example of values entered under Parameters.
     

    Figure 2: Input parameters for the CloudFormation stack

  4. Enter values for all of the input parameters, and then choose Next.
  5. On the Options page, keep the defaults, and then choose Next.
  6. On the Review page, confirm the details, acknowledge the statements under Capabilities and transforms as shown in Figure 3, and then choose Create stack.
     
    Figure 3: CloudFormation Capabilities and Transforms acknowledgments

Step 2: Confirm access to the website through CloudFront

Next, confirm that website access through CloudFront is functioning as intended. After the CloudFormation stack completes deployment, you can access the test website using the domain name that was automatically assigned to the distribution.

To confirm viewer access to the website through CloudFront

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the cfEndpoint entry, similar to that shown in Figure 4.
     
    Figure 4: CloudFormation cfEndpoint stack output

  2. The cfEndpoint is the URL for the site, and it is automatically assigned by CloudFront. Choose the cfEndpoint link to open the test page, as shown in Figure 5.
     
    Figure 5: CloudFormation cfEndpoint test page

In this step, you’ve confirmed that website accessibility through CloudFront is functioning as intended.

Step 3: Confirm that direct viewer access to the origin URL is blocked by AWS WAF

In this step, you confirm that direct access to the test website is blocked by the regional AWS WAF web ACL.

To test direct access to the origin URL

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the albEndpoint entry.
  2. Choose the albEndpoint link to go to the test site URL that was automatically assigned to the ALB. Choosing this link will result in a 403 Forbidden response. When AWS WAF blocks a web request based on the conditions that you specify, it returns HTTP status code 403 (Forbidden).

In this step, you’ve confirmed that website accessibility directly to the origin ALB is blocked by the regional AWS WAF web ACL.
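
If you prefer to verify this from a terminal, the following is a minimal sketch using curl; the endpoint values are placeholders that you copy from the stack Outputs tab.

# Request through CloudFront is allowed and returns HTTP 200
curl -s -o /dev/null -w "%{http_code}\n" https://<cfEndpoint>

# Direct request to the origin ALB is blocked by AWS WAF and returns HTTP 403
curl -s -o /dev/null -w "%{http_code}\n" http://<albEndpoint>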

Step 4: Review the CloudFront origin custom header configuration

Now that you’ve confirmed that the test website can only be accessed through CloudFront, you can review the detailed CloudFront, WAF, and Secrets Manager configurations that enable this restriction.

To review the custom header configuration

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the cfDistro entry.
  2. Choose the cfDistro link to go to this distribution’s configuration in the CloudFront console. On the Origins and Origin Groups tab, under Origins, select the origin as shown in Figure 6.
     
    Figure 6: CloudFront Origins and Origin Groups settings

  3. Choose Edit to go to the Origin Settings section, scroll to the bottom and review the Origin Custom Headers as shown in Figure 7.
     
    Figure 7: CloudFront Origin Custom Headers settings

    You can see that the custom header, X-Origin-Verify, has been configured using Secrets Manager with a random 32-character alpha-numeric value. This custom header will be added to web requests that are forwarded from CloudFront to your origin. As you learned in steps 2 and 3, requests without this header are blocked by AWS WAF at the origin ALB. In the next two steps, you will dive deeper into how this works.
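
You can also view the same configuration from the AWS CLI; the following is a sketch, and the distribution ID is a placeholder from the cfDistro output. Because the response includes the header value in plain text, treat the output as sensitive.

aws cloudfront get-distribution-config \
  --id <your_distribution_ID> \
  --query "DistributionConfig.Origins.Items[0].CustomHeaders"

The response lists the X-Origin-Verify header name and its current value for the origin.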

Step 5: Review the AWS WAF web ACL header validation rule

In this step, you review the AWS WAF rule configuration that validates the CloudFront custom header X-Origin-Verify.

To review the header validation rule

  1. In the CloudFormation console, select Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the wafWebACLR entry.
  2. Choose the wafWebACLR link to go to the origin ALB web ACL configuration in the WAF and Shield console. On the Overview tab, you can view the Requests per 5 minute period chart and the Sampled requests list, which shows requests from the last three hours that the ALB has forwarded to AWS WAF for inspection. The sample of requests includes detailed data about each request, such as the originating IP address and Uniform Resource Identifier (URI). You also can view which rule the request matched, and whether the rule Action is configured to ALLOW, BLOCK, or COUNT requests. You can enable AWS WAF logging to get detailed information about traffic that’s analyzed by your web ACL. You send logs from your web ACL to an Amazon Kinesis Data Firehose with a configured storage destination such as Amazon S3. Information that’s contained in the logs includes the time that AWS WAF received the request from your AWS resource, detailed information about the request, and the action for the rule that each request matched.
  3. Choose the Rules tab to review the rules for this web ACL, as shown in Figure 8.
     
    Figure 8: AWS WAF web ACL rules

    On the Rules tab, you can see that the CFOriginVerifyXOriginVerify rule has been configured with the Allow action, while the Default web ACL action is Block. This means that any incoming requests that don’t match the conditions in this rule will be blocked.

    In every AWS WAF rule group and every web ACL, rules define how to inspect web requests and what to do when a web request matches the inspection criteria. Each rule requires one top-level statement, which might contain nested statements at any depth, depending on the rule and statement type. You can learn more about AWS WAF rule statements in the AWS WAF Developer Guide, AWS Online Tech Talks, and samples on GitHub.

  4. Choose the CFOriginVerifyXOriginVerify rule, and then choose Edit to bring up the Rule Builder tool. In the Rule Builder, you can see that a rule has been created with two Rule Statements similar to those in Figure 9.
     
    Figure 9: AWS WAF web ACL rule statement

    In the Rule Builder configuration for Statement 1, you can see that the request Header is being inspected for the x-origin-verify Header field name (HTTP header field names are case insensitive), and the String to match value is set to the value you reviewed in step 4. In the Rule Builder, you can also see a logical OR with an additional rule statement, Statement 2. You will notice that the configuration for Statement 2 is the same as Statement 1, except that the String to match value is different. You will learn about this in detail in step 7, but Statement 2 helps to ensure that valid web requests are processed by your origin servers when Secrets Manager automatically rotates the value of the X-Origin-Verify header. The effect of this rule configuration is that inspected web requests will be allowed if they match either of the two statements.

    In addition to the visual web ACL representation you just reviewed in the WAF Rule visual editor, every web ACL also has a JSON format representation you can edit by using the WAF Rule JSON editor. You can retrieve the complete configuration for a web ACL in JSON format, modify it as you need, and then provide it to AWS WAF through the console, API, or command line interface (CLI). Example CLI commands are shown at the end of this step.

    This step demonstrated how your request was allowed to access the test website in step 2 and why your request was blocked in step 3.
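
As a sketch of the CLI approach mentioned above, the following commands retrieve the web ACL configuration in JSON; the web ACL name and ID are placeholders that you can find with the first command or in the WAF console.

aws wafv2 list-web-acls --scope REGIONAL --region us-east-1

aws wafv2 get-web-acl \
  --name <your_web_ACL_name> \
  --scope REGIONAL \
  --id <your_web_ACL_ID> \
  --region us-east-1

In the returned JSON, the two header match statements described in this step appear as ByteMatchStatement entries that inspect the x-origin-verify header.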

Step 6: Review the Secrets Manager configuration

Now that you’re familiar with the CloudFront and AWS WAF configurations, you will learn how Secrets Manager creates and rotates the secret used for the X-Origin-Verify header field value. Secrets Manager uses an AWS Lambda function to perform the actual rotation of the secret used for the value and update the associated AWS WAF web ACL and CloudFront distribution.

To review the Secrets Manager configuration

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. On the stack Outputs tab, look for the OriginVerifySecret entry.
  2. Choose the OriginVerifySecret link to go to the configuration for the secret in the Secrets Manager console. Scroll down to the section titled Secret value, and then choose Retrieve secret value to display the Secret key/value as shown in Figure 10.
     
    Figure 10: Secrets Manager retrieve value

    When you retrieve the secret, Secrets Manager programmatically decrypts the secret and displays it in the console. You can see that the secret is stored as a key-value pair, where the secret key is HEADERVALUE, and the secret value is the string used in the CloudFront and AWS WAF configurations you reviewed in steps 4 and 5.

  3. While you’re in the Secrets Manager console, review the Rotation configuration section, as shown in Figure 11.
     
    Figure 11: Secrets Manager rotation configuration

    You can see that rotation was enabled for this secret at an interval of one day. This configuration also includes a Lambda rotation function. Secrets Manager uses a Lambda function to perform the actual rotation of a secret. If you use your secret for one of the supported Amazon Relational Database Service (Amazon RDS) databases, then Secrets Manager provides the Lambda function for you. If you use your secret for another service, then you must provide the code for the Lambda function, as we’ve done in this solution.

Step 7: Review the Secrets Manager Lambda rotation function

In this step, you review the Secrets Manager Lambda rotation function.

To review the Secrets Manager Lambda rotation function

  1. In the CloudFormation console, choose Services > CloudFormation > CFOriginVerify stack. In the stack Outputs tab, look for the OriginSecretRotateFunction entry.
  2. Choose the OriginSecretRotateFunction link to go to the Lambda function that is configured for this secret. The code used for this secrets rotation function is based on the AWS Secrets Manager Rotation Template. Choose the Monitoring tab and review the Invocations graph as shown in Figure 12.
     
    Figure 12: Monitoring tab for the Lambda rotation function

    Shortly after the CloudFormation stack creation completes, you should see several invocations in the Invocations graph. When a configured rotation schedule or a manual process triggers rotation, Secrets Manager calls the Lambda function several times, each time with different parameters. The Lambda function performs several tasks throughout the process of rotating a secret. This includes the following steps: createSecret, setSecret, testSecret, and finishSecret. Secrets Manager uses staging labels, a simple text string, to enable you to identify different versions of a secret during rotation. This includes the following staging labels: AWSPENDING, AWSCURRENT, and AWSPREVIOUS, which are covered in the following step.

  3. To learn more about the rotation steps configured for this solution, choose View logs in CloudWatch on the Monitoring tab.
    1. On the Log streams tab, select the top entry in the list.
    2. Enter Event in the Filter events field, and then choose the arrows to expand the details for each event as shown in Figure 13.
       
      Figure 13: CloudWatch event logs for the Lambda rotation function

The four rotation steps annotated in Figure 13 work as follows:

Note: This section provides an overview of the rotation process for this solution. For more detailed information about the Lambda rotation function, see the Secrets Manager User Guide.

  1. The createSecret step: In this step, the Lambda function generates a new version of the secret. The rotation Lambda function calls the GetRandomPassword method to generate a new random string, and then labels the new version of the secret with the staging label AWSPENDING to mark it as the in-process version of the secret.
  2. The setSecret step: In this step, the rotation function retrieves the version of the secret labeled AWSPENDING from Secrets Manager and updates the web ACL rule for the AWS WAF associated with the origin ALB. The two rule statements you reviewed in step 5 of this blog post are updated with the AWSPENDING and AWSCURRENT values. The rotation function also updates the value for the Origin Custom Header X-Origin-Verify. When the rotation function updates your distribution configuration, CloudFront starts to propagate the changes to all edge locations. Maintaining both the AWSPENDING and AWSCURRENT secret values helps to ensure that web requests forwarded to your origin by CloudFront are not blocked. Therefore, once a secret value is created, two rotation intervals are required for it to be removed from the configuration.
  3. The testSecret step: This step of the Lambda function verifies the AWSPENDING version of the secret by using it to access the origin ALB endpoint with the X-Origin-Verify header. Both AWSPENDING and AWSCURRENT X-Origin-Verify header values are tested to confirm a “200 OK” response from the origin ALB endpoint.
  4. The finishSecret step: In the last step, the Lambda function moves the staging label AWSCURRENT from the current version to this new version of the secret. The previous version receives the AWSPREVIOUS staging label and remains available for recovery as the last known good version of the secret, if needed. Any older version that no longer has any staging labels attached is considered deprecated by Secrets Manager and subject to deletion.

When the finishSecret step has successfully completed, Secrets Manager schedules the next rotation by adding the rotation interval (number of days) to the completion date. This automated process causes the values used for the validation headers to be updated at the configured interval. Although out of scope for this blog post, you should monitor the usage of your secrets and log any changes to them. This helps you to make sure that any unexpected usage or change can be investigated, and unwanted changes can be rolled back.
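
If you want to observe the staging labels while a rotation progresses, the following is a sketch using the AWS CLI; the secret ID is a placeholder (use the OriginVerifySecret name or ARN from the stack outputs), and the output should be treated as sensitive.

# Show which secret versions carry the AWSCURRENT, AWSPENDING, and AWSPREVIOUS labels
aws secretsmanager describe-secret \
  --secret-id <your_OriginVerifySecret_ID> \
  --query "VersionIdsToStages"

# Retrieve the previous value if you need to compare it with the current one
aws secretsmanager get-secret-value \
  --secret-id <your_OriginVerifySecret_ID> \
  --version-stage AWSPREVIOUS \
  --query "SecretString"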

Summary

You’ve learned how to use Amazon CloudFront, AWS WAF, and AWS Secrets Manager to prevent web requests from directly accessing your CloudFront origin resources. You can use this solution to improve security for CloudFront custom origins that support AWS WAF, such as ALB, Amazon API Gateway, and AWS AppSync.

When using this solution, you will incur AWS WAF usage charges for both the ALB and CloudFront associated AWS WAF web ACLs. You might wish to consider subscribing to AWS Shield Advanced, which provides higher levels of protection against distributed denial of service (DDoS) attacks and includes AWS WAF and AWS Firewall Manager at no additional cost for usage on resources protected by AWS Shield Advanced. You can also learn more about pricing for CloudFront, AWS WAF, Secrets Manager, and AWS Shield Advanced.

You can review more options for restricting access to content with CloudFront, additional AWS WAF security automations, or managed rules for AWS WAF. You can explore solutions for using AWS IP address ranges to enhance CloudFront origin security. You might also wish to learn more about Secrets Manager best practices. The code for this solution is available on GitHub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about using this solution, you can start a thread in the CloudFront, WAF, or Secrets Manager forums, review or open an issue in this solution’s GitHub repository, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Cameron Worrell

Cameron is a Solutions Architect with a passion for security and enterprise transformation. He joined AWS in 2015.

Automate AWS Firewall Manager onboarding using AWS Centralized WAF and VPC Security Group Management solution

Post Syndicated from Satheesh Kumar original https://aws.amazon.com/blogs/security/automate-aws-firewall-manager-onboarding-using-aws-centralized-waf-and-vpc-security-group-management-solution/

Many customers—especially large enterprises—run workloads across multiple AWS accounts and in multiple AWS Regions. The AWS Firewall Manager service, launched in April 2018, enables customers to centrally configure and manage AWS WAF rules, audit Amazon VPC security group rules across accounts and applications in AWS Organizations, and protect resources against distributed denial of service (DDoS) attacks.

In this blog post, we show you how to onboard your accounts and your AWS Organizations into the AWS Firewall Manager service and start centrally managing the security policies of your AWS Organizations member accounts. We also show you examples of how to perform operations on the Firewall Manager policies after you’ve deployed the solution, so you can adjust your security posture over time.

As more and more customers began using Firewall Manager for centralized management, they gave us feedback on how it could be improved. We heard that the process of defining policies and configuring rule sets can be challenging and time consuming, especially in a multi-account, multi-region scenario. We built the AWS Centralized WAF and VPC Security Group Management solution to make it easier and faster.

Solution overview

The AWS Centralized WAF and VPC Security Group Management solution fully automates the deployment of Firewall Manager with a set of opinionated defaults for the policies. We call it “opinionated,” because there’s no set of security rules that is right for absolutely every customer. We’re providing an example opinion, but you might have a different opinion, given your unique circumstance. Our examples block traffic that you might have a reason to allow. We install the following policies when you deploy this solution:

  • AWS WAF global policy for Amazon CloudFront distributions and regional policy for Application Load Balancer and Amazon API Gateway: Both the AWS WAF policies include the following AWS Managed Rules for AWS WAF. You can further customize these rules to suit your WAF requirements.
    • AWSManagedRulesCommonRuleSet
    • AWSManagedRulesAdminProtectionRuleSet
    • AWSManagedRulesKnownBadInputsRuleSet
    • AWSManagedRulesSQLiRuleSet
  • Amazon VPC security group usage audit and content audit policy: These policies flag security groups that are unused or redundant. Automatic remediation is turned off by default, but you can turn it on by customizing the solution.
  • AWS Shield Advanced policy: If your account has enabled Shield Advanced, then Shield Advanced protection is enabled for CloudFront, Application Load Balancer, and Elastic IPs.

The core of the solution is the AWS CloudFormation template aws-centralized-waf-and-vpc-security-group-management.template. This template deploys the components shown in Figure 1:

Figure 1: Main solutions template – aws-centralized-waf-and-vpc-security-group-management.template

  1. After deployment, you can update the three AWS Systems Manager parameters—/FMS/OUs, /FMS/Regions, and /FMS/Tags—with appropriate values to control the scope and applicability of the Firewall Manager security policies.
  2. On update of the values in the Systems Manager parameters, the Amazon EventBridge rule captures the parameter update event.
  3. Amazon EventBridge triggers an AWS Lambda function to deploy the Firewall Manager policies.
  4. The AWS Lambda function deploys the Firewall Manager security policies across the OUs and Regions specified in step 1.
  5. The Lambda function updates the Amazon DynamoDB table with Firewall Manager policy metadata. (You can list the deployed policies with the CLI command that follows this list.)
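
The following is a sketch of that check, run with the AWS CLI from the Firewall Manager admin account; the Region shown is an example.

aws fms list-policies --region us-east-1

The response lists each Firewall Manager policy that the solution created, along with its policy ID, resource type, and security service type.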

Configuring prerequisites automatically

There are a few important prerequisites that must be configured in your account before you deploy this solution. We have built a template called aws-fms-prereq that will launch the solution prerequisites.

When you execute the aws-fms-prereq template, it checks for and configures the Firewall Manager prerequisites in your organization.

Note: If the Firewall Manager prerequisites described above are already met, you can skip this step and go directly to next step: Deploy the solution template.

If you run this prerequisite template in the Organization primary account that is also your Firewall Manager admin account, the solution template will be deployed automatically. If you do this, you can skip the step of deploying the solution template and jump straight to Manage your Firewall Manager security policies.

Figure 2 shows how the prerequisite template creates a Lambda function. That function validates and installs the prerequisites and AWS CloudFormation stack sets to enable AWS Config across all member accounts in the organization.

Figure 2: Solution prerequisites

Installing prerequisites

If the Firewall Manager prerequisites are already met, skip this step and go directly to the next step, below: Deploy the solution template.

  1. Install the AWS CLI

    Note: If you already have the AWS Command Line Interface v2 installed on your workstation, you can skip this step.

    The template deployment can be done using the AWS Management Console or the AWS CLI. This procedure uses the AWS CLI to do the deployment. To get started, install the AWS CLI and configure it with credentials that have the required IAM permissions to create resources.

  2. Deploy the prerequisite template

    If this is the first time you’re using Firewall Manager, download the aws-fms-prereq.template, and run the following AWS CLI command in the primary account of your AWS Organization to check for and complete the prerequisites. Replace the variable <your_FW_account_ID> with the account ID of your AWS Firewall Manager admin account. This deployment typically takes 2–3 minutes, but sometimes a little longer if there are a large number of member accounts in your organization. If you want AWS Config to be enabled in your member accounts, then set the EnableConfig parameter to Yes; however, if AWS Config is already enabled, then set it to No.
    aws cloudformation create-stack \
    --stack-name fms-prereq-stack \
    --template-body file://aws-fms-prereq.template \
    --parameters ParameterKey=FMSAdmin,ParameterValue=<your_FW_account_ID> ParameterKey=EnableConfig,ParameterValue=Yes
    

Deploy the solution template

Deploy the solution using aws-centralized-waf-and-vpc-security-group-management.template—an AWS CloudFormation template—in the Firewall Manager admin account that you configured with the prerequisite template.

This template deploys the AWS Centralized WAF and VPC Security Group Management solution shown in Figure 1, with all the resources and integrations.

The following AWS CLI command will deploy the solution template:

aws cloudformation create-stack \
--stack-name fms-central-policy-mgmt-stack \
--template-body file://aws-centralized-waf-and-vpc-security-group-management.template \
--capabilities CAPABILITY_IAM

The command will print very basic output, simply identifying the stack ID, as shown below.

{
 "StackId": "arn:aws:cloudformation:us-east-1:<your_FW_admin_account_ID>:stack/fms-central-policy-mgmt-stack/<stack ARN>"
}

You can run the following CLI command to check the status of the stack deployment. Once you see that “StackStatus” is set to “CREATE_COMPLETE” in the output, you can proceed to the next step.

aws cloudformation describe-stacks \
--stack-name fms-central-policy-mgmt-stack

Manage your Firewall Manager security policies

Once the solution is deployed, you control which Firewall Manager policies are deployed to the organization member accounts through the values stored in the Parameter Store. These changes to the Parameter Store are picked up by the EventBridge rule, which triggers the Lambda function to automatically create, delete, or modify the policies as required.

Note: All of the following commands must be executed in the Firewall Manager admin account, which might not be the root account of your AWS Organization.

  1. Add OUs to the scope of Firewall Manager policies

    To begin, define the OUs that the Firewall Manager policies should apply to. Store the list of OUs in a parameter named /FMS/OUs. The following AWS CLI command stores a comma-separated list of OU IDs in the right parameter.
    aws ssm put-parameter \
     --name "/FMS/OUs" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_OU_IDs>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
    "Version": 2,
     "Tier": "Standard"
    }
    

  2. Add AWS Regions to the scope of Firewall Manager policies

    Next you need to add the AWS regions where you want these policies to be applied. In the AWS CLI command that follows, you can customize the AWS region list depending on where you run your workloads. If, for example, you want to use us-west-2, us-east-1, and eu-west-1, then you need to provide us-west-2,us-east-1,eu-west-1 as your value in the command below.
    aws ssm put-parameter \
     --name "/FMS/Regions" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_regions>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
    "Version": 2,
     "Tier": "Standard"
    }
    

  3. (Optional) Add resource tags

    Please note that this is an optional step. Resource tags are a way to apply these Firewall Manager policies to some resources, but not others. Imagine that we only want to apply these policies to resources that have the tag Environment set to the value Prod. The following command will do that:
    aws ssm put-parameter \
     --name "/FMS/Tags" \
     --type "String" \
     --value "{\"ResourceTags\":[{\"Key\":\"Environment\",\"Value\":\"Prod\"}],\"ExcludeResourceTags\":false}" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
     "Version": 2,
     "Tier": "Standard"
    }
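
After you’ve set all three parameters, you can verify the stored values; the following is a sketch that you run in the Firewall Manager admin account.

aws ssm get-parameters \
 --names "/FMS/OUs" "/FMS/Regions" "/FMS/Tags" \
 --query "Parameters[].{Name:Name,Value:Value}"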
    

Test the solution

Test the solution by creating a global CloudFront distribution and an Amazon VPC security group in one of your member accounts, and configure these resources to be noncompliant with the policies enforced by Firewall Manager.

Deploy test resources in one of your member accounts

Run the following AWS CLI command to deploy the demo template in one of the member accounts to create a sample CloudFront distribution and an Amazon VPC security group that aren’t compliant with the default Firewall Manager policies.

aws cloudformation create-stack \
  --stack-name demo-stack \
  --template-body file://aws-fms-demo.template \
  --capabilities CAPABILITY_IAM

Check for web ACL and security group audit results

To check the web ACL resource association in the member account, follow these steps:

  1. Log in to the management console of the member account where you deployed the demo template, and go to the WAF & Shield service page.
  2. You will see a web ACL with the prefix FMManagedWebACLV2FMS-WAF-01. Select that web ACL.
  3. Go to the Associated AWS resources tab.
  4. You should see the newly created CloudFront distribution listed here.

To check the compliance status of the newly created security group, follow these steps:

  1. Log in to the Firewall Manager admin account management console and go to the AWS Firewall Manager service page.
  2. In the Security policies section, select the FMS-SecGroup-02 policy.
  3. You will see that the member account (where the demo template was deployed) is marked as non-compliant. Select the member account number to see the newly created security group along with the reason for the noncompliant finding.
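
You can perform an equivalent compliance check with the AWS CLI from the Firewall Manager admin account; this is a sketch, and the policy ID is a placeholder that you can look up with aws fms list-policies.

aws fms list-compliance-status --policy-id <FMS-SecGroup-02_policy_ID>

The response includes an evaluation result for each member account in scope, showing whether the account is compliant with the policy.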

Clean up test resources in the member account

Follow these steps to clean up the test resources that the demo template would have created in your member account:

  1. Log in to the member account management console, and go to the CloudFront service page.
  2. Select the CloudFront distribution. (The origin will have the stack name as its prefix.)
  3. Go to the Behaviors tab, select the listed behavior, and then choose Edit.
  4. Remove the Lambda function association and save your changes.
  5. Run this CLI command to delete the stack that you created earlier:
    aws cloudformation delete-stack \
      --stack-name demo-stack
    

Customizing the solution’s source code

This solution deploys Firewall Manager policies with some opinionated default rules defined in the manifest.json file. They probably aren’t all that you will need. You could customize these policies to fit your own needs, but it takes a bit of development skill. Also, the steps are too long to include here. If, for example, you want to add another AWS WAF rule group to the default AWS WAF security policy, you’ll need to edit the manifest and redeploy the solution. See the solution’s documentation to learn how to do this customization and several others.

(Optional) Clean up resources

Once you’ve tried the solution, if you want to clean up the stack you can do so in two steps:

  1. Go to the Firewall Manager admin account management console, navigate to the Parameter Store, and update the /FMS/OUs parameter with the value delete. This ensures that the Firewall Manager policies and their related resources are deleted.
  2. Run the following AWS CLI command in the Firewall Manager admin account to delete the solution stack you deployed earlier:
    aws cloudformation delete-stack --stack-name fms-central-policy-mgmt-stack
    

Conclusion

The AWS Centralized WAF and VPC Security Group Management solution addresses the feedback we heard from you, our customer. You told us the process of defining policies and configuring rule sets is challenging and time consuming, so we wrote this to make it easier and faster. In this post, we showed you how to deploy the solution following a two-step process using AWS CloudFormation. We also showed you ways that you can use the solution to easily manage your Firewall Manager policies by updating values in Systems Manager Parameter Store. Updating the values automates the deployment based on your choice of OUs, Regions, and resources. Finally, we showed you how to further customize the solution to suit your needs.

This post is just an introduction to what is possible. Before you start using this solution for anything important, we recommend that you review the solution implementation guide. It contains step-by-step directions and more example use cases. The guide also includes security recommendations and some cost estimates for the various supported scenarios.

To learn more about other AWS solutions, visit the AWS Solutions Library.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Satheesh Kumar

Satheesh is a Senior Solutions Architect based out of Bangalore, India. He helps large enterprise customers build solutions using AWS services.

Author

Ramanan Kannan

Ramanan is a Senior Solutions Architect and works with large enterprise customers in the financial domain. He is based out of Chennai, India.

Automatically updating AWS WAF Rule in real time using Amazon EventBridge

Post Syndicated from Adam Cerini original https://aws.amazon.com/blogs/security/automatically-updating-aws-waf-rule-in-real-time-using-amazon-eventbridge/

In this post, I demonstrate a method for collecting and sharing threat intelligence between Amazon Web Services (AWS) accounts by using AWS WAF, Amazon Kinesis Data Analytics, and Amazon EventBridge. AWS WAF helps protect against common web exploits and gives you control over which traffic can reach your application.

Exploitation attempts blocked by AWS WAF provide a data source on potential attackers that can be shared proactively across AWS accounts. This solution can be an effective way to block traffic known to be malicious across accounts and public endpoints. AWS WAF managed rules provide an easy way to mitigate and record the details of common web exploit attempts. This solution will use the Admin protection managed rule for demonstration purposes.

In this post you will see references to the Sender account and the Receiver account. There is only one receiver in this example, but the receiving process can be duplicated multiple times across multiple accounts. This post walks through how to set up the solution. You’ll notice there is also an AWS CloudFormation template that makes it easy to test the solution in your own AWS account. The diagram in Figure 1 illustrates how this architecture fits together at a high level.
 

Figure 1: Architecture diagram showing the activity flow of traffic blocked on the Sender AWS WAF

Prerequisites

You should know how to do the following tasks:

Extracting threat intelligence

AWS WAF delivers its logs by using a Kinesis Data Firehose delivery stream. This allows you to not only log to a destination S3 bucket, but also to act on the stream in real time by using a Kinesis Data Analytics application. The following SQL code demonstrates how to extract any unique IP addresses that have been blocked by AWS WAF. While this example returns all blocked IPs, more complex SQL could be used for a more granular result. The full list of log fields is included in the documentation.


CREATE OR REPLACE STREAM "wafstream" (
 "clientIp" VARCHAR(16),
 "action" VARCHAR(8),
 "time_stamp" TIMESTAMP
 );

CREATE OR REPLACE PUMP "WAFPUMP" as
INSERT INTO "wafstream" (
"clientIp",
"action",
"time_stamp"
) 

SELECT STREAM DISTINCT "clientIp", "action", FLOOR("WAF_001".ROWTIME TO MINUTE) AS "time_stamp"
FROM "WAF_001"
WHERE "action" = 'BLOCK';

Proactively blocking unwanted traffic

After extracting the IP addresses involved in the abnormal traffic, you will want to proactively block those IPs on your other web-facing resources. You can accomplish this in a scalable way using Amazon EventBridge. After the Kinesis application extracts the IP address, it will use an AWS Lambda function to call PutEvents on an EventBridge event bus. An event pattern is then used to determine when to trigger an event bus rule. This example uses a simple pattern, which acts on any event with a source of “custom.waflogs” as shown in Figure 2. A more complex pattern could be used for finer-grained control of when a rule triggers.
 

Figure 2: EventBridge Rule creation

Once the event reaches the event bus, the rule will forward the event to an event bus in the Receiver account, where a second rule will trigger a Lambda function to update an AWS WAF IPSet. A web ACL rule is used to block all traffic originating from an IP address contained in the IPSet.
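
The CloudFormation templates in the next section create the rules and Lambda functions for you, but the following sketch illustrates the underlying mechanism with the AWS CLI; the rule name, event bus names, and detail payload are illustrative placeholders.

# Receiver side: a rule on the event bus that matches the custom source
aws events put-rule \
 --name waf-threat-intel-rule \
 --event-bus-name <receiver_event_bus_name> \
 --event-pattern '{"source":["custom.waflogs"]}'

# Sender side: publish a test event that carries a blocked IP address
aws events put-events \
 --entries '[{"Source":"custom.waflogs","DetailType":"WAF Block","Detail":"{\"clientIp\":\"198.51.100.7\"}","EventBusName":"<sender_event_bus_name>"}]'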

Test the solution by using AWS CloudFormation

Now that you’ve walked through the design of this solution, you can follow these instructions to test it in your own AWS account by using CloudFormation stacks.

To deploy using CloudFormation

  1. Launch the stack to provision resources in the Receiver account.
  2. Provide the account ID of the Sender account. This will correctly configure the permissions for the EventBridge event bus.
  3. Wait for the stack to complete, and then capture the event bus ARN from the output tab.

    This stack creates the following resources:

    • An AWS WAF v2 web ACL
    • An IPSet which will be used to contain the IP addresses to block
    • An AWS WAF rule that will block IP addresses contained in the IPSet
    • A Lambda function to update the IPSet
    • An IAM policy and execution role for the Lambda function
    • An event bus
    • An event bus rule that will trigger the Lambda function
  4. Switch to the Sender account. This should be the account whose ID you provided in step 2 of this procedure.
  5. Launch the stack in the Sender account and provide the ARN of the event bus that was captured in step 3. This stack will provision the following resources in your account:
    • A virtual private cloud (VPC) with public and private subnets
    • Route tables for the VPC resources
    • An Application Load Balancer (ALB) with a fixed response rule
    • A security group that allows ingress traffic on port 80 to the ALB
    • A web ACL with the AWS Managed Rule for Admin Protection enabled
    • An S3 bucket for AWS WAF logs
    • A Kinesis Data Firehose delivery stream
    • A Kinesis Data Analytics application
    • An EventBridge event bus
    • An event bus rule
    • A Lambda function to send information to the Receiver account event bus
    • A custom CloudFormation resource which enables WAF logging and starts the Kinesis Application
    • An IAM policy and execution role that allows a Lambda function to put events into the event bus
    • An IAM policy and role to allow the custom CloudFormation resource to enable WAF logging and start the Kinesis Application
    • An IAM policy and role that allows the Kinesis Firehose to put logs into S3
    • An IAM policy that allows the WAF Web ACL to put records into the Firehose
    • An IAM policy and role that allows the Kinesis Application to invoke a Lambda function and log to CloudWatch
    • An IAM policy and role that allows the “Sender” account to put events in the “Receiver” event bus

After the CloudFormation stack completes, you should test your environment. To test the solution, check the Outputs tab for the DNS name of the Application Load Balancer and run the following command:

curl ALBDNSname/admin

You should be able to check the Receiver account’s AWS WAF IPSet named WAFBlockIPset and find your IP.
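
You can also confirm this from the Receiver account with the AWS CLI; this is a sketch, and the IPSet ID is a placeholder returned by the first command. The commands assume a regional IPSet; use --scope CLOUDFRONT in us-east-1 if the Receiver web ACL protects a CloudFront distribution.

aws wafv2 list-ip-sets --scope REGIONAL

aws wafv2 get-ip-set \
 --name WAFBlockIPset \
 --scope REGIONAL \
 --id <IPSet_ID> \
 --query "IPSet.Addresses"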

Conclusion

This example is intentionally simple to clearly demonstrate how each component works. You can take these principles and apply them to your own environment. Layering the Amazon managed rules with your own custom rules is the best way to get started with AWS WAF. This example shows how you can use automation to update your WAF rules without needing to rely on humans. A more complete solution would source log data from each web ACL and update an active IPSet in each account to protect all resources. As seen in Figure 3, a more complete implementation would send all logs in a Region to a centralized Kinesis Data Firehose to be processed by the Kinesis Data Analytics application, and EventBridge would be used to update a local IPSet as well as to forward the event to other accounts’ event buses for processing.
 

Figure 3: Updating across accounts

You can also add additional targets to the event bus to do things such as send to a Simple Notification Service topic for notifications, or run additional automation. To learn more about AWS WAF web ACLs, visit the AWS WAF Developer Guide. Using Amazon EventBridge opens up the possibility to send events to partner integrations. Customers or APN Partners like PagerDuty or Zendesk can enrich this solution by taking actions such as automatically opening a ticket or starting an incident. To learn more about the power of Amazon EventBridge, see the EventBridge User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Adam Cerini

Adam is a Senior Solutions Architect with Amazon Web Services. He focuses on helping Public Sector customers architect scalable, secure, and cost effective systems. Adam holds 5 AWS certifications including AWS Certified Solutions Architect – Professional and AWS Certified Security – Specialty.

Troubleshooting Amazon API Gateway with enhanced observability variables

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/troubleshooting-amazon-api-gateway-with-enhanced-observability-variables/

Amazon API Gateway is often used for managing access to serverless applications. Additionally, it can help developers reduce code and increase security with features like AWS WAF integration and authorizers at the API level.

Because more is handled by API Gateway, developers tell us they would like to see more data points on the individual parts of the request. This data helps developers understand each phase of the API request and how it affects the request as a whole. In response to this request, the API Gateway team has added new enhanced observability variables to the API Gateway access logs. With these new variables, developers can troubleshoot on a more granular level to quickly isolate and resolve request errors and latency issues.

The phases of an API request

API Gateway divides requests into phases, reflected by the variables that have been added. Depending upon the features configured for the application, an API request goes through multiple phases. The phases appear in a specific order as follows:

Phases of an API request

  • WAF: the WAF phase only appears when an AWS WAF web access control list (ACL) is configured for enhanced security. During this phase, WAF rules are evaluated and a decision is made on whether to continue or cancel the request.
  • Authenticate: the authenticate phase is only present when AWS Identity and Access Management (IAM) authorizers are used. During this phase, the credentials of the signed request are verified. Access is granted or denied based on the client’s right to assume the access role.
  • Authorizer: the authorizer phase is only present when a Lambda, JWT, or Amazon Cognito authorizer is used. During this phase, the authorizer logic is processed to verify the user’s right to access the resource.
  • Authorize: the authorize phase is only present when a Lambda or IAM authorizer is used. During this phase, the results from the authenticate and authorizer phase are evaluated and applied.
  • Integration: during this phase, the backend integration processes the request.

Each phase can add latency to the request, return a status, or raise an error. To capture this data, API Gateway now provides enhanced observability variables based on each phase. The variables are named according to the phase they occur in and follow the naming structure, $context.phase.property. Therefore, you can get data about WAF latency by using $context.waf.latency.

Some existing variables have also been given aliases to match this naming schema. For example, $context.integrationErrorMessage has a new alias of $context.integration.error. The resulting list of variables is as follows:

Phases and variables for API Gateway requests

API Gateway provides status, latency, and error data for each phase. In the authorizer and integration phases, there are additional variables you can use in logs. The $context.phase.requestId variable provides the request ID from that service, and $context.phase.integrationStatus provides the status code.

For example, when using an AWS Lambda function as the integration, API Gateway receives two status codes. The first, $context.integration.integrationStatus, is the status of the Lambda service itself. This is usually 200, unless there is a service or permissions error. The second, $context.integration.status, is the status of the Lambda function and reports on the success or failure of the code.

A full list of access log variables is in the documentation for REST APIs, WebSocket APIs, and HTTP APIs.
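
Access logging and the log format are configured per stage. As a sketch, the following AWS CLI command enables a format containing a few of the enhanced variables on a REST API stage; the API ID, stage name, and log group ARN are placeholders, and a complete JSON format like the one used later in this post would normally be preferable.

aws apigateway update-stage \
  --rest-api-id <api_id> \
  --stage-name prod \
  --patch-operations '[
    {"op":"replace","path":"/accessLogSettings/destinationArn","value":"<access_log_group_arn>"},
    {"op":"replace","path":"/accessLogSettings/format","value":"$context.requestId $context.waf.latency $context.authenticate.latency $context.authorize.latency $context.integration.latency $context.status"}
  ]'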

A troubleshooting example

In this example, an application is built using an API Gateway REST API with a Lambda function for the backend integration. The application uses an IAM authorizer to require AWS account credentials for application access. The application also uses an AWS WAF ACL to rate limit requests to 100 requests per IP, per five minutes. The demo application and deployment instructions can be found in the Sessions With SAM repository.

Because the application involves an AWS WAF and IAM authorizer for security, the request passes through four phases: waf, authenticate, authorize, and integration. The access log format is configured to capture all the data regarding these phases:

{
  "requestId":"$context.requestId",
  "waf-error":"$context.waf.error",
  "waf-status":"$context.waf.status",
  "waf-latency":"$context.waf.latency",
  "waf-response":"$context.wafResponseCode",
  "authenticate-error":"$context.authenticate.error",
  "authenticate-status":"$context.authenticate.status",
  "authenticate-latency":"$context.authenticate.latency",
  "authorize-error":"$context.authorize.error",
  "authorize-status":"$context.authorize.status",
  "authorize-latency":"$context.authorize.latency",
  "integration-error":"$context.integration.error",
  "integration-status":"$context.integration.status",
  "integration-latency":"$context.integration.latency",
  "integration-requestId":"$context.integration.requestId",
  "integration-integrationStatus":"$context.integration.integrationStatus",
  "response-latency":"$context.responseLatency",
  "status":"$context.status"
}

Once the application is deployed, use Postman to test the API with a sigV4 request.

Configuring Postman authorization

To show troubleshooting with the new enhanced observability variables, the first request sent through contains invalid credentials. The user receives a 403 Forbidden error.

Client response view with invalid tokens

The access log for this request is:

{
    "requestId": "70aa9606-26be-4396-991c-405a3671fd9a",
    "waf-error": "-",
    "waf-status": "200",
    "waf-latency": "8",
    "waf-response": "WAF_ALLOW",
    "authenticate-error": "-",
    "authenticate-status": "403",
    "authenticate-latency": "17",
    "authorize-error": "-",
    "authorize-status": "-",
    "authorize-latency": "-",
    "integration-error": "-",
    "integration-status": "-",
    "integration-latency": "-",
    "integration-requestId": "-",
    "integration-integrationStatus": "-",
    "response-latency": "48",
    "status": "403"
}

The request passed through the waf phase first. Since this is the first request and the rate limit has not been exceeded, the request is passed on to the next phase, authenticate. During the authenticate phase, the user’s credentials are verified. In this case, the credentials are invalid and the request is rejected with a 403 response before invoking the downstream phases.

To correct this, the next request uses valid credentials, but those credentials do not have access to invoke the API. Again, the user receives a 403 Forbidden error.

Client response view with unauthorized tokens

The access log for this request is:

{
  "requestId": "c16d9edc-037d-4f42-adf3-eaadf358db2d",
  "waf-error": "-",
  "waf-status": "200",
  "waf-latency": "7",
  "waf-response": "WAF_ALLOW",
  "authenticate-error": "-",
  "authenticate-status": "200",
  "authenticate-latency": "8",
  "authorize-error": "The client is not authorized to perform this operation.",
  "authorize-status": "403",
  "authorize-latency": "0",
  "integration-error": "-",
  "integration-status": "-",
  "integration-latency": "-",
  "integration-requestId": "-",
  "integration-integrationStatus": "-",
  "response-latency": "52",
  "status": "403"
}

This time, the access logs show that the authenticate phase returns a 200. This indicates that the user credentials are valid for this account. However, the authorize phase returns a 403 and states, “The client is not authorized to perform this operation”. Again, the request is rejected with a 403 response before invoking downstream phases.

The last request for the API contains valid credentials for a user that has rights to invoke this API. This time the user receives a 200 OK response and the requested data.

Client response view with valid request

The log for this request is:

{
  "requestId": "ac726ce5-91dd-4f1d-8f34-fcc4ae0bd622",
  "waf-error": "-",
  "waf-status": "200",
  "waf-latency": "7",
  "waf-response": "WAF_ALLOW",
  "authenticate-error": "-",
  "authenticate-status": "200",
  "authenticate-latency": "1",
  "authorize-error": "-",
  "authorize-status": "200",
  "authorize-latency": "0",
  "integration-error": "-",
  "integration-status": "200",
  "integration-latency": "16",
  "integration-requestId": "8dc58335-fa13-4d48-8f99-2b1c97f41a3e",
  "integration-integrationStatus": "200",
  "response-latency": "48",
  "status": "200"
}

This log contains a 200 status code from each of the phases and returns a 200 response to the user. Additionally, each of the phases reports latency. This request had a total of 48 ms of latency. The latency breaks down according to the following:

Request latency breakdown

Developers can use this information to identify the cause of latency within the API request and adjust accordingly. While the latency of phases such as authenticate or authorize is largely outside of the developer's control, optimizing the integration phase of this request could remove a large chunk of the overall latency.
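As a quick illustration of how these fields can be consumed, the following sketch parses the latency values from an access log entry like the ones above and attributes each portion of the total response latency. It's a minimal example for exploration, not production log tooling.

import json

# Latency fields copied from the example access log entry above.
log_entry = json.loads("""
{
  "waf-latency": "7",
  "authenticate-latency": "1",
  "authorize-latency": "0",
  "integration-latency": "16",
  "response-latency": "48"
}
""")

phases = ["waf", "authenticate", "authorize", "integration"]
breakdown = {phase: int(log_entry[f"{phase}-latency"]) for phase in phases}
total = int(log_entry["response-latency"])

# Whatever isn't attributed to a named phase is API Gateway routing and
# serialization overhead.
breakdown["other"] = total - sum(breakdown.values())

for phase, ms in breakdown.items():
    print(f"{phase:>12}: {ms:3d} ms ({ms / total:.0%})")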

Conclusion

This post covers the enhanced observability variables, the phases they occur in, and the order of those phases. With these new variables, developers can quickly isolate the problem and focus on resolving issues.

When configured with the proper access logging variables, API Gateway access logs can provide a detailed story of API performance. They can help developers to continually optimize that performance. To learn how to configure logging in API Gateway with AWS SAM, see the demonstration app for this blog.

#ServerlessForEveryone

Deploying defense in depth using AWS Managed Rules for AWS WAF (part 2)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/deploying-defense-in-depth-using-aws-managed-rules-for-aws-waf-part-2/

In this post, I show you how to use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

In part 1 of this post, I describe the technologies and methods that you can use to build and manage defense in depth for your network. In part 2, I show you how to use those tools, with AWS Managed Rules as the starting point, to build your defense in depth for optimal effectiveness.

Managing policies for multiple environments can be done with minimal administrative overhead, and policy management can now be part of a deployment pipeline in which you programmatically enforce policies for broad edge network protection and protect production workloads without compromising development speed or safety.

Building robust security policy enforcement relies on a layered approach and the same applies to securing your web applications. Having edge policies, application policies, and even private or internal policy enforcement layers adds to the visibility of communication requests as well as unified policy enforcement.

Using a layered AWS WAF deployment, such as the one deployed by the procedure that follows, gives you greater flexibility in the number of rules you can use and the option to standardize edge policies and production policies. This lets you test and develop new applications without compromising the production environments.

In the following example, the application load balancer is in us-east-1. To create a web ACL for Amazon CloudFront you need to deploy the stack in us-east-1. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template can create both web ACLs in this scenario.

Note: If you're using CloudFront and hosting the origin in us-east-1, you only need to maintain one stack. If your origin is in another region, you need to deploy a stack in us-east-1 for the CloudFront web ACL and another in the region where your application load balancer is. That scenario isn't covered in the following procedure. None of the underlying infrastructure is deployed by the example AWS CloudFormation templates provided; only the AWS WAF configurations are deployed.

Solution overview

The following diagram illustrates the traffic flow, where traffic comes in through CloudFront and is served to the backend load balancers. Both CloudFront and the load balancers support AWS WAF, which is where dedicated web security policies can be enforced to build out defense-in-depth, multi-layered policy enforcement.
 

Figure 1: Defense in depth deployment on AWS WAF

Creating AWS Managed Rule web ACLs

During this process we create two web ACLs that are designed for policy enforcement for two dedicated layers. The process won’t deploy the required infrastructure, such as the CloudFront distribution or application load balancers. This example template deploys a single stack in us-east-1 where the CloudFront origin load balancer is located.

To create AWS Managed Rule web ACLs

  1. Download the Amazon-CloudFront-Application-Load-Balancer-AMR.yml template.
  2. Open the AWS Management Console and select the region where the origin application load balancer is deployed. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template that you downloaded deploys both web ACLs for CloudFront and the application load balancer.
     
    Figure 2: Select a region from the console

  3. Under Find Services, enter AWS CloudFormation and press Enter.
     
    Figure 3: Find and select AWS CloudFormation

  4. Select Create stack.
     
    Figure 4: Create stack

  5. Select a template file for the stack.
    1. In the Create stack window, select Template is ready and Upload a template file.
    2. Under Upload a template file, select Choose file and select the Amazon-CloudFront-Application-Load-Balancer-AMR.yml example AWS CloudFormation template you downloaded earlier.
    3. Choose Next.
    Figure 5: Prepare and choose a template

  6. Add stack details.
    1. Enter a name for the stack in Stack name.
    2. Enter a name for the Edge Network AWS WAF WebACL and for the Public Layer AWS WAF WebACL.
    3. Set a rate-limit for HTTP GET requests in HTTP Get Flood Protection (this rate is applied per IP address over a 5 minute period).
    4. Set a rate limit for HTTP POST requests in HTTP Post Flood Protection.
    5. Use the Login URL to apply the limit to a targeted login page. If you want to rate-limit all HTTP POST requests, leave the login URL section blank.
    Figure 6: Set stack details

  7. By default, all of the rules within the rule sets are in action override (count mode); this does not include the rate-based rules. If you want to deploy selected rules in block mode, remove them from the pre-populated list by highlighting and deleting them. It's a best practice to evaluate firewall rules in count mode before changing them to block mode. Choose Next to move to the next step.
     
    Figure 7: Default managed rules options

  8. Here you can add tags to apply to the resources in the stack that these rules will be deployed to. Tagging is a recommended best practice because it enables you to add metadata to resources during creation. For more information on tagging, see the Tagging AWS resources documentation. Then choose Next. On the following page, choose Create stack.
     
    Figure 8: Add tags

  9. Wait until the stack has been deployed. When deployment is complete, the status of the stack will change to CREATE_COMPLETE.
     
    Figure 9: Stack deployment status
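If you would rather run this deployment from a pipeline than from the console, a boto3 sketch along the following lines works as well. The parameter keys shown here are placeholders for illustration; check the Parameters section of the downloaded template for the exact names and defaults before using them.

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

with open("Amazon-CloudFront-Application-Load-Balancer-AMR.yml") as template_file:
    template_body = template_file.read()

# The ParameterKey values below are illustrative placeholders, not the
# template's actual parameter names.
cfn.create_stack(
    StackName="waf-defense-in-depth",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "EdgeNetworkWebACLName", "ParameterValue": "Edge-Network-Layer-WebACL"},
        {"ParameterKey": "PublicLayerWebACLName", "ParameterValue": "Public-Application-Layer-WebACL"},
        {"ParameterKey": "HTTPGetFloodRate", "ParameterValue": "2000"},
        {"ParameterKey": "HTTPPostFloodRate", "ParameterValue": "1000"},
    ],
)

# Block until the stack reaches CREATE_COMPLETE before associating resources.
cfn.get_waiter("stack_create_complete").wait(StackName="waf-defense-in-depth")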

Associating the web ACLs to resources

During this process, we associate the two newly created web ACLs with the corresponding infrastructure resources: in this example, the CloudFront distribution and its origin load balancer, which should have been created beforehand.

To associate the web ACLs to resources

  1. In the console search for and select WAF & Shield.
     
    Figure 10: Select WAF & Shield

  2. Select Web ACLs from the list on the left.
     
    Figure 11: Select Web ACLs

  3. Select Global (CloudFront) from the drop down list at the top of the page. Choose the Edge-Network-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs).
     
    Figure 12: Select the web ACL

  4. Next, select the Associated AWS resources tab and then choose Add AWS resources.
     
    Figure 13: Add AWS resources

  5. Select the CloudFront distribution you want to protect. Choose Add.
     
    Figure 14: Select the CloudFront distribution to protect

  6. Now repeat the association process for the application load balancer that serves as the CloudFront distribution's origin. Select the region the load balancer is deployed in, US East (N. Virginia) in this example, from the drop-down list at the top of the page, select Web ACLs, and choose the Public-Application-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs). Then associate the origin application load balancer with it, as in steps 3 and 4 above.
     
    Figure 15: Application layer Web ACL association
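The application layer association can also be scripted. The sketch below uses the WAFV2 API for the regional web ACL; the ARNs are placeholders. Note that a CloudFront distribution is associated differently, by setting the web ACL on the distribution configuration itself rather than through this call.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Placeholder ARNs: copy the real values from the CloudFormation stack outputs
# and from the EC2 load balancer console.
web_acl_arn = (
    "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/"
    "Public-Application-Layer-WebACL/EXAMPLE-ID"
)
alb_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
    "loadbalancer/app/origin-alb/EXAMPLE-ID"
)

# Regional resources such as an application load balancer are associated
# through the WAFV2 API.
wafv2.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=alb_arn)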

Conclusion

By using AWS WAF to manage a multi-layer web application security enforcement policy, you can build a defense-in-depth stack for each specific web application. The configuration helps you maintain and deploy web application firewall configurations across deployment stages and across different types of applications. With AWS Managed Rules, customers can make use of prebuilt rule sets that are easily deployed to create a layered defense that fits into their web application deployment pipelines. For customers who would like to centrally manage and control AWS WAF across their AWS Organization, consider AWS Firewall Manager.

The AWS CloudFormation templates used in this procedure are in this GitHub repository.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Daniel Cisco Swart

Daniel worked on AWS Managed Rules personally over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with security competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS's top security partners.

Defense in depth using AWS Managed Rules for AWS WAF (part 1)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/defense-in-depth-using-aws-managed-rules-for-aws-waf-part-1/

In this post, I discuss how you can use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

The post is in two parts. This first part describes AWS Managed Rules for AWS WAF and how it can be used to provide defense in depth. The second part shows how to apply AWS Managed Rules for WAF.

AWS Managed Rules for AWS WAF is a service that provides groups of rules created by Amazon Web Services (AWS) or by an AWS technology partner. By using AWS Managed Rules, you can reduce the administrative overhead of configuring rules for AWS WAF. You still need a comprehensive strategy for web application policy enforcement to help you make the best use of AWS Managed Rules for your web applications.

By using a layered policy enforcement strategy, you can create policy enforcement that’s specific to each part of your applications. This helps you avoid having to maintain and manage monolithic AWS WAF configurations for each of your applications. When you can separate policies for the edge network and for the application layer network, replicating separate policies across larger workloads becomes modular. This makes your application security more agile and lets you protect public-facing web applications without writing new rules or including rules that aren’t relevant to your web application.

Policy enforcement becomes even less of an administrative burden when you use AWS Firewall Manager to enforce policies across all accounts. This helps ensure organizations have robust policy enforcement measures across multiple accounts, with increased application layer visibility.

The new AWS WAF JSON document-style configuration enables traditional code review processes. You can now easily manage AWS WAF configurations on multiple layers of your web applications. This has also enabled partners to create more dynamic and robust rules that they can deliver on AWS WAF, which ultimately helps those customers manage their web application security policies.

AWS WAF enhancements

AWS WAF uses web ACL capacity units (WCU) to calculate and control the operating resources that are used to run your rules, rule groups, and web ACLs.

You can use JSON key-value pair document-based configuration to more easily integrate AWS WAF into the development practices of your organization. Using document-style configuration removes the need to use multiple API calls to create objects in the correct order before you can create and deploy a web ACL to protect your web applications.

Using this method lets firewall changes be implemented with normal development and operations best practices because it will be infrastructure as code. This enables version control and code review before deploying updates to your production environment.

Solution overview

The following diagram illustrates the layers and functions of a defense-in-depth solution. The text that follows describes each layer.
 

Figure 1: Solution overview diagram

Edge network layer policy enforcement

The edge network is the first layer of policy enforcement and should be used for broad security policy enforcement. This is the ideal place for rules such as AWS Managed Rules Core rule set (CRS), geographical location blocks, IP reputational lists, anonymous IP lists, and basic rate limits enforcement. By limiting known bad traffic at the edge network, the CRS limits the exposure of the application layer to known bad IP address ranges, malicious requests, bad bots, and request floods. This provides broad protection to the inner application layer against malicious activity, which can be applied regardless of the web application being served at the application layer.

Amazon CloudFront, combined with the distributed denial of service (DDoS) mitigation capabilities of AWS Shield and the policy enforcement of AWS WAF, provides your outer layer of web application security enforcement.

It’s a common misconception that CloudFront is only a content delivery platform, but it also has robust transparent reverse proxy capabilities. CloudFront can help protect your environment from a broad range of web application risks. For example, you can use CloudFront to ensure that HTTP requests conform to standards on the far outer layer of your web application environment while serving content closer to the user.

Application layer policy enforcement

The next level of enforcement should be an application load balancer in a public subnet with another web ACL at the CloudFront origin. This policy enforcement layer is where you create a regional web ACL for the CloudFront origin. In addition, this layer is where you apply application-specific rules. For example, if you have a web application that uses a LAMP stack, it would be best to use AWS Managed Rules for SQL Injection, Linux, and PHP as an enforcement layer.

Note: IP-based enforcement is not effective on this part of the environment. Consider adding an origin custom header on the CloudFront distribution, and then using that custom header to create a BLOCK rule, placed first in your web ACL rule list, that denies any request that doesn't carry the header. This rule needs to be created manually and is not configured by the supplied templates.
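For reference, such a rule might look like the sketch below in the new AWS WAF JSON format, expressed here as a Python dict that you could add as the first entry in the web ACL's rule list. The header name and secret value are placeholders; use whatever origin custom header you configure on the CloudFront distribution.

# Sketch of a first-priority rule that blocks any request whose custom origin
# header doesn't carry the expected secret value.
block_missing_origin_header = {
    "Name": "BlockRequestsMissingOriginHeader",
    "Priority": 0,
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockRequestsMissingOriginHeader",
    },
    "Statement": {
        "NotStatement": {
            "Statement": {
                "ByteMatchStatement": {
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
                    "PositionalConstraint": "EXACTLY",
                    # Placeholder secret; pass bytes (b"...") if you create the
                    # rule through boto3 instead of the JSON editor.
                    "SearchString": "example-secret-value",
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                }
            }
        }
    },
}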

(Optional) Third-party web application firewall layer policy enforcement

AWS WAF enforces policies on inbound requests and doesn’t have outbound inspection capabilities. If you need to enforce policies based on outbound responses, you can use Amazon Machine Image (AMI) based web application firewalls, which are available via the AWS Marketplace.

An instance-based web application firewall is used here because most of the computationally heavy lifting is done on the AWS WAF enforcement layers. The third-party layer is where you can enforce policies that require requests to be stateful.

Using an AMI from AWS Marketplace also gives you access to capabilities such as higher visibility, threat intelligence, and robust firewall rules. This adds an additional layer of security enhancement to your environment.

(Optional) Private layer policy enforcement

When working with a traditional three-tier web architecture, you can add an additional layer of enforcement on the private layer, which can be used for the web front ends. This stage is where you would deploy an application load balancer in a private subnet serving your web front ends. This load balancer is there for any computationally expensive regex-based rule enforcement that you don't want to run on the instance-based WAF. It also gives you another layer of visibility before requests reach the web front ends themselves. This example can be seen in Figure 2 below as a reference.

Use case examples

The AWS CloudFormation templates supplied can be deployed in a modular fashion. If the application load balancer is located in the us-east-1 region, you can deploy a single template called Amazon-CloudFront-Application-Load-Balancer-AMR.yml.

If the application load balancer isn't located in us-east-1, you can use the Amazon-CloudFront-EdgeLayer-AMR.yml template to deploy the stack in us-east-1 to support the web ACL on CloudFront, and then deploy ApplicationLayer-Load-Balancer-AMR.yml in the region where the origin application load balancer is deployed to create its web ACL.

All CloudFormation templates are available on the GitHub project page, and a summary of each can be found in the main readme.md file.

Note: All of the individual rules in each rule set are set to ACTION OVERRIDE for initial deployment. If any of the rule actions in the group are set to block or allow, this override changes the behavior so that matching rules are only counted. You can change the setting to NO ACTION OVERRIDE after a period of evaluation to avoid disrupting production workloads with potential false positives.
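To make the count-versus-block behavior concrete, a managed rule group deployed in count mode typically looks like the following in the new AWS WAF JSON (shown as a Python dict). This is an illustration of the setting described above, not an excerpt from the supplied templates.

# Managed rule group reference with the group overridden to count mode.
# Changing OverrideAction from Count to None restores the group's own
# block/allow actions once you're happy with the evaluation results.
aws_common_rule_set = {
    "Name": "AWS-AWSManagedRulesCommonRuleSet",
    "Priority": 1,
    "OverrideAction": {"Count": {}},  # change to {"None": {}} to enforce
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AWSManagedRulesCommonRuleSet",
    },
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
}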

Edge network and application load balancer origin using AWS Managed Rules for AWS WAF

When considering some of the web application best practices on AWS for resiliency and security, the recommendation is to use CloudFront where possible, because it can terminate TLS/SSL connections and serve cached content close to the end user. CloudFront has advanced mitigation capabilities such as SYN cookies and a massively distributed network separate from the traditional Amazon Elastic Compute Cloud (Amazon EC2) networking space. CloudFront also supports AWS WAF rate limits, IP blacklists, and broad security policies, which can be enforced at the edge network layer.

In the example Amazon-CloudFront-Application-Load-Balancer-AMR.yml template, we place a rate-limit for HTTP GET and HTTP POST methods. This is dependent upon expected traffic request rates. You can review Amazon CloudWatch metrics for your CloudFront distribution or application load balancer to determine the baseline for your rate limit based on the maximum expected requests per minute.

The rate limit is adjustable within the parameter options at deployment of the AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml. The HTTP POST rate limit also helps to slow down credential stuffing attacks—a form of brute force attack—on login pages. The ApplicationLayer-Load-Balancer-AMR.yml template used in part 2 of this post also deploys the Amazon IP reputation list to drop IP addresses based on Amazon internal threat intelligence.

We also use the AWS Managed Rules CommonRuleSet that blocks cross-site scripting (XSS) attacks, request with no user-agents, requests with known bad user-agents, large queries, posts, cookies, and URLs, and known LFI/RFI attacks.

Note: The size constraint rules aren’t recommended for protecting APIs or web applications with large HTTP POSTs or long cookies. Evaluate the possible effects of size constraint rules thoroughly before setting them to block requests.

There is also an AWS Managed Rule for known bad inputs which is based on threat intelligence gathered by the AWS Threat Research Team. Finally, there is an admin protection rule set that drops requests to known management login pages. It’s not advised that web applications have front door access to admin controls.

At the origin, it’s a good idea to use an application load balancer that also supports AWS WAF. This is where you want to apply application-specific web policies. For example, this is where you would apply rules to protect against a SQL injection attack if your web application uses a SQL database.

In the example AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml, for the origin application load balancer, we use AWS Managed Rules for SQL injections, Linux rule set, Unix rule set, PHP rule set, and the WordPress rule set to cover most eventualities customers could be using on their web applications.

For the example solution in part 2 of this post, if the origin application load balancer is in us-east-1, you can use Amazon-CloudFront-Application-Load-Balancer-AMR.yml, which will deploy both web ACLs.

If the origin is not in us-east-1, you can use two example templates which are Amazon-CloudFront-EdgeLayer-AMR.yml for the edge network and ApplicationLayer-Load-Balancer-AMR.yml in the origin region.

Using AWS Managed WAF Rules on public and private application load balancers

Some customers have reasons to not use CloudFront and will use two application load balancers. One load balancer for the public facing environment for web front ends and an internal load balancer for the application backends.

The following figure shows a deployment that uses two load balancers. A public load balancer works with the edge network WAF to connect to a web front end in a private subnet and an internal load balancer connects to the backend application.
 

Figure 2: Diagram of stacked load balancers

In this use case, we can still use the same structure of an edge network layer and an application layer, now implemented only with load balancers. When you deploy web applications using a three-tier approach, there is an external-facing and an internal application load balancer, and you can apply the same style of policy enforcement to both, just on load balancers instead of CloudFront.

Note: To deploy something similar to this example, you can use the template EdgeLayerALB-PrivateLayerALB-AMR.yml in the relevant regions where the load balancers have been deployed.

Alarms and logging

After deploying these AWS CloudFormation templates, you should consider setting CloudWatch alarms on certain metrics for the HTTP GET and HTTP POST flood rules, as well as for the reputation and anonymous IP lists. Customers who are comfortable with development may also opt to use Lambda responders with CloudWatch Events to trigger an update that changes a rule from COUNT to BLOCK. Enabling full logging for each web ACL will also give you higher visibility into each request and will make potential investigations easier.
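As one example of the kind of alarm described above, the following boto3 sketch raises an alarm when a flood-protection rule starts blocking requests. The web ACL and rule names are placeholders (use the names you chose at deployment), and the threshold is arbitrary.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the HTTP POST flood rule blocks more than 100 requests in 5 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="waf-post-flood-blocks",
    Namespace="AWS/WAFV2",
    MetricName="BlockedRequests",
    Dimensions=[
        {"Name": "WebACL", "Value": "Public-Application-Layer-WebACL"},
        {"Name": "Rule", "Value": "HTTPPostFloodProtection"},
        {"Name": "Region", "Value": "us-east-1"},
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)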

Conclusion

The new enhancements to AWS WAF make it easier to manage a multi-layer web application security enforcement policy, and to maintain and deploy web application firewall configurations across different deployment stages and different types of applications. By making use of AWS Managed Rules or partner rules, administrative overhead can be significantly reduced, and with AWS Firewall Manager, customers can enforce these policies across all of an organization's accounts. Part 2 of this post shows one example of how this can be done.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Daniel Cisco Swart

Daniel worked on AWS Managed Rules personally over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with security competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS's top security partners.

Migrating your rules from AWS WAF Classic to the new AWS WAF

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/migrating-rules-from-aws-waf-classic-to-new-aws-waf/

In November 2019, Amazon launched a new version of AWS Web Application Firewall (WAF) that offers a richer and easier to use set of features. In this post, we show you some of the changes and how to migrate from AWS WAF Classic to the new AWS WAF.

AWS Managed Rules for AWS WAF is one of the more powerful new capabilities in AWS WAF. It helps you protect your applications without needing to create or manage rules directly in the service. The new release includes other enhancements and a brand new set of APIs for AWS WAF.

Before you start, we recommend that you review How AWS WAF works as a refresher. If you’re already familiar with AWS WAF, please feel free to skip ahead. If you’re new to AWS WAF and want to know the best practice for deploying AWS WAF, we recommend reading the Guidelines for Implementing AWS WAF whitepaper.

What’s changed in AWS WAF

Here’s a summary of what’s changed in AWS WAF:

  • AWS Managed Rules for AWS WAF – a new capability that provides protection against common web threats and includes the Amazon IP reputation list and an anonymous IP list for blocking bots and traffic that originate from VPNs, proxies, and Tor networks.
  • New API (wafv2) – allows you to configure all of your AWS WAF resources using a single set of APIs instead of two (waf and waf-regional).
  • Simplified service limits – gives you more rules per web ACL and lets you define longer regex patterns. Limits per condition have been eliminated and replaced with web ACL capacity units (WCU).
  • Document-based rule writing – allows you to write and express rules in JSON format directly to your web ACLs. You’re no longer required to use individual APIs to create different conditions and associate them to a rule, greatly simplifying your code and making it more maintainable.
  • Rule nesting and full logical operation support – lets you write rules that contain multiple conditions, including OR statements. You can also nest logical operations, creating statements such as [A AND NOT(B OR C)].
  • Variable CIDR range support for IP sets – gives you more flexibility in defining the IP ranges you want to block. The new AWS WAF supports IPv4 ranges from /1 to /32 and IPv6 ranges from /1 to /128.
  • Chainable text transformation – allows you to perform multiple text transformations before executing a rule against incoming traffic.
  • Revamped console experience – features a visual rule builder and more intuitive design.
  • AWS CloudFormation support for all condition types – rules written in JSON can easily be converted into YAML format for use in CloudFormation templates.

Although there were many changes introduced, the concepts and terminology that you’re already familiar with have stayed the same. The previous APIs have been renamed to AWS WAF Classic. It’s important to stress that resources created under AWS WAF Classic aren’t compatible with the new AWS WAF.

About web ACL capacity units

Web ACL capacity units (WCUs) are a new concept that we introduced to AWS WAF in November 2019. WCU is a measurement that’s used to calculate and control the operating resources that are needed to run the rules associated with your web ACLs. WCU helps you visualize and plan how many rules you can add to a web ACL. The number of WCUs used by a web ACL depends on which rule statements you add. The maximum WCUs for each web ACL is 1,500, which is sufficient for most use cases. We recommend that you take some time to review how WCUs work and understand how each type of rule statement consumes WCUs before continuing with your migration.

Planning your migration to the new AWS WAF

We recently announced a new API and a wizard that will help you migrate from AWS WAF Classic to the new AWS WAF. At a high level, the wizard parses the web ACL under AWS WAF Classic and generates a CloudFormation template that, once deployed, creates an equivalent web ACL under the new AWS WAF. In this section, we explain how you can use the wizard to plan your migration.

Things to know before you get started

The migration wizard will first examine your existing web ACL. It will examine and record for conversion any rules associated to the web ACL, as well as any IP sets, regex pattern sets, string match filters, and account-owned rule groups. Executing the wizard will not delete or modify your existing web ACL configuration, or any resource associated with that web ACL. Once finished, it will generate an AWS CloudFormation template within your S3 bucket that represents an equivalent web ACL with all rules, sets, filters, and groups converted for use in the new AWS WAF. You’ll need to manually deploy the template in order to recreate the web ACL in the new AWS WAF.
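The wizard is backed by the CreateWebACLMigrationStack API, so the same conversion can be triggered from a script. A minimal sketch is below; the web ACL ID and bucket name are placeholders.

import boto3

# Use the "waf-regional" client for ALB/API Gateway web ACLs and the "waf"
# client (in us-east-1) for CloudFront web ACLs.
waf_classic = boto3.client("waf-regional", region_name="us-west-2")

response = waf_classic.create_web_acl_migration_stack(
    WebACLId="a1b2c3d4-example-web-acl-id",        # AWS WAF Classic web ACL ID
    S3BucketName="aws-waf-migration-helloworld",   # must start with aws-waf-migration-
    IgnoreUnsupportedType=True,                    # skip rules that can't be migrated
)

# The call returns the S3 location of the generated CloudFormation template,
# which you then review and deploy yourself.
print(response["S3ObjectUrl"])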

Please note the following limitations:

  • Only the AWS WAF Classic resources that are under the same account will be migrated over.
  • If you migrate multiple web ACLs that reference shared resources—such as IP sets or regex pattern sets—they will be duplicated under new AWS WAF.
  • Conditions associated with rate-based rules won’t be carried over. Once migration is complete, you can manually recreate the rules and the conditions.
  • Managed rules from AWS Marketplace won’t be carried over. Some sellers have equivalent managed rules that you can subscribe to in the new AWS WAF.
  • The web ACL associations won’t be carried over. This was done on purpose so that migration doesn’t affect your production environment. Once you verify that everything has been migrated over correctly, you can re-associate the web ACLs to your resources.
  • Logging for the web ACLs will be disabled by default. You can re-enable the logging once you are ready to switch over.
  • Any CloudWatch alarms that you may have will not be carried over. You will need to set up the alarms again once the web ACL has been recreated.

While you can use the migration wizard to migrate AWS WAF Security Automations, we don’t recommend doing so because it won’t convert any Lambda functions that are used behind the scenes. Instead, redeploy your automations using the new solution (version 3.0 and higher), which has been updated to be compatible with the new AWS WAF.

About AWS Firewall Manager and migration wizard

The current version of the migration API and wizard doesn’t migrate rule groups managed by AWS Firewall Manager. When you use the wizard on web ACLs managed by Firewall Manager, the associated rule groups won’t be carried over. Instead of using the wizard to migrate web ACLs managed by Firewall Manager, you’ll want to recreate the rule groups in the new AWS WAF and replace the existing policy with a new policy.

Note: In the past, the rule group was a concept that existed under Firewall Manager. However, with the latest change, we have moved the rule group under AWS WAF. The functionality remains the same.

Migrate to the new AWS WAF

Use the new migration wizard which creates a new executable AWS CloudFormation template in order to migrate your web ACLs from AWS WAF Classic to the new AWS WAF. The template is used to create a new version of the AWS WAF rules and corresponding entities.

  1. From the new AWS WAF console, navigate to AWS WAF Classic by choosing Switch to AWS WAF Classic. There will be a message box at the top of the window. Select the migration wizard link in the message box to start the migration process.
     
    Figure 1: Start the migration wizard

  2. Select the web ACL you want to migrate.
     
    Figure 2: Select a web ACL to migrate

  3. Specify a new S3 bucket for the migration wizard to store the AWS CloudFormation template that it generates. The S3 bucket name needs to start with the prefix aws-waf-migration-. For example, name it aws-waf-migration-helloworld. Store the template in the region you will be deploying it to. For example, if you have a web ACL that is in us-west-2, you would create the S3 bucket in us-west-2, and deploy the stack to us-west-2.

    Select Auto apply the bucket policy required for migration to have the wizard configure the permissions the API needs to access your S3 bucket.

    Choose how you want any rules that can’t be migrated to be handled. Select either Exclude rules that can’t be migrated or Stop the migration process if a rule can’t be migrated. The ability of the wizard to migrate rules is affected by the WCU limit mentioned earlier.
     

    Figure 3: Configure migration options

    Note: If you prefer, you can run the following code to manually set up your S3 bucket with the policy below to configure the necessary permissions before you start the migration. If you do this, select Use the bucket policy that comes with the S3 bucket. Don’t forget to replace <BUCKET_NAME> and <CUSTOMER_ACCOUNT_ID> with your information.

    For all AWS Regions (waf-regional):

    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "apiv2migration.waf-regional.amazonaws.com"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<BUCKET_NAME>/AWSWAF/<CUSTOMER_ACCOUNT_ID>/*"
            }
        ]
    }
    

    For Amazon CloudFront (waf):

    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "apiv2migration.waf.amazonaws.com"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<BUCKET_NAME>/AWSWAF/<CUSTOMER_ACCOUNT_ID>/*"
            }
        ]
    }
    

  4. Verify the configuration, then choose Start creating CloudFormation template to begin the migration. Creating the AWS CloudFormation template will take about a minute depending on the complexity of your web ACL.
     
    Figure 4: Create CloudFormation template

  5. Once completed, you have an option to review the generated template file and make modifications (for example, you can add more rules) should you wish to do so. To continue, choose Create CloudFormation stack.
     
    Figure 5: Review template

  6. Use the AWS CloudFormation console to deploy the new template created by the migration wizard. Under Prepare template, select Template is ready. Select Amazon S3 URL as the Template source. Before you deploy, we recommend that you download and review the template to ensure the resources have been migrated as expected.
     
    Figure 6: Create stack

  7. Choose Next and step through the wizard to deploy the stack. Once created successfully, you can review the new web ACL and associate it to resources.
     
    Figure 7: Complete the migration

Post-migration considerations

After you verify that migration has been completed correctly, you might want to revisit your configuration to take advantage of some of the new AWS WAF features.

For instance, consider adding AWS Managed Rules to your web ACLs to improve the security of your application. AWS Managed Rules feature three different types of rule groups:

  • Baseline
  • Use-case specific
  • IP reputation list

The baseline rule groups provide general protection against a variety of common threats, such as stopping known bad inputs from making it into your application and preventing admin page access. The use-case specific rule groups provide incremental protection for many different use cases and environments, and the IP reputation lists provide threat intelligence based on the client's source IP.

You should also consider revisiting some of the old rules and optimizing them by rewriting the rules or removing any outdated rules. For example, if you deployed the AWS CloudFormation template from our OWASP Top 10 Web Application Vulnerabilities whitepaper to create rules, you should consider replacing them with AWS Managed Rules. While the concepts found within the whitepaper are still applicable and might assist you in writing your own rules, the rules created by the template have been superseded by AWS Managed Rules.

For information about writing your own rules in the new AWS WAF using JSON, please watch the Protecting Your Web Application Using AWS Managed Rules for AWS WAF webinar. In addition, you can refer to the sample JSON/YAML to help you get started.

Revisit your CloudWatch metrics after the migration and set up alarms as necessary. The alarms aren’t carried over by the migration API and your metric names might have changed as well. You’ll also need to re-enable logging configuration and any field redaction you might have had in the previous web ACL.

Finally, use this time to work with your application team and check your security posture. Find out what fields are parsed frequently by the application and add rules to sanitize the input accordingly. Check for edge cases and add rules to catch these cases if the application’s business logic fails to process them. You should also coordinate with your application team when you make the switch, as there might be a brief disruption when you change the resource association to the new web ACL.

Conclusion

We hope that you found this post helpful in planning your migration from AWS WAF Classic to the new AWS WAF. We encourage you to explore the new AWS WAF experience as it features new enhancements, such as AWS Managed Rules, and offers much more flexibility in creating your own rules.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Umesh Kumar Ramesh

Umesh is a Sr. Cloud Infrastructure Architect with Amazon Web Services. He delivers proof-of-concept projects and topical workshops, and leads implementation projects for AWS customers. He holds a Bachelor's degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, Umesh enjoys watching documentaries, biking, and practicing meditation.

Author

Kevin Lee

Kevin is a Sr. Product Manager at Amazon Web Services, currently overseeing AWS WAF.

Building well-architected serverless applications: Controlling serverless API access – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-controlling-serverless-api-access-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Security question SEC1: How do you control access to your serverless API?

This post continues part 1 of this security question. Previously, I cover the different mechanisms for authentication and authorization available for Amazon API Gateway and AWS AppSync. I explain the different approaches for public or private endpoints and show how to use AWS Identity and Access Management (IAM) to control access to internal or private API consumers.

Required practice: Use appropriate endpoint type and mechanisms to secure access to your API

I continue to show how to implement security mechanisms appropriate for your API endpoint.

Using AWS Amplify CLI to add a GraphQL API

After adding authentication in part 1, I use the AWS Amplify CLI to add a GraphQL AWS AppSync API with the following command:

amplify add api

When prompted, I specify an Amazon Cognito user pool for authorization.

Amplify add Amazon Cognito user pool for authorization

To deploy the AWS AppSync API configuration to the AWS Cloud, I enter:

amplify push

Once the deployment is complete, I view the GraphQL API from within the AWS AppSync console and navigate to Settings. I see the AWS AppSync API uses the authorization configuration added during the part 1 amplify add auth. This uses the Amazon Cognito user pool to store the user sign-up information.

View AWS AppSync authorization settings with Amazon Cognito

For a more detailed walkthrough using Amplify CLI to add an AWS AppSync API for the serverless airline, see the build video.

Viewing JWT tokens

When I create a new account from the serverless airline web frontend, Amazon Cognito creates a user within the user pool. It handles the 3-stage sign-up process for new users. This includes account creation, confirmation, and user sign-in.

Serverless airline Amazon Cognito based sign-in process

Once the account is created, I browse to the Amazon Cognito console and choose Manage User Pools. I navigate to Users and groups under General settings and view my user account.

View User Account

When I sign in to the serverless airline web app, I authenticate with Amazon Cognito, and the client receives user pool tokens. The client then calls the AWS AppSync API, which authorizes access using the tokens, connects to data sources, and resolves the queries.

Amazon Cognito tokens used by AWS AppSync

During the sign-in process, I can use the browser developer tools to view the three JWT tokens Amazon Cognito generates and returns to the client. These are the accessToken, idToken, and refreshToken.

View tokens with browser developer tools

I copy the .idToken value and use the decoder at https://jwt.io/ to view the contents.

JSON web token decoded

The decoded token contains claims about my identity. Claims are pieces of information asserted about my identity. In this example, these include my Amazon Cognito username, email address, and other sign-up fields specified in the user pool. The client can use this identity information inside the application.
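If you prefer not to paste tokens into a third-party site, you can inspect the claims locally. The sketch below decodes only the payload segment; it does not verify the signature, so it must not be used to make authorization decisions.

import base64
import json

id_token = "<paste the idToken value here>"

# A JWT is three base64url-encoded segments: header.payload.signature.
payload_segment = id_token.split(".")[1]
padded = payload_segment + "=" * (-len(payload_segment) % 4)
claims = json.loads(base64.urlsafe_b64decode(padded))

# Which claims are present depends on the user pool's configured attributes.
print(claims.get("cognito:username"), claims.get("email"), claims.get("exp"))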

The ID token expires one hour after I authenticate. The client uses the Amazon Cognito issued refreshToken to retrieve new ID and access tokens. By default, the refresh token expires after 30 days, but can be set to any value between 1 and 3650 days. When using the mobile SDKs for iOS and Android, retrieving new ID and access tokens is done automatically with a valid refresh token.

For more information, see “Using Tokens with User Pools”.

Accessing AWS services

An Amazon Cognito user pool is a managed user directory to provide access for a user to an application. Amazon Cognito has a feature called identity pools (federated identities), which allow you to create unique identities for your users. These can be from user pools, or other external identity providers.

These unique identities are used to get temporary AWS credentials to directly access other AWS services, or external services via API Gateway. The Amplify client libraries automatically expire, rotate, and refresh the temporary credentials.

Identity pools have identities that are either authenticated or unauthenticated. Unauthenticated identities typically belong to guest users. Authenticated identities belong to authenticated users who have received a token by a login provider, such as a user pool. The Amazon Cognito issued user pool tokens are exchanged for AWS access credentials from an identity pool.

JWT tokens from the Amazon Cognito user pool exchanged for AWS credentials from the Amazon Cognito identity pool
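The Amplify client libraries perform this exchange for you, but under the hood the calls look roughly like the following boto3 sketch. The identity pool ID, provider name, and token are placeholders.

import boto3

identity_pool_id = "us-east-1:11111111-2222-3333-4444-555555555555"  # placeholder
user_pool_provider = "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE"  # placeholder
id_token = "<idToken returned by the user pool sign-in>"

cognito_identity = boto3.client("cognito-identity", region_name="us-east-1")

# Exchange the user pool ID token for an identity in the identity pool...
identity = cognito_identity.get_id(
    IdentityPoolId=identity_pool_id,
    Logins={user_pool_provider: id_token},
)

# ...and then for temporary AWS credentials scoped by the identity pool's roles.
credentials = cognito_identity.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={user_pool_provider: id_token},
)["Credentials"]

print(credentials["AccessKeyId"], credentials["Expiration"])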

API keys

For public content and unauthenticated access, both Amazon API Gateway and AWS AppSync provide API keys that can be used to track usage. API keys should not be used as a primary authorization method for production applications. Instead, use these for rate limiting and throttling. Unauthenticated APIs require stricter throttling than authenticated APIs.

API Gateway usage plans specify who can access API stages and methods, and also how much and how fast they can access them. API keys are then associated with the usage plans to identify API clients and meter access for each key. Throttling and quota limits are enforced on individual keys.

Throttling limits determine how many requests per second are allowed for a usage plan. This is useful to prevent a client from overwhelming a downstream resource. There are two API Gateway values to control this, the throttle rate and throttle burst, which use the token bucket algorithm. The algorithm is based on an analogy of filling and emptying a bucket of tokens representing the number of available requests that can be processed. The bucket in the algorithm has a fixed size based on the throttle burst and is filled at the token rate. Each API request removes a token from the bucket. The throttle rate then determines how many requests are allowed per second. The throttle burst determines how many concurrent requests are allowed and is shared across all APIs per Region in an account.

Token bucket algorithm
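The following toy implementation illustrates the refill-and-consume behavior that the throttle rate and burst settings control. It's a sketch of the model, not API Gateway's actual implementation.

import time

class TokenBucket:
    """Minimal token bucket: refill at `rate` tokens/second, hold at most `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate              # throttle rate (tokens added per second)
        self.capacity = burst         # throttle burst (bucket size)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # each request consumes one token
            return True
        return False                  # bucket empty: request throttled (HTTP 429)

bucket = TokenBucket(rate=1, burst=2)
print([bucket.allow() for _ in range(4)])   # for example: [True, True, False, False]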

Quota limits allow you to set a maximum number of requests for an API key within a fixed time period. When billing for usage, this also allows you to enforce a limit when a client pays by monthly volume.

API keys are passed using the x-api-key header. API Gateway rejects requests without them.

For example, within the serverless airline, the loyalty service uses an AWS Lambda function to fetch loyalty points and next tier progress via an API Gateway REST API /loyalty/{customerId}/get resource.

I can use this API to simulate the effect of usage plans with API keys.

  1. I navigate to the airline-loyalty API /loyalty/{customerId}/get resource in the API Gateway console.
  2. I change the API Key Required value to true.

    Setting API Key Required on API Gateway method

  3. I choose Deploy API from the Actions menu.
  4. I create a usage plan in the Usage Plans section of the API Gateway console.
  5. I choose Create and enter a name for the usage plan.
  6. I select Enable throttling and set the rate to one request per second and the burst to two requests. These are artificially low numbers to simulate the effect.
  7. I select Enable quota and set the limit to 10 requests per day.

    Create API Gateway usage plan

  8. I click Next.
  9. I associate an API stage by choosing Add API Stage, and selecting the airline Loyalty API and Prod Stage.

    Associate usage plan to API Gateway stage

  10. I click Next, and choose Create API Key and add to Usage Plan.

    Create API key and add to usage plan

  11. I name the API key and ensure it is set to Auto Generate.

    Name API Key

  12. I choose Save, then Done, to associate the API key with the usage plan.

API key associated with usage plan
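The same usage plan, API key, and association can be created with a few API calls instead of the console. A sketch follows; the REST API ID, stage, and names are placeholders.

import boto3

apigateway = boto3.client("apigateway", region_name="us-east-1")

# Placeholder API ID and stage for the airline loyalty API.
plan = apigateway.create_usage_plan(
    name="loyalty-usage-plan",
    throttle={"rateLimit": 1.0, "burstLimit": 2},
    quota={"limit": 10, "period": "DAY"},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "Prod"}],
)

key = apigateway.create_api_key(name="loyalty-api-key", enabled=True)

# Attach the key to the usage plan so its throttle and quota apply to it.
apigateway.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)

print(key["value"])   # send this value in the x-api-key header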

I test the API authentication, in addition to the throttles and limits, using Postman.

I issue a GET request against the API Gateway URL using a customerId from the airline Airline-LoyaltyData Amazon DynamoDB table. I don’t specify any authorization or API key.

Postman unauthenticated GET request

I receive a Missing Authentication Token reply, which I expect as the API uses IAM authentication and I haven’t authenticated.

I then configure authentication details within the Authorization tab, using an AWS Signature. I enter my AWS user account’s AccessKey and SecretKey, which has an associated IAM identity policy to access the API.

Postman authenticated GET request without access key

I receive a Forbidden reply. I have successfully authenticated, but the API Gateway method rejects the request as it requires an API key, which I have not provided.

I retrieve and copy my previously created API key from the API Gateway console API Keys section, and display it by choosing Show.

Retrieve API key.

I then configure an x-api-key header in the Postman Headers section and paste the API key value.

Having authenticated and specifying the required API key, I receive a response from the API with the loyalty points value.

Postman successful authenticated GET request with access key
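Outside of Postman, the same request can be reproduced with a short script that signs the call with SigV4 and adds the API key header. The URL and key value are placeholders, and the script assumes AWS credentials are available in your environment (it uses the third-party requests package).

import requests  # third-party: pip install requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session

# Placeholders: use your API Gateway invoke URL, customerId, and API key value.
url = "https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/Prod/loyalty/customer-123/get"
api_key = "<API key value from the API Gateway console>"

credentials = Session().get_credentials()

# Sign the request with SigV4 for the execute-api service in us-east-1.
aws_request = AWSRequest(method="GET", url=url, headers={"x-api-key": api_key})
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(aws_request)

# Both the IAM signature and the API key are required by this method.
response = requests.get(url, headers=dict(aws_request.headers))
print(response.status_code, response.text)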

I then call the API with a number of quick successive requests.

When I exceed the throttle rate limit of one request per second, and the throttle burst limit of two requests, I receive:

{"message": "Too Many Requests"}

When I then exceed the quota of 10 requests per day, I receive:

{"message": "Limit Exceeded"}

I view the API key usage within the API Gateway console Usage Plan section.

I select the usage plan, choose the API Keys section, then choose Usage. I see how many requests I have made.

View API key usage

If necessary, I can also grant a temporary rate extension for this key.

For more information on using API Keys for unauthenticated access for AWS AppSync, see the documentation.

API Gateway also supports AWS Web Application Firewall (AWS WAF), which helps protect web applications and APIs from attacks. It is another mechanism for applying rate-based rules to prevent public API consumers from exceeding a configurable request threshold. AWS WAF rules are evaluated before other access control features, such as resource policies, IAM policies, Lambda authorizers, and Amazon Cognito authorizers. For more information, see "Using AWS WAF with Amazon API Gateway".

AWS AppSync APIs have built-in DDoS protection to protect all GraphQL API endpoints from attacks.

Improvement plan summary:

  1. Determine your API consumer and choose an API endpoint type.
  2. Implement security mechanisms appropriate to your API endpoint

Conclusion

Controlling serverless application API access using authentication and authorization mechanisms can help protect against unauthorized access and prevent unnecessary use of resources.

In this post, I cover using Amplify CLI to add a GraphQL API with an Amazon Cognito user pool handling authentication. I explain how to view JSON Web Token (JWT) claims, and how to use identity pools to grant temporary access to AWS services. I also show how to use API keys and API Gateway usage plans for rate limiting and throttling requests.

This well-architected question will be continued where I look at segregating authenticated users into logical groups. I will first show how to use Amazon Cognito user pool groups to separate users with an Amazon Cognito authorizer to control access to an API Gateway method. I will also show how to pass JWTs to a Lambda function to perform authorization within a function. I will then explain how to also segregate users using custom scopes by defining an Amazon Cognito resource server.

Deploy a dashboard for AWS WAF with minimal effort

Post Syndicated from Tomasz Stachlewski original https://aws.amazon.com/blogs/security/deploy-dashboard-for-aws-waf-minimal-effort/

In this post, I’ll show you how to deploy a solution in your Amazon Web Services (AWS) account that will provide a fully automated dashboard for AWS Web Application Firewall (WAF) service. The solution uses logs generated and collected by AWS WAF, and displays them in a user-friendly dashboard shown in Figure 1.

Figure 1: User-friendly dashboard for AWS Web Application Firewall

The dashboard provides multiple out-of-the-box graphs that you can reference, filter, and adjust. The example in Figure 1 shows data from a sample web page that I created, where you can see:

  • Executed AWS WAF rules
  • Number of all requests
  • Number of blocked requests
  • Allowed versus blocked requests
  • Countries by number of requests
  • HTTP methods
  • HTTP versions
  • Unique IP count
  • Request count
  • Top 10 IP addresses
  • Top 10 countries
  • Top 10 user-agents
  • Top 10 hosts
  • Top 10 web ACLs

The dashboard is created using Kibana, which provides flexibility by enabling you to add new diagrams and visualizations.

AWS WAF is a web application firewall. It helps protect your web applications or APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. In just a few steps, you can deploy AWS WAF to your Application Load Balancer, Amazon CloudFront distribution, or Amazon API Gateway stages. I’ll show you how you can use it to get more insights into what’s happening at the AWS WAF layer. AWS WAF provides two versions of the service: AWS WAF (version 2) and AWS WAF classic. We recommend using version 2 of AWS WAF to stay up to date with the latest features as AWS WAF classic is no longer being updated. The solution that I describe in this blog post works with both AWS WAF versions.

The solution is swift to deploy: the dashboard can be ready to use in less than an hour. The solution is built with multiple AWS services such as Amazon Elasticsearch Service (Amazon ES), AWS Lambda, Amazon Kinesis Data Firehose, Amazon Cognito, Amazon EventBridge, and more. However, you don't need to know those services in detail to build and use the dashboard. I prepared a CloudFormation template that you can deploy in the AWS Management Console to set up the whole solution automatically in your AWS account. You can also find the whole solution in our AWS GitHub repository. It's open source, so you can use and edit it to meet your needs.

The architecture of the solution can be broken down into 7 steps, which are outlined in Figure 2.

Figure 2: Interaction points while architecting the dashboard

The interaction points are as follows:

  1. One of the functionalities of AWS WAF service is AWS WAF logs. The logs capture information about blocked and allowed requests. These logs are forwarded to Kinesis Data Firehose service.
  2. Kinesis Data Firehose buffer receives information, and then sends it to Amazon ES—the core of the solution.
  3. Some information, like the names of AWS WAF web ACLs, isn't provided in the AWS WAF logs. To make the whole solution more user-friendly, I used EventBridge, which is called whenever a user changes their AWS WAF configuration.
  4. EventBridge will call a Lambda function when new rules are created.
  5. Lambda will retrieve the information about all existing rules and it will update the mapping between IDs of the rules and their names in the Amazon ES cluster.
  6. To make the whole solution more secure, I’m using Amazon Cognito service to store the credentials of authorized dashboard users.
  7. The user enters their credentials to access the dashboard on Kibana which is installed on Amazon ES cluster.
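
To illustrate steps 4 and 5, here is a rough Python sketch of what such a mapping function could look like. This is not the solution's actual code: it assumes the WAFv2 API, the handler and index names are illustrative, and writing the documents into the Amazon ES index (which requires a signed HTTP request to your domain endpoint) is only indicated by a placeholder.

# Rough sketch of the mapping Lambda in steps 4-5 (illustrative, not the solution's code).
import boto3

def list_web_acl_mappings():
    """Return [{'Id': ..., 'Name': ...}, ...] for CloudFront and regional web ACLs (WAFv2)."""
    # The CLOUDFRONT scope must be queried through the us-east-1 endpoint.
    client = boto3.client('wafv2', region_name='us-east-1')
    mappings = []
    for scope in ('CLOUDFRONT', 'REGIONAL'):
        for acl in client.list_web_acls(Scope=scope).get('WebACLs', []):
            mappings.append({'Id': acl['Id'], 'Name': acl['Name']})
    return mappings

def handler(event, context):
    for doc in list_web_acl_mappings():
        # Placeholder: index doc into the Amazon ES mapping index that Kibana reads,
        # using a signed HTTP request to your domain endpoint.
        print(doc)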

Now, let’s deploy the solution and see how it works.

Step 1: Deploy solution using CloudFormation template

Click Launch Stack to launch a CloudFormation stack in your account and deploy the solution.

You’ll be redirected to the CloudFormation console in the US East (N. Virginia) Region, which is the default Region for deploying this solution for an AWS WAF web ACL associated with CloudFront. You can change the Region if you want. This template will spin up multiple cloud resources, including but not limited to:

  • Amazon ES cluster with Kibana for storing data and displaying dashboard
  • Amazon Cognito user pool with a registry of users who have access to dashboards
  • Kinesis Data Firehose for streaming logs to Amazon ES

In the wizard, you’ll be asked to modify or provide four different parameters. They are:

  • DataNodeEBSVolumeSize: The storage size of the Amazon ES cluster that will be created. You can leave the default value.
  • ElasticSearchDomainName: The name of your Amazon ES cluster domain. You can leave the default value.
  • NodeType: The instance type used for the Amazon ES cluster nodes. You can keep the default value, or change it to accommodate your needs.
  • UserEmail: You must update this parameter. It is the email address that will receive the password to log in to Kibana.

Step 2: Wait

The process of launching the stack, which I named aws-waf-dashboard for this example, takes 20–30 minutes. You can take a break and wait until the status of the stack changes to CREATE_COMPLETE.

Figure 3: Completed launch of the CloudFormation template

Step 3: Validate that Kibana and dashboards work

Check your email. You should have received an email with the password required to log in to the Kibana dashboard. Make a note of it. Now return to the CloudFormation console and select the aws-waf-dashboard stack. On the Outputs tab, there should be one parameter with a link to your dashboard in the Value column.

Figure 4: Output of the CloudFormation template

Select the link and log in to Kibana. Provide the email address that you set up in Step 1 and the password that was sent to it. You might be prompted to update the password.

In Kibana, select the Dashboard tab, as shown in Figure 5, and then select WAFDashboard in the table. This will call up the AWS WAF dashboard. It should still be empty because it hasn’t been connected with AWS WAF yet.

Figure 5: Empty Kibana dashboard

Step 4: Connect AWS WAF logs

Now it’s time to enable AWS WAF logs on the web ACL for which you want to create a dashboard and connect them to this solution. Open AWS WAF, select the AWS WAF dropdown option, select Web ACLs, and then select your desired web ACL. In this example, I used a previously created web ACL called MyPageWAF, as shown in Figure 6.

Figure 6: WAF & Shield

If you haven’t enabled AWS WAF logs yet, you need to do so now in order to continue. To do this, select Logging and metrics in your web ACL, and then Enable logging, as shown in Figure 7.

Figure 7: Enable AWS WAF logs

Select the drop-down list under Amazon Kinesis Data Firehose Delivery Stream, and then select the Kinesis Data Firehose delivery stream that was created by the CloudFormation template in Step 1. Its name starts with aws-waf-logs-. Save your changes.

Figure 8: Select the Kinesis Firehose

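If you prefer to connect the logs from a script rather than the console, the following is a minimal boto3 sketch. It assumes a WAFv2 web ACL, and WEB_ACL_ARN and FIREHOSE_ARN are placeholders for your own ARNs rather than values from this post.

# Minimal sketch: associate the aws-waf-logs- delivery stream with a WAFv2 web ACL.
import boto3

WEB_ACL_ARN = '<your web ACL ARN>'                          # placeholder
FIREHOSE_ARN = '<your aws-waf-logs- delivery stream ARN>'   # placeholder

wafv2 = boto3.client('wafv2')
wafv2.put_logging_configuration(
    LoggingConfiguration={
        'ResourceArn': WEB_ACL_ARN,
        'LogDestinationConfigs': [FIREHOSE_ARN],
    }
)
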
Step 5: Final validation

Your AWS WAF logs will be sent from the AWS WAF service through Kinesis Data Firehose directly to an Amazon ES cluster and will be available to you using Kibana dashboards. After a couple of minutes, you should start seeing data on your dashboard similar to the screenshot in Figure 1.

And that’s all! As you can see, in just a few steps we built and deployed a solution that we can use to examine our AWS WAF configuration, see which requests are being made, and check whether they’re blocked or allowed.

Sample Scenario

Let’s go through a sample scenario to see one way you can use this solution. I built a small website for my dog and configured CloudFront to accelerate it and to make it more secure.

Figure 9: Java the Dog’s homepage

Next, I configured an AWS WAF web ACL and attached it to my CloudFront distribution, which is the entry point of my website. In my AWS WAF web ACL, I didn’t add any rules, but allowed all requests. This will allow me to log all requests and understand who is visiting my website. Then I configured an AWS WAF dashboard by following the steps in this blog.

My imaginary website is mainly dedicated to three countries—USA, Germany and Japan—where French Bulldogs are very popular. I noticed that I got quite a lot of users from India, which was unexpected. In Figure 10, the AWS WAF dashboard includes data from all four countries and tells me there have been over 11,000 requests for my website.

Figure 10: Kibana dashboard with requests from USA, Japan, Germany, and India

To understand the data better, I filtered on requests coming only from India, which is shown in Figure 11:

Figure 11: Website requests coming from India only

The dashboard shows that I got more than 700 requests from India in the previous hour. This could have been a great success for my website, but unfortunately, all the requests were coming from a single IP address. Additionally, most of them have a suspicious user-agent header: "secret-hacker-agent." This information is provided in the Visualize tab in Kibana, shown in Figure 12.

Figure 12: Visualize tab of Kibana dashboard

This doesn’t look good, so I decided to block those requests using AWS WAF.

So, the question now is: what exactly should I block? I can block all requests coming from India, but this isn't the best idea because there might be other Indian fans of French Bulldogs. I can block this single IP address, but the hacker might use a different IP to continue hitting my website. Finally, I decided to create an AWS WAF rule that inspects the user-agent header. If the user-agent header contains "secret-hacker-agent," the rule will block the request.

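For reference, here is a sketch of what such a rule looks like in the JSON-style format used by the current AWS WAF (WAFv2) API, written as a Python dict that could be passed to the wafv2 APIs. The rule name, priority, and metric name are illustrative and are not taken from this post.

# Illustrative WAFv2 rule definition: block requests whose User-Agent header
# contains "secret-hacker-agent" (after lowercasing the header value).
block_suspicious_agent = {
    'Name': 'BlockSecretHackerAgent',   # illustrative name
    'Priority': 0,
    'Action': {'Block': {}},
    'Statement': {
        'ByteMatchStatement': {
            'SearchString': b'secret-hacker-agent',
            'FieldToMatch': {'SingleHeader': {'Name': 'user-agent'}},
            'TextTransformations': [{'Priority': 0, 'Type': 'LOWERCASE'}],
            'PositionalConstraint': 'CONTAINS',
        }
    },
    'VisibilityConfig': {
        'SampledRequestsEnabled': True,
        'CloudWatchMetricsEnabled': True,
        'MetricName': 'BlockSecretHackerAgent',
    },
}
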
Within a couple of minutes of configuring my AWS WAF rule, I noticed that I was still getting requests from India, but this time, requests with the suspicious user-agent header were blocked! As shown in Figure 13, there were around 2,700 requests, but about 2,000 of them were blocked.

Figure 13: Blocked suspicious requests

In reality, I was attacking my own website as secret-hacker-agent for the sake of the example. You can see in the following command line screenshot that my request (using wget) with the suspicious user-agent header was blocked (receiving a “403 Forbidden” message). When I use a different header (“good-agent”), my request passes the AWS WAF rule successfully.

Figure 14: Command line screenshot of the ‘wget’ request

Summary

In this post we’ve detailed how to deploy a dashboard for AWS WAF in a few steps, and how to use it to troubleshoot and block a web application attack. Now it’s your turn to deploy this solution for your own application. Please share your feedback about the solution and the dashboard. You can submit comments in the Comments section below or on the project’s GitHub page.

This post was inspired by a blog post created by my friend Tom Adamski, who also described how to use Kibana and Amazon ES to visualize AWS WAF logs, and with help of Achraf Souk, who contributed his specialist knowledge in AWS edge services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tomasz Stachlewski

Tomasz is a Senior Solution Architecture Manager at AWS, where he helps companies of all sizes (from startups to enterprises) in their Cloud journey. He is a big believer in innovative technology such as serverless architecture, which allows organizations to accelerate their digital transformation.

AWS Shield Threat Landscape report is now available

Post Syndicated from Mario Pinho original https://aws.amazon.com/blogs/security/aws-shield-threat-landscape-report-now-available/

AWS Shield is a managed threat protection service that safeguards applications running on AWS against exploitation of application vulnerabilities, bad bots, and Distributed Denial of Service (DDoS) attacks. The AWS Shield Threat Landscape Report (TLR) provides you with a summary of threats detected by AWS Shield. This report is curated by the AWS Threat Response Team (TRT), who continually monitors and assesses the threat landscape to build protections on behalf of AWS customers. This includes rules and mitigations for services like AWS Managed Rules for AWS WAF and AWS Shield Advanced. You can use this information to expand your knowledge of external threats and improve the security of your applications running on AWS.

Here are some of our findings from the most recent report, which covers Q1 2020:

Volumetric Threat Analysis

AWS Shield detects network and web application-layer volumetric events that may indicate a DDoS attack, web content scraping, account takeover bots, or other unauthorized, non-human traffic. In Q1 2020, we observed significant increases in the frequency and volume of network volumetric threats, including a CLDAP reflection attack with a peak volume of 2.3 Tbps.

You can find a summary of the volumetric events detected in Q1 2020, compared to the same quarter in 2019, in the following table:

Metric                      | Same Quarter, Prior Year (Q1 2019) | Most Recent Quarter (Q1 2020) | Change
Total number of events      | 253,231                            | 310,954                       | +23%
Largest bit rate (Tbps)     | 0.8                                | 2.3                           | +188%
Largest packet rate (Mpps)  | 260.1                              | 293.1                         | +13%
Largest request rate (rps)  | 1,000,414                          | 694,201                       | -31%
Days of elevated threat*    | 1                                  | 3                             | +200%

* Days of elevated threat indicates the number of days during which the volume or frequency of events was unusually high.

Malware Threat Analysis

AWS operates a threat intelligence platform that monitors Internet traffic and evaluates potentially suspicious interactions. We observed significant increases in both the total number of events and the number of unique suspects, relative to the prior quarter. The most common interactions observed in Q1 2020 were Remote Code Execution (RCE) attempts on Apache Hadoop YARN applications, where the suspect attempts to exploit the API of a Hadoop cluster’s resource management system and execute code without authorization. In March 2020, these interactions accounted for 31% of all events detected by the threat intelligence platform.

You can find a summary of the events detected by the threat intelligence platform in Q1 2020, compared to the prior quarter, in the following table:

Metric                            | Prior Quarter (Q4 2019) | Most Recent Quarter (Q1 2020) | Change
Total number of events (billion)  | 0.7                     | 1.1                           | +57%
Unique suspects (million)         | 1.2                     | 1.6                           | +33%

 

For more information about the threats detected by AWS Shield in Q1 2020 and steps that you can take to protect your applications running on AWS, download the AWS Shield Threat Landscape Report.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Shield forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mario Pinho

Mario Pinho is a Security Engineer at AWS. He has a background in network engineering and consulting, and feels at his best when breaking apart complex topics and processes into their simpler components. In his free time he pretends to be an artist by playing piano and doing landscape photography.

Enable automatic logging of web ACLs by using AWS Config

Post Syndicated from Mike George original https://aws.amazon.com/blogs/security/enable-automatic-logging-of-web-acls-by-using-aws-config/

In this blog post, I will show you how to use AWS Config, with its auto-remediation functionality, to ensure that all web ACLs have logging enabled. The AWS CloudFormation template included in this blog post will facilitate this solution and get you started with managing web ACL logging at scale.

AWS Firewall Manager can automatically deploy an AWS Web Application Firewall (WAF) rule to protect your applications when your organization creates new Application Load Balancers, API Gateways, and CloudFront distributions. However, you still have to enable logging for web ACLs on an individual basis. Information contained in web ACL logs includes the time that AWS WAF received the request for your AWS resource, detailed information about the request, and the action for the rule that each request matched. This data can be extremely important for compliance and auditing needs, debugging, or forensic research.

Web ACL logging is a best practice, and is a business requirement within many organizations. Rather than leaving logging as a manual step in a deployment process, I will show you how to use automated mechanisms to enable logging, so that your business can meet its security and compliance requirements.

Prerequisites

The solution in this blog post assumes that you are already using AWS Web Application Firewall (AWS WAF) and AWS Firewall Manager to manage your firewall rules at scale. The AWS services used in this blog post include the following:

  • AWS WAF
  • AWS Firewall Manager
  • AWS Config
  • AWS Systems Manager
  • AWS Lambda
  • Amazon Kinesis Data Firehose
  • Amazon S3
  • AWS CloudFormation
  • AWS Identity and Access Management (IAM)

Using AWS Config to ensure automatic logging

AWS Config is a service that enables you to evaluate the configurations of the AWS resources in your account. AWS Config continuously monitors and records resource configuration changes. AWS Config can alert you and perform actions when resources get added, removed, or change state. AWS Config has a set of built-in rules that it can evaluate your AWS resources against, or you can build your own AWS Config rules.

In fact, when you enable AWS Firewall Manager to automatically apply AWS WAF rules to your Application Load Balancers, API Gateways, or CloudFront distributions, AWS Firewall Manager creates AWS Config rules behind the scenes. These AWS Config rules are designed so that the correct web ACLs are automatically applied whenever new Application Load Balancers, API Gateways, or CloudFront distributions are created. Enterprises use AWS Config rules to ensure consistent compliance with their internal organizational policies. You can use AWS Config to ensure that your AWS WAF rules have logging enabled.

When creating custom AWS Config rules, you associate each custom rule with an AWS Lambda function, which contains the logic that evaluates whether your AWS resource complies with the rule. You can configure the custom AWS Config rule to invoke the Lambda function in response to a configuration change, or to run periodically. After the Lambda function executes, it evaluates whether your resource complies with your rule, and it then sends the results back to AWS Config. If the resource violates the conditions of the rule, then AWS Config flags the resource as noncompliant. For more information, see How AWS Config Works in the AWS Developer Documentation.

You can also perform auto-remediation on non-compliant resources by using the built-in remediation functionality in AWS Config. When AWS Config detects a noncompliant resource, it can invoke an automation function that is defined as a Systems Manager Automation document. Systems Manager has a number of pre-built Automation documents that can do things such as create an Amazon Machine Image (AMI), create a Jira issue, and create a ServiceNow incident. For the full list of built-in Automation documents, see Systems Manager Automation Document Details Reference.

You can also create your own Automation documents to support business cases not covered by the built-in Systems Manager Automation documents. Systems Manager Automation documents can run scripts, call AWS API functions, call custom Lambda functions, execute a CloudFormation stack, and more.

Overview of the solution

The following is a high-level overview diagram of the solution described in this post, when an AWS WAF web ACL has a configuration change:
 

Figure 1: High-level solution overview

When an AWS WAF web ACL has a configuration change, the following steps will occur:

  1. The creation of the AWS WAF web ACL generates a ConfigurationItemChangeNotification, which is sent to AWS Config (step 1).
  2. AWS Config in turn sends the notification on to an AWS Lambda function (step 2), which determines if the web ACL in question is “compliant”. In this case, compliant means that the web ACL has logging configured.
  3. Lambda queries the web ACL (step 3) to determine if logging is enabled.
  4. The Lambda query results are then reported back to AWS Config (step 4).
  5. If logging is not enabled, the web ACL is seen as noncompliant and AWS Config kicks off an auto-remediation step (step 5) by executing a Systems Manager Automation document.
  6. The Automation document calls a Lambda function (step 6).
  7. The Lambda function attempts to enable logging on the web ACL (step 7).
  8. If logging is successfully enabled, then the web ACL automatically sends logs through a Kinesis Data Firehose delivery stream (step 8).
  9. The Kinesis Data Firehose delivery stream stores the data in an S3 bucket (step 9).
  10. After the Lambda function has completed enabling logging functionality, it reports back to Systems Manager (step 10).
  11. Systems Manager reports back to AWS Config (step 11).
  12. At this point, the web ACL compliance status still hasn’t been updated. AWS Config still believes the web ACL is noncompliant, so AWS Config calls the Lambda function (step 2) to determine if the compliance status has changed.
  13. Lambda checks the web ACL again (step 3), determines that it is compliant, and returns the results to AWS Config (step 4).

Because AWS Config stores the compliance history of the web ACL configuration, compliance team members will be able to go into AWS Config and see the history of the web ACL, as shown in the following screenshot. You will be able to see that the configuration state was noncompliant when the web ACL was created, and that it became compliant after logging was enabled.
 

Figure 2: Web ACL compliance history in AWS Config

Using the CloudFormation template

To automatically enable logging on all web ACLs, I created a CloudFormation template for you to use to set up all the necessary components. The CloudFormation template creates the following:

  • An S3 bucket to store the logs.
  • A Kinesis Data Firehose delivery stream.
  • An AWS Config rule.
  • A Systems Manager Automation document.
  • Two Lambda functions. The first Lambda function is used by AWS Config to evaluate whether the web ACL has logging enabled. The second Lambda function is used by the Systems Manager Automation document to automatically enable logging.
  • AWS IAM policies and roles to ensure that everything works correctly.

I designed this CloudFormation template to be executed in an AWS account that already has AWS Firewall Manager enabled; however, it will not prevent you from running it in an AWS account that does not have it enabled. Accounts without AWS Firewall Manager won’t benefit from the central configuration and management that AWS Firewall Manager provides. However, this stack will still allow you to ensure that existing or new web ACLs have logging enabled.

To deploy the template

  1. Copy the CloudFormation template file that follows these instructions, and save it to your computer.
  2. Sign in to the AWS account where you want to deploy this stack.
  3. Choose Services, choose CloudFormation, and then choose Stacks.
  4. In the upper right, choose Create stack, and then choose With new resources (standard).
  5. In the Specify template section, choose Upload a template file, and then select Choose file.
  6. Navigate to the file that you saved in step 1. Choose Next.
  7. In the Stack name field, enter a stack name that is meaningful to you. Choose Next, and choose Next again.
  8. Select the checkbox that says I acknowledge that AWS CloudFormation might create IAM resources and choose the Create stack button.

CloudFormation template file


#
# This CloudFormation template enables auto-logging of web ACLs through the use of 
# AWS Config and Systems Manager Automation documents.
#
# This solution creates an S3 bucket, a Kinesis Data Firehose, an AWS Config rule, 
# a Systems Manager Automation document, and two Lambda functions to evaluate and 
# remediate when web ACLs are not configured for logging.
#

Outputs:
  S3BucketName:
    Value: !Ref S3Bucket
  FirehoseName:
    Value: !Ref Firehose

Resources:
  S3Bucket:
    Type: AWS::S3::Bucket

  Firehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamName:
        !Join
          - ''
          - - aws-waf-logs-
            - !Ref AWS::StackName
      ExtendedS3DestinationConfiguration:
        RoleARN: !GetAtt DeliveryRole.Arn
        BucketARN: !GetAtt S3Bucket.Arn
        BufferingHints:
          IntervalInSeconds: 300
          SizeInMBs: 5
        CompressionFormat: UNCOMPRESSED

  DeliveryRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: ''
            Effect: Allow
            Principal:
              Service: firehose.amazonaws.com
            Action: 'sts:AssumeRole'
            Condition:
              StringEquals:
                'sts:ExternalId': !Ref 'AWS::AccountId'


  DeliveryPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: 'firehose_delivery_policy'
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - 's3:AbortMultipartUpload'
              - 's3:GetBucketLocation'
              - 's3:GetObject'
              - 's3:ListBucket'
              - 's3:ListBucketMultipartUploads'
              - 's3:PutObject'
            Resource:
              - !GetAtt S3Bucket.Arn
              - !Join
                - ''
                - - !GetAtt S3Bucket.Arn
                  - '/*'
      Roles:
        - !Ref DeliveryRole

  AutomationDoc:
    Type: "AWS::SSM::Document"
    Properties:
      Content:
        schemaVersion: "0.3"
        description: "Adds logging to non-compliant WebACLs"
        assumeRole: "{{ AutomationAssumeRole }}"
        parameters:
          WebACLId:
            type: "String"
            description: "(Required) The WebACLId of the WebACL"
          AutomationAssumeRole:
            type: "String"
            description: "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf"
        mainSteps:
          - name: performRemediation
            action: aws:invokeLambdaFunction
            inputs:
              FunctionName: !GetAtt WafLambda.Arn
              Payload: '{"webAclName":"{{ WebACLId }}"}'
      DocumentType: Automation


  WafLambda:
    Type: AWS::Lambda::Function
    Properties:
      # The AmazonSSMAutomationRole role expects the Lambda function name to begin with Automation*
      FunctionName: !Sub Automation-${AWS::StackName}-EnableWafLogging
      Code:
        ZipFile:
          !Sub |
            import boto3
            import json
            import os

            #
            # This Lambda function ensures that all WAF web ACLs have logging enabled.
            #
            # Trigger Type: SSM Automation
            # Scope of Automation: AWS::WAF::WebACL & AWS::WAFRegional::WebACL
            #

            FIREHOSE_ARN = os.environ['FIREHOSE_ARN']
            CONFIG_RULE_NAME = os.environ['CONFIG_RULE_NAME']

            def evaluate_compliance(webAclName):
              hasConfig = False

              #Setting up variables
              client = ''
              response = ''
              wafArn = ''

              #Check if this is a WAFv2. The ResourceId passed in is already the ARN
              if webAclName.find('arn:aws:wafv2:') >= 0:
                wafArn = webAclName
                client = boto3.client('wafv2')
              else:

                isWebAcl = True
                #Test if this is AWS::WAF::WebACL
                try:
                  print('Testing for WAF::WebACL')
                  client = boto3.client('waf')
                  response = client.get_web_acl(WebACLId=webAclName)
                except:
                  isWebAcl = False
                  pass

                if not isWebAcl:
                  #Test if this is AWS::WAFRegional::WebACL
                  try:
                    print('Testing for WAFRegional::WebACL')
                    client = boto3.client('waf-regional')
                    response = client.get_web_acl(WebACLId=webAclName)
                  except:
                    pass

                wafArn = response['WebACL']['WebACLArn']

              try:
                response = client.get_logging_configuration(ResourceArn=wafArn)
                hasConfig = True
              except:
                print('Attempting to fix non-compliance')
                print('WAF ARN: ' + wafArn)
                response = client.put_logging_configuration(LoggingConfiguration={'ResourceArn': wafArn,'LogDestinationConfigs': [ FIREHOSE_ARN ]})

            def regen_compliance():
              try:
                print("Attempting to re-run AWS Config rule to update compliance status")
                client = boto3.client('config')
                response = client.start_config_rules_evaluation(ConfigRuleNames=[CONFIG_RULE_NAME])
              except:
                pass

            def handler(event, context):
              aclName = event['webAclName']
              evaluate_compliance(aclName)

              regen_compliance()

      Handler: "index.handler"
      Environment:
        Variables:
          FIREHOSE_ARN: !GetAtt Firehose.Arn
          CONFIG_RULE_NAME: !Ref ConfigRule
      Runtime: python3.7
      Timeout: 30
      Role: !GetAtt LambdaExecutionRole.Arn

  ConfigRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName:
        !Join
          - ''
          - - Enable-WebACL-Logging-
            - !Ref AWS::StackName
      Description: 'Ensures that all new web ACLs have logging enabled'
      Scope:
        ComplianceResourceTypes:
          - AWS::WAF::WebACL
          - AWS::WAFv2::WebACL
          - AWS::WAFRegional::WebACL
      Source:
        Owner: "CUSTOM_LAMBDA"
        SourceDetails:
        - EventSource: "aws.config"
          MessageType: ConfigurationItemChangeNotification
        - EventSource: "aws.config"
          MessageType: OversizedConfigurationItemChangeNotification
        SourceIdentifier: !GetAtt Lambda.Arn
    DependsOn: PermissionToCallLambda

  WebACLRemediation:
    Type: "AWS::Config::RemediationConfiguration"
    Properties:
      # AutomationAssumeRole, MaximumAutomaticAttempts and RetryAttemptSeconds are Required if Automatic is true
      Automatic: true
      ConfigRuleName: !Ref ConfigRule
      MaximumAutomaticAttempts: 1
      Parameters:
        AutomationAssumeRole:
          StaticValue:
            Values:
              - !GetAtt AutoRemediationIamRole.Arn
        WebACLId:
          ResourceValue:
            Value: RESOURCE_ID
      RetryAttemptSeconds: 60
      TargetId: !Ref AutomationDoc
      TargetType: SSM_DOCUMENT


  AutoRemediationIamRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ssm.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole'
      Policies: []

  PermissionToCallRemediationLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt WafLambda.Arn
      Action: "lambda:InvokeFunction"
      Principal: "ssm.amazonaws.com"

  PermissionToCallLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt Lambda.Arn
      Action: "lambda:InvokeFunction"
      Principal: "config.amazonaws.com"

  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile:
          !Sub |
            import boto3
            import json
            #
            # This Lambda function determines if WAF web ACLs have logging enabled
            #
            # Trigger Type: Config: Change Triggered
            # Scope of Changes: AWS::WAF::WebACL, AWS::WAFv2::WebACL & AWS::WAFRegional::WebACL
            #

            def is_applicable(config_item, event):
              status = config_item['configurationItemStatus']
              event_left_scope = event['eventLeftScope']
              test = ((status in ['OK', 'ResourceDiscovered']) and
                event_left_scope == False)
              return test

            def evaluate_compliance(config_item):
              wafArn = config_item['ARN']
              hasConfig = False

              client = ''
              if (config_item['resourceType'] == 'AWS::WAF::WebACL'):
                client = boto3.client('waf')
              elif (config_item['resourceType'] == 'AWS::WAFRegional::WebACL'):
                client = boto3.client('waf-regional')
              elif (config_item['resourceType'] == 'AWS::WAFv2::WebACL'):
                client = boto3.client('wafv2')

              try:
                response = client.get_logging_configuration(ResourceArn=wafArn)
                hasConfig = True
              except:
                pass

              if not hasConfig:
                return 'NON_COMPLIANT'
              else:
                return 'COMPLIANT'

            def handler(event, context):
              invoking_event = json.loads(event['invokingEvent'])
              compliance_value = 'NOT_APPLICABLE'

              if is_applicable(invoking_event['configurationItem'], event):
                compliance_value = evaluate_compliance(invoking_event['configurationItem'])

              config = boto3.client('config')
              response = config.put_evaluations(
                Evaluations=[
                  {
                  'ComplianceResourceType': invoking_event['configurationItem']['resourceType'],
                  'ComplianceResourceId': invoking_event['configurationItem']['resourceId'],
                  'ComplianceType': compliance_value,
                  'OrderingTimestamp': invoking_event['configurationItem']['configurationItemCaptureTime']
                  },
                ],
                ResultToken=event['resultToken'])
      Handler: "index.handler"
      Runtime: python3.7
      Timeout: 30
      Role: !GetAtt LambdaExecutionRole.Arn

  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: lambda-logging
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - logs:*
            Resource: arn:aws:logs:*:*:*
      - PolicyName: waf-config
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - waf:PutLoggingConfiguration
            - waf:GetLoggingConfiguration
            - waf:GetWebACL
            - wafv2:PutLoggingConfiguration
            - wafv2:GetLoggingConfiguration
            - wafv2:GetWebACL
            - waf-regional:PutLoggingConfiguration
            - waf-regional:GetLoggingConfiguration
            - waf-regional:GetWebACL
            Resource:
            - arn:aws:waf::*:*
            - arn:aws:wafv2:*:*:*/*/*
            - arn:aws:waf-regional:*:*:*
      - PolicyName: config-evaluate
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - config:PutEvaluations
            - config:StartConfigRulesEvaluation
            Resource: '*'
      - PolicyName: allow-lambda-servicelinkedrole
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - iam:CreateServiceLinkedRole
            Resource: arn:aws:iam::*:role/aws-service-role/*

How the CloudFormation template works

To enable logging on a web ACL, the web ACL expects a Kinesis Data Firehose delivery stream that has a name that starts with aws-waf-logs-. You typically configure a Kinesis Data Firehose delivery stream to deliver data to an S3 bucket. This CloudFormation template creates a Kinesis Data Firehose delivery stream with a name that the web ACL is expecting and is configured to deliver data to an S3 bucket. The Kinesis Data Firehose delivery stream has the name of aws-waf-logs-StackName, where StackName is the name you provided when you created this CloudFormation stack.

The CloudFormation template also creates an AWS Config rule with the name Enable-WebACL-Logging-StackName. This AWS Config rule is configured to monitor resources of type AWS::WAF::WebACL (typically associated with a CloudFront distribution), AWS::WAFRegional::WebACL (typically associated with an API Gateway stage or an Application Load Balancer), and AWS::WAFv2::WebACL, which is the latest version of the AWS WAF API. When AWS Config detects a change to one of your web ACLs (for example, an AWS WAF rule being added to an Application Load Balancer), the event is sent off to a Lambda function for evaluation against your rule.

The Lambda function is where all the heavy lifting is performed. When the Lambda function is invoked, control is passed to the handler method. This method calls the evaluate_compliance method, which uses the Boto3 Python library to pull the logging configuration of the web ACL in question. The function simply checks to see if it can pull a logging configuration from the web ACL. If it can pull a logging configuration, that means that logging is enabled. If it cannot pull a logging configuration, it means logging is not enabled. The Lambda function then reports back the status of COMPLIANT (meaning logging is enabled) or NON_COMPLIANT (meaning logging is not enabled) to AWS Config.

This AWS Config rule is configured to auto-remediate noncompliant web ACLs. When a noncompliant web ACL is identified, AWS Config executes a Systems Manager Automation document, which calls a Lambda function to enable logging. This Lambda function is configured with an environment variable called FIREHOSE_ARN, which is the ARN of the Kinesis Data Firehose delivery stream that is created as part of this CloudFormation stack. In this Lambda function, if it cannot pull a logging configuration, it creates a new logging configuration using the Kinesis Data Firehose delivery stream that has already been configured. The Lambda function then attempts to call a method on AWS Config to re-evaluate compliance for this rule.

When you view the details of this rule within the AWS Config console, you’ll see all web ACLs listed under the Resource ID column. The Resource compliance status column will show as Compliant, meaning that these web ACLs comply with your AWS Config rule. Because the AWS Config rule enforces logging on web ACLs, you can be confident that logging is properly enabled.
 

Figure 3: Compliance status of web ACLs in AWS Config

The remaining parts of the CloudFormation template are in place to ensure that the system has sufficient permissions to work correctly. The Kinesis Data Firehose delivery stream is assigned to an IAM role, which has a policy assigned that grants it appropriate permissions to write to your S3 bucket. The AWS Config rule is granted permission to call the first Lambda function, and then Systems Manager is granted permission to call the second Lambda function. Finally, the Lambda functions are assigned to an IAM role that has permissions to request and modify the logging configurations of the web ACLs, and to update AWS Config with the results of those actions.

The CloudFormation template in this post provides a simple solution for automatically enabling logging of all web ACLs within an AWS Region. If your organization is looking for additional operational control, you can extend this example CloudFormation template to verify that all web ACLs are using the same logging configuration. This change could be accomplished by modifying the Lambda functions to ensure that the web ACL has both a logging configuration and is using the same Kinesis Data Firehose delivery stream that is defined within the CloudFormation template. If a logging configuration exists for a web ACL, but it is using the wrong Kinesis Data Firehose delivery stream, a Lambda function can delete that logging configuration and re-create it using the correct Kinesis Data Firehose delivery stream.

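As a sketch of that extension, the following Python fragment assumes the WAFv2 API and that the expected delivery stream ARN is supplied through a FIREHOSE_ARN environment variable, as in the template above; it is illustrative rather than a drop-in replacement for the template's Lambda code.

# Sketch: if a web ACL logs to the wrong delivery stream, replace its logging
# configuration with the expected one (WAFv2 API assumed).
import os
import boto3

EXPECTED_FIREHOSE_ARN = os.environ['FIREHOSE_ARN']

def enforce_log_destination(web_acl_arn):
    client = boto3.client('wafv2')
    try:
        current = client.get_logging_configuration(ResourceArn=web_acl_arn)
        destinations = current['LoggingConfiguration']['LogDestinationConfigs']
    except client.exceptions.WAFNonexistentItemException:
        destinations = []  # no logging configuration exists yet

    if destinations != [EXPECTED_FIREHOSE_ARN]:
        if destinations:
            # Remove the configuration that points at the wrong delivery stream.
            client.delete_logging_configuration(ResourceArn=web_acl_arn)
        client.put_logging_configuration(
            LoggingConfiguration={
                'ResourceArn': web_acl_arn,
                'LogDestinationConfigs': [EXPECTED_FIREHOSE_ARN],
            }
        )
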
While the solution described in this blog post uses custom AWS Config rules and Automation documents for enabling logging on web ACLs, the approach can be generalized to use custom AWS Config rules in other contexts and for other resource types. For example, you can use this same approach to ensure that your Amazon Elastic Compute Cloud (Amazon EC2) instances comply with your internal IT security policies.

Cost Considerations

For customers who already use AWS WAF and AWS Firewall Manager, this solution adds additional costs for the use of AWS Config, Amazon Kinesis Data Firehose, and Amazon S3.

With AWS Config, you pay per configuration item recorded in your AWS account per AWS Region and the number of active rule evaluations recorded. For more information, see AWS Config pricing.

With AWS Systems Manager, you pay for the number of initiated actions performed (called steps) in the Automation and the duration of each step per second. I expect that my usage for this solution would fall under the free tier, but your usage may vary. For more information, see AWS Systems Manager pricing.

With AWS Lambda, you pay for the number of requests and the duration of those requests. However, because I don’t expect a lot of requests to Lambda in this solution, I expect that my usage would fall under the free tier, but your usage may vary. For more information, see AWS Lambda pricing.

With Amazon Kinesis Data Firehose, you pay only for the volume of data you ingest into the service. For more information, see Amazon Kinesis Data Firehose pricing.

For customers who want managed distributed denial of service (DDoS) protection, AWS Shield Advanced may be a good solution. Additionally, AWS Shield Advanced customers get AWS WAF and AWS Firewall Manager at no additional cost for usage on their resources that are protected by AWS Shield Advanced. For more information, see AWS Shield Pricing.

Conclusion

AWS Firewall Manager is a powerful solution for managing web ACLs at scale. By using a custom AWS Config rule—the same underlying technology used by AWS Firewall Manager—you can create a scalable approach to verify that all your web ACLs within an AWS Region have logging enabled. The CloudFormation template included in this blog post gives your organization a good starting point for being able to manage web ACL logging at scale.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mike George

Mike is a Senior Solutions Architect based out of Salt Lake City, Utah. He enjoys helping customers solve their technology problems. His interests include software engineering, security, and AI/ML

Announcing AWS Managed Rules for AWS WAF

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/announcing-aws-managed-rules-for-aws-waf/

Building and deploying secure applications is critical work, and the threat landscape is always shifting. We’re constantly working to reduce the pain of maintaining a strong cloud security posture. Today we’re launching a new capability called AWS Managed Rules for AWS WAF that helps you protect your applications without needing to create or manage the rules directly. We’ve also made multiple improvements to AWS WAF with the launch of a new, improved console and API that makes it easier than ever to keep your applications safe.

AWS WAF is a web application firewall. It lets you define rules that give you control over which traffic to allow or deny to your application. You can use AWS WAF to help block common threats like SQL injections or cross-site scripting attacks. You can use AWS WAF with Amazon API Gateway, Amazon CloudFront, and Application Load Balancer. Today it’s getting a number of exciting improvements. Creating rules is more straightforward with the introduction of the OR operator, allowing evaluations that would previously require multiple rules. The API experience has been greatly improved, and complex rules can now be created and updated with a single API call. We’ve removed the limit of ten rules per web access control list (web ACL) with the introduction of the WAF Capacity Unit (WCU). The switch to WCUs allows the creation of hundreds of rules. Each rule added to a web ACL consumes capacity based on the type of rule being deployed, and each web ACL has a defined WCU limit.

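As a quick illustration of how WCUs can be inspected, the following is a sketch that uses the wafv2 CheckCapacity API to see how many capacity units a set of rule definitions would consume before you add them to a web ACL; the rule shown is purely illustrative and not taken from this post.

# Sketch: check the WCU cost of a set of rule definitions with the WAFv2 API.
import boto3

def rules_capacity(rules, scope='REGIONAL'):
    client = boto3.client('wafv2')
    return client.check_capacity(Scope=scope, Rules=rules)['Capacity']

geo_block_rule = {
    'Name': 'BlockExampleCountry',      # illustrative name
    'Priority': 0,
    'Action': {'Block': {}},
    'Statement': {'GeoMatchStatement': {'CountryCodes': ['US']}},  # country code purely illustrative
    'VisibilityConfig': {
        'SampledRequestsEnabled': True,
        'CloudWatchMetricsEnabled': True,
        'MetricName': 'BlockExampleCountry',
    },
}

print(rules_capacity([geo_block_rule]))
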
Using the New AWS WAF

Let’s take a look at some of the changes and turn on AWS Managed Rules for AWS WAF. First, I’ll go to AWS WAF and switch over to the new version.

Next I’ll create a new web ACL and add it to an existing API Gateway resource on my account.

Now I can start adding some rules to our web ACL. With the new AWS WAF, the rules engine has been improved. Statements can be combined with AND, OR, and NOT operators, allowing for more complex rule logic.

Screenshot of the WAF v2 Boolean operators

I’m going to create a simple rule that blocks any request that uses the HTTP method POST. Another cool feature is support for multiple text transformations, so for example, you could have all your requests transformed to decode HTML entities, and then made lowercase.

JSON objects now define web ACL rules (and web ACLs themselves), making them versionable assets you can match with your application code. You can also use these JSON documents to create or update rules with a single API call.

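To make the JSON shape concrete, here is a sketch of a rule document, written as a Python dict that you could pass to the wafv2 create or update calls, that uses an OrStatement to block a request when either condition matches; the names, priority, and matched values are illustrative rather than taken from this post.

# Illustrative WAFv2 rule: block if the method is POST OR the User-Agent header,
# HTML-entity decoded and then lowercased, contains a suspicious token.
block_post_or_bad_agent = {
    'Name': 'BlockPostOrSuspiciousAgent',
    'Priority': 1,
    'Action': {'Block': {}},
    'Statement': {
        'OrStatement': {
            'Statements': [
                {
                    'ByteMatchStatement': {
                        'SearchString': b'POST',
                        'FieldToMatch': {'Method': {}},
                        'TextTransformations': [{'Priority': 0, 'Type': 'NONE'}],
                        'PositionalConstraint': 'EXACTLY',
                    }
                },
                {
                    'ByteMatchStatement': {
                        'SearchString': b'badbot',
                        'FieldToMatch': {'SingleHeader': {'Name': 'user-agent'}},
                        'TextTransformations': [
                            {'Priority': 0, 'Type': 'HTML_ENTITY_DECODE'},
                            {'Priority': 1, 'Type': 'LOWERCASE'},
                        ],
                        'PositionalConstraint': 'CONTAINS',
                    }
                },
            ]
        }
    },
    'VisibilityConfig': {
        'SampledRequestsEnabled': True,
        'CloudWatchMetricsEnabled': True,
        'MetricName': 'BlockPostOrSuspiciousAgent',
    },
}
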
Using AWS Managed Rules for AWS WAF

Now let’s play around with something totally new: AWS Managed Rules. AWS Managed Rules give you instant protection. The AWS Threat Research Team maintains the rules, with new ones being added as additional threats are identified. Additional rule sets are available on the AWS Marketplace. Choose a managed rule group, add it to your web ACL, and AWS WAF immediately helps protect against common threats.

I’ve selected a rule group that protects against SQL attacks, and I’ve also enabled the core rule set, which covers some of the common threats and security risks described in the OWASP Top 10 publication. As soon as I create the web ACL and the changes are propagated, my app will be protected from a whole range of attacks such as SQL injections. Now let’s look at both rules that I’ve added to our ACL and see how things are shaping up.

Since my demo rule was quite simple, it doesn’t require much capacity. The managed rules use a bit more, but we’ve got plenty of room to add many more rules to this web ACL.

Things to Know

That’s a quick tour of the benefits of the new and improved AWS WAF. Before you head to the console to turn it on, there are a few things to keep in mind.

  • The new AWS WAF supports AWS CloudFormation, allowing you to create and update your web ACL and rules using CloudFormation templates.
  • There is no additional charge for using AWS Managed Rules. If you subscribe to managed rules from an AWS Marketplace seller, you will be charged the managed rules price set by the seller.
  • Pricing for AWS WAF has not changed.

As always, happy (and secure) building, and I’ll see you at re:Invent or on the re:Invent livestreams soon!

— Brandon

How to use CI/CD to deploy and configure AWS security services with Terraform

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/how-use-ci-cd-deploy-configure-aws-security-services-terraform/

Like the infrastructure your applications are built on, security infrastructure can be handled using infrastructure as code (IAC) and continuous integration/continuous deployment (CI/CD). In this post, I’ll show you how to build a CI/CD pipeline using AWS Developer Tools and HashiCorp’s Terraform platform as an IAC tool for AWS Web Application Firewall (WAF) deployments. AWS WAF is a web application firewall that helps protect your applications from common web exploits that could affect availability, compromise security, or consume excessive resources.

Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently. With Terraform, you can manage AWS services and custom defined provisioning logic. You create a configuration file that describes to Terraform the components needed to run a single application or your entire AWS footprint. When Terraform consumes the configuration file, it generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure.

In this solution, you’ll use Terraform configuration files to build your WAF, deploy it automatically through a CI/CD pipeline, and retain the WAF state files to be later referenced, changed, or destroyed through subsequent deployments in a durable backend. The CI/CD solution is flexible enough to deploy many other AWS services, security or otherwise, using Terraform. For a full list of supported services, see HashiCorp’s documentation.

Note: This post assumes you’re comfortable with Terraform and its core concepts, such as state management, syntax, and command terms. You can learn about Terraform here.

Solution Overview

Figure 1: Architecture diagram

For this solution, you’ll use AWS CodePipeline, an automated CD service to form the foundation of the CI/CD pipeline. CodePipeline helps us automate our release pipeline through build, test, and deployment. For the purpose of this post, I will not demonstrate how to configure any test or deployment stages.

The source stage uses AWS CodeCommit, which is the fully managed, Git-based source code management service from AWS that you can interact with via the console and CLI. CodeCommit encrypts the source at rest and in transit, and is integrated with AWS Identity and Access Management (IAM) to customize fine-grained access controls to the source.

Note: CodePipeline supports different sources, such as S3 or GitHub – if you’re comfortable with those services, feel free to substitute them as you walk through the solution.

For the build stage, you’ll use AWS CodeBuild, which is a fully managed CI service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild uses a build specification file, a YAML file containing a collection of build commands, variables, and related settings that CodeBuild uses to run a build.

Finally, you’ll create a new Amazon Simple Storage Service (S3) bucket and Amazon DynamoDB table to durably store the Terraform state files outside of the CI/CD pipeline. These files are used by Terraform to map real world resources to your configuration, keep track of metadata, and to improve performance for large infrastructures.

For the purpose of this post, the security infrastructure resource deployed through the pipeline will be an AWS WAF, specifically a Global Web ACL that can attach to an Amazon CloudFront distribution, with a sample SQL Injection and Blacklist filtering rule.

The deployment steps will be as shown in Figure 1:

  1. Push artifacts, Terraform configuration files and a build specification to a CodePipeline source.
  2. CodePipeline automatically invokes CodeBuild and downloads the source files.
  3. CodeBuild installs and executes Terraform according to your build specification.
  4. Terraform stores the state files in S3 and a record of the deployment in DynamoDB.
  5. The WAF Web ACL is deployed and ready for use by your application teams.

Step 1: Set-up

In this step, you’ll create a new CodeCommit repository, S3 bucket, and DynamoDB table.

Create a CodeCommit repository

  1. Navigate to the AWS CodeCommit console, and then choose Create repository.
  2. Enter a name, description, and then choose Create. You will be taken to your repository after creation.
  3. Scroll down, and then choose Create file, as shown in Figure 2:
     
    Figure 2: CodeCommit create file

  4. You will be taken to a new screen to create a sample file, write readme into the text body, name the file readme.md, and then choose Commit changes, as shown in Figure 3:
     
    Figure 3: CodeCommit editing files

Note: You need to create a sample file to initialize your Master branch that will not interfere with the build process. You can safely delete this file later.

Create a DynamoDB table

  1. Navigate to the Amazon DynamoDB console, and then choose Create table.
  2. Give your table a name like terraform-state-lock-dynamo.
  3. Enter LockID as your Primary key, keep the box checked for Use default settings, and then choose Create, as shown in Figure 4.

Note: Copy the name and ARN of the DynamoDB table because you will need it later when configuring your Terraform backend and CodeBuild service role.

 

Figure 4: Create DynamoDB table

Create an S3 bucket

  1. Navigate to the Amazon S3 console, and then choose Create bucket.
  2. Enter a unique name and choose the Region you have built the rest of your resources in, and then choose Next.
  3. Enable Versioning and Default encryption, and then choose Next.
  4. Select Block all public access, choose Next, and then choose Create bucket.

Note: Copy the name and ARN of the S3 bucket because you will need it later when configuring your Terraform backend and CodeBuild service role.

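If you prefer to script this setup instead of using the console, the following is a minimal boto3 sketch of Step 1's backend resources; the bucket and table names and the Region are illustrative, and the table uses on-demand billing as a convenience rather than the console's default settings.

# Sketch: create the Terraform state bucket and lock table programmatically.
import boto3

REGION = 'us-east-1'                              # illustrative
BUCKET = 'my-terraform-state-bucket-example'      # must be globally unique
TABLE = 'terraform-state-lock-dynamo'

s3 = boto3.client('s3', region_name=REGION)
s3.create_bucket(Bucket=BUCKET)  # outside us-east-1, add CreateBucketConfiguration={'LocationConstraint': REGION}
s3.put_bucket_versioning(Bucket=BUCKET, VersioningConfiguration={'Status': 'Enabled'})
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        'Rules': [{'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}}]
    },
)
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    },
)

dynamodb = boto3.client('dynamodb', region_name=REGION)
dynamodb.create_table(
    TableName=TABLE,
    AttributeDefinitions=[{'AttributeName': 'LockID', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'LockID', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',
)
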
Step 2: Create the CI/CD pipeline

In this step, you will create the rest of your pipeline using CodePipeline and CodeBuild. If you have decided not to use CodeCommit, read the CodePipeline documentation about other sources.

  1. Navigate to the AWS CodePipeline console, and then choose Create pipeline.
  2. Enter a Pipeline name, select New service role, and then choose Next, as shown in Figure 5:
     
    Figure 5: CodePipeline settings

  3. Select AWS CodeCommit as the Source provider, select the name of the repository you created, and then choose master as your Branch name.
  4. Choose Amazon CloudWatch Events (recommended) as your detection option, and then choose Next, as shown in Figure 6:
     
    Figure 6: CodePipeline source stage

  5. For Build provider, choose AWS CodeBuild and change your region as needed, and then choose Create project.

    Important: Selecting Create Project will open a new screen in your browser with the AWS CodeBuild console; do not close the browser because you will need it!

  6. Enter a Project name and description, and then scroll to the Environment section.
  7. For Environment image, choose Managed image, and then configure the following sub-selections, as shown in Figure 7:
    1. Operating system: Ubuntu
    2. Runtimes(s): Standard
    3. Image: aws/codebuild/standard:1.0
    4. Image version: Always use the latest image for this runtime version
       
      Figure 7: CodeBuild environment image

  8. Select the checkbox under Privileged, select New service role, and take note of this Role name because you will be modifying it later.
     
    Figure 8: CodeBuild service role

  9. Choose the dropdown menu named Additional configuration (shown in Figure 8), scroll down to Environment variables, and then enter the following values, as shown in Figure 9:
    1. Name: TF_COMMAND
    2. Value: apply (this is case sensitive)
    3. Type: Plaintext
       
      Figure 9: CodeBuild variables

      Note: These values are used by the build specification to inject Terraform commands at runtime.

  10. In the Buildspec section, choose Use a buildspec file. You don’t need to provide a name because buildspec.yaml in your source repository is the default value CodeBuild will look for.
  11. In the Logs section, choose the checkbox next to CloudWatch logs – optional, and then choose Continue to CodePipeline (see Figure 10).
     
    Figure 10: CodeBuild logging

    Note: The separate window will close at this point and you will be back in the CodePipeline console.

  12. Now, back in the CodePipeline console, choose Next, choose Skip deploy stage, and then choose Skip when prompted, as shown in Figure 11.
     
    Figure 11: CodePipeline skip deploy stage

  13. Confirm your details are correct in the Review screen, and then choose Create pipeline.

After creation, you will be taken to the Pipeline Status view for the pipeline you just created. This interface allows you to monitor the status of CodePipeline in near real time. You can pivot to your Source repository and Build project by selecting the Details link, as shown in Figure 12.
 

Figure 12: CodePipeline status

You can also see previous CodePipeline runs by choosing the History view on the navigation pane on the left, as shown in Figure 13. This view is also useful for viewing multiple concurrent CodePipeline runs.
 

Figure 13: CodePipeline History

Step 3: Modify the CodeBuild service role

In this section, you will add an additional policy to your CodeBuild service role to allow Terraform to deploy your WAF and write state information to DynamoDB and S3.

  1. Navigate to the IAM Console, and then choose Roles from the navigation pane.
  2. Search for the CodeBuild service role, select it, and then choose Add inline policy.

    Note: The inline policy is used to avoid accidental deletions or modifications, and provides a one-to-one relationship between the permissions and the service role.

  3. Choose the JSON tab and paste in the following policy. Ensure that you populate the Resource sections of the policy with the ARNs of the S3 bucket and DynamoDB table that you created in Step 1, as shown in Figure 14.
    
    	{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "WafSID",
            "Action": [
                "waf:CreateIPSet",
                "waf:CreateRule",
                "waf:CreateRuleGroup",
                "waf:CreateSqlInjectionMatchSet",
                "waf:CreateWebACL",
                "waf:DeleteIPSet",
                "waf:DeleteLoggingConfiguration",
                "waf:DeletePermissionPolicy",
                "waf:DeleteRule",
                "waf:DeleteRuleGroup",
                "waf:DeleteSqlInjectionMatchSet",
                "waf:DeleteWebACL",
                "waf:GetChangeToken",
                "waf:GetChangeTokenStatus",
                "waf:GetGeoMatchSet",
                "waf:GetIPSet",
                "waf:GetLoggingConfiguration",
                "waf:GetPermissionPolicy",
                "waf:GetRule",
                "waf:GetRuleGroup",
                "waf:GetSampledRequests",
                "waf:GetSqlInjectionMatchSet",
                "waf:GetWebACL",
                "waf:ListActivatedRulesInRuleGroup",
                "waf:ListGeoMatchSets",
                "waf:ListIPSets",
                "waf:ListLoggingConfigurations",
                "waf:ListRuleGroups",
                "waf:ListRules",
                "waf:ListSqlInjectionMatchSets",
                "waf:ListSubscribedRuleGroups",
                "waf:ListTagsForResource",
                "waf:ListWebACLs",
                "waf:PutLoggingConfiguration",
                "waf:PutPermissionPolicy",
                "waf:TagResource",
                "waf:UntagResource",
                "waf:UpdateIPSet",
                "waf:UpdateRule",
                "waf:UpdateRuleGroup",
                "waf:UpdateSqlInjectionMatchSet",
                "waf:UpdateWebACL"
              ],
            "Effect": "Allow",
            "Resource": "*"
          },
          {
            "Sid": "S3SID",
            "Action": [
              "s3:GetObject",
              "s3:ListBucket",
              "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": ""
          },
          {
            "Sid": "DDBSID",
            "Action": [
              "dynamodb:DeleteItem",
              "dynamodb:GetItem",
              "dynamodb:PutItem"
            ],
            "Effect": "Allow",
            "Resource": ""
          }
        ]
      }
    

     

    Figure 14: IAM resource edits

  4. Choose Review policy, enter a name for the inline policy, and then choose Create policy.

You now have the required permissions to deploy, modify, and delete your WAF as needed. For pipelines that deploy multiple services or use different backends for the state files, the permissions will need to be defined more broadly.
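
If you prefer to attach the inline policy from a script instead of the console, a minimal boto3 sketch is shown below. The role name, policy name, and local policy file are placeholders, not values created by the solution; substitute the CodeBuild service role you noted earlier and the policy JSON above with its Resource ARNs filled in.

    import json
    import boto3

    iam = boto3.client("iam")

    # Placeholder names -- use your CodeBuild service role and a policy name of
    # your choosing. The JSON file is the policy shown above, saved locally
    # with the Resource ARNs filled in.
    ROLE_NAME = "codebuild-terraform-waf-service-role"
    POLICY_NAME = "terraform-waf-state-access"

    with open("codebuild-terraform-policy.json") as f:
        policy_document = json.load(f)

    iam.put_role_policy(
        RoleName=ROLE_NAME,
        PolicyName=POLICY_NAME,
        PolicyDocument=json.dumps(policy_document),
    )
    print(f"Attached inline policy {POLICY_NAME} to role {ROLE_NAME}")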

Step 4: Deploy the WAF with CodePipeline

With all permissions and supporting infrastructure set up, you can now deploy your WAF. Navigate to this GitHub repository and clone it; there are five files you will need:

  • provider.tf
  • variables.tf
  • waf-conditions.tf
  • waf-rules.tf
  • buildspec.yaml
  1. Open the file named provider.tf in a text editor and modify the following values, as shown in Figure 15:
    1. region=: Enter your preferred AWS Region (on lines 3 & 13)
    2. bucket=: Name of your S3 bucket (on line 10)
    3. dynamodb_table=: Name of your DynamoDB table (on line 11)
       
      Figure 15: provider.tf modification

  2. Save and close this file, navigate to the AWS CodeCommit console, and then select your repository.
  3. Choose the drop-down menu named Add file, and then select Upload file (see Figure 16).
     
    Figure 16: CodeCommit Upload files

  4. Using the Console, upload all five files downloaded from GitHub. Alternatively, you can learn how to do this using the CLI in the AWS CodeCommit User Guide.
  5. After you’ve uploaded the last file, navigate to the CodePipeline console, and then select your pipeline.

    Note: If the source message shown in the UI doesn’t match the commit message of your last upload, use the History tab to find the execution that includes all of the files; earlier executions will fail because of the missing files.

  6. To access the Build project Build logs console, in the Build section, choose Details, as shown in Figure 17.
     
    Figure 17: CodePipeline status details

  7. Choose Tail logs to view logs in near real time from the CodeBuild environment. You will be able to see the output from Terraform, as well as other information from the CodeBuild service, such as errors and environmental logs, as shown in Figure 18. This view is useful for debugging missing permissions for Terraform: a missing permission causes the build to fail, and Terraform logs which IAM permissions were denied.
     
    Figure 18: CodeBuild tail logs

  8. After a successful deployment, navigate to the AWS WAF Web ACL Console, and then choose the Web ACL that was deployed.
  9. Choose the Rules tab, and then select each rule’s hyperlink to inspect how it was created, as shown in Figure 19.
     
    Figure 19: Web ACL views

From here, you can associate the Global Web ACL with a CloudFront distribution to test its efficacy. This AWS Samples GitHub repository contains a more in-depth demo of how to effectively tune a WAF.

Important clean up

You will now clean up your deployed Web ACL. Doing this is important because you will be charged $5.00 USD per Web ACL, and $1.00 per rule per Web ACL, per month, on top of other related charges. Read the AWS WAF Pricing page for more details around AWS WAF pricing.

  1. Navigate to the AWS CodeBuild console, and then choose your CodeBuild project.
  2. Choose the Build details tab, scroll to the Environment section, and then choose Edit.
  3. Expand the Additional configuration drop-down menu, and then scroll to Environment variables.
  4. Under the Value of your previously created variable, replace the value with destroy, and then choose Update environment.
  5. Navigate back to the Pipelines menu in the AWS CodePipeline console, and then select your pipeline.
  6. Choose Release Change, and then choose Release when prompted. Wait for the Build stage to report success to confirm deletion of your WAF resources. (A scripted alternative to these steps is shown below.)
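
If you’d rather script this clean-up than click through the console, the sketch below performs the same steps with boto3: it flips the TF_COMMAND variable to destroy and then releases the pipeline. The project and pipeline names are hypothetical; substitute your own.

    import boto3

    codebuild = boto3.client("codebuild")
    codepipeline = boto3.client("codepipeline")

    # Placeholder names -- substitute your own CodeBuild project and pipeline.
    PROJECT_NAME = "terraform-waf-build"
    PIPELINE_NAME = "terraform-waf-pipeline"

    # Fetch the current build environment so only TF_COMMAND is changed.
    project = codebuild.batch_get_projects(names=[PROJECT_NAME])["projects"][0]
    environment = project["environment"]
    for variable in environment["environmentVariables"]:
        if variable["name"] == "TF_COMMAND":
            variable["value"] = "destroy"

    codebuild.update_project(name=PROJECT_NAME, environment=environment)

    # Release the pipeline so the build stage runs "terraform destroy".
    execution = codepipeline.start_pipeline_execution(name=PIPELINE_NAME)
    print("Started pipeline execution:", execution["pipelineExecutionId"])

Releasing the pipeline re-runs the build, so the buildspec executes Terraform with the updated variable.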

Conclusion

In this post, you learned how to use AWS Developer Tools to create a serverless CI/CD pipeline that you can use to automate deployments of infrastructure with Terraform. By using Terraform and CI/CD, your security engineers can deploy security infrastructure services, such as AWS WAF, in a clearly defined and immutable process.

To further extend this solution, you can include manual confirmation stages via Amazon Simple Notification Service (SNS) to enforce approvals before your CI/CD pipelines deploy resources into your accounts. You can also choose to isolate your CI/CD pipelines by placing them in a VPC. Finally, you can use the WAF rules deployed by Terraform as the starting point for a rule group in AWS Firewall Manager (FMS), which allows you to define multi-account WAF deployments for accounts in AWS Organizations.

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

AWS Security Profiles: Maritza Mills, Senior Product Manager, Perimeter Protection

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-maritza-mills-senior-product-manager-perimeter-protection/

Maritza Mills, Senior Product Manager
In the weeks leading up to re:Invent 2019, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been at AWS almost two years. I’m a product manager for our Perimeter Protection team, which includes products like AWS Web Application Firewall (WAF), AWS Shield and AWS Firewall Manager. I spend a lot of my time talking with customers—primarily security specialists and network engineers—about how they can protect their web applications and how they can defend against Distributed Denial of Service (DDoS) attacks. My work is about deeply understanding the technical challenges customers are facing. I then use that information to inform what we need to build next, and then I work with our engineering team to figure out how we deliver it.

What’s the most challenging part of your job?

Deciding how to prioritize what we work on next. We have AWS customers with a lot of different needs, but we only have so much time in a day. My team has to balance the most pressing customer challenges along with the challenges we anticipate customers will face in the future, plus how quickly we’ll be able to deliver solutions to those challenges. I wish that we could do everything, all the time, but we have to make difficult choices about which things we’re going to do first.

What’s your favorite part of your job?

Constantly learning something new from our customers. A big part of what I do involves listening to customers to understand their most difficult technical challenges, and every customer is different. A customer in healthcare will have different needs from a customer in finance versus one in gaming. It’s exciting to learn about the different problems each customer faces. Even at the same company, different teams may have different goals and approaches to security. Often, I might educate customers on the tools currently available to fit their needs, but there are also times when the solution a customer needs has not been invented yet, and that’s when things really get interesting.

What does cloud security mean to you on a personal level?

When I think about security in the cloud, it’s about security for individual people. If you store data in the cloud, part of “security” is protecting access to your personal information, like your messages and photos, or credit card numbers, or personal healthcare data.

But it’s not just about preventing unauthorized access. It’s also about making sure that peoples’ data are available for them when they need it. One of the big things that we focus on in Perimeter Protection—particularly in AWS Shield—is protecting applications from denial of service attacks so that the applications are always available. This means that when you need to access the money in your bank account, or say, when a hospital needs to access vital information about a patient, the apps are always up and available. When I think about security and what we’re doing at scale here at AWS, that’s what’s most important to me on a personal level.

What’s the most common misperception you encounter about cloud security?

Sometimes, customers might be tempted to use blanket protections without thinking about why their particular application or business is unique, and what different protections they should put in place as a result.

Cloud security is an ongoing discipline that requires continuously monitoring your applications and updating your controls as your applications change. At AWS, we have this concept of the shared responsibility model, where AWS handles security of the cloud itself and customers are responsible for securing the applications which they run on the cloud. We’ve designed several tools to help customers manage that responsibility and adapt and scale as quickly as their applications do. In Perimeter Protection specifically, services like AWS Firewall Manager are designed to give our customers central visibility of their security controls, such as Amazon VPC security groups, AWS WAF rules, and AWS Shield Advanced protections. Services like Firewall Manager also constantly monitor these configurations so that customers can be alerted when changes have occurred.

I encourage customers to think carefully about how their applications will change over time, and how to best monitor and adjust to those changes as they occur.

What challenges do you currently see in the application security space, and how do you think the field will evolve to meet those challenges?

One challenge that I currently see is the pace of change, and the fact that customers need ways to keep up with these changes.

In the past, many security controls have been static—you set them up, and they don’t change. But as our customers have migrated into AWS, they’re able to operate in a more dynamic way and to scale up or down more quickly than they could before. At the same time, we’ve seen the techniques used to gain unauthorized access or to launch DDoS attacks scale and become more sophisticated. Here at AWS, we’re constantly looking ahead to anticipate how customers will need to actively monitor and secure their applications, and then we build those capabilities into our services.

Today, services like AWS Shield can automatically detect and mitigate DDoS attacks and provide you with alarms and the ability to continuously monitor your network flows. AWS WAF gives you the ability to write custom rules so you can create granular protections for your specific environment. We also provide you with information regarding security best practices so you can proactively architect your applications in a way that allows you to quickly react to new and unique attack vectors. That’s part of what we’ll be addressing in our upcoming re:Invent talk, as well.

You and Paul Oremland are leading a re:Invent session called A defense-in-depth approach to building web applications. What can you tell us about the session that’s not described in the catalog?

In this session, we’ll start by reviewing common security vulnerabilities, and then provide detailed examples of how to mitigate them at each layer of their application. I expect attendees will gain a better sense of how those layers fit together and how to think creatively about their individual security needs based on how they’ve architected their system, or based on their specific business case. Finally, I want all customers, from startups to enterprise, to understand how those challenges change as they scale. We’ll be touching on all of that.

It’s a 400-level session, so it’s a technical deep dive. It’s going to have a lot of good information for security specialists and engineers who want to have hands-on examples that they can go back and use. But I also want to encourage people who are exploring or are newer to this space to join us because even if the hands-on portion is a little too advanced, I think the strategy and philosophy of how to think about application security is going to be very relevant even to those less familiar with the subject matter, and to the work that they might do in the future.

What are you hoping that your audience will do differently as a result of attending?

I want to motivate attendees to perform a review of their current architecture and consider the current controls that they have in place. Then, I’d like them to ask themselves, “Why did I put this control here?” and “Do I know exactly what risk each control is mitigating?” I’d also like them to consider whether there are protections they’ve opted not to use in the past, and whether that decision is still an acceptable risk.

How did you choose your topic?

We developed it based on numerous conversations we’ve had with customers when they’re exploring how to protect their applications at the edge. But, we usually find that the conversation expands into other parts of the stack that need protection as well. One goal of this session is to talk about these needs up front, so that customers can come into conversations with us already knowing how they’d like to protect their entire application.

Any advice for first-time attendees coming to re:Invent?

Make sure you have enough time to get to your next session. There’s a lot of different things going on at re:Invent, and they take place in a lot of different buildings. While I think we do a great job with the schedule and spacing, first-time attendees should be aware that they might have a session in one building and then need to immediately be in another building for their next session. Factor that into your commute plans.

You enjoy discussing song lyrics. Who have you enjoyed the most?

Rush is one of my favorite bands when it comes to lyricism. As a kid, the music was just interesting. But as I’ve gotten older, certain lines hit me differently.

In the song “Dreamline,” there’s a particular verse that says:

When we are young
Wandering the face of the earth
Wondering what our dreams might be worth
Learning that we’re only immortal
For a limited time

When I was younger, I really could relate to that feeling of immortality in a way, as if I was going to be around forever. But as I’ve gotten older, I’ve realized that life is very short and very precious, and I want to make the most of it. So I enjoy going back to that song every single time. It’s changed for me as I’ve grown.

And what song has created the lengthiest discussion for you?

I’ve had some great conversations about Fast Car by Tracy Chapman. The themes in that song are relatable to people in so many different ways, and at different times in their lives. One of the great things about song lyrics is that the way people interpret a song is influenced by their personal experiences in life, and this song in particular has always opened up meaningful conversations for me.

Want more AWS Security news? Follow us on Twitter.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Maritza Mills, Senior Product Manager

Maritza Mills

Maritza is a Senior Product Manager for AWS WAF, Shield and Firewall Manager.

Building an AWS Landing Zone from Scratch in Six Weeks

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/building-an-aws-landing-zone-from-scratch-in-six-weeks/

In an effort to deliver a simpler, smarter, and more unified experience on its website, the UK’s Ministry of Justice and its Lead Technical Architect, James Abley, created a bespoke AWS Landing Zone, a pre-defined template for an AWS account or infrastructure. And they did it in six weeks.

Supporting 33 agencies and public bodies, and making sure they all work together, the Ministry of Justice is at the heart of the United Kingdom’s justice system. Its task is to look after all parts of the justice system, including the courts, prisons, probation services, and legal aid, striving to bring the principles of justice to life for everyone in society.

In a This Is My Architecture video, shot at 2018 re:Invent in Las Vegas, James talks with AWS Solutions Architect, Simon Treacy, about the importance of delivering a consistent experience to his website’s customers, a mix of citizen and internal legal aid agency case workers.

Utilizing a number of AWS services, James walks us through the user experience and explains why he decided to put Amazon CloudFront and AWS Web Application Firewall (WAF) up front to improve the security posture of the ministry’s legacy applications and extend their lifespan. James also explains how he split traffic between two Availability Zones, using AWS Elastic Load Balancing (ELB) to provide higher availability and resilience, which will help with zero-downtime deployment later on.

 

About the author

Annik Stahl is a Senior Program Manager in AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

 

Trimming AWS WAF logs with Amazon Kinesis Firehose transformations

Post Syndicated from Tino Tran original https://aws.amazon.com/blogs/security/trimming-aws-waf-logs-with-amazon-kinesis-firehose-transformations/

In an earlier post, Enabling serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight, published on March 28, 2019, the authors showed you how to stream WAF logs with Amazon Kinesis Firehose for visualization using QuickSight. This approach used no filtering of the logs so that you could visualize the full data set. However, you are often only interested in seeing specific events. Or you might be looking to minimize log size to save storage costs. In this post, I show you how to apply rules in Amazon Kinesis Firehose to trim down logs. You can then apply the same visualizations you used in the previous solution.

AWS WAF is a web application firewall that supports full logging of all the web requests it inspects. For each request, AWS WAF logs the raw HTTP/S headers along with information on which AWS WAF rules were triggered. Having complete logs is useful for compliance, auditing, forensics, and troubleshooting custom and Managed Rules for AWS WAF. However, for some use cases, you might not want to log all of the requests inspected by AWS WAF. For example, to reduce the volume of logs, you might only want to log the requests blocked by AWS WAF, or you might want to remove certain HTTP header or query string parameter values from your logs. In many cases, unblocked requests are often already stored in your CloudFront access logs or web server logs and, therefore, using AWS WAF logs can result in redundant data for these requests, while logging blocked traffic can help you to identify bad actors or root cause false positives.

In this post, I’ll show you how to create an Amazon Kinesis Data Firehose stream to filter out unneeded records, so that you only retain log records for requests that were blocked by AWS WAF. From here, the logs can be stored in Amazon S3 or directed to SIEM (Security information and event management) and log analysis tools.

To simplify things, I’ll provide you with a CloudFormation template that will create the resources highlighted in the diagram below:
 

Figure 1: Solution architecture

  1. A Kinesis Data Firehose delivery stream is used to receive log records from AWS WAF.
  2. An IAM role for the Kinesis Data Firehose delivery stream, with permissions needed to invoke Lambda and write to S3.
  3. A Lambda function used to filter out WAF records matching the default action before the records are written to S3.
  4. An IAM role for the Lambda function, with the permissions needed to create CloudWatch logs (for troubleshooting).
  5. An S3 bucket where the WAF logs will be stored.

Prerequisites and assumptions

  • In this post, I assume that the AWS WAF default action is configured to allow requests that don’t explicitly match a blocking WAF rule. So I’ll show you how to omit any records matching the WAF default action.
  • You need to already have an AWS WAF WebACL created. In this example, you’ll use a WebACL generated from the AWS WAF OWASP Top 10 template. For more information on deploying AWS WAF to a CloudFront or ALB resource, see the Getting Started page.

Step 1: Create a Kinesis Data Firehose delivery stream for AWS WAF logs

In this step, you’ll use the following CloudFormation template to create a Kinesis Data Firehose delivery stream that writes logs to an S3 bucket. The template also creates a Lambda function that omits AWS WAF records matching the default action.

Here’s how to launch the template:

  1. Open CloudFormation in the AWS console.
  2. For WAF deployments on Amazon CloudFront, select region US-EAST-1. Otherwise, create the stack in the same region in which your AWS WAF Web ACL is deployed.
  3. Select the Create Stack button.
  4. In the CloudFormation wizard, select Specify an Amazon S3 template URL and copy and paste the following URL into the text box, then select Next:
    https://s3.amazonaws.com/awsiammedia/public/sample/TrimAWSWAFLogs/KinesisWAFDeliveryStream.yml
  5. On the options page, leave the default values and select Next.
  6. Specify the following and then select Next:
    1. Stack name: (for example, kinesis-waf-logging). Make sure to note your stack name, as you’ll need to provide it later in the walkthrough.
    2. Buffer size: The amount of data, in MB, that Kinesis buffers before processing incoming records.
    3. Buffer interval: The amount of time, in seconds, that Kinesis buffers incoming records before processing them.

    Note: Kinesis will trigger data delivery based on whichever buffer condition is satisfied first. This CloudFormation template sets the default buffer size to 3 MB and the buffer interval to 900 seconds to match the maximum transformation buffer size and interval configured by the template. To learn more about Kinesis Data Firehose buffer conditions, read this documentation.

     

    Figure 2: Specify the stack name, buffer size, and buffer interval

  7. Select the check box for I acknowledge that AWS CloudFormation might create IAM resources and choose Create.
  8. Wait for the template to finish creating the resources. This will take a few minutes. On the CloudFormation dashboard, the status next to your stack should say CREATE_COMPLETE.
  9. From the AWS Management Console, open Amazon Kinesis and find the Data Firehose delivery stream on the dashboard. Note that the name of the stream will start with aws-waf-logs- and end with the name of the CloudFormation stack. This prefix is required in order to configure AWS WAF to write logs to the Kinesis stream.
  10. From the AWS Management Console, open AWS Lambda and view the Lambda function created from the CloudFormation template. The function name should start with the Stack name from the CloudFormation template. I included the function code generated from the CloudFormation template below so you can see what’s going on.

    Note: Through CloudFormation, the code is deployed without indentation. To format it for readability, I recommend using the code formatter built into Lambda under the edit tab. This code can easily be modified for custom record filtering or transformations.

    
        'use strict';

        // Kinesis Data Firehose transformation handler: decode each record and
        // drop any AWS WAF log entry whose terminating rule was the web ACL's
        // default action (identified here by the "Default_Action" marker).
        exports.handler = (event, context, callback) => {
            const output = event.records.map((record) => {
                // Firehose delivers record data base64-encoded.
                const entry = Buffer.from(record.data, 'base64').toString('utf8');
                if (!entry.match(/Default_Action/g)) {
                    // Keep records that did not match the default action.
                    return {
                        recordId: record.recordId,
                        result: 'Ok',
                        data: record.data,
                    };
                }
                // Drop records that matched the default (allow) action.
                return {
                    recordId: record.recordId,
                    result: 'Dropped',
                    data: record.data,
                };
            });

            console.log(`Processing completed. Records processed: ${output.length}.`);
            callback(null, { records: output });
        };

You now have a Kinesis Data Firehose stream that AWS WAF can use for logging records.

Cost Considerations

This template sets the Kinesis transformation buffer size to 3MB and buffer interval to 900 seconds (the maximum values) in order to reduce the number of Lambda invocations used to process records. On average, an AWS WAF record is approximately 1-1.5KB. With a buffer size of 3MB, Kinesis will use 1 Lambda invocation per 2000-3000 records. Visit the AWS Lambda website to learn more about pricing.
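
To make the buffer math above concrete, here is a quick back-of-the-envelope calculation; the record size is an assumed average, not a measured value.

    # Rough invocation math for the buffer settings above (assumed averages).
    buffer_bytes = 3 * 1024 * 1024    # 3 MB transformation buffer
    record_bytes = 1.25 * 1024        # assume ~1.25 KB per AWS WAF record

    records_per_invocation = buffer_bytes / record_bytes
    print(f"~{records_per_invocation:.0f} records per Lambda invocation")
    # Prints roughly 2,458 -- within the 2000-3000 range quoted above.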

Step 2: Configure AWS WAF Logging

Now that you have an active Amazon Kinesis Firehose delivery stream, you can configure your AWS WAF WebACL to turn on logging.

  1. From the AWS Management Console, open WAF & Shield.
  2. Select the WebACL for which you would like to enable logging.
  3. Select the Logging tab.
  4. Select the Enable Logging button.
  5. Next to Amazon Kinesis Data Firehose, select the stream that was created from the CloudFormation template in Step 1 (for example, aws-waf-logs-kinesis-waf-stream) and select Create.

Congratulations! Your AWS WAF WebACL is now configured to send records of requests inspected by AWS WAF to Kinesis Data Firehose. From there, records that match the default action will be dropped, and the remaining records will be stored in S3 in JSON format.
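
If you manage AWS WAF from scripts rather than the console, logging can also be enabled with a single API call. The sketch below uses the classic AWS WAF put_logging_configuration action; the web ACL ARN and delivery stream ARN are placeholders you would replace with your own values.

    import boto3

    # Assumes a CloudFront-scoped (global) web ACL; use the "waf-regional"
    # client instead if your web ACL is associated with an ALB.
    waf = boto3.client("waf", region_name="us-east-1")

    # Placeholder ARNs -- substitute your web ACL ARN and the ARN of the
    # delivery stream created by the CloudFormation template in Step 1.
    WEB_ACL_ARN = "arn:aws:waf::111122223333:webacl/EXAMPLE-WEB-ACL-ID"
    FIREHOSE_ARN = ("arn:aws:firehose:us-east-1:111122223333:"
                    "deliverystream/aws-waf-logs-kinesis-waf-logging")

    waf.put_logging_configuration(
        LoggingConfiguration={
            "ResourceArn": WEB_ACL_ARN,
            "LogDestinationConfigs": [FIREHOSE_ARN],
        }
    )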

Below is a sample of the logs generated from this example. Notice that there are only blocked records in the logs.
 

Figure 3: Sample logs

Conclusion

In this blog, I’ve provided you with a CloudFormation template to generate a Kinesis Data Firehose stream that can be used to log requests blocked by AWS WAF, omitting requests matching the default action. By omitting the default action, I have reduced the number of log records that must be reviewed to identify bad actors, tune new WAF rules, and/or root cause false positives. For unblocked traffic, consider using CloudFront’s access logs with Amazon Athena or CloudWatch Logs Insights to query and analyze the data. To learn more about AWS WAF logs, read our developer guide for AWS WAF.

If you have feedback about this blog post, please submit it in the Comments section below. If you have issues with AWS WAF, start a thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tino Tran

Tino is a Senior Edge Specialized Solutions Architect based out of Florida. His main focus is to help companies deliver online content in a secure, reliable, and fast way using AWS Edge Services. He is an experienced technologist with a background in software engineering, content delivery networks, and security.

Enabling serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/enabling-serverless-security-analytics-using-aws-waf-full-logs/

Traditionally, analyzing data logs required you to extract, transform, and load your data before using a number of data warehouse and business intelligence tools to derive business intelligence from that data—on top of maintaining the servers that ran behind these tools.

This blog post will show you how to analyze AWS Web Application Firewall (AWS WAF) logs and quickly build multiple dashboards, without booting up any servers. With the new AWS WAF full logs feature, you can now log all traffic inspected by AWS WAF into Amazon Simple Storage Service (Amazon S3) buckets by configuring Amazon Kinesis Data Firehose. In this walkthrough, you’ll create an Amazon Kinesis Data Firehose delivery stream to which AWS WAF full logs can be sent, and you’ll enable AWS WAF logging for a specific web ACL. Then you’ll set up an AWS Glue crawler job and an Amazon Athena table. Finally, you’ll set up Amazon QuickSight dashboards to help you visualize your web application security. You can use these same steps to build additional visualizations to draw insights from AWS WAF rules and the web traffic traversing the AWS WAF layer. Security and operations teams can monitor these dashboards directly, without needing to depend on other teams to analyze the logs.

The following architecture diagram highlights the AWS services used in the solution:

Figure 1: Architecture diagram

AWS WAF is a web application firewall that lets you monitor HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, to Amazon CloudFront or to an Application Load Balancer. AWS WAF also lets you control access to your content. Based on conditions that you specify—such as the IP addresses from which requests originate, or the values of query strings—API Gateway, CloudFront, or the Application Load Balancer responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked.

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.

AWS Glue can be used to run serverless queries against your Amazon S3 data lake. AWS Glue can catalog your S3 data, making it available for querying with Amazon Athena and Amazon Redshift Spectrum. With crawlers, your metadata stays in sync with the underlying data (more details about crawlers later in this post). Amazon Athena and Amazon Redshift Spectrum can directly query your Amazon S3 data lake by using the AWS Glue Data Catalog. With AWS Glue, you access and analyze data through one unified interface without loading it into multiple data silos.

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Amazon QuickSight is a business analytics service you can use to build visualizations, perform one-off analysis, and get business insights from your data. It can automatically discover AWS data sources and also works with your data sources. Amazon QuickSight enables organizations to scale to hundreds of thousands of users and delivers responsive performance by using a robust in-memory engine called SPICE.

SPICE stands for Super-fast, Parallel, In-memory Calculation Engine. SPICE supports rich calculations to help you derive insights from your analysis without worrying about provisioning or managing infrastructure. Data in SPICE is persisted until it is explicitly deleted by the user. SPICE also automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wide variety of AWS data sources.

Step one: Set up a new Amazon Kinesis Data Firehose delivery stream

  1. In the AWS Management Console, open the Amazon Kinesis Data Firehose service and choose the button to create a new stream.
    1. In the Delivery stream name field, enter a name for your new stream that starts with aws-waf-logs- as shown in the screenshot below. AWS WAF filters all streams starting with the keyword aws-waf-logs when it displays the delivery streams. Note the name of your stream since you’ll need it again later in the walkthrough.
    2. For Source, choose Direct PUT, since AWS WAF logs will be the source in this walkthrough.

      Figure 2: Select the delivery stream name and source

  2. Next, you have the option to enable AWS Lambda if you need to transform your data before transferring it to your destination. (You can learn more about data transformation in the Amazon Kinesis Data Firehose documentation.) In this walkthrough, there are no transformations that need to be performed, so for Record transformation, choose Disabled.
    Figure 3: Select “Disabled” for record transformations

    1. You’ll have the option to convert the JSON object to Apache Parquet or Apache ORC format for better query performance. In this example, you’ll be reading the AWS WAF logs in JSON format, so for Record format conversion, choose Disabled.

      Figure 4: Choose “Disabled” to not convert the JSON object

  3. On the Select destination screen, for Destination, choose Amazon S3.
    Figure 5: Choose the destination

    1. For the S3 destination, you can either enter the name of an existing S3 bucket or create a new S3 bucket. Note the name of the S3 bucket since you’ll need the bucket name in a later step in this walkthrough.
    2. For Source record S3 backup, choose Disabled, because the destination in this walkthrough is an S3 bucket.

      Figure 6: Enter the S3 bucket name, and select “Disabled” for the source record S3 backup

  4. On the next screen, leave the default conditions for Buffer size, Buffer interval, S3 compression and S3 encryption as they are. However, we recommend that you set Error logging to Enabled initially, for troubleshooting purposes.
    1. For IAM role, select Create new or choose. This opens up a new window that will prompt you to create firehose_delivery_role, as shown in the following screenshot. Choose Allow in this window to accept the role creation. This grants the Kinesis Data Firehose service access to the S3 bucket.

      Figure 7: Select “Allow” to create the IAM role “firehose_delivery_role”

  5. On the last step of configuration, review all the options you’ve chosen, and then select Create delivery stream. This will cause the delivery stream to display as “Creating” under Status. In a couple of minutes, the status will change to “Active,” as shown in the following screenshot. (If you’d rather create the stream programmatically, see the sketch after these steps.)

    Figure 8: Review the options you selected
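
If you’d rather create the delivery stream programmatically than in the console, a minimal boto3 sketch under the same defaults might look like the following; the stream name, bucket ARN, and role ARN are placeholders you would replace with your own values.

    import boto3

    firehose = boto3.client("firehose", region_name="us-east-1")

    # Placeholder values -- substitute your own bucket ARN and the IAM role
    # (for example, firehose_delivery_role) from the console walkthrough.
    BUCKET_ARN = "arn:aws:s3:::example-waf-full-logs-bucket"
    ROLE_ARN = "arn:aws:iam::111122223333:role/firehose_delivery_role"

    firehose.create_delivery_stream(
        DeliveryStreamName="aws-waf-logs-analytics",  # must start with aws-waf-logs-
        DeliveryStreamType="DirectPut",               # AWS WAF writes records directly
        ExtendedS3DestinationConfiguration={
            "RoleARN": ROLE_ARN,
            "BucketARN": BUCKET_ARN,
            "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
            "CompressionFormat": "UNCOMPRESSED",
        },
    )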

Step two: Enable AWS WAF logging for a specific Web ACL

  1. From the AWS Management Console, open the AWS WAF service and choose Web ACLs. Open your Web ACL resource, which can either be deployed on a CloudFront distribution or on an Application Load Balancer.
    1. Choose the Web ACL for which you want to enable logging. (In the below screenshot, we’ve selected a Web ACL in the US East Region.)
    2. On the Logging tab, choose Enable Logging.

      Figure 9: Choose “Enable Logging”

  2. The next page displays all the delivery streams that start with aws-waf-logs. Choose the Amazon Kinesis Data Firehose delivery stream that you created for AWS WAF logs at the start of this walkthrough. (In the screenshot below, our example stream name is “aws-waf-logs-us-east-1”.)
    1. You can also choose to redact certain fields that you wish to exclude from being captured in the logs. In this walkthrough, you don’t need to choose any fields to redact.
    2. Select Create.

      Figure 10: Choose your delivery stream, and select “Create”

After a couple of minutes, you’ll be able to inspect the S3 bucket that you defined in the Kinesis Data Firehose delivery stream. The log files are created in directories by year, month, day, and hour.

Step three: Set up an AWS Glue crawler job and Amazon Athena table

The purpose of a crawler within your Data Catalog is to traverse your data stores (such as S3) and extract the metadata fields of the files. The output of the crawler consists of one or more metadata tables that are defined in your Data Catalog. When the crawler runs, the first classifier in your list to successfully recognize your data store is used to create a schema for your table. AWS Glue provides built-in classifiers to infer schemas from common files with formats that include JSON, CSV, and Apache Avro.

  1. In the AWS Management Console, open the AWS Glue service and choose Crawlers to set up a crawler job.
  2. Choose Add crawler to launch a wizard to set up the crawler job. For Crawler name, enter a relevant name. Then select Next.

    Figure 11: Enter “Crawler name,” and select “Next”

  3. For Choose a data store, select S3 and include the path of the S3 bucket that stores your AWS WAF logs, which you made note of in step 1.3. Then choose Next.

    Figure 12: Choose a data store

  4. When you’re given the option to add another data store, choose No.
  5. Then, choose Create an IAM role and enter a name. The role grants access to the S3 bucket for the AWS Glue service to access the log files.

    Figure 13: Choose “Create an IAM role,” and enter a name

  6. Next, set the frequency to Run on demand. You can also schedule the crawler to run periodically to make sure any changes in the file structure are reflected in your data catalog.

    Figure 14: Set the “Frequency” to “Run on demand”

  7. For output, choose the database in which the Athena table is to be created and add a prefix to identify your table name easily. Select Next.

    Figure 15: Choose the database, and enter a prefix

  8. Review all the options you’ve selected for the crawler job and complete the wizard by selecting the Finish button.
  9. Now that the crawler job parameters are set up, on the left panel of the console, choose Crawlers to select your job and then choose Run crawler. The job creates an Amazon Athena table; the duration depends on the size of the log files. (A scripted alternative using the AWS Glue API follows these steps.)

    Figure 16: Choose “Run crawler” to create an Amazon Athena table

  10. To see the Amazon Athena table created by the AWS Glue crawler job, from the AWS Management Console, open the Amazon Athena service. You can filter by your table name prefix.
      1. To view the data, choose Preview table. This displays the table data with certain fields showing data in JSON object structure.
    Figure 17: Choose “Preview table” to view the data
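
For teams that script their data catalog setup, the console steps above map to a couple of AWS Glue API calls. The following is a hedged sketch; the crawler name, role, database, table prefix, and S3 path are assumptions to replace with your own values.

    import boto3

    glue = boto3.client("glue")

    # Placeholder names -- match these to your own S3 log path, database,
    # table prefix, and the IAM role created for the crawler.
    glue.create_crawler(
        Name="waf-logs-crawler",
        Role="AWSGlueServiceRole-waf-logs",
        DatabaseName="sampledb",
        TablePrefix="waflogs_",
        Targets={"S3Targets": [{"Path": "s3://example-waf-full-logs-bucket/"}]},
    )

    # Equivalent to choosing "Run crawler" in the console.
    glue.start_crawler(Name="waf-logs-crawler")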

Step four: Create visualizations using Amazon QuickSight

  1. From the AWS Management Console, open Amazon QuickSight.
  2. In the Amazon QuickSight window, in the top left, choose New Analysis. Choose New Data set, and for the data source choose Athena. Enter an appropriate name for the data source and choose Create data source.

    Figure 18: Enter the “Data source name,” and choose “Create data source”

  3. Next, choose Use custom SQL to extract all the fields in the JSON object using the following SQL query:
    
        with d as (select
        waf.timestamp,
            waf.formatversion,
            waf.webaclid,
            waf.terminatingruleid,
            waf.terminatingruletype,
            waf.action,
            waf.httpsourcename,
            waf.httpsourceid,
            waf.HTTPREQUEST.clientip as clientip,
            waf.HTTPREQUEST.country as country,
            waf.HTTPREQUEST.httpMethod as httpMethod,
            map_agg(f.name,f.value) as kv
        from sampledb.jsonwaflogs_useast1 waf,
        UNNEST(waf.httprequest.headers) as t(f)
        group by 1,2,3,4,5,6,7,8,9,10,11)
        select d.timestamp,
            d.formatversion,
            d.webaclid,
            d.terminatingruleid,
            d.terminatingruletype,
            d.action,
            d.httpsourcename,
            d.httpsourceid,
            d.clientip,
            d.country,
            d.httpMethod,
            d.kv['Host'] as host,
            d.kv['User-Agent'] as UA,
            d.kv['Accept'] as Acc,
            d.kv['Accept-Language'] as AccL,
            d.kv['Accept-Encoding'] as AccE,
            d.kv['Upgrade-Insecure-Requests'] as UIR,
            d.kv['Cookie'] as Cookie,
            d.kv['X-IMForwards'] as XIMF,
            d.kv['Referer'] as Referer
        from d;
        

  4. To extract individual fields, copy the previous SQL query and paste it in the New custom SQL box, then choose Edit/Preview data.
    Figure 19: Paste the SQL query in “New custom SQL query”

    1. In the Edit/Preview data view, for Data source, choose SPICE, then choose Finish.

      Figure 20: Choose “Spice” and then “Finish”

  5. Back in the Amazon QuickSight console, under the Fields section, select the drop-down menu for the timestamp field and change the data type to Date.

    Figure 21: In the Amazon Quicksight console, change the data type to “Date”

  6. After you see the Date column appear, enter an appropriate name for the visualizations at the top of the page, then choose Save.

    Figure 22: Enter the name for the visualizations, and choose “Save”

  7. You can now create various visualization dashboards with multiple visual types by using the drag-and-drop feature. You can drag and drop combinations of fields such as Action, Client IP, Country, Httpmethod, and User Agents. You can also add filters on Date to view dashboards for a specific timeline. Here are some sample screenshots:
    Figure 23 (a-d): Visualization dashboard samples

Conclusion

You can enable AWS WAF logs to Amazon S3 buckets and analyze the logs while they are being streamed by configuring Amazon Kinesis Data Firehose. You can further enhance this solution by automating the streaming of data and using AWS Lambda for any data transformations based on your specific requirements. Using Amazon Athena and Amazon QuickSight makes it easy to analyze logs and build visualizations and dashboards for executive leadership teams. Using these solutions, you can go serverless and let AWS do the heavy lifting for you.

Author photo

Umesh Kumar Ramesh

Umesh is a Cloud Infrastructure Architect with Amazon Web Services. He delivers proof-of-concept projects, topical workshops, and lead implementation projects to various AWS customers. He holds a Bachelor’s degree in Computer Science & Engineering from National Institute of Technology, Jamshedpur (India). Outside of work, Umesh enjoys watching documentaries, biking, and practicing meditation.

Author photo

Muralidhar Ramarao

Muralidhar is a Data Engineer with the Amazon Payment Products Machine Learning Team. He has a Bachelor’s degree in Industrial and Production Engineering from the National Institute of Engineering, Mysore, India. Outside of work, he loves to hike. You will find him with his camera or snapping pictures with his phone, and always looking for his next travel destination.

This Is My Architecture: Mobile Cryptocurrency Mining

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/this-is-my-architecture-mobile-cryptocurrency-mining/

In North America, approximately 95% of adults over the age of 25 have a bank account. In the developing world, that number is only about 52%. Cryptocurrencies can provide a platform for millions of unbanked people in the world to achieve financial freedom on a more level financial playing field.

Electroneum, a cryptocurrency company located in England, built its cryptocurrency mobile back end on AWS and is using the power of blockchain to unlock the global digital economy for millions of people in the developing world.

Electroneum’s cryptocurrency mobile app allows Electroneum customers in developing countries to transfer ETNs (exchange-traded notes) and pay for goods using their smartphones. Listen in to the discussion between AWS Solutions Architect Toby Knight and Electroneum CTO Barry Last as they explain how the company built its solution. Electroneum’s app is a web application that uses a feedback loop between its web servers and AWS WAF (a web application firewall) to automatically block malicious actors. The system then uses Athena, with a gamified approach, to provide an additional layer of blocking to prevent DDoS attacks. Finally, Electroneum built a serverless, instant payments system using AWS API Gateway, AWS Lambda, and Amazon DynamoDB to help its customers avoid the usual delays in confirming cryptocurrency transactions.

 

How to use AWS WAF to filter incoming traffic from embargoed countries

Post Syndicated from Rajat Ravinder Varuni original https://aws.amazon.com/blogs/security/how-to-use-aws-waf-to-filter-incoming-traffic-from-embargoed-countries/

AWS WAF provides inline inspection of inbound traffic at the application layer to detect and filter against critical web application security flaws from common web exploits that could affect application availability, compromise security, or consume excessive resources. The inbound traffic is inspected against web access control list (web ACL) rules that you can create manually or programmatically—either through AWS WAF Security Automations or through the AWS Marketplace. AWS WAF functions like a typical web application firewall, but with the added reliability and scalability that comes with being an AWS-managed service. It can detect and filter malicious web requests and scale to handle bursts in traffic.

We have customers in public sector and financial services who use AWS WAF to block requests from certain geographical locations, like embargoed countries, by applying geographic match conditions. By using AWS WAF, our customers can create a customized list to easily manage an automated solution for geographic blocking.

In order to reduce the operational burden of maintaining an up-to-date list of rules for geographical location blocking, this blog post provides you with an automated solution that applies geography-based IP (GeoIP) restrictions based on a descriptive JSON file that lists all the locations that you want to block. When you update this file, the automation applies all rules to the specified AWS WAF web ACL. For countries not listed on the geographic match condition (or if you just need to block a subset of IPs from a country), the JSON file also has a section where you can list IP ranges that should be blocked.

If you deploy our solution with the default parameters, it builds the following environment:
 

Figure 1: Solution diagram

As the diagram shows, the solution uses these resources:

  • AWS WAF, which functions like a typical web application firewall, but with the added reliability and scalability that comes with being an AWS-managed service.
  • Two AWS Lambda functions — a Custom Resource function and an Embargoed Countries Parser.
    1. The Custom Resource function helps provision the solution when the AWS WAF conditions, rules, and web ACL are created and configured. It’s also triggered when you upload an initial version of the embargoed countries JSON file to your Amazon Simple Storage (Amazon S3) bucket.
    2. The Embargoed Countries Parser function is triggered whenever a new JSON file is uploaded to the S3 bucket. When an upload occurs, the function parses the new file and enforces AWS WAF rules that reflect what the file describes (an illustrative sketch of the kind of API call involved follows this list).
  • An Amazon Simple Storage Service (Amazon S3) bucket, where you’ll save the embargoed countries JSON file.
  • An AWS Identity and Access Management (IAM) role that gives the Lambda function access to the following resources:
    1. AWS WAF, to list, create, obtain, and update geographic IP restrictions, conditions, and web ACLs.
    2. Amazon CloudWatch logs, to monitor, store, and access log files generated by AWS Lambda.
    3. Amazon S3, to upload and read the embargoed countries JSON file.
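
To illustrate the kind of update the Embargoed Countries Parser function performs, the sketch below calls the classic AWS WAF geo match APIs directly. It is not the solution’s actual Lambda code (that is provisioned by the CloudFormation template); the geo match set ID and country codes are examples only.

    import boto3

    # Illustrative only -- shows the classic AWS WAF geo match API that a
    # parser function of this kind would rely on.
    waf = boto3.client("waf", region_name="us-east-1")

    GEO_MATCH_SET_ID = "EXAMPLE-GEO-MATCH-SET-ID"
    countries_to_block = ["CU", "IR", "KP", "SY"]  # example country codes

    change_token = waf.get_change_token()["ChangeToken"]
    waf.update_geo_match_set(
        GeoMatchSetId=GEO_MATCH_SET_ID,
        ChangeToken=change_token,
        Updates=[
            {
                "Action": "INSERT",
                "GeoMatchConstraint": {"Type": "Country", "Value": country},
            }
            for country in countries_to_block
        ],
    )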

The image below shows a reference architecture where malicious traffic is blocked by AWS WAF rules.
 

Figure 2: AWS WAF integration with Amazon CloudFront / ALB

As a starting point for this walk-through, we created a list of embargoed countries based on information published by the Office of Foreign Assets Control (OFAC) of the US Department of the Treasury.
OFAC sanctions and restrictions vary in scope, and OFAC does not maintain one single list of embargoed countries. OFAC also imposes additional restrictions on doing business with certain individuals and entities that are not covered by the embargoed country sanctions list. For the most up-to-date information about embargoed countries and other OFAC sanctions programs, see the US Department of the Treasury’s Resource Center.

IMPORTANT NOTES:

You’re responsible for updating your list of embargoed countries, based on geographic IP restrictions that you establish and keep up-to-date. Later in the post, we’ll show you how to update and edit your list, but we want to emphasize that ensuring your embargo list is current and comprehensive for your business and compliance needs is a critical part of your responsibility as a customer.

Further, the accuracy of the IP Address to country lookup database used by WAF varies by region. Based on recent tests, our overall accuracy for the IP address to country mapping is 99.8%. We recommend that you work with regulatory compliance experts to decide whether your solution meets your compliance needs.

Deploying the CloudFormation stack

To get started, first make sure you have at least one resource that’s associated with your web ACL. This can be either a CloudFront distribution or an Application Load Balancer (ALB). Then, select the Launch Stack button below to launch an AWS CloudFormation stack in your account. It will take approximately 5 minutes for the CloudFormation stack to complete:
 
Select this image to open a link that starts building the CloudFormation stack

The code for this solution is available on GitHub.

Note: The template launches in the US East (N. Virginia) Region. To launch the solution in a different AWS Region, use the region selector in the console navigation bar.

  1. On the Select Template page, select Next.
  2. On the Specify Details page, give your solution stack a name.
  3. Under Parameters, review the default parameters for the template and modify the values, if you’d like.

    The following screenshot illustrates these parameters.
     

    Figure 3: Review and modify parameters

    The template accepts the following parameters:

    • EndpointType (required; default: CloudFront): Choose whether the endpoint that needs to be protected by AWS WAF is associated with CloudFront or ALB.
    • WebAclId (optional): Insert the web ACL ID, or leave it empty to create a new one.
    • RuleAction (allowed values: BLOCK, COUNT; default: BLOCK): Select the action that AWS WAF takes when a web request comes from an embargoed country.
    • RulePriorityIp (default: 100): Specifies the order in which the embargoed IPs rule will be evaluated in a WebACL.
    • RulePriorityGeo (default: 101): Specifies the order in which the embargoed country rule will be evaluated in a WebACL.

     

  4. Select Next.
  5. On the Options page, you can specify tags (key-value pairs) for the resources in your stack, if you’d like. Then select Next.
  6. On the Review page, review and confirm the settings. Be sure to select the box acknowledging that the template will create AWS Identity and Access Management (IAM) resources with custom names.
  7. To deploy the stack, select Create. In approximately two minutes, the stack creation should be complete. You can verify this on the Events tab by finding your stack ID and looking for the CREATE_COMPLETE status:

    Upon the completion of the CloudFormation stack, you should see CREATE_COMPLETE as the Status. It should look like this:
     

    Figure 4: Look for "CREATE_COMPLETE" as the "Status"

  8. Return to the AWS Management Console, where you’ll see that an additional rule has been added, as shown in the following diagram:
     
    Figure 5: An additional rule has been added

  9. Choose your newly created rule, then go to the Rules details page. You should now see the JSON file that contains our initial list of embargoed countries to filter traffic from. This is a starting point list: it’s your responsibility as a customer to update the embargoed countries list going forward. To update the list of countries, you can edit the JSON file located in the Amazon S3 bucket using the steps in the next section of this post.
     

    Note: Check to make sure that the web ACL is associated with the endpoint you need to protect, or you run the risk of leaving the endpoint unprotected against inbound traffic from the geographic regions you want to block. More information about how to associate an endpoint with a WAF web ACL can be found here.
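
As an alternative to the Launch Stack button and the console steps above, you can also launch the template with the SDK. The sketch below is illustrative only; the stack name and template URL are placeholders, and the parameters mirror the table shown earlier.

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # The template URL below is a placeholder -- use the URL behind the
    # Launch Stack button (or the template from the GitHub repository).
    cfn.create_stack(
        StackName="waf-embargoed-countries",
        TemplateURL="https://s3.amazonaws.com/example-bucket/embargoed-countries.template",
        Parameters=[
            {"ParameterKey": "EndpointType", "ParameterValue": "CloudFront"},
            {"ParameterKey": "RuleAction", "ParameterValue": "BLOCK"},
            {"ParameterKey": "RulePriorityIp", "ParameterValue": "100"},
            {"ParameterKey": "RulePriorityGeo", "ParameterValue": "101"},
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )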

Updating the list of embargoed countries

  1. To find the S3 bucket, go to the Resources tab of the completed CloudFormation stack.
  2. Select the Physical ID of the Amazon S3 bucket that contains an S3 object called
    embargoed-countries.json. You'll be directed to the Amazon S3 bucket.
     
    Figure 6: The "embargoed-countries.json" file

  3. Download the embargoed-countries.json file, edit it, and upload the edited file to the same location, as shown in the CLI sketch that follows. Wait a couple of minutes for the changes to propagate to AWS WAF.
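If you prefer to make the update from the command line, here's a minimal sketch using the AWS CLI; the bucket name is a placeholder for the bucket shown on the stack's Resources tab.

# Minimal sketch; replace YOUR-SOLUTION-BUCKET with the bucket name from the
# stack's Resources tab.
$ aws s3 cp s3://YOUR-SOLUTION-BUCKET/embargoed-countries.json .
# ... edit embargoed-countries.json locally, then upload it back ...
$ aws s3 cp embargoed-countries.json s3://YOUR-SOLUTION-BUCKET/embargoed-countries.json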

Conclusion

You now have a simple solution for blocking inbound traffic from specific geographic regions. Using this solution, AWS WAF helps protect applications served by CloudFront or an ALB from unwanted or unauthorized traffic.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Rajat Ravinder Varuni

As a security architect with Amazon Web Services, Rajat provides subject matter expertise in the architecture and deployment of solutions that reduce the likelihood of data leakage, web application and denial-of-service attacks, as well as in the design of data encryption methodologies to secure mission-critical data. Find his other contributions to the AWS Security Blog here.

Author

Heitor Vital

Heitor Vital is a Solutions Builder at Amazon Web Services. His team outlines AWS best practices and provides prescriptive architectural guidance, as well as automated solutions that you can deploy in your AWS account in minutes. He contributes to projects such as AWS WAF Security Automation, Data Lake Solution, and Serverless Bot Framework.

Amazon API Gateway adds support for AWS WAF

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/amazon-api-gateway-adds-support-for-aws-waf/

This post courtesy of Heitor Lessa, AWS Specialist Solutions Architect – Serverless

Today, I’m excited to tell you about the Amazon API Gateway native integration with AWS WAF. Previously, if you wanted to secure your API in Amazon API Gateway with AWS WAF, you had to deploy a Regional API endpoint and use your own Amazon CloudFront distribution. This new feature enables you to provision any API Gateway endpoint and secure it with AWS WAF without having to configure your own CloudFront distribution to add that capability.

In Part 1 of this series, I described how to protect your API provided by API Gateway using AWS WAF.

In Part 2 of this series, I described how to use API keys as a shared secret between a CloudFront distribution and API Gateway to secure public access to your API in API Gateway. This new AWS WAF integration means that the method described in Part 2 is no longer necessary.

The following image describes methods to secure your API in API Gateway before and after this feature was made available.

Where:

  1. AWS WAF securing CloudFront endpoint only.
  2. AWS WAF securing both CloudFront and API Gateway endpoints natively.
  3. AWS WAF securing Amazon API Gateway endpoints natively where global presence is not needed.

Enabling AWS WAF for an API managed by Amazon API Gateway

For this walkthrough, you can use an existing Pet Store API or any API in API Gateway that you may already have deployed. You create a new AWS WAF web ACL that is later associated with your API Gateway stage.

Follow these steps to create a web ACL:

  1. Open the AWS WAF console.
  2. Choose Create web ACL.
  3. For Web ACL Name, enter ApiGateway-HTTP-Flood-Sample.
  4. For Region, choose US East (N. Virginia).
  5. Choose Next until you reach Step 3: Create rules.
  6. Choose Create rule and enter HTTP Flood Sample.
  7. For Rule type, choose Rate-based rule.
  8. For Rate limit, enter 2000 and choose Create.
  9. For Default action, choose Allow all requests that don’t match any rules.
  10. Choose Review and create.
  11. Confirm that your options look similar to the following image and choose Confirm and create.

You can now follow the steps to enable the AWS WAF web ACL for an existing API in API Gateway:

  1. Open the Amazon API Gateway console.
  2. Choose Stages, prod.
  3. Under Web Application Firewall (WAF), choose ApiGateway-HTTP-Flood-Sample (or the web ACL that you just created).
  4. Choose Save Changes.
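If you want to confirm the association from the command line, here's a minimal sketch that assumes a WAF Classic (regional) web ACL; the Region, API ID, and stage name in the resource ARN are placeholders.

# Minimal sketch; the Region, API ID, and stage name are placeholders.
$ aws waf-regional get-web-acl-for-resource \
    --resource-arn arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod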

Testing your API in API Gateway now secured by AWS WAF

AWS WAF provides HTTP flood protection through a rate-based rule. The rate-based rule is automatically triggered when web requests from a client exceed a configurable threshold. The threshold is defined by the maximum number of incoming requests allowed from a single IP address within a five-minute period.

After this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold. For this example, you defined 2000 requests as a threshold for the HTTP flood rate–based rule.
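For reference, here's a minimal sketch of creating an equivalent rate-based rule with the AWS CLI, assuming AWS WAF Classic (regional); the rule name, metric name, and change token are placeholders.

# Every WAF Classic write call requires a change token; request one first.
$ aws waf-regional get-change-token

# Create a rate-based rule that counts requests per source IP over a
# five-minute window and triggers above 2000 requests.
$ aws waf-regional create-rate-based-rule \
    --name HTTPFloodSample \
    --metric-name HTTPFloodSample \
    --rate-key IP \
    --rate-limit 2000 \
    --change-token EXAMPLE-CHANGE-TOKEN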

Artillery, a modern, open-source load testing toolkit, is used to send a large number of requests directly to the API Gateway Invoke URL to test whether your AWS WAF native integration is working correctly.

First, follow these steps to retrieve the correct Invoke URL of your Pet Store API:

  1. Open the API Gateway console.
  2. In the left navigation pane, open the PetStore API.
  3. Choose Stages, select prod, and copy the Invoke URL value.
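The Invoke URL follows a predictable format; the API ID and Region below are placeholders for the values in your own account. If you export it as a shell variable, reference it as $INVOKE_URL in the commands that follow.

$ export INVOKE_URL=https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/prod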

Second, use cURL to query your API and see the output before the rate limit rule is triggered:

$ curl -s INVOKE_URL/pets

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
] 

Then, use Artillery to send a large number of requests in a short period of time to trigger your rate limit rule:

$ artillery quick -n 2000 --count 10 INVOKE_URL/pets

With this command, Artillery sends 2000 requests to your PetStore API from 10 concurrent users. By doing so, you exceed the request threshold well within the five-minute evaluation window and trigger the rate limit rule. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try re-running the cURL command. You should no longer see a list of pets:

{"message":"Forbidden"}

As you can see from the output, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after the request rate falls below the threshold.

Conclusion

With the AWS WAF native integration with Amazon API Gateway, you no longer have to manage your own Amazon CloudFront distribution to secure your API with AWS WAF. The native integration makes this process seamless.

I hope that you found the information in this post helpful. Remember that you can use this integration today with all Amazon API Gateway endpoints (Edge, Regional, and Private). It is available in the following Regions:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • US West (N. California)
  • EU (Ireland)
  • EU (Frankfurt)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)