Tag Archives: Defense in depth

Three common cloud encryption questions and their answers on AWS

Post Syndicated from Peter M. O'Donnell original https://aws.amazon.com/blogs/security/three-common-cloud-encryption-questions-and-their-answers-on-aws/

At Amazon Web Services (AWS), we encourage our customers to take advantage of encryption to help secure their data. Encryption is a core component of a good data protection strategy, but people sometimes have questions about how to manage encryption in the cloud at the pace and scale of today’s enterprises. Encryption can seem like a difficult task—people often think they need to master complicated systems to encrypt data—but the cloud can simplify it.

In response to frequently asked questions from executives and IT managers, this post provides an overview of how AWS makes encryption less difficult for everyone. In it, I describe the advantages of encryption in the cloud, answer common encryption questions, and point to some AWS services that can help.

Cloud encryption advantages

The most important thing to remember about encryption on AWS is that you always own and control your data. This is an extension of the AWS shared responsibility model, which makes the secure delivery and operation of your applications the responsibility of both you and AWS. You control security in the cloud, including encryption of content, applications, systems, and networks. AWS manages security of the cloud, meaning that we are responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud.

Encryption in the cloud offers a number of advantages beyond the options available in on-premises environments. These include on-demand access to managed services that make it easier to create and control the keys used for cryptographic operations, integrated identity and access management, and automated encryption in transit and at rest. With the cloud, you don’t manage physical security or the lifecycle of hardware. Instead of procuring, configuring, deploying, and decommissioning hardware, AWS offers you a managed service backed by hardware that meets the security requirements of FIPS 140-2. If you need to use a key tens of thousands of times per second, the elastic capacity of AWS services can scale to meet your demands. Finally, you can use integrated encryption capabilities with the AWS services that you use to store and process your data. You pay only for what you use and can instead focus on configuring and monitoring logical security, and on innovating on behalf of your business.

Addressing three common encryption questions

For many of the technology leaders I work with, agility and risk mitigation are top IT business goals. An enterprise-wide cloud encryption and data protection strategy helps define how to achieve fine-grained access controls while maintaining nearly continuous visibility into your risk posture. In combination with the wide range of AWS services that integrate directly with AWS Key Management Service (AWS KMS), AWS encryption services help you to achieve greater agility and additional control of your data as you move through the stages of cloud adoption.

The configuration of AWS encryption services is part of your portion of the shared responsibility model. You’re responsible for your data, AWS Identity and Access Management (IAM) configuration, operating systems and networks, and encryption on the client-side, server-side, and network. AWS is responsible for protecting the infrastructure that runs all of the services offered in AWS.

That still leaves you with responsibilities around encryption—which can seem complex, but AWS services can help. Three of the most common questions we get from customers about encryption in the cloud are:

  • How can I use encryption to prevent unauthorized access to my data in the cloud?
  • How can I use encryption to meet compliance requirements in the cloud?
  • How do I demonstrate compliance with company policies or other standards to my stakeholders in the cloud?

Let’s look closely at these three questions and some ways you can address them in AWS.

How can I use encryption to prevent unauthorized access to my data in the cloud?

Start with IAM

The primary way to protect access to your data is access control. On AWS, this often means using IAM to describe which users or roles can access resources like Amazon Simple Storage Service (Amazon S3) buckets. IAM allows you to tightly define the access for each user—whether human or system—and set the conditions in which that access is allowed. This could mean requiring the use of multi-factor authentication, or making the data accessible only from your Amazon Virtual Private Cloud (Amazon VPC).

Encryption allows you to introduce an additional authorization condition before granting access to data. When you use AWS KMS with other services, you can get further control over access to sensitive data. For example, with S3 objects that are encrypted by KMS, each IAM user must not only have access to the storage itself but also have authorization to use the KMS key that protects the data. This works similarly for Amazon Elastic Block Store (Amazon EBS). For example, you can allow an entire operations team to manage Amazon EBS volumes and snapshots, but, for certain Amazon EBS volumes that contain sensitive data, you can use a different KMS master key with different permissions that are granted only to the individuals you specify. This ability to define more granular access control through independent permission on encryption keys is supported by all AWS services that integrate with KMS.
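
For example, here is a minimal boto3 sketch of writing a KMS-encrypted object to Amazon S3. The bucket name, object key, and KMS key ARN are placeholders; the point is that the caller needs both S3 and KMS permissions for the call to succeed.

```python
import boto3

s3 = boto3.client("s3")

# Writing with SSE-KMS: the caller needs s3:PutObject on the bucket AND
# kms:GenerateDataKey on the key; reading the object back later requires
# s3:GetObject AND kms:Decrypt. Bucket, key, and KMS key ARN are placeholders.
s3.put_object(
    Bucket="example-sensitive-data-bucket",
    Key="reports/q1-summary.csv",
    Body=b"example,data\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-EXAMPLE",
)
```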

When you configure IAM for your users to access your data and resources, it’s critical that you consider the principle of least privilege. This means you grant only the access necessary for each user to do their work and no more. For example, instead of granting users access to an entire S3 bucket, you can use IAM policy language to specify the particular Amazon S3 prefixes that are required and no others. This is important when thinking about the difference between using a service—data plane events—and managing a service—management plane events. An application might store and retrieve objects in an S3 bucket, but it’s rarely the case that the same application needs to list all of the buckets in an account or configure the bucket’s settings and permissions.
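
As an illustration of scoping data-plane access to a prefix, the following boto3 sketch creates an IAM policy limited to a single S3 prefix. The bucket name, prefix, and policy name are hypothetical.

```python
import json

import boto3

iam = boto3.client("iam")

# Least-privilege data-plane policy: allow object access only under the
# "app-data/" prefix, with no bucket-management (management plane) actions.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ObjectAccessWithinPrefix",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-sensitive-data-bucket/app-data/*",
        },
        {
            "Sid": "ListOnlyThatPrefix",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-sensitive-data-bucket",
            "Condition": {"StringLike": {"s3:prefix": ["app-data/*"]}},
        },
    ],
}

iam.create_policy(
    PolicyName="AppDataPrefixOnly",
    PolicyDocument=json.dumps(policy_document),
)
```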

Making clear distinctions between who can use resources and who can manage resources is often referred to as the principle of separation of duties. Consider the circumstance of having a single application with two identities that are associated with it—an application identity that uses a key to encrypt and decrypt data and a manager identity that can make configuration changes to the key. By using AWS KMS together with services like Amazon EBS, Amazon S3, and many others, you can clearly define which actions can be used by each persona. This prevents the application identity from making configuration or permission changes while allowing the manager to make those changes but not use the services to actually access the data or use the encryption keys.

Use AWS KMS and key policies with IAM policies

AWS KMS provides you with visibility and granular permissions control of a specific key in the hierarchy of keys used to protect your data. Controlling access to the keys in KMS is done using IAM policy language. The customer master key (CMK) has its own policy document, known as a key policy. AWS KMS key policies can work together with IAM identity policies or you can manage the permissions for a KMS CMK exclusively with key policies. This gives you greater flexibility to separately assign permissions to use the key or manage the key, depending on your business use case.
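
The following is a simplified boto3 sketch of creating a CMK with a key policy that separates key management from key use. The account ID and role names are placeholders, and a real policy would be tailored to your own principals; the root statement is included so the key remains manageable by account administrators.

```python
import json

import boto3

kms = boto3.client("kms")
ACCOUNT_ID = "111122223333"  # placeholder account ID

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Keeps the key manageable by account administrators and passes
            # the KMS policy lockout safety check.
            "Sid": "EnableRootAccount",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # The manager persona can administer the key but cannot use it
            # for cryptographic operations.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/KeyManagerRole"},
            "Action": [
                "kms:DescribeKey",
                "kms:EnableKey",
                "kms:DisableKey",
                "kms:PutKeyPolicy",
                "kms:TagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion",
            ],
            "Resource": "*",
        },
        {
            # The application persona can use the key but cannot change its
            # configuration or permissions.
            "Sid": "AllowKeyUse",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/AppRole"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}

kms.create_key(
    Description="CMK for sensitive EBS volumes and S3 objects",
    Policy=json.dumps(key_policy),
)
```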

Encryption everywhere

AWS recommends that you encrypt as much as possible. This means encrypting data while it’s in transit and while it’s at rest.

For customers seeking to encrypt data in transit for their public facing applications, our recommended best practice is to use AWS Certificate Manager (ACM). This service automates the creation, deployment, and renewal of public TLS certificates. If you’ve been using SSL/TLS for your websites and applications, then you’re familiar with some of the challenges related to dealing with certificates. ACM is designed to make certificate management easier and less expensive.

One way ACM does this is by generating a certificate for you. Because AWS operates a certificate authority that’s already trusted by industry-standard web browsers and operating systems, public certificates created by ACM can be used with public websites and mobile applications. ACM can create a publicly trusted certificate that you can then deploy into API Gateway, Elastic Load Balancing, or Amazon CloudFront (a globally distributed content delivery network). You don’t have to handle the private key material or figure out complicated tooling to deploy the certificates to your resources. ACM helps you to deploy your certificates either through the AWS Management Console or with automation that uses AWS Command Line Interface (AWS CLI) or AWS SDKs.
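
For instance, requesting a publicly trusted certificate from ACM can be as simple as the following boto3 sketch. The domain names are placeholders, and certificates intended for use with CloudFront must be requested in us-east-1.

```python
import boto3

# Certificates used with CloudFront must be requested in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

# Request a publicly trusted certificate with DNS validation. ACM returns the
# certificate ARN immediately and issues the certificate once the DNS
# validation records are in place.
response = acm.request_certificate(
    DomainName="www.example.com",
    SubjectAlternativeNames=["example.com"],
    ValidationMethod="DNS",
)
print(response["CertificateArn"])
```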

One of the challenges related to certificates is regularly rotating and renewing them so they don’t unexpectedly expire and prevent your users from using your website or application. Fortunately, ACM has a feature that updates the certificate before it expires and automatically deploys the new certificate to the resources associated with it. No more needing to make a calendar entry to remind your team to renew certificates and, most importantly, no more outages because of expired certificates.

Many customers want to secure data in transit for services by using privately trusted TLS certificates instead of publicly trusted TLS certificates. For this use case, you can use AWS Certificate Manager Private Certificate Authority (ACM PCA) to issue certificates for both clients and servers. ACM PCA provides an inexpensive solution for issuing internally trusted certificates and it can be integrated with ACM with all of the same integrative benefits that ACM provides for public certificates, including automated renewal.

For encrypting data at rest, I strongly encourage using AWS KMS. There is a broad range of AWS storage and database services that support KMS integration so you can implement robust encryption to protect your data at rest within AWS services. This lets you have the benefit of the KMS capabilities for encryption and access control to build complex solutions with a variety of AWS services without compromising on using encryption as part of your data protection strategy.

How can I use encryption to meet compliance requirements in the cloud?

The first step is to identify your compliance requirements. This can often be done by working with your company’s risk and compliance team to understand the frameworks and controls that your company must abide by. While the requirements vary by industry and region, the most common encryption compliance requirements are to encrypt your data and make sure that the access control for the encryption keys (for example by using AWS KMS CMK key policies) is separate from the access control to the encrypted data itself (for example through Amazon S3 bucket policies).

Another common requirement is to have separate encryption keys for different classes of data, or for different tenants or customers. This is directly supported by AWS KMS as you can have as many different keys as you need within a single account. If you need to use even more than the 10,000 keys AWS KMS allows by default, contact AWS Support about raising your quota.

For compliance-related concerns, there are a few capabilities that are worth exploring as options to increase your coverage of security controls.

  • Amazon S3 can automatically encrypt all new objects placed into a bucket, even when the user or software doesn’t specify encryption (see the sketch after this list).
  • You can use batch operations in Amazon S3 to encrypt existing objects that weren’t originally stored with encryption.
  • You can use the Amazon S3 inventory report to generate a list of all S3 objects in a bucket, including their encryption status.
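
As a sketch of the first item, the following boto3 call enables default bucket encryption with a KMS key. The bucket name and key ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Default bucket encryption: new objects are encrypted with this KMS key even
# when the uploader doesn't request encryption.
s3.put_bucket_encryption(
    Bucket="example-sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-EXAMPLE",
                }
            }
        ]
    },
)
```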

AWS services that track encryption configurations to comply with your requirements

Anyone who has pasted a screenshot of a configuration into a word processor at the end of the year to memorialize compliance knows how brittle traditional on-premises forms of compliance attestation can be. Everything looked right the day it was installed and still looked right at the end of the year—but how can you be certain that everything was correctly configured at all times?

AWS provides several different services to help you configure your environment correctly and monitor its configuration over time. AWS services can also be configured to perform automated remediation to correct any deviations from your desired configuration state. AWS helps automate the collection of compliance evidence and provides nearly continuous, rather than point in time, compliance snapshots.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and helps you to automate the evaluation of recorded configurations against desired configurations. One of the most powerful features of AWS Config is AWS Config Rules. While AWS Config continuously tracks the configuration changes that occur among your resources, it checks whether these changes violate any of the conditions in your rules. If a resource violates a rule, AWS Config flags the resource and the rule as noncompliant. AWS Config comes with a wide range of prewritten managed rules to help you maintain compliance for many different AWS services. The managed rules include checks for encryption status on a variety of resources, ACM certificate expiration, IAM policy configurations, and many more.
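
For example, the following boto3 sketch deploys the managed ENCRYPTED_VOLUMES rule, which flags unencrypted EBS volumes as noncompliant; it assumes the AWS Config recorder is already enabled in the region.

```python
import boto3

config = boto3.client("config")

# Deploy the managed ENCRYPTED_VOLUMES rule; AWS Config will flag attached,
# unencrypted EBS volumes as noncompliant on an ongoing basis.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "encrypted-volumes",
        "Description": "Checks that attached EBS volumes are encrypted.",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```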

For additional monitoring capabilities, consider Amazon Macie and AWS Security Hub. Amazon Macie is a service that helps you understand the contents of your S3 buckets by analyzing and classifying the data contained within your S3 objects. It can also be used to report on the encryption status of your S3 buckets, giving you a central view into the configurations of all buckets in your account, including default encryption settings. Amazon Macie also integrates with AWS Security Hub, which can perform automated checks of your configurations, including several checks that focus on encryption settings.

Another critical service for compliance outcomes is AWS CloudTrail. CloudTrail enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. AWS KMS records all of its activity in CloudTrail, allowing you to identify who used the encryption keys, in what context, and with which resources. This information is useful for operational purposes and to help you meet your compliance needs.
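
As a quick illustration, this boto3 sketch pulls the last day of AWS KMS activity from the CloudTrail event history so you can see who used which keys.

```python
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up the last 24 hours of AWS KMS API activity recorded by CloudTrail.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "kms.amazonaws.com"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username", "unknown"))
```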

How do I demonstrate compliance with company policy to my stakeholders in the cloud?

You probably have internal and external stakeholders that care about compliance and require that you document your system’s compliance posture. These stakeholders include a range of possible entities and roles, including internal and external auditors, risk management departments, industry and government regulators, diligence teams related to funding or acquisition, and more.

Unfortunately, the relationship between technical staff and audit and compliance staff is sometimes contentious. AWS believes strongly that these two groups should work together—they want the same things. The same services and facilities that engineering teams use to support operational excellence can also provide output that answers stakeholders’ questions about security compliance.

You can provide access to the console for AWS Config and CloudTrail to your counterparts in audit and risk management roles. Use AWS Config to continuously monitor your configurations and produce periodic reports that can be delivered to the right stakeholders. The evolution towards continuous compliance makes compliance with your company policies on AWS not just possible, but often better than is possible in traditional on-premises environments. AWS Config includes several managed rules that check for encryption settings in your environment. CloudTrail contains an ongoing record of every time AWS KMS keys are used to either encrypt or decrypt your resources. The contents of the CloudTrail entry include the KMS key ID, letting your stakeholders review and connect the activity recorded in CloudTrail with the configurations and permissions set in your environment. You can also use the reports produced by Security Hub automated compliance checks to verify and validate your encryption settings and other controls.

Your stakeholders might have further requirements for compliance that are beyond your scope of control because AWS is operating those controls for you. AWS provides System and Organization Controls (SOC) Reports that are independent, third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help you and your auditors understand the AWS controls established to support operations and compliance. You can consult the AWS SOC2 report, available through AWS Artifact, for more information about how AWS operates in the cloud and provides assurance around AWS security procedures. The SOC2 report includes several AWS KMS-specific controls that might be of interest to your audit-minded colleagues.

Summary

Encryption in the cloud is easier than encryption on-premises, it’s powerful, and it can help you meet the highest standards for controls and compliance. For customers looking to rapidly scale and innovate, the cloud provides more comprehensive data protection capabilities than are typically available for on-premises systems. This post provides guidance for how to think about encryption in AWS. You can use IAM, AWS KMS, and ACM to provide granular access control to your most sensitive data and to protect your data in transit and at rest. Once you’ve identified your compliance requirements, you can use AWS Config and CloudTrail to review your compliance with company policy over time, rather than relying on the point-in-time snapshots obtained through traditional audit methods. AWS can provide on-demand compliance evidence, with tools such as reporting from CloudTrail and AWS Config, and attestations such as SOC reports.

I encourage you to review your current encryption approach against the steps I’ve outlined in this post. While every industry and company is different, I believe the core concepts presented here apply to all scenarios. I want to hear from you. If you have any comments or feedback on the approach discussed here, or how you’ve used it for your use case, leave a comment on this post.

And for more information on encryption in the cloud and on AWS, check out the following resources, in addition to our collection of encryption blog posts.


Author

Peter M. O’Donnell

Peter is an AWS Principal Solutions Architect, specializing in security, risk, and compliance with the Strategic Accounts team. Formerly dedicated to a major US commercial bank customer, Peter now supports some of AWS’s largest and most complex strategic customers in security and security-related topics, including data protection, cryptography, identity, threat modeling, incident response, and CISO engagement.

Author

Supriya Anand

Supriya is a Senior Digital Strategist at AWS, focused on marketing, encryption, and emerging areas of cybersecurity. She has worked to drive large scale marketing and content initiatives forward in a variety of regulated industries. She is passionate about helping customers learn best practices to secure their AWS cloud environment so they can innovate faster on behalf of their business.

Deploying defense in depth using AWS Managed Rules for AWS WAF (part 2)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/deploying-defense-in-depth-using-aws-managed-rules-for-aws-waf-part-2/

In this post, I show you how to use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

In part 1 of this post, I described the technologies and methods that you can use to build and manage defense in depth for your network. In part 2, I show you how to use those tools, with AWS Managed Rules as the starting point, to deploy that defense in depth effectively.

You can manage policies for multiple environments with minimal administrative overhead, and policy management can now be part of a deployment pipeline in which you programmatically enforce broad edge network policies and protect production workloads without compromising development speed or safety.

Building robust security policy enforcement relies on a layered approach, and the same applies to securing your web applications. Having edge policies, application policies, and even private or internal policy enforcement layers increases your visibility into requests and supports unified policy enforcement.

Using a layered AWS WAF deployment, such as the one deployed by the procedure that follows, gives you greater flexibility in the number of rules you can use and the option to standardize edge policies and production policies. This lets you test and develop new applications without compromising the production environments.

In the following example, the application load balancer is in us-east-1. To create a web ACL for Amazon CloudFront, you need to deploy the stack in us-east-1. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template can create both web ACLs in this scenario.

Note: If you’re using CloudFront and hosting the origin in us-east-1, you only need to maintain one stack. If your origin is in another region, you need to deploy a stack in us-east-1 for the CloudFront web ACL and another in the region where your application load balancer is. That scenario isn’t covered in the following procedure. None of the underlying infrastructure is deployed by the example AWS CloudFormation templates provided; only the AWS WAF configurations are deployed.

Solution overview

The following diagram illustrates the traffic flow: traffic comes in through CloudFront and is served to the backend load balancers. Both CloudFront and the load balancers support AWS WAF. This is where dedicated web security policies can be enforced to build defense-in-depth, multi-layered policy enforcement.
 

Figure 1: Defense in depth deployment on AWS WAF

Creating AWS Managed Rule web ACLs

During this process we create two web ACLs that are designed for policy enforcement for two dedicated layers. The process won’t deploy the required infrastructure, such as the CloudFront distribution or application load balancers. This example template deploys a single stack in us-east-1 where the CloudFront origin load balancer is located.

To create AWS Managed Rule web ACLs

  1. Download the Amazon-CloudFront-Application-Load-Balancer-AMR.yml template.
  2. Open the AWS Management Console and select the region where the origin application load balancer is deployed. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template that you downloaded deploys both web ACLs for CloudFront and the application load balancer.
     
    Figure 2: Select a region from the console

  3. Under Find Services, enter AWS CloudFormation and press Enter.
     
    Figure 3: Find and select AWS CloudFormation

  4. Select Create stack.
     
    Figure 4: Create stack

  5. Select a template file for the stack.
    1. In the Create stack window, select Template is ready and Upload a template file.
    2. Under Upload a template file, select Choose file and select the Amazon-CloudFront-Application-Load-Balancer-AMR.yml example AWS CloudFormation template you downloaded earlier.
    3. Choose Next.
    Figure 5: Prepare and choose a template

  6. Add stack details.
    1. Enter a name for the stack in Stack name.
    2. Enter a name for the Edge Network AWS WAF WebACL and for the Public Layer AWS WAF WebACL.
    3. Set a rate limit for HTTP GET requests in HTTP Get Flood Protection (this rate is applied per IP address over a 5-minute period).
    4. Set a rate limit for HTTP POST requests in HTTP Post Flood Protection.
    5. Use the Login URL to apply the limit to a targeted login page. If you want to rate-limit all HTTP POST requests, leave the login URL section blank.
    Figure 6: Set stack details

  7. By default, all the rules within the rule sets are in action override (count mode). This does not include the rate-based rules. If you want to deploy selected rules in block mode, remove them from the pre-populated list by highlighting and deleting them. It’s best practice to evaluate firewall rules in count mode before changing them to block mode. Choose Next to move to the next step.
     
    Figure 7: Default managed rules options

  8. Here you can add tags to apply to the resources in the stack that these rules will be deployed to. Tagging is a recommended best practice because it enables you to add metadata to resources during creation. For more information on tagging, see the Tagging AWS resources documentation. Then choose Next. On the following page, choose Create stack.
     
    Figure 8: Add tags

  9. Wait until the stack has been deployed. When deployment is complete, the status of the stack will change to CREATE_COMPLETE.
     
    Figure 9: Stack deployment status
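
If you prefer to deploy the stack programmatically rather than through the console, a boto3 sketch of the same steps might look like the following. The stack name is a placeholder, and any parameters (web ACL names, rate limits, login URL) would use the parameter keys defined in the template.

```python
import boto3

# Deploy in us-east-1 because the template includes the CloudFront (global) web ACL.
cloudformation = boto3.client("cloudformation", region_name="us-east-1")

with open("Amazon-CloudFront-Application-Load-Balancer-AMR.yml") as template:
    template_body = template.read()

cloudformation.create_stack(
    StackName="waf-defense-in-depth",  # placeholder stack name
    TemplateBody=template_body,
    # The web ACL names, rate limits, and login URL from step 6 can be passed
    # here with Parameters=[{"ParameterKey": ..., "ParameterValue": ...}] using
    # the parameter keys defined in the template.
)

# Block until the stack reaches CREATE_COMPLETE, as in step 9.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="waf-defense-in-depth")
```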

Associating the web ACLs to resources

During this process we associate the two newly created web ACLs with the corresponding infrastructure resources. In this example, these are the CloudFront distribution and its origin application load balancer, which should have been created beforehand.

To associate the web ACLs to resources

  1. In the console search for and select WAF & Shield.
     
    Figure 10: Select WAF & Shield

  2. Select Web ACLs from the list on the left.
     
    Figure 11: Select Web ACLs

  3. Select Global (CloudFront) from the drop-down list at the top of the page. Choose the Edge-Network-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs).
     
    Figure 12: Select the web ACL

  4. Next, select Associated AWS resources and then choose Add AWS resources.
     
    Figure 13: Add AWS resources

  5. Select the CloudFront distribution you want to protect. Choose Add.
     
    Figure 14: Select the CloudFront distribution to protect

  6. Select the region where the application load balancer is deployed (US East (N. Virginia) in this example) from the drop-down list at the top of the page. Then repeat the association process from steps 2 through 5: choose the Public-Application-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs), but this time associate it with the application load balancer that serves as the CloudFront distribution origin.
     
    Figure 15: Application layer Web ACL association
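
The same associations can be made programmatically. The following boto3 sketch shows the regional association for the application load balancer (both ARNs are placeholders); the CloudFront association is done on the distribution itself rather than through associate_web_acl.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Regional association for the application load balancer; both ARNs are placeholders.
wafv2.associate_web_acl(
    WebACLArn=(
        "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/"
        "Public-Application-Layer-WebACL/EXAMPLE-ID"
    ),
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/origin-alb/EXAMPLE-ID"
    ),
)

# A CloudFront distribution is not associated with associate_web_acl. Instead,
# set the web ACL ARN on the distribution itself (the WebACLId field of the
# distribution config), for example through update_distribution on the
# cloudfront client or in the distribution's CloudFormation template.
```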

Conclusion

By using AWS WAF to manage a multi-layer web application security enforcement policy, you can build a defense-in-depth stack for each web application. This configuration helps you maintain and deploy web application firewall configurations across deployment stages and across different types of applications. AWS Managed Rules provide prebuilt rule sets that can easily be deployed to create a layered defense that fits into your web application deployment pipelines. If you’d like to centrally manage and control AWS WAF across your AWS Organization, consider AWS Firewall Manager.

The AWS CloudFormation templates used in this procedure are in this GitHub repository.


Author

Daniel Cisco Swart

AWS Managed Rules is something Daniel worked on personally over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with security competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS’s top security partners.

Defense in depth using AWS Managed Rules for AWS WAF (part 1)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/defense-in-depth-using-aws-managed-rules-for-aws-waf-part-1/

In this post, I discuss how you can use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

The post is in two parts. This first part describes AWS Managed Rules for AWS WAF and how it can be used to provide defense in depth. The second part shows how to apply AWS Managed Rules for WAF.

AWS Managed Rules for AWS WAF is a service that provides groups of rules created by Amazon Web Services (AWS) or by an AWS technology partner. By using AWS Managed Rules, you can reduce the administrative overhead of configuring rules for AWS WAF. You still need a comprehensive strategy for web application policy enforcement to help you make the best use of AWS Managed Rules for your web applications.

By using a layered policy enforcement strategy, you can create policy enforcement that’s specific to each part of your applications. This helps you avoid having to maintain and manage monolithic AWS WAF configurations for each of your applications. When you can separate policies for the edge network and for the application layer network, replicating separate policies across larger workloads becomes modular. This makes your application security more agile and lets you protect public-facing web applications without writing new rules or including rules that aren’t relevant to your web application.

Policy enforcement becomes even less of an administrative burden when you use AWS Firewall Manager to enforce policies across all accounts. This helps ensure organizations have robust policy enforcement measures across multiple accounts, with increased application layer visibility.

The new AWS WAF JSON document-style configuration enables traditional code review processes. You can now easily manage AWS WAF configurations on multiple layers of your web applications. This has also enabled partners to create more dynamic and robust rules that they can deliver on AWS WAF, which ultimately helps those customers manage their web application security policies.

AWS WAF enhancements

AWS WAF uses web ACL capacity units (WCU) to calculate and control the operating resources that are used to run your rules, rule groups, and web ACLs.

You can use JSON key-value pair document-based configuration to more easily integrate AWS WAF into the development practices of your organization. Using document-style configuration removes the need to make multiple API calls to create objects in the correct order before you can create and deploy a web ACL to protect your web applications.

Using this method lets you implement firewall changes with normal development and operations best practices, because the configuration becomes infrastructure as code. This enables version control and code review before you deploy updates to your production environment.
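
To illustrate the document style, here is a boto3 sketch that defines a web ACL with the Core rule set attached in count mode. The web ACL name and metric names are placeholders, and the same JSON structure could equally be checked into version control and deployed through CloudFormation.

```python
import boto3

# A CLOUDFRONT-scoped web ACL must be created through the us-east-1 endpoint.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Document-style rule: attach the AWS Managed Rules Core rule set in count
# mode so it can be evaluated before it blocks anything.
common_rule_set = {
    "Name": "AWS-AWSManagedRulesCommonRuleSet",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
    "OverrideAction": {"Count": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CommonRuleSet",
    },
}

wafv2.create_web_acl(
    Name="edge-network-webacl",   # placeholder name
    Scope="CLOUDFRONT",           # use "REGIONAL" for an ALB or API Gateway
    DefaultAction={"Allow": {}},
    Rules=[common_rule_set],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "EdgeNetworkWebACL",
    },
)
```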

Solution overview

The following diagram illustrates the layers and functions of a defense-in-depth solution. The text that follows describes each layer.
 

Figure 1: Solution overview diagram

Edge network layer policy enforcement

The edge network is the first layer of policy enforcement and should be used for broad security policy enforcement. This is the ideal place for rules such as the AWS Managed Rules Core rule set (CRS), geographic location blocks, IP reputation lists, anonymous IP lists, and basic rate limiting. By limiting known bad traffic at the edge network, the CRS limits the exposure of the application layer to known bad IP address ranges, malicious requests, bad bots, and request floods. This provides broad protection to the inner application layer against malicious activity, and it can be applied regardless of the web application being served at the application layer.

AWS WAF is supported on Amazon CloudFront, which, combined with the distributed denial of service (DDoS) mitigation capabilities of AWS Shield, forms your outer layer of web application security enforcement.

It’s a common misconception that CloudFront is only a content delivery platform, but it also has robust transparent reverse proxy capabilities. CloudFront can help protect your environment from a broad range of web application risks. For example, you can use CloudFront to ensure that HTTP requests conform to standards on the far outer layer of your web application environment while serving content closer to the user.

Application layer policy enforcement

The next level of enforcement should be an application load balancer in a public subnet with another web ACL at the CloudFront origin. This policy enforcement layer is where you create a regional web ACL for the CloudFront origin. In addition, this layer is where you apply application-specific rules. For example, if you have a web application that uses a LAMP stack, it would be best to use AWS Managed Rules for SQL Injection, Linux, and PHP as an enforcement layer.

Note: IP-based enforcement is not effective on this part of the environment. Consider using an origin custom header on the CloudFront distribution, and then use that header to create a BLOCK rule, placed first in this web ACL, that denies any request arriving without the header. This rule needs to be created manually and is not configured by the supplied templates; a sketch of such a rule follows.
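
Here is a sketch of such a rule, shown as a Python dict matching the AWS WAF JSON rule structure. The header name and shared-secret value are placeholders; you would add this rule to the Rules list of the application-layer web ACL (for example with update_web_acl) and set the same header on the CloudFront origin.

```python
# Placed first (Priority 0) so it is evaluated before the managed rule groups.
missing_origin_header_rule = {
    "Name": "block-requests-without-origin-header",
    "Priority": 0,
    "Statement": {
        "NotStatement": {
            "Statement": {
                "ByteMatchStatement": {
                    # Placeholder header name and shared-secret value; set the
                    # same pair as an origin custom header on the distribution.
                    "SearchString": b"example-shared-secret",
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "EXACTLY",
                }
            }
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "MissingOriginHeader",
    },
}
```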

(Optional) Third-party web application firewall layer policy enforcement

AWS WAF enforces policies on inbound requests and doesn’t have outbound inspection capabilities. If you need to enforce policies based on outbound responses, you can use Amazon Machine Image (AMI) based web application firewalls, which are available via the AWS Marketplace.

An instance-based web application firewall works well here because most of the computationally expensive heavy lifting is done by the AWS WAF enforcement layers. The third-party layer is where you can enforce policies that require requests to be stateful.

Using an AMI from AWS Marketplace also gives you access to capabilities such as higher visibility, threat intelligence, and robust firewall rules. This adds an additional layer of security enhancement to your environment.

(Optional) Private layer policy enforcement

When working with a traditional three-tier web architecture, you can add an additional layer of enforcement on the private layer, which can be used for the web front ends. This is where you would deploy an application load balancer in a private subnet serving your web front ends. This load balancer handles any computationally expensive regex-based rule enforcement that you don’t want to run on the instance-based WAF. It also gives you another layer of visibility before requests reach the web front ends themselves. This layout is shown in Figure 2 below for reference.

Use case examples

The AWS CloudFormation templates supplied can be deployed in a modular fashion. If the application load balancer is located in the us-east-1 region, you can deploy a single template called Amazon-CloudFront-Application-Load-Balancer-AMR.yml.

If the application load balancer isn’t located in us-east-1, you can use the Amazon-CloudFront-EdgeLayer-AMR.yml template to deploy the stack in us-east-1 to support the web ACL on CloudFront, and then deploy ApplicationLayer-Load-Balancer-AMR.yml in the region where the origin application load balancer is deployed to create its web ACL.

All CloudFormation templates are available on the GitHub project page, and a summary of each can be found in the main readme.md file.

Note: All the individual rules in each rule set are set to ACTION OVERRIDE for initial deployment. If any of the rule actions in the group are set to block or allow, this override changes the behavior so that matching rules are only counted. You can change the setting to NO ACTION OVERRIDE after a period of evaluation to avoid disrupting production workloads with potential false positives.

Edge network and application load balancer origin using AWS Managed Rules for AWS WAF

When considering some of the web application best practices on AWS for resiliency and security, the recommendation is to use CloudFront where possible, because it can terminate TLS/SSL connections and serve cached content close to the end user. CloudFront has advanced mitigation capabilities such as SYN cookies and a massively distributed network separate from the traditional Amazon Elastic Compute Cloud (Amazon EC2) networking space. CloudFront also supports AWS WAF rate limits, IP blacklists, and broad security policies, which can be enforced at the edge network layer.

In the example Amazon-CloudFront-Application-Load-Balancer-AMR.yml template, we place a rate-limit for HTTP GET and HTTP POST methods. This is dependent upon expected traffic request rates. You can review Amazon CloudWatch metrics for your CloudFront distribution or application load balancer to determine the baseline for your rate limit based on the maximum expected requests per minute.

The rate limit is adjustable within the parameter options at deployment of the AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml. The HTTP POST rate limit also helps to slow down credential stuffing attacks—a form of brute force attack—on login pages. The ApplicationLayer-Load-Balancer-AMR.yml template used in part 2 of this post also deploys the Amazon IP reputation list to drop IP addresses based on Amazon internal threat intelligence.
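
For reference, a rate-based rule scoped to POST requests against a login path looks roughly like the following in the AWS WAF JSON rule structure (shown as a Python dict). The limit and path are placeholders and should be tuned to your observed traffic.

```python
# Rate-based rule: block a source IP that exceeds the limit of matching
# requests in a 5-minute window, scoped down to POSTs against the login path.
post_flood_rule = {
    "Name": "http-post-flood-protection",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 1000,               # placeholder; tune from CloudWatch baselines
            "AggregateKeyType": "IP",
            "ScopeDownStatement": {
                "AndStatement": {
                    "Statements": [
                        {
                            "ByteMatchStatement": {
                                "SearchString": b"POST",
                                "FieldToMatch": {"Method": {}},
                                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                                "PositionalConstraint": "EXACTLY",
                            }
                        },
                        {
                            "ByteMatchStatement": {
                                "SearchString": b"/login",   # placeholder login path
                                "FieldToMatch": {"UriPath": {}},
                                "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                                "PositionalConstraint": "STARTS_WITH",
                            }
                        },
                    ]
                }
            },
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "HttpPostFloodProtection",
    },
}
```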

We also use the AWS Managed Rules CommonRuleSet, which blocks cross-site scripting (XSS) attacks; requests with no user agent; requests with known bad user agents; large queries, posts, cookies, and URLs; and known LFI/RFI attacks.

Note: The size constraint rules aren’t recommended for protecting APIs or web applications with large HTTP POSTs or long cookies. Evaluate the possible effects of size constraint rules thoroughly before setting them to block requests.

There is also an AWS Managed Rule for known bad inputs which is based on threat intelligence gathered by the AWS Threat Research Team. Finally, there is an admin protection rule set that drops requests to known management login pages. It’s not advised that web applications have front door access to admin controls.

At the origin, it’s a good idea to use an application load balancer that also supports AWS WAF. This is where you want to apply application-specific web policies. For example, this is where you would apply rules to protect against a SQL injection attack if your web application uses a SQL database.

In the example AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml, for the origin application load balancer, we use AWS Managed Rules for SQL injections, Linux rule set, Unix rule set, PHP rule set, and the WordPress rule set to cover most eventualities customers could be using on their web applications.

For the example solution in part 2 of this post, if the origin application load balancer is in us-east-1, you can use Amazon-CloudFront-Application-Load-Balancer-AMR.yml, which will deploy both web ACLs.

If the origin is not in us-east-1, you can use two example templates which are Amazon-CloudFront-EdgeLayer-AMR.yml for the edge network and ApplicationLayer-Load-Balancer-AMR.yml in the origin region.

Using AWS Managed WAF Rules on public and private application load balancers

Some customers have reasons not to use CloudFront and instead use two application load balancers: one load balancer for the public-facing environment serving the web front ends, and an internal load balancer for the application backends.

The following figure shows a deployment that uses two load balancers. A public load balancer works with the edge network WAF to connect to a web front end in a private subnet and an internal load balancer connects to the backend application.
 

Figure 2: Diagram of stacked load balancers

In this use case, we can still use the same structure of an edge network layer and an application layer, now implemented only with load balancers. In a three-tier web application deployment, there is an external-facing application load balancer and an internal application load balancer, and you can apply the same style of policy enforcement to each.

Note: To deploy something similar to this example, you can use the template EdgeLayerALB-PrivateLayerALB-AMR.yml in the relevant regions where the load balancers have been deployed.

Alarms and logging

After deploying these AWS CloudFormation templates, you should consider setting CloudWatch alarms on certain metrics for the HTTP GET and HTTP POST flood rules, as well as for the reputation and anonymous IP lists. Customers who are comfortable with development can also use Lambda responders, triggered by CloudWatch Events, to change a rule from COUNT to BLOCK automatically. Enabling full logging for each web ACL also gives you higher visibility into each request and makes potential investigations easier.
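
As a starting point, the following boto3 sketch creates a CloudWatch alarm on blocked requests for one rate-based rule and enables full web ACL logging to a Kinesis Data Firehose delivery stream. The web ACL name, rule name, ARNs, and threshold are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Alarm when the POST flood rule blocks more than the threshold in 5 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="waf-post-flood-blocks",
    Namespace="AWS/WAFV2",
    MetricName="BlockedRequests",
    Dimensions=[
        {"Name": "WebACL", "Value": "Public-Application-Layer-WebACL"},
        {"Name": "Rule", "Value": "HttpPostFloodProtection"},
        {"Name": "Region", "Value": "us-east-1"},
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)

# Enable full logging to a Kinesis Data Firehose delivery stream whose name
# begins with "aws-waf-logs-". Both ARNs are placeholders.
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": (
            "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/"
            "Public-Application-Layer-WebACL/EXAMPLE-ID"
        ),
        "LogDestinationConfigs": [
            "arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-example"
        ],
    }
)
```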

Conclusion

The new enhancements of AWS WAF make it easier to manage a multi-layer web application security enforcement policy, because you can use AWS WAF to maintain and deploy web application firewall configurations across different deployment stages and different types of applications. By making use of partner rules or AWS Managed Rules, you can significantly reduce administrative overhead, and with AWS Firewall Manager you can enforce these policies across all of an organization’s accounts. Part 2 of this post shows you one example of how this can be done.


Author

Daniel Cisco Swart

AWS Managed Rules is something Daniel worked on personally over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with security competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS’s top security partners.