Tag Archives: AWS security

How to analyze AWS WAF logs using Amazon Elasticsearch Service

Post Syndicated from Tom Adamski original https://aws.amazon.com/blogs/security/how-to-analyze-aws-waf-logs-using-amazon-elasticsearch-service/

Log analysis is essential for understanding the effectiveness of any security solution. It can be valuable for day-to-day troubleshooting and also for your long-term understanding of how your security environment is performing.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules.

With the release of access to full AWS WAF logs, you now have the ability to view all of the logs generated by AWS WAF while it’s protecting your web applications. In addition, you can use Amazon Kinesis Data Firehose to forward the logs to Amazon Simple Storage Service (Amazon S3) for archival, and to Amazon Elasticsearch Service to analyze access to your web applications.

In this blog post, I’ll show you how you can analyze AWS WAF logs using Amazon Elasticsearch Service (Amazon ES). I’ll walk you through how to find out in near-real time which AWS WAF rules get triggered, why, and by which request. I’ll also show you how to create a historical view of your web applications’ access trends for long-term analysis.

Architecture overview

When enabled, AWS WAF sends logs to Amazon Kinesis Data Firehose. From there, you can decide what to do with them next. In my architecture, I’ll forward the relevant logs to Amazon ES for analysis and to Amazon S3 for long-term storage.
 

Figure 1: Architecture overview

In this blog, I will show you how to enable AWS WAF logging to Amazon ES in the following steps:

  1. Create an Amazon Elasticsearch Service domain.
  2. Configure Kinesis Data Firehose for log delivery.
  3. Enable AWS WAF logs on a web access control list (ACL) to send data to Kinesis Data Firehose.
  4. Analyze AWS WAF logs in Amazon ES.

Note: You must create the Kinesis Data Firehose in the same region as the AWS WAF web ACL it will receive logs from. If you’re enabling logging for AWS WAF deployed on Amazon CloudFront, then the Kinesis Data Firehose must be created in the Northern Virginia AWS Region (us-east-1).

Preparing Amazon Elasticsearch Service

If you don’t have your Amazon ES environment already running on AWS, you’ll need to create one. Start by navigating to the Elasticsearch Service from your AWS Console and choosing Create a new domain. There, you will be able to define the namespace for your Elasticsearch cluster and choose the version you’d like to use. In my setup, I’ll be using awswaf-logs as my namespace and version 6.3 of Elasticsearch.
 

Figure 2: Defining Amazon ES Domain

You can also decide on other parameters, like the number of nodes in your cluster or how to access the dashboards. See the Amazon Elasticsearch Service Documentation to find out more details about setting up your environment, or review this previous blog post, which goes deeper into Amazon ES setup.

When your environment is ready, you’ll be able to click on your namespace and find the link to your Kibana dashboard. Kibana is the data visualization plugin for Elasticsearch that you’ll use to analyze your logs.
 

Figure 3: Identifying the Kibana management URL

While Elasticsearch can automatically classify most of the fields from the AWS WAF logs, you need to inform Elasticsearch how to interpret fields that have specific formatting. Therefore, before you start sending logs to Elasticsearch, you should create an index pattern template so Elasticsearch can distinguish AWS WAF logs and classify the fields correctly.

The pattern template I’m using below will be applied to all indexes beginning with the string awswaf-. That’s because in my setup all AWS WAF log index files created in Elasticsearch will always begin with the character string awswaf- followed by a date.

The pattern template defines two fields from the logs. It indicates to Elasticsearch that the httpRequest.clientIp field contains an IP address and that the timestamp field is represented in epoch time (milliseconds). All the other log fields will be classified automatically.

In the template, I’m also setting the number of Elasticsearch shards that should be used for the index. Shards help with distributing large amounts of data, and their number depends on the size of the index—the larger the index, the more shards it should be deployed across. I’m going to use only one shard since I expect my index to be less than 30 GB. For recommendations on sizing your shards appropriately, refer to Jon Handler’s blog post on using Amazon Elasticsearch Service.


PUT _template/awswaf-logs
{
  "index_patterns": ["awswaf-*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "waflog": {
      "properties": {
        "httpRequest": {
          "properties": {
            "clientIp": {
              "type": "keyword",
              "fields": {
                "keyword": {
                  "type": "ip"
                }
              }
            }
          }
        },
        "timestamp": {
          "type": "date",
          "format": "epoch_millis"
        }
      }
    }
  }
}

I’m going to apply this template to Elasticsearch through the Kibana developer interface (from the Dev Tools tab in the Kibana dashboard), but it can also be done through an API call to the Amazon ES endpoint.
 

Figure 4: Applying the index pattern template in Kibana

The template will be applied to all log indexes starting with the awswaf- prefix. This is the prefix name you’ll use at a later stage in the Kinesis Data Firehose configuration.
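
If you’d rather apply the template through an API call to the Amazon ES endpoint, as mentioned above, here’s a minimal Python sketch. It assumes the requests and requests-aws4auth packages, an IAM identity that the domain’s access policy allows, and a placeholder domain endpoint.

# Minimal sketch: apply the same index template with a signed HTTP call
# instead of the Kibana Dev Tools console. The endpoint is a placeholder.
import boto3
import requests
from requests_aws4auth import AWS4Auth

region = "us-east-1"                                                      # your domain's region
endpoint = "https://search-awswaf-logs-xxxx.us-east-1.es.amazonaws.com"   # placeholder endpoint

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                   region, "es", session_token=credentials.token)

template = {
    "index_patterns": ["awswaf-*"],
    "settings": {"number_of_shards": 1},
    "mappings": {
        "waflog": {
            "properties": {
                "httpRequest": {
                    "properties": {
                        "clientIp": {
                            "type": "keyword",
                            "fields": {"keyword": {"type": "ip"}},
                        }
                    }
                },
                "timestamp": {"type": "date", "format": "epoch_millis"},
            }
        }
    },
}

resp = requests.put(f"{endpoint}/_template/awswaf-logs", auth=awsauth, json=template)
print(resp.status_code, resp.text)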

Setting up Kinesis for log delivery

With Amazon ES now ready to receive AWS WAF logs, I’ll show you how to use Amazon Kinesis Data Firehose to deliver your logs to Amazon S3 and Amazon ES. This is a prerequisite before you enable logging in AWS WAF.

First, go to the Kinesis service in the AWS Console and under the Data Firehose tab choose Create delivery stream. If the Data Firehose stream is going to be used for WAF logs, its name must start with aws-waf-logs-. I’ll enable logging for the already existing web ACL that I’m using for OWASP top 10 protection, therefore I’ll name my delivery stream aws-waf-logs-owasp.

Keep all the settings at the default until you get to the Select destination page. On this page, you’ll configure Amazon Elasticsearch Service as the destination for your logs. Make sure you enter awswaf as the index name and waflog as the type to match the index pattern template you already created in Elasticsearch earlier.
 

Figure 5: Setting up Amazon ES as Data Firehose log destination

On this page, you can also set what to back up to Amazon S3—either all records or just the failed ones. You’ll then need to decide on the bucket to use, specify the prefix, and then select Next.
 

Figure 6: Setting up S3 backup in Data Firehose

On the next page, you’ll have the option to update the buffer conditions—size and interval. These indicate how often Kinesis Data Firehose will send new data to Amazon ES. The lower the values you set, the closer to real time your log delivery will be. In my example, I’ve configured a buffer size of 1 MB and a buffer interval of 60 seconds.
 

Figure 7: Setting up Data Firehose buffer conditions

Next, you’ll need to specify an IAM role that enables Kinesis Data Firehose to send data to Amazon S3 and Amazon ES. You can select an existing IAM role or create a new one. For more information about creating a role, see Controlling Access with Amazon Kinesis Data Firehose.

Finally, review all the settings, and complete the creation of your Kinesis Data Firehose.
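
The same delivery stream configuration can also be scripted. The following is a minimal boto3 sketch that mirrors the console settings above (index awswaf, type waflog, 1 MB / 60 second buffers, failed-records-only S3 backup); the role ARN, domain ARN, and bucket name are placeholders you would replace with your own.

# Minimal sketch: create the aws-waf-logs- delivery stream with boto3.
# The IAM role must allow Firehose to write to both the ES domain and the S3 bucket.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="aws-waf-logs-owasp",            # must start with aws-waf-logs-
    DeliveryStreamType="DirectPut",
    ElasticsearchDestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",   # placeholder
        "DomainARN": "arn:aws:es:us-east-1:111122223333:domain/awswaf-logs",  # placeholder
        "IndexName": "awswaf",          # matches the awswaf-* template created earlier
        "TypeName": "waflog",
        "IndexRotationPeriod": "OneDay",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 1},
        "S3BackupMode": "FailedDocumentsOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",  # placeholder
            "BucketARN": "arn:aws:s3:::my-waf-log-backup-bucket",                # placeholder
            "Prefix": "awswaf/",
        },
    },
)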

Enabling AWS WAF logging

Now that you have Kinesis Data Firehose prepared, you can enable logging for an existing AWS WAF Web ACL. You can do this in the AWS Console under the AWS WAF & AWS Shield service. Go to the Web ACLs tab and select the Web ACL for which you want to start logging. Then, under the Logging tab, choose Enable Logging.
 

Figure 8: Enabling logging for AWS WAF web ACL

On the next page, specify the Kinesis Data Firehose that the logs should be delivered to. If you can’t see any options in the drop-down list, make sure that the Kinesis Data Firehose you configured earlier is deployed in the same region as the web ACL you’re enabling logging for and that its name begins with aws-waf-logs-. Remember that for CloudFront web ACL deployments, the Kinesis Data Firehose should be created in the Northern Virginia Region.
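
If you prefer to enable logging programmatically, here’s a minimal sketch using the WAF Classic APIs in boto3. The web ACL and delivery stream ARNs are placeholders; use the waf client instead of waf-regional for a CloudFront-scoped web ACL.

# Minimal sketch: enable AWS WAF (Classic) logging instead of using the console.
import boto3

waf = boto3.client("waf-regional", region_name="us-east-1")   # or boto3.client("waf") for CloudFront

web_acl_arn = "arn:aws:waf-regional:us-east-1:111122223333:webacl/EXAMPLE-WEB-ACL-ID"       # placeholder
firehose_arn = "arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-owasp"  # placeholder

waf.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": web_acl_arn,
        "LogDestinationConfigs": [firehose_arn],   # exactly one Firehose ARN
    }
)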

Searching through logs

As soon as you log back into your Kibana dashboard for Amazon ES, you should see a new index appear called awswaf-<date>. You’ll need to define an index pattern that will encompass all the potential awswaf log entries. Configure your index pattern description as awswaf-*, and on the next screen select timestamp as the time filter.
 

Figure 9: Configuring the Kibana index pattern

 

Figure 10: Configuring a time filter for the index pattern

After you complete the setup of your index pattern, you’ll be able to run searches through your logs by going into the Discover tab in Kibana. The searches can be based on any fields in the AWS WAF logs. For example, you can look for specific HTTP headers, query strings, or source IP addresses to find out what action has been applied to them. To find out more about what fields are available in the AWS WAF logs, see the AWS WAF Developer Guide.

In the example search that follows, I’m looking for all of the log entries where a web ACL has blocked the traffic and the HTTP arguments were equal to name=badname.
 

Figure 11: An example ad-hoc search

This approach can help you identify specific requests based on their parameters, such as a cookie, host header, or query string, and understand why they are being blocked (or allowed) and by which web ACL. It is especially useful for tuning AWS WAF to match your web application.
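
The same kind of ad-hoc query shown in Figure 11 can also be run against the Amazon ES endpoint from a script. The sketch below assumes the requests and requests-aws4auth packages and a placeholder domain endpoint; the field names follow the AWS WAF log format described above.

# Minimal sketch: search awswaf-* for blocked requests whose arguments contain name=badname.
import boto3
import requests
from requests_aws4auth import AWS4Auth

region = "us-east-1"
endpoint = "https://search-awswaf-logs-xxxx.us-east-1.es.amazonaws.com"   # placeholder endpoint

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                   region, "es", session_token=credentials.token)

query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"action": "BLOCK"}},
                {"match": {"httpRequest.args": "name=badname"}},
            ]
        }
    },
    "size": 10,
}

resp = requests.post(f"{endpoint}/awswaf-*/_search", auth=awsauth, json=query)
for hit in resp.json().get("hits", {}).get("hits", []):
    src = hit["_source"]
    print(src["httpRequest"]["clientIp"], src["action"], src["httpRequest"].get("args"))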

Building a dashboard

You can also use Amazon ES to create custom graphs to look at long-term trends—and you can combine your graphs into a dashboard. For example, you may want to see the number of blocked and allowed requests by IP address to identify potentially hostile IP addresses.

To understand the breakdown of the requests per IP address, you should first set up a graph showing the number of requests that were blocked and allowed by AWS WAF per source. To do this, go to the Visualize section of your Kibana dashboard and create a new visualization. You can select from a variety of visualization options, but I’m going to use the horizontal bar chart for the example that follows.
 

Figure 12: Configuring primary bucket parameters

  1. Select your index and then configure your graph options. For a horizontal bar chart, leave the Y axis to represent the count of requests, but expand the primary bucket (X-Axis) form and change the fields to show the source IP address and action:
    1. From the Aggregation dropdown, select Terms.
    2. From the Field dropdown, select httpRequest.clientIP.
    3. From the Order By dropdown, select metric: Count.
       
      Figure 13: Configuring bucket series

  2. Next, expand the Split Series section and adjust the following dropdown fields:
    1. From the Sub Aggregation dropdown, select Terms.
    2. From the Field dropdown select action.keyword.
    3. From the Order By dropdown, select metric: Count.
       
      Figure 14: Sample Kibana graphs

    4. Apply your changes by selecting the play button at the top of the page. You should see the results appear on the right side of the screen.
       
      Figure 15: Select the “Apply changes” button

You can combine graphs into dashboards that aggregate relevant information about AWS WAF functionality. In the sample dashboard below, I’m showing multiple visualizations focusing on different dimensions like requests per country, IP address, and even which WAF rules get the most hits.
     

Figure 16: You can combine graphs

I’ve made the sample dashboard and sample visualizations available for download. They can be imported to your Kibana instance in the Management/SavedObjects section of the dashboard.

Conclusion

The ability to access AWS WAF logs for any web ACL allows you to start analyzing the requests coming to your web applications. In this post, I’ve covered an example of how you can use Amazon Elasticsearch Service as a destination for AWS WAF logs and shown you how to run searches on the log data as well as build graphs and dashboards. Amazon ES is just one of the destination options for native log delivery. To learn more, please refer to the documentation on how to send data directly to Amazon S3, Amazon Redshift, and Splunk.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the WAF forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Tom Adamski

Tom is a Specialist Solutions Engineer in Networking. He helps customers across the EMEA region design their network environments on AWS. He has over 10 years of experience in the networking and security industries. In his spare time, he’s an avid surfer who’s always on the lookout for cold water surf breaks.

Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/setting-the-record-straight-on-bloomberg-businessweeks-erroneous-article/

Today, Bloomberg BusinessWeek published a story claiming that AWS was aware of modified hardware or malicious chips in SuperMicro motherboards in Elemental Media’s hardware at the time Amazon acquired Elemental in 2015, and that Amazon was aware of modified hardware or chips in AWS’s China Region.

As we shared with Bloomberg BusinessWeek multiple times over the last couple months, this is untrue. At no time, past or present, have we ever found any issues relating to modified hardware or malicious chips in SuperMicro motherboards in any Elemental or Amazon systems. Nor have we engaged in an investigation with the government.

There are so many inaccuracies in ‎this article as it relates to Amazon that they’re hard to count. We will name only a few of them here. First, when Amazon was considering acquiring Elemental, we did a lot of due diligence with our own security team, and also commissioned a single external security company to do a security assessment for us as well. That report did not identify any issues with modified chips or hardware. As is typical with most of these audits, it offered some recommended areas to remediate, and we fixed all critical issues before the acquisition closed. This was the sole external security report commissioned. Bloomberg has admittedly never seen our commissioned security report nor any other (and refused to share any details of any purported other report with us).

The article also claims that after learning of hardware modifications and malicious chips in Elemental servers, we conducted a network-wide audit of SuperMicro motherboards and discovered the malicious chips in a Beijing data center. This claim is similarly untrue. The first and most obvious reason is that we never found modified hardware or malicious chips in Elemental servers. Aside from that, we never found modified hardware or malicious chips in servers in any of our data centers. And, this notion that we sold off the hardware and datacenter in China to our partner Sinnet because we wanted to rid ourselves of SuperMicro servers is absurd. Sinnet had been running these data centers since we ‎launched in China, they owned these data centers from the start, and the hardware we “sold” to them was a transfer-of-assets agreement mandated by new China regulations for non-Chinese cloud providers to continue to operate in China.

Amazon employs stringent security standards across our supply chain – investigating all hardware and software prior to going into production and performing regular security audits internally and with our supply chain partners. We further strengthen our security posture by implementing our own hardware designs for critical components such as processors, servers, storage systems, and networking equipment.

Security will always be our top priority. AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting their security above all else. We are constantly vigilant about potential threats to our customers, and we take swift and decisive action to address them whenever they are identified.

– Steve Schmidt, Chief Information Security Officer

Visualizing Amazon GuardDuty findings

Post Syndicated from Mike Fortuna original https://aws.amazon.com/blogs/security/visualizing-amazon-guardduty-findings/

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help protect your AWS accounts and workloads. Enable GuardDuty and it begins monitoring for:

  • Anomalous API activity
  • Potentially unauthorized deployments and compromised instances
  • Reconnaissance by attackers.

GuardDuty analyzes and processes VPC flow log, AWS CloudTrail event log, and DNS log data sources. You don’t need to manually manage these data sources because the data is automatically leveraged and analyzed when you activate GuardDuty. For example, GuardDuty consumes VPC Flow Log events directly from the VPC Flow Logs feature through an independent and duplicative stream of flow logs. As a result, you don’t incur any operational burden on existing workloads.

GuardDuty helps find potential threats in your AWS environment by producing security findings that you can view in the GuardDuty console or consume through Amazon CloudWatch Events, which is a service that makes alerts actionable and easier to integrate into existing event management and workflow systems. One common question we hear from customers is “how do I visualize these findings to generate meaningful insights?” In this post, we’re going to show you how to create a dashboard that includes visualizations like this:
 

Figure 1: Example visualization

You’ll learn how you can use AWS services to create a pipeline for your GuardDuty findings so you can log and visualize them. The services include Amazon CloudWatch Events, Amazon Kinesis Data Firehose, Amazon Elasticsearch Service (with its built-in Kibana plugin), Amazon S3, Amazon SNS, and Amazon Cognito, all of which appear in the architecture below.

Architecture

The architectural diagram below illustrates the pipeline we’ll create.
 

Figure 2: Architectural diagram

We’ll walk through the data flow to explain the architecture and highlight the additional customizations available to you.

  1. Amazon GuardDuty is enabled in an account and begins monitoring CloudTrail logs, VPC flow logs, and DNS query logs. If a threat is detected, GuardDuty forwards a finding to CloudWatch Events. For a newly generated finding, GuardDuty sends a notification based on its CloudWatch event within 5 minutes of the finding. CloudWatch Events allows you to send upstream notifications to various services filtered on your configured event patterns. We’ll configure an event pattern that only forwards events coming from the GuardDuty service.
  2. We define two targets in our CloudWatch Event Rule. The first target is a Kinesis Firehose stream for delivery into an Elasticsearch domain and an S3 bucket. The second target is an SNS Topic for Email/SMS notification of findings. We’ll send all findings to our targets; however, you can filter and format the findings you send by using a Lambda function (or by event pattern matching with a CloudWatch Event Rule); a minimal rule sketch appears just before the Deployment Steps section below. For example, you could send only high-severity alarms (that is, findings with detail.severity > 7).
  3. The Firehose stream delivers findings to Amazon Elasticsearch, which provides visualization and analysis for our event findings. The stream also delivers findings to an S3 bucket. The S3 bucket is used for long-term archiving. This data can augment your data lake and you can use services such as Amazon Athena to perform advanced analytics.
  4. We’ll search, explore, and visualize the GuardDuty findings using Kibana and the Elasticsearch query Domain Specific Language (DSL) to gain valuable insights. Amazon Elasticsearch has a built-in Kibana plugin to visualize the data and perform operational analyses.
  5. To provide a simplified and secure authentication method, we provide user authentication to Kibana with Amazon Cognito User Pools. This method provides improved security compared to traditional IP whitelists or proxy infrastructure.
  6. Our second CloudWatch Event target is SNS, which has subscribed email endpoint(s) that allow your operations teams to receive email (or SMS messages) when a new GuardDuty Event is received.

If you would like to centralize your findings from multiple regions into a single S3 bucket, you can adapt this pipeline. You would deploy the frontend of the pipeline by configuring Kinesis Firehose in the remote regions to point to the S3 bucket in the centralized region. You can leverage prefixes in the Kinesis Firehose configuration to identify the source region. For example, you would configure a prefix of us-west-1 for events originating from the us-west-1 region. Analytic queries from tools such as Athena can then selectively target the desired region.
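
As a reference for steps 1 and 2 above, here’s a minimal boto3 sketch of a CloudWatch Events rule that matches all GuardDuty findings and forwards them to the two targets. The ARNs are placeholders; the CloudFormation template in the next section creates the real resources for you.

# Minimal sketch: CloudWatch Events rule matching GuardDuty findings, with two targets.
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Match every finding that GuardDuty emits in this account and region.
events.put_rule(
    Name="guardduty-findings-to-pipeline",
    EventPattern=json.dumps({"source": ["aws.guardduty"]}),
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-findings-to-pipeline",
    Targets=[
        {   # Kinesis Data Firehose stream that loads Amazon ES and S3
            "Id": "firehose",
            "Arn": "arn:aws:firehose:us-east-1:111122223333:deliverystream/guardduty-findings",  # placeholder
            "RoleArn": "arn:aws:iam::111122223333:role/events-to-firehose",                      # placeholder
        },
        {   # SNS topic for email/SMS notification
            "Id": "sns",
            "Arn": "arn:aws:sns:us-east-1:111122223333:guardduty-findings",                      # placeholder
        },
    ],
)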

Deployment Steps

This CloudFormation template will install the pipeline and components required for GuardDuty visualization:
 
Select button to launch stack

When you start the stack creation process you will be prompted for the following information:

  • Stack name — This is the name of the stack you will create
  • EmailAddress — This email address is used to create a username in Cognito and a subscriber to the SNS topic.
  • ESDomainName — This will be the name given to the Elasticsearch Domain.
  • IndexName — This will be the Index created by Firehose to load data into Elasticsearch.

 

Figure 3: The “Create stack” interface

Once the infrastructure is installed, you’ll follow two main steps, each of which is described in detail later:

  1. Add Cognito authentication to Kibana, which is hosted on the Elasticsearch domain. At the time of writing, this can’t be done natively in CloudFormation. We’ll also confirm the SNS subscription so we can start to receive GuardDuty Findings via email.
  2. Configure Kibana with the index, the appropriate scripted fields, and the dashboard to provide the visualizations. We’ll also enable GuardDuty to start monitoring your account and send sample findings to test the pipeline.

Step 1: Enable Cognito authentication in Kibana

To enable user authentication to your dashboards hosted in Kibana, you need to enable the integration from the Elasticsearch domain that was created by the CloudFormation template.

  1. Open the AWS Console and select the Cognito service. Select Manage User Pools to access the User Pool that was created by CloudFormation. Select the user pool beginning with the name VisualizeGuardDutyUserPool and, under the App Integration menu item, select Domain name.
     
    Figure 4: The “Domain name” interface

  2. You need to create a unique domain prefix to allow Kibana to authenticate using Cognito. Enter a unique domain prefix (it can only contain lowercase letters, numbers, and hyphens). After entering the prefix, select the Check Availability button to ensure it’s available in the region. If it’s available, select Save Changes button.
  3. From the AWS Console, select the CloudFormation service.
  4. Select the template you created for your pipeline, select the Outputs tab, and then, under Value, copy the value of ESCognitoRole. You’ll use this role when you enable Cognito authentication of Elasticsearch.
     
    Figure 5: The “Outputs” tab and the “ESCognitoRole” key

  5. Next, browse to the Elasticsearch Service, select the domain you created from the CloudFormation template, and select the Configure cluster button:
     
    Figure 6: The “Configure cluster” button

  6. Under the Kibana authentication section, select the Enable Amazon Cognito for authentication checkbox. You’ll be presented with several fields you need to configure, including: Cognito User Pool (the name of the user pool should start with VisualizeGuardDutyUserPool), Cognito Identity Pool (the name of the identity pool should start with VisualizeGuardDutyIDPool), and IAM Role Name (this was copied in step 4 earlier). A Cognito User Pool is a user directory in Amazon Cognito; we use this to create a user account that provides authentication to Kibana. Amazon Cognito Identity Pools (federated identities) enable you to create unique identities for your users and federate them with identity providers; the Cognito Identity Pool in our case is used to provide federated access to Kibana. After you provide values for these fields, select the Submit button. (A scripted alternative to this console step is sketched after this list.)
     
    Figure 7: The “Kibana authentication” interface

  7. The cluster reconfiguration will take several minutes to complete processing. When you see Domain status as Active, you can proceed.
  8. Finally, confirm the subscription email you received from SNS. Look for an email from: AWS Notifications <[email protected]>, open the message and select Confirm subscription to allow SNS to send you email when the SNS Topic receives a notification for new GuardDuty findings.
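
If you’d rather script the Kibana authentication change from step 6 (since it can’t be done natively in CloudFormation), here’s a minimal boto3 sketch. The domain name, pool IDs, and role ARN are placeholders; use the values created by the stack, including the ESCognitoRole output copied in step 4.

# Minimal sketch: enable Cognito authentication for Kibana on the Amazon ES domain.
import boto3

es = boto3.client("es", region_name="us-east-1")

es.update_elasticsearch_domain_config(
    DomainName="visualize-guardduty",                      # placeholder domain name
    CognitoOptions={
        "Enabled": True,
        "UserPoolId": "us-east-1_EXAMPLE",                                        # VisualizeGuardDutyUserPool...
        "IdentityPoolId": "us-east-1:11111111-2222-3333-4444-555555555555",       # VisualizeGuardDutyIDPool...
        "RoleArn": "arn:aws:iam::111122223333:role/ESCognitoRole-EXAMPLE",        # from the stack outputs
    },
)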

Step 2: Set up the Kibana dashboard and enable GuardDuty

Now, you can set up the Kibana dashboard with custom visualizations.

  1. Open the CloudFormation service page and select the stack you created earlier.
  2. Under the Outputs section, copy the Kibana URL.
     
    Figure 8: Copy the Kibana URL

  3. Paste the Kibana URL in a new browser window.
  4. Check your email client. You should have an email containing the temporary password from Cognito. Copy the temporary password and use it to log in to Cognito. If you haven’t received the email, check your email junk folder. You can also create additional users in the Cognito User Pool that was created from the CloudFormation Stack to provide additional users Kibana access.
     
    Figure 9: Example email with temporary password

  5. At the login prompt, enter the email address and password for the Cognito user the CloudFormation template created. A prompt to change your password will appear. Change your password to proceed. The Cognito User Pool requires: upper case letters, lower case letters, special characters, and numbers with a minimum length of 8 characters.
  6. It’s time to add mapping information to your index to instruct Kibana that some of the fields are delivered as geopoints. This allows these fields to be properly visualized with a Coordinate Map. Select Dev Tools in the menu on the left side:
    Figure 10: Select “Dev tools”

  8. Paste the following API call in the text box to provide the appropriate mappings for the networkConnectionAction and portProbeAction geolocation fields. This calls the Elasticsearch API and updates the geolocation mapping for those fields:
    
    PUT _template/gdt
    {
      "template": "gdt*",
      "settings": {},
      "mappings": {
        "_default_": {
          "properties": {
            "detail.service.action.portProbeAction.portProbeDetails.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            },
            "detail.service.action.networkConnectionAction.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            }        
          }
        }
      }
    }
    

  9. After you paste the API call, be sure to remove any whitespace after the ending brace, and then select the green arrow to execute it. You should receive a message confirming that the call was successful.
     
    Figure 11: Paste the API call

  10. Next, enable GuardDuty and send sample findings so you can create the Kibana Dashboard with data present. Find the GuardDuty service in the AWS Console and select the Get started button. (Steps 10 through 12 can also be scripted; see the sketch after this list.)
  11. From the Welcome to GuardDuty page, select the Enable GuardDuty button.
  12. Next, send some sample events. From the GuardDuty service, select the Settings menu on the left-hand menu, and then select Generate sample findings as shown here:
     
    Figure 12: The “Generate sample findings” button

  13. Optionally, if you want to test with real GuardDuty findings, you can leverage the Amazon GuardDuty Tester. This AWS CloudFormation template creates an isolated environment with a bastion host, a tester EC2 instance, and two target EC2 instances to simulate five types of common attacks that GuardDuty is built to detect and notify you with generated findings. Once deployed, you would use the tester EC2 instance to execute a shell script to generate GuardDuty findings. Additional detail about this option can be found in the GuardDuty documentation.
  14. On the Kibana landing page, in the menu on the left side, create the Index by selecting Management.
  15. On the Management page, select Index Patterns.
  16. On the Create index pattern page, under Index patterns, enter gdt-* (if you used a different IndexName in the CloudFormation template, use that here), and then select Next Step.

    Note: It takes several minutes for the GuardDuty findings to generate a CloudWatch Event, work through the pipeline, and create the index in Elasticsearch. If the index doesn’t appear initially, please wait a few minutes and try again.

     

    Figure 13: The “Create index pattern” page

  17. Under Time Filter field name, select time from the drop-down list, and then select Create index pattern.
     
    Figure 14: The “Time Filter field name” list
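
Steps 10 through 12 above can also be scripted. The sketch below uses boto3 to enable GuardDuty (or reuse an existing detector) and generate sample findings; the region is a placeholder.

# Minimal sketch: enable GuardDuty and generate sample findings from a script.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

detector_ids = guardduty.list_detectors()["DetectorIds"]
if detector_ids:
    detector_id = detector_ids[0]          # a detector already exists in this region
else:
    detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Omitting FindingTypes generates one sample finding for each supported type.
guardduty.create_sample_findings(DetectorId=detector_id)
print(f"Sample findings generated for detector {detector_id}")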

Create scripted fields

With the Index defined, we will now create two scripted fields that your dashboard visualizations will use.

Define the severity level

  1. Select the Index you just created, and then select scripted fields.
     
    Figure 15: The “scripted fields” tab

  2. Select Add Scripted Field, and enter the following information:
    • Name — sevLevel
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — copy and paste this script into the text-entry field:
      
      // Map the numeric GuardDuty severity to a label
      // (Low: 1.0-3.9, Medium: 4.0-6.9, High: 7.0 and above).
      if (doc['detail.severity'].value < 4.0) {
          return "Low";
      } else if (doc['detail.severity'].value < 7.0) {
          return "Medium";
      } else {
          return "High";
      }
      

  3. After entering the information, select Create Field.

The sevLevel field provides a value-to-level mapping as defined by GuardDuty Severity Levels. This allows you to visualize the severity levels in a more user-friendly format (High, Medium, and Low) instead of a cryptic numerical value. To generate sevLevel, we used Kibana painless scripting, which allows custom field creation.

Define the attack type

  1. Now create a second scripted field for typeCategory. The typeCategory field extracts the finding attack type. Enter the following information:
    • Name — typeCategory
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — Copy and paste this script into the text-entry field:
      
      def path = doc['detail.type.keyword'].value;
      if (path != null) {
          int firstColon = path.indexOf(":");
          if (firstColon > 0) {
              return path.substring(0, firstColon);
          }
      }
      return "";
      

  2. After entering the information, select Create Field.

The typeCategory field is used to define the broad category “attack type.” The source field (detail.type.keyword) provides a lot of detailed information (for example: Recon:EC2/PortProbeUnprotectedPort), but we want to visualize the category of “attack type” in the high-level dashboard (that is, only Recon). We can still visualize on a more granular level, if necessary.

Create the Kibana Dashboard

  1. Create the Kibana dashboard by importing a JSON file containing its definition. To do this, download the Kibana dashboard and visualizations definition JSON file from here.
  2. Select Management in the menu on the left, and then select Saved Objects. On the right, select Import.
  3. Select the JSON file you downloaded and select Open. This imports the GuardDuty dashboard and visualizations. Select Yes, overwrite all objects.
  4. In the Index Pattern Conflicts section, under New index pattern, select gdt-*, and then select Confirm all changes.

Dashboard in action

  1. Select Dashboard in the menu on the left.
  2. Select the Guard Duty Summary link.

Your GuardDuty Dashboard will look like this:
 

Figure 16: The GuardDuty dashboard with callouts

The dashboard provides the following visualizations:

  1. This filter allows you to filter sample findings from real findings. If you generate sample findings from the GuardDuty AWS console, this filter allows you to remove the sample findings from the dashboard.
  2. The GuardDuty — Affected Instances chart shows which EC2 instances have associated findings. This visualization allows you to filter specific instances from display by selecting them in the graphic.
  3. The Guard Duty — Threat Type chart allows you to filter on the general attack type (inner circle) as well as the specific attack type (outer circle).
  4. The Guard Duty — Events Per Day graph allows you to visualize and filter on a specific time or date to show findings for that specific time, as well as search for temporal patterns in findings.
  5. GuardDuty — Top10 Findings provides a list of the top 10 findings by count.
  6. GuardDuty — Total Events provides the total number of events based on the criteria chosen. This value will change based on the filters defined.
  7. The GuardDuty — Heatmap — Port Probe Source Countries visualizes the countries where port probes are issued from. This is a Coordinate Map visualization that allows you to see the source and volume of the port probes targeting your instances.
  8. The GuardDuty — Network Connection Source Countries visualizes where brute force attacks are coming from. This is a Region Map visualization that allows you to highlight the country the brute force attacks are sourced from.
  9. GuardDuty — Severity Levels is a pie chart that shows findings by severity level (High, Med, Low), and you can filter by a specific level (that is, only show high-severity findings). This visualization uses the scripted field we created earlier for simplified visualization.
  10. The All-GuardDuty table includes the raw findings for all events. This provides complete raw event detail and the ability to filter at very granular levels.

In a previous blog, we saw how you can create a Kibana dashboard to visualize your network security posture by visualizing your VPC flow logs. This GuardDuty dashboard augments that dashboard. You can use a single Elasticsearch cluster to host both of these dashboards, in addition to other data sources you want to analyze and report on.

Conclusion

We’ve outlined an approach to rapidly build a pipeline to help you archive, analyze, and visualize your GuardDuty findings for rapid insight and actionable intelligence. You can extend this solution in a number of ways, including:

  • Modifying the alert email sent with a structured message (instead of raw JSON)
  • Adding additional visualizations, such as heatmaps or time-series charts
  • Extending the solution across AWS accounts or regions.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

Michael Fortuna

Michael is a Solutions Architect in AWS supporting enterprise customers and their journey to the cloud. Prior to his work on AWS and cloud technologies, Michael’s areas of focus included software-defined networking, security, collaboration, and virtualization technologies. He’s very excited to work as an SA because it allows him to dive deep on technology while helping customers.

Ravi Sakaria

Ravi is a Senior Solutions Architect at AWS based in New York. He works with enterprise customers as they transform their business and journey to the cloud. He enjoys the culture of innovation at Amazon because it’s similar to his prior experiences building startup companies. Outside of work, Ravi enjoys spending time with his family, cooking, and watching the New Jersey Devils.

AWS Compliance Center for financial services now available

Post Syndicated from Frank Fallon original https://aws.amazon.com/blogs/security/aws-compliance-center-financial-services/

On Tuesday, September 4, AWS announced the launch of an AWS Compliance Center for our Financial Services (FS) customers. This addition to our compliance offerings gives you a central location to research cloud-related regulatory requirements that impact the financial services industry. Prior to the launch of the AWS Compliance Center, customers preparing to adopt AWS for their FS workloads typically had to browse multiple in-depth sources to understand the expectations of regulatory agencies in each country.

The AWS Compliance Center is designed to make this process easier. It aggregates any given country’s regulatory position regarding the adoption and operation of cloud services. Key components of the FS industry—including regulatory approvals, data privacy, and data protection—are explained, along with the steps you must take throughout your adoption of AWS services to help satisfy regulatory requirements. You can browse the information in the portal and export it as printable documents.

We expect the AWS Compliance Center to evolve as our customers’ compliance needs change and as regulators begin to address the challenges and opportunities that cloud services create in the FS industry. The AWS Compliance Center covers 13 countries, and we’ll continue to enhance it with additional countries and information based on your needs.

New guide helps financial services customers in Brazil navigate cloud requirements

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/new-guide-helps-financial-services-customers-in-brazil-navigate-cloud-requirements/

We have a new resource to help our financial services customers in Brazil navigate regulatory requirements for using the cloud. The AWS User Guide to Financial Services Regulations in Brazil is a deep dive into the Brazilian National Monetary Council’s Resolution No. 4,658. The cybersecurity cloud resolution is the first of its kind by regulators in Brazil. The guide details how our services may be able to assist you in achieving these security expectations.

The resolution covers topics such as implementing a cybersecurity policy, incident response, entering into agreements with cloud service providers, subcontracting, business continuity, and notification requirements. Our guide addresses each of these issues and provides specific guidance on how you can use AWS to satisfy requirements.

The AWS User Guide to Financial Services Regulations in Brazil is part of a series of publications that seek to facilitate customer compliance. It’s available in English and Portuguese. We’ll continue to monitor the regulatory environment in Brazil and around the world and to publish additional resources.

If you have any questions, please contact your account executive.

Amazon sponsors R00tz at DEF CON 2018

Post Syndicated from Patrick McCanna original https://aws.amazon.com/blogs/security/amazon-sponsors-rootz-at-def-con-2018/

It’s early August, and we’re quickly approaching Hacker summer camp (AKA DEF CON). The Black Hat Briefings start August 8, DEF CON starts August 9, and many people will be closely following the latest security presentations at both conferences. But there’s another, exclusive conference happening at DEF CON that Amazon is excited to be a part of: we’re sponsoring R00tz Asylum. R00tz is a conference dedicated to teaching kids ages 8-18 how to become white-hat hackers.

Kids who attend will learn how hackers break into computers and networks and how cybersecurity experts defend against hackers. They’ll get hands-on experience soldering and disassembling computers. They’ll learn about lock-picking and how to use 3D printers, and they’ll even get to compete in a security-oriented “Capture the Flag” event where competitors are rewarded for discovering secrets embedded in servers and encrypted files. And throughout the conference, they’ll hear talks from top-notch presenters on topics that include hacking web apps, hacking elections, and hacking conference badges covered in blinky LEDs.

Many of Amazon’s best security experts started young, with limited access to mentors. We learned how to keep hackers from breaking into computers and networks before cybersecurity became the industry it is today. Amazon’s support for R00tz is our chance to give back to the next generation of cyber-security professionals. Kids who are interested in learning about security will get a safe environment and access to mentors. If you’re in Las Vegas for DEF CON, head over to the R00tz FAQs to learn more about the event, and be sure to check out the conference schedule.

How AWS uses automated reasoning to help you achieve security at scale

Post Syndicated from Andrew Gacek original https://aws.amazon.com/blogs/security/protect-sensitive-data-in-the-cloud-with-automated-reasoning-zelkova/

At AWS, we focus on achieving security at scale to diminish risks to your business. Fundamental to this approach is ensuring your policies are configured in a way that helps protect your data, and the Automated Reasoning Group (ARG), an advanced innovation team at AWS, is using automated reasoning to do it.

What is automated reasoning, you ask? It’s a method of formal verification that automatically generates and checks mathematical proofs which help to prove the correctness of systems; that is, fancy math that proves things are working as expected. If you want a deeper understanding of automated reasoning, check out this re:Invent session. While the applications of this methodology are vast, in this post I’ll explore one specific aspect: analyzing policies using an internal Amazon service named Zelkova.

What is Zelkova? How will it help me?

Zelkova uses automated reasoning to analyze policies and the future consequences of policies. This includes AWS Identity and Access Management (IAM) policies, Amazon Simple Storage Service (S3) policies, and other resource policies. These policies dictate who can (or can’t) do what to which resources. Because Zelkova uses automated reasoning, you no longer need to think about what questions you need to ask about your policies. Using fancy math, as mentioned above, Zelkova will automatically derive the questions and answers you need to be asking about your policies, improving confidence in your security configuration(s).

How does it work?

Zelkova translates policies into precise mathematical language and then uses automated reasoning tools to check properties of the policies. These tools include automated reasoners called Satisfiability Modulo Theories (SMT) solvers, which use a mix of numbers, strings, regular expressions, dates, and IP addresses to prove and disprove logical formulas. Zelkova has a deep understanding of the semantics of the IAM policy language and builds upon a solid mathematical foundation. While tools like the IAM Policy Simulator let you test individual requests, Zelkova is able to use mathematics to talk about all possible requests. Other techniques guess and check, but Zelkova knows.

You may have noticed, as an example, the new “Public / Not public” checks in S3. These are powered by Zelkova:
 

Figure 1: The “Public/Not public” checks in S3

S3 uses Zelkova to check each bucket policy and warns you if an unauthorized user is able to read or write to your bucket. When a bucket is flagged as “Public”, there are some public requests that are allowed to access the bucket. However, when a bucket is flagged as “Not public”, all public requests are denied. Zelkova is able to make such statements because it has a precise mathematical representation of IAM policies. In fact, it creates a formula for each policy and proves a theorem about that formula.

Consider the following S3 bucket policy statement where my goal is to disallow a certain principal from accessing the bucket:


{
    "Effect": "Allow",
    "NotPrincipal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

Unfortunately, this policy statement does not capture my intentions. Instead, it allows access for everybody in the world who is not the given principal. This means almost everybody now has access to my bucket, including anonymous unauthorized users. Fortunately, as soon as I attach this policy, S3 flags my bucket as “Public”—warning me that there’s something wrong with the policy I wrote. How did it know?

Zelkova translates this policy into a mathematical formula:

(Resource = “arn:aws:s3:::test-bucket”) ∧ (Principal ≠ 111122223333)

Here, ∧ is the mathematical symbol for “and” which is true only when both its left and right side are true. Resource and Principal are variables just like you would use x and y in algebra class. The above formula is true exactly when my policy allows a request. The precise meaning of my policy has now been defined in the universal language of mathematics. The next step is to decide if this policy formula allows public access, but this is a hard problem. Now Zelkova really goes to work.

A counterintuitive trick sometimes used by mathematicians is to make a problem harder in order to make finding a solution easier. That is, solving a more difficult problem can sometimes lead to a simpler solution. In this case, Zelkova solves the harder problem of comparing two policies against each other to decide which is more permissive. If P1 and P2 are policy formulas, then suppose formula P1 ⇒ P2 is true. This arrow symbol is an implication that means whenever P1 is true, P2 must also be true. So, whenever policy 1 accepts a request, policy 2 must also accept the request. Thus, policy 2 is at least as permissive as policy 1. Suppose also that the converse formula P2 ⇒ P1 is not true. That means there’s a request which makes P2 true and P1 false. This request is allowed by policy 2 and is denied by policy 1. Combining all these results, policy 1 is strictly less permissive than policy 2.

How does this solve the “Public / Not public” problem? Zelkova has a special policy that allows anonymous, unauthorized users to access an S3 resource. It compares your policy against this policy. If your policy is more permissive, then Zelkova says your policy allows public access. If you restrict access—for example, based on source VPC endpoint (aws:SourceVpce) or source IP address (aws:SourceIp)—then your policy is not more permissive than the special policy, and Zelkova says your policy does not allow public access.

For all this to work, Zelkova uses SMT solvers. Using mathematical language, these tools take a formula and either prove it is true for all possible values of the variables, or they return a counterexample that makes the formula false.

To understand SMT solvers better, you can play with them yourself. For example, if asked to prove x+y > x, an SMT solver will quickly find a counterexample such as x=5, y=-1. To fix this, you could strengthen your formula to assume that y is positive:

(y > 0) ⇒ (x + y > x)

The SMT solver will now respond that your formula is true for all values of the variables x and y. It does this using the rules of algebra and logic. This same idea carries over into theories like strings. You can ask the SMT solver to prove the formula length(append(a,b)) > length(a) where a and b are string variables. It will find a counterexample such as a=”hello” and b=”” where b is the empty string. This time, you could fix your formula by changing from greater-than to greater-than-or-equal-to:

length(append(a, b)) ≥ length(a)

The SMT solver will now respond that the formula is true for all values of the variables a and b. Here, the solver has combined reasoning about strings (length, append) with reasoning about numbers (greater-than-or-equal-to). SMT solvers are designed for exactly this sort of theory composition.
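
If you want to experiment with these examples yourself, the sketch below uses the open-source Z3 solver’s Python bindings (the z3-solver package). Z3 is just one solver of this kind and is shown only for illustration; the post doesn’t state which solvers Zelkova uses internally.

# Minimal sketch: try the two formulas above with the Z3 SMT solver (pip install z3-solver).
from z3 import Ints, Strings, Implies, Length, Concat, prove

# Arithmetic: x + y > x has counterexamples, but assuming y > 0 makes it valid.
x, y = Ints("x y")
prove(x + y > x)                      # prints a counterexample (any y <= 0 falsifies it)
prove(Implies(y > 0, x + y > x))      # prints "proved"

# Strings: length(append(a, b)) > length(a) fails when b is empty; >= is valid.
a, b = Strings("a b")
prove(Length(Concat(a, b)) > Length(a))    # counterexample: b = ""
prove(Length(Concat(a, b)) >= Length(a))   # prints "proved"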

What about my original policy? Once I see that my bucket is public, I can fix my policy using an explicit “Deny”:


{
    "Effect": "Deny"
    "Principal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

With this policy statement attached, S3 correctly reports my bucket as “Not public”. Zelkova has translated this policy into a mathematical formula, compared it against a special policy, and proved that my policy is less permissive. Fancy math has proved that things are working (or in this case, not working) as expected.
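
You can observe the same determination from a script. The sketch below uses boto3 to attach the corrected policy and then read the bucket’s policy status, which reflects the Public / Not public evaluation described above; the bucket name is a placeholder.

# Minimal sketch: attach the Deny policy and check the bucket's public status.
import json
import boto3

s3 = boto3.client("s3")
bucket = "test-bucket"   # placeholder bucket name

deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "111122223333"},
        "Action": "*",
        "Resource": "arn:aws:s3:::test-bucket",
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(deny_policy))

status = s3.get_bucket_policy_status(Bucket=bucket)
print("Is public:", status["PolicyStatus"]["IsPublic"])   # expect False for this policy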

Where else is Zelkova being used?

In addition to S3, several other AWS services are using Zelkova.

We have also engaged with a number of enterprise and regulated customers who have adopted Zelkova for their use cases:

“Bridgewater, like many other security-conscious AWS customers, needs to quickly reason about the security posture of our AWS infrastructure, and an integral part of that posture is IAM policies. These govern permissions on everything from individual users, to S3 buckets, KMS keys, and even VPC endpoints, among many others. Bridgewater uses Zelkova to verify and provide assurances that our policies do not allow data exfiltration, misconfigurations, and many other malicious and accidental undesirable behaviors. Zelkova allows our security experts to encode their understanding once and then mechanically apply it to any relevant policies, avoiding error-prone and slow human reviews, while at the same time providing us high confidence in the correctness and security of our IAM policies.”
Dan Peebles, Lead Cloud Security Architect at Bridgewater Associates

Summary

AWS services such as S3 use Zelkova to precisely represent policies and prove that they are secure—improving confidence in your security configurations. Zelkova can make broad statements about all resource requests because it’s based on mathematics and proofs instead of heuristics, pattern matching, or simulation. The ubiquity of policies in AWS means that the value of Zelkova and its benefits will continue to grow as it serves to protect more customers every day.

Want more AWS Security news? Follow us on Twitter.

AWS Resources Addressing Argentina’s Personal Data Protection Law and Disposition No. 11/2006

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/aws-and-resources-addressing-argentinas-personal-data-protection-law-and-disposition-no-112006/

We have two new resources to help customers address their data protection requirements in Argentina. These resources specifically address the needs outlined under the Personal Data Protection Law No. 25.326, as supplemented by Regulatory Decree No. 1558/2001 (“PDPL”), including Disposition No. 11/2006. For context, the PDPL is an Argentine federal law that applies to the protection of personal data, including during transfer and processing.

A new webpage focused on data privacy in Argentina features FAQs, helpful links, and whitepapers that provide an overview of PDPL considerations, as well as our security assurance frameworks and international certifications, including ISO 27001, ISO 27017, and ISO 27018. You’ll also find details about our Information Request Report and the high bar of security at AWS data centers.

Additionally, we’ve released a new workbook that offers a detailed mapping as to how customers can operate securely under the Shared Responsibility Model while also aligning with Disposition No. 11/2006. The AWS Disposition 11/2006 Workbook can be downloaded from the Argentina Data Privacy page or directly from this link. Both resources are also available in Spanish from the Privacidad de los datos en Argentina page.

Want more AWS Security news? Follow us on Twitter.

 

Protecting your API using Amazon API Gateway and AWS WAF — Part I

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-i/

This post courtesy of Thiago Morais, AWS Solutions Architect

When you build web applications or expose any data externally, you probably look for a platform where you can build highly scalable, secure, and robust REST APIs. As APIs are publicly exposed, there are a number of best practices for providing a secure mechanism to consumers using your API.

Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

In this post, I show you how to take advantage of the regional API endpoint feature in API Gateway, so that you can create your own Amazon CloudFront distribution and secure your API using AWS WAF.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

As you make your APIs publicly available, you are exposed to attackers trying to exploit your services in several ways. The AWS security team published a whitepaper solution using AWS WAF, How to Mitigate OWASP’s Top 10 Web Application Vulnerabilities.

Regional API endpoints

Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution created and managed by API Gateway. Before the launch of regional API endpoints, this was the default option when creating APIs using API Gateway. It primarily helped to reduce latency for API consumers that were located in different geographical locations than your API.

When API requests predominantly originate from an Amazon EC2 instance or other services within the same AWS Region as the API is deployed, a regional API endpoint typically lowers the latency of connections. It is recommended for such scenarios.

For better control around caching strategies, customers can use their own CloudFront distribution for regional APIs. They also have the ability to use AWS WAF protection, as I describe in this post.

Edge-optimized API endpoint

The following diagram is an illustrated example of the edge-optimized API endpoint where your API clients access your API through a CloudFront distribution created and managed by API Gateway.

Regional API endpoint

For the regional API endpoint, your customers access your API from the same Region in which your REST API is deployed. This helps you to reduce request latency and particularly allows you to add your own content delivery network, as needed.

Walkthrough

In this section, you implement the following steps:

  • Create a regional API using the PetStore sample API.
  • Create a CloudFront distribution for the API.
  • Test the CloudFront distribution.
  • Set up AWS WAF and create a web ACL.
  • Attach the web ACL to the CloudFront distribution.
  • Test AWS WAF protection.

Create the regional API

For this walkthrough, use an existing PetStore API. All new APIs launch by default as the regional endpoint type. To change the endpoint type for your existing API, choose the cog icon in the top-right corner:

After you have created the PetStore API on your account, deploy a stage called “prod” for the PetStore API.

On the API Gateway console, select the PetStore API and choose Actions, Deploy API.

For Stage name, type prod and add a stage description.

Choose Deploy and the new API stage is created.

Use the following AWS CLI command to update your API from edge-optimized to regional:

aws apigateway update-rest-api \
--rest-api-id {rest-api-id} \
--patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL

A successful response looks like the following:

{
    "description": "Your first API with Amazon API Gateway. This is a sample API that integrates via HTTP with your demo Pet Store endpoints", 
    "createdDate": 1511525626, 
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }, 
    "id": "{api-id}", 
    "name": "PetStore"
}

After you change your API endpoint to regional, you can now assign your own CloudFront distribution to this API.

Create a CloudFront distribution

To make things easier, I have provided an AWS CloudFormation template to deploy a CloudFront distribution pointing to the API that you just created. Click the button to deploy the template in the us-east-1 Region.

For Stack name, enter RegionalAPI. For APIGWEndpoint, enter your API FQDN in the following format:

{api-id}.execute-api.us-east-1.amazonaws.com

After you fill out the parameters, choose Next to continue the stack deployment. It takes a couple of minutes to finish the deployment. After it finishes, the Output tab lists the following items:

  • A CloudFront domain URL
  • An S3 bucket for CloudFront access logs

Output from CloudFormation
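
If you prefer to launch the same stack from the AWS CLI instead of the console, a minimal sketch looks like the following. It assumes you have a local copy of the template saved as regional-api-cf.yaml (a hypothetical file name); substitute your own API ID in the parameter value.

# Sketch only: regional-api-cf.yaml is a hypothetical local copy of the provided template.
aws cloudformation create-stack \
  --stack-name RegionalAPI \
  --template-body file://regional-api-cf.yaml \
  --parameters ParameterKey=APIGWEndpoint,ParameterValue={api-id}.execute-api.us-east-1.amazonaws.com \
  --region us-east-1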

Test the CloudFront distribution

To see if the CloudFront distribution was configured correctly, use a web browser and enter the URL from your distribution, with the following parameters:

https://{your-distribution-url}.cloudfront.net/{api-stage}/pets

You should get the following output:

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Set up AWS WAF and create a web ACL

With the new CloudFront distribution in place, you can now start setting up AWS WAF to protect your API.

For this demo, you deploy the AWS WAF Security Automations solution, which provides fine-grained control over the requests attempting to access your API.

For more information about deployment, see Automated Deployment. If you prefer, you can launch the solution directly into your account using the following button.

For CloudFront Access Log Bucket Name, add the name of the bucket created during the deployment of the CloudFormation stack for your CloudFront distribution.

The solution allows you to adjust thresholds and also choose which automations to enable to protect your API. After you finish configuring these settings, choose Next.

To start the deployment process in your account, follow the creation wizard and choose Create. It takes a few minutes to finish the deployment. You can follow the creation process through the CloudFormation console.

After the deployment finishes, you can see the new web ACL deployed on the AWS WAF console, AWSWAFSecurityAutomations.

Attach the AWS WAF web ACL to the CloudFront distribution

With the solution deployed, you can now attach the AWS WAF web ACL to the CloudFront distribution that you created earlier.

To assign the newly created AWS WAF web ACL, go back to your CloudFront distribution. After you open your distribution for editing, choose General, Edit.

Select the new AWS WAF web ACL that you created earlier, AWSWAFSecurityAutomations.

Save the changes to your CloudFront distribution and wait for the deployment to finish.

Test AWS WAF protection

To validate the AWS WAF Web ACL setup, use Artillery to load test your API and see AWS WAF in action.

To install Artillery on your machine, run the following command:

$ npm install -g artillery

After the installation completes, you can check if Artillery installed successfully by running the following command:

$ artillery -V
1.6.0-12

At the time of publication, Artillery is at version 1.6.0-12.

One of the rules in the web ACL that you just deployed is a rate-based rule. By default, it blocks any requester that exceeds 2000 requests within a 5-minute period. Try this out.
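
The Security Automations solution creates this rate-based rule for you, but for reference, creating a similar rule yourself with the classic AWS WAF CLI would look roughly like the following sketch; the rule and metric names are placeholders.

# Classic AWS WAF (the global endpoint used with CloudFront); names are hypothetical.
CHANGE_TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
aws waf create-rate-based-rule \
  --name DemoAPIRateLimit \
  --metric-name DemoAPIRateLimit \
  --rate-key IP \
  --rate-limit 2000 \
  --change-token "$CHANGE_TOKEN"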

First, use cURL to query your distribution and see the API output:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets
[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Based on the test above, the result looks good. But what happens if you exceed 2000 requests in under 5 minutes?

Run the following Artillery command:

artillery quick -n 2000 --count 10  https://{distribution-name}.cloudfront.net/prod/pets

This command fires requests at your API from 10 concurrent virtual users, each issuing 2000 requests, which quickly pushes you past the rate-based rule’s threshold. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try to run the cURL request again and see what happens:

 

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Request blocked.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: [removed]
</PRE>
<ADDRESS>
</ADDRESS>
</BODY></HTML>

As you can see from the output above, the request was blocked by AWS WAF. Your IP address is removed from the block list automatically after your request rate falls back below the limit.

Conclusion

In this first part, you saw how to use the new API Gateway regional API endpoint together with Amazon CloudFront and AWS WAF to secure your API from a series of attacks.

In the second part, I will demonstrate some other techniques to protect your API using API keys and Amazon CloudFront custom headers.

AWS GDPR Data Processing Addendum – Now Part of Service Terms

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-gdpr-data-processing-addendum/

Today, we’re happy to announce that the AWS GDPR Data Processing Addendum (GDPR DPA) is now part of our online Service Terms. This means all AWS customers globally can rely on the terms of the AWS GDPR DPA which will apply automatically from May 25, 2018, whenever they use AWS services to process personal data under the GDPR. The AWS GDPR DPA also includes EU Model Clauses, which were approved by the European Union (EU) data protection authorities, known as the Article 29 Working Party. This means that AWS customers wishing to transfer personal data from the European Economic Area (EEA) to other countries can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA.

As we approach the GDPR enforcement date this week, this announcement is an important GDPR compliance component for us, our customers, and our partners. All customers that use cloud services to process personal data need a data processing agreement in place with their cloud services provider in order to comply with the GDPR. As early as April 2017, AWS announced that it had a GDPR-ready DPA available for its customers. In this way, we started offering our GDPR DPA to customers over a year before the May 25, 2018 enforcement date. Now, with the DPA terms included in our online Service Terms, no extra engagement is needed by our customers and partners to meet the GDPR requirement for data processing terms.

The AWS GDPR DPA also provides our customers with a number of other important assurances, such as the following:

  • AWS will process customer data only in accordance with customer instructions.
  • AWS has implemented and will maintain robust technical and organizational measures for the AWS network.
  • AWS will notify its customers of a security incident without undue delay after becoming aware of the security incident.
  • AWS will make available certificates issued in relation to the ISO 27001 certification, the ISO 27017 certification, and the ISO 27018 certification to further help customers and partners in their own GDPR compliance activities.

Customers who have already signed an offline version of the AWS GDPR DPA can continue to rely on that GDPR DPA. By incorporating our GDPR DPA into the AWS Service Terms, we are simply extending the terms of our GDPR DPA to all customers globally who will require it under GDPR.

The AWS GDPR DPA is only part of the story, however. We are continuing to work alongside our customers and partners to help them on their journey toward GDPR compliance.

If you have any questions about the GDPR or the AWS GDPR DPA, please contact your account representative, or visit the AWS GDPR Center at: https://aws.amazon.com/compliance/gdpr-center/

-Chad

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

Spring 2018 AWS SOC Reports are Now Available with 11 Services Added in Scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2018-aws-soc-reports-are-now-available-with-11-services-added-in-scope/

Since our last System and Organization Control (SOC) audit, our service and compliance teams have been working to increase the number of AWS services in scope, prioritized based on customer requests. Today, we’re happy to report that 11 services are newly SOC compliant, a 21 percent increase in the last six months.

With the addition of the following 11 new services, you can now select from a total of 62 SOC-compliant services. To see the full list, go to our Services in Scope by Compliance Program page:

• Amazon Athena
• Amazon QuickSight
• Amazon WorkDocs
• AWS Batch
• AWS CodeBuild
• AWS Config
• AWS OpsWorks Stacks
• AWS Snowball
• AWS Snowball Edge
• AWS Snowmobile
• AWS X-Ray

Our latest SOC 1, 2, and 3 reports covering the period from October 1, 2017 to March 31, 2018 are now available. The SOC 1 and 2 reports are available on-demand through AWS Artifact by logging into the AWS Management Console. The SOC 3 report can be downloaded here.

Finally, prospective customers can read our SOC 1 and 2 reports by reaching out to AWS Compliance.

Want more AWS Security news? Follow us on Twitter.

Enhanced Domain Protections for Amazon CloudFront Requests

Post Syndicated from Colm MacCarthaigh original https://aws.amazon.com/blogs/security/enhanced-domain-protections-for-amazon-cloudfront-requests/

Over the coming weeks, we’ll be adding enhanced domain protections to Amazon CloudFront. The short version is this: the new measures are designed to ensure that requests handled by CloudFront are handled on behalf of legitimate domain owners.

Using CloudFront to receive traffic for a domain you aren’t authorized to use is already a violation of our AWS Terms of Service. When we become aware of this type of activity, we deal with it behind the scenes by disabling abusive accounts. Now we’re integrating checks directly into the CloudFront API and Content Distribution service, as well.

Enhanced Protection against Dangling DNS entries
To use CloudFront with your domain, you must configure your domain to point at CloudFront. You may use a traditional CNAME, or an Amazon Route 53 “ALIAS” record.

A problem can arise if you delete your CloudFront distribution, but leave your DNS still pointing at CloudFront, popularly known as a “dangling” DNS entry. Thankfully, this is very rare, as the domain will no longer work, but we occasionally see customers who leave their old domains dormant. This can also happen if you leave this kind of “dangling” DNS entry pointing at other infrastructure you no longer control. For example, if you leave a domain pointing at an IP address that you don’t control, then there is a risk that someone may come along and “claim” traffic destined for your domain.

In an even more rare set of circumstances, an abuser can exploit a subdomain of a domain that you are actively using. For example, if a customer left “images.example.com” dangling and pointing to a deleted CloudFront distribution which is no longer in use, but they still actively use the parent domain “example.com”, then an abuser could come along and register “images.example.com” as an alternative name on their own distribution and claim traffic that they aren’t entitled to. This also means that cookies may be set and intercepted for HTTP traffic potentially including the parent domain. HTTPS traffic remains protected if you’ve removed the certificate associated with the original CloudFront distribution.

Of course, the best fix for this kind of risk is not to leave dangling DNS entries in the first place. In February 2018, we added a new warning to our systems: if you remove an alternate domain name from a distribution, you are reminded to delete any DNS entries that may still be pointing at CloudFront.
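
A quick way to audit for this situation is to check whether any records in your zone still resolve to CloudFront after you’ve deleted the corresponding distribution. A minimal sketch with dig follows; the host name is a placeholder for your own subdomain.

# Check whether a subdomain still points at CloudFront (host name is hypothetical).
dig +short images.example.com CNAME
# If this returns a *.cloudfront.net name but you no longer own a distribution
# that lists images.example.com as an alternate domain name, delete the record.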

We also have long-standing checks in the CloudFront API that ensure this kind of domain claiming can’t occur when you are using wildcard domains. If you attempt to add *.example.com to your CloudFront distribution, but another account has already registered www.example.com, then the attempt will fail.

With the new enhanced domain protection, CloudFront will now also check your DNS whenever you remove an alternate domain. If we determine that the domain is still pointing at your CloudFront distribution, the API call will fail and no other accounts will be able to claim this traffic in the future.

Enhanced Protection against Domain Fronting
CloudFront will also soon be implementing enhanced protections against so-called “Domain Fronting”. Domain Fronting occurs when a non-standard client makes a TLS/SSL connection to one name, but then makes an HTTPS request for an unrelated name. For example, the TLS connection may connect to “www.example.com” but then issue a request for “www.example.org”.
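
To make the mismatch concrete, the following sketch reproduces the pattern with curl; both host names are placeholders, and this is purely an illustration of the behavior the new checks look for:

# The TLS handshake (and SNI) go to www.example.com, but the HTTP Host header
# requests the unrelated name www.example.org.
curl -sv https://www.example.com/ -H 'Host: www.example.org'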

In certain circumstances this is normal and expected. For example, browsers can re-use persistent connections for any domain that is listed in the same SSL Certificate, and these are considered related domains. But in other cases, tools including malware can use this technique between completely unrelated domains to evade restrictions and blocks that can be imposed at the TLS/SSL layer.

To be clear, this technique can’t be used to impersonate domains. The clients are non-standard and are working around the usual TLS/SSL checks that ordinary clients impose. But clearly, no customer ever wants to find that someone else is masquerading as their innocent, ordinary domain. Although these cases are also already handled as a breach of our AWS Terms of Service, in the coming weeks we will be checking that the account that owns the certificate we serve for a particular connection always matches the account that owns the request we handle on that connection. As ever, the security of our customers is our top priority, and we will continue to provide enhanced protection against misconfigurations and abuse from unrelated parties.

Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.

How to centralize DNS management in a multi-account environment

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-centralize-dns-management-in-a-multi-account-environment/

In a multi-account environment where you require connectivity between accounts, and perhaps connectivity between cloud and on-premises workloads, the demand for a robust Domain Name Service (DNS) that’s capable of name resolution across all connected environments will be high.

The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolutions outside of this account. While this solution might be efficient for a single-account environment, it becomes complex in a multi-account environment.

In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:

  1. Set up your Central DNS account.
  2. Set up each participating account.
  3. Create Route53 associations.
  4. Configure on-premises DNS (if applicable).

Solution overview

In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).

The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).

A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name that represents the account (for example, business_unit.awscloud.com).

You associate DNS-VPC with the unique hosted zone in each of the participating accounts, which allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in the private hosted zones of participating accounts.

The following diagram shows how the various services work together:
 

Diagram showing the relationship between all the various services

Figure 1: Diagram showing the relationship between all the various services

 

In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.

If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:
 

How domain resolution across accounts works

Figure 2: How domain resolution across accounts works

 

  • 1.1: server.pa1.awscloud.com sends a domain name lookup for server.pa3.awscloud.com to its default DNS server. The request is forwarded to the DNS server defined in the DHCP option set (AWS Managed Microsoft AD in DNS-VPC).
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:

  • 2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to the on-premises DNS server.
  • 2.2: The on-premises DNS server, using a conditional forwarder, forwards the domain lookup to AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Step 1: Set up a centralized DNS account

In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In this post, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.

To set up a centralized DNS account, you can follow the same steps in Drew’s post to create AWS Managed Microsoft AD and configure the forwarders to send DNS queries for awscloud.com to the default, VPC-provided DNS and example.com queries to the on-premises DNS server.

Here are a few considerations while setting up central DNS:

  • The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
  • To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.

Step 2: Set up participating accounts

The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.

  1. Create the VPC(s) that will host your resources in each participating account.
  2. Create VPC peering between the local VPC(s) in each participating account and DNS-VPC.
  3. Create a private hosted zone in Route 53 (see the CLI sketch after this list). Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
  4. Associate the VPC(s) in each participating account with the local private hosted zone.
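
For step 3, a minimal AWS CLI sketch looks like the following; the zone name, Region, VPC ID, and caller reference are all placeholders for your own values:

# Passing --vpc at creation time makes this a private hosted zone
# associated with the local VPC in the participating account.
aws route53 create-hosted-zone \
  --name pa1.awscloud.com \
  --caller-reference pa1-central-dns-demo \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0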

The next step is to change the default DNS servers on each VPC by using a DHCP options set:

  1. Follow these steps to create a new DHCP options set. In the DNS Servers field, make sure to enter the private IP addresses of the two AWS Managed Microsoft AD DNS servers that were created in DNS-VPC (see the CLI sketch after this list):
     
    The "Create DHCP options set" dialog box

    Figure 3: The “Create DHCP options set” dialog box

     

  2. Follow these steps to assign the DHCP options set to your VPC(s) in each participating account.
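
If you’d rather perform both steps from the AWS CLI, a sketch follows; the IP addresses and resource IDs are placeholders for the AWS Managed Microsoft AD DNS servers in DNS-VPC and for your own VPC and options set:

# Create a DHCP options set that points instances at the AWS Managed Microsoft AD DNS servers.
aws ec2 create-dhcp-options \
  --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10,10.0.0.11"

# Associate the new options set with the VPC in the participating account.
aws ec2 associate-dhcp-options \
  --dhcp-options-id dopt-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0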

Step 3: Associate DNS-VPC with private hosted zones in each participating account

The next steps associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.

  1. In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
     
    aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     
  2. In the centralized DNS account, associate DNS-VPC with the hosted zone in each participating account.
     
    aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     

After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.
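
To confirm that an association took effect, you can inspect the hosted zone from the participating account that owns it and check that DNS-VPC now appears among its associated VPCs; the zone ID below is a placeholder:

# Run in the participating account that owns the hosted zone.
aws route53 get-hosted-zone --id <hosted-zone-id> --query VPCs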

Step 4: Setting up on-premises DNS servers

This step is necessary only if you want to resolve AWS private domains from on-premises servers. The task comes down to configuring conditional forwarders on-premises that send DNS queries for all domains in the awscloud.com zone to AWS Managed Microsoft AD in DNS-VPC.

The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.
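
For example, on a BIND-based DNS server, a conditional forwarder for the awscloud.com zone might look like the following sketch; the forwarder IP addresses are placeholders for the AWS Managed Microsoft AD DNS servers in DNS-VPC:

# named.conf fragment: forward all awscloud.com queries to AWS Managed Microsoft AD.
zone "awscloud.com" {
    type forward;
    forward only;
    forwarders { 10.0.0.10; 10.0.0.11; };
};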

Summary

I introduced a simplified solution to implement central DNS resolution in a multi-account environment that can also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.