Tag Archives: AWS WAF

Building an AWS Landing Zone from Scratch in Six Weeks

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/building-an-aws-landing-zone-from-scratch-in-six-weeks/

In an effort to deliver a simpler, smarter, and more unified experience on its website, the UK’s Ministry of Justice and its Lead Technical Architect, James Abley, created a bespoke AWS Landing Zone, a pre-defined template for an AWS account or infrastructure. And they did it in six weeks.

Supporting 33 agencies and public bodies, and making sure they all work together, the Ministry of Justice is at the heart of the United Kingdom’s justice system. Its task is to look after all parts of the justice system, including the courts, prisons, probation services, and legal aid, striving to bring the principles of justice to life for everyone in society.

In a This Is My Architecture video, shot at re:Invent 2018 in Las Vegas, James talks with AWS Solutions Architect Simon Treacy about the importance of delivering a consistent experience to his website’s customers, a mix of citizens and internal legal aid agency case workers.

Utilizing a number of AWS services, James walks us through the user experience and explains why he decided to put Amazon CloudFront and AWS Web Application Firewall (WAF) up front to improve the security posture of the ministry’s legacy applications and extend their lifespan. James also explains how he split traffic between two Availability Zones, using Elastic Load Balancing (ELB) to provide higher availability and resilience, which will help with zero-downtime deployment later on.

 

About the author

Annik Stahl is a Senior Program Manager in AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

 

Trimming AWS WAF logs with Amazon Kinesis Firehose transformations

Post Syndicated from Tino Tran original https://aws.amazon.com/blogs/security/trimming-aws-waf-logs-with-amazon-kinesis-firehose-transformations/

In an earlier post, Enabling serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight, published on March 28, 2019, the authors showed you how to stream WAF logs with Amazon Kinesis Firehose for visualization using QuickSight. This approach used no filtering of the logs so that you could visualize the full data set. However, you are often only interested in seeing specific events. Or you might be looking to minimize log size to save storage costs. In this post, I show you how to apply rules in Amazon Kinesis Firehose to trim down logs. You can then apply the same visualizations you used in the previous solution.

AWS WAF is a web application firewall that supports full logging of all the web requests it inspects. For each request, AWS WAF logs the raw HTTP/S headers along with information on which AWS WAF rules were triggered. Having complete logs is useful for compliance, auditing, forensics, and troubleshooting custom and Managed Rules for AWS WAF. However, for some use cases, you might not want to log all of the requests inspected by AWS WAF. For example, to reduce the volume of logs, you might only want to log the requests blocked by AWS WAF, or you might want to remove certain HTTP header or query string parameter values from your logs. Unblocked requests are often already stored in your CloudFront access logs or web server logs, so also capturing them in AWS WAF logs can produce redundant data, while logging blocked traffic can help you identify bad actors or root-cause false positives.
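
For reference, an AWS WAF full-log record is a JSON object along the lines of the following. This sample is abbreviated to the fields used later in this post and the values are illustrative; note that requests handled by the default action carry a terminatingRuleId of Default_Action, which is what the filtering below keys on.

```json
{
  "timestamp": 1553777180279,
  "formatVersion": 1,
  "webaclId": "EXAMPLE-WEB-ACL-ID",
  "terminatingRuleId": "Default_Action",
  "terminatingRuleType": "REGULAR",
  "action": "ALLOW",
  "httpSourceName": "CF",
  "httpSourceId": "EXAMPLE-SOURCE-ID",
  "httpRequest": {
    "clientIp": "203.0.113.12",
    "country": "US",
    "httpMethod": "GET",
    "headers": [
      { "name": "Host", "value": "example.com" },
      { "name": "User-Agent", "value": "curl/7.61.1" }
    ]
  }
}
```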

In this post, I’ll show you how to create an Amazon Kinesis Data Firehose stream to filter out unneeded records, so that you only retain log records for requests that were blocked by AWS WAF. From here, the logs can be stored in Amazon S3 or directed to SIEM (Security information and event management) and log analysis tools.

To simplify things, I’ll provide you with a CloudFormation template that will create the resources highlighted in the diagram below:
 

Figure 1: Solution architecture

  1. A Kinesis Data Firehose delivery stream is used to receive log records from AWS WAF.
  2. An IAM role for the Kinesis Data Firehose delivery stream, with permissions needed to invoke Lambda and write to S3.
  3. A Lambda function used to filter out WAF records matching the default action before the records are written to S3.
  4. An IAM role for the Lambda function, with the permissions needed to create CloudWatch logs (for troubleshooting).
  5. An S3 bucket where the WAF logs will be stored.

Prerequisites and assumptions

  • In this post, I assume that the AWS WAF default action is configured to allow requests that don’t explicitly match a blocking WAF rule. So I’ll show you how to omit any records matching the WAF default action.
  • You need to already have an AWS WAF WebACL created. In this example, you’ll use a WebACL generated from the AWS WAF OWASP Top 10 template. For more information on deploying AWS WAF to a CloudFront or ALB resource, see the Getting Started page.

Step 1: Create a Kinesis Data Firehose delivery stream for AWS WAF logs

In this step, you’ll use the following CloudFormation template to create a Kinesis Data Firehose delivery stream that writes logs to an S3 bucket. The template also creates a Lambda function that omits AWS WAF records matching the default action.

Here’s how to launch the template:

  1. Open CloudFormation in the AWS console.
  2. For WAF deployments on Amazon CloudFront, select the US East (N. Virginia) Region (us-east-1). Otherwise, create the stack in the same region in which your AWS WAF Web ACL is deployed.
  3. Select the Create Stack button.
  4. In the CloudFormation wizard, select Specify an Amazon S3 template URL and copy and paste the following URL into the text box, then select Next:
    https://s3.amazonaws.com/awsiammedia/public/sample/TrimAWSWAFLogs/KinesisWAFDeliveryStream.yml
  5. On the options page, leave the default values and select Next.
  6. Specify the following and then select Next:
    1. Stack name: (for example, kinesis-waf-logging). Make sure to note your stack name, as you’ll need to provide it later in the walkthrough.
    2. Buffer size: The amount of incoming record data, in MB, that Kinesis buffers before processing.
    3. Buffer interval: The maximum time, in seconds, that Kinesis buffers incoming records before processing.

    Note: Kinesis triggers data delivery based on whichever buffer condition is satisfied first. This CloudFormation template sets the default buffer size to 3 MB and the buffer interval to 900 seconds, matching the maximum transformation buffer size and interval set by the template. To learn more about Kinesis Data Firehose buffer conditions, read this documentation.

     

    Figure 2: Specify the stack name, buffer size, and buffer interval

  7. Select the check box for I acknowledge that AWS CloudFormation might create IAM resources and choose Create.
  8. Wait for the template to finish creating the resources. This will take a few minutes. On the CloudFormation dashboard, the status next to your stack should say CREATE_COMPLETE.
  9. From the AWS Management Console, open Amazon Kinesis and find the Data Firehose delivery stream on the dashboard. Note that the name of the stream will start with aws-waf-logs- and end with the name of the CloudFormation stack. This prefix is required in order to configure AWS WAF to write logs to the Kinesis stream.
  10. From the AWS Management Console, open AWS Lambda and view the Lambda function created from the CloudFormation template. The function name should start with the Stack name from the CloudFormation template. I included the function code generated from the CloudFormation template below so you can see what’s going on.

    Note: Through CloudFormation, the code is deployed without indentation. To format it for readability, I recommend using the code formatter built into Lambda under the edit tab. This code can easily be modified for custom record filtering or transformations.

    
        'use strict';
    
        exports.handler = (event, context, callback) => {
            /* Process the list of records and drop those matching Default_Action */
            const output = event.records.map((record) => {
                // Firehose delivers record data base64-encoded
                const entry = Buffer.from(record.data, 'base64').toString('utf8');
                if (!entry.match(/Default_Action/g)){
                    return {
                        recordId: record.recordId,
                        result: 'Ok',
                        data: record.data,
                    };
                } else {
                    return {
                        recordId: record.recordId,
                        result: 'Dropped',
                        data: record.data,
                    };
                }
            });
        
            console.log(`Processing completed. Records processed: ${output.length}.`);
            callback(null, { records: output });
        };
        

You now have a Kinesis Data Firehose stream that AWS WAF can use for logging records.
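
As the note in step 10 mentions, the function can be adapted for custom filtering or transformations. Here’s a sketch of one possible variant, assuming the full-log format shown earlier: it drops default-action records by parsing the JSON instead of matching a regex, and also redacts the Cookie header from the records it keeps.

```javascript
'use strict';

exports.handler = (event, context, callback) => {
    const output = event.records.map((record) => {
        const entry = JSON.parse(Buffer.from(record.data, 'base64').toString('utf8'));

        // Drop records that matched the WAF default action
        if (entry.terminatingRuleId === 'Default_Action') {
            return { recordId: record.recordId, result: 'Dropped', data: record.data };
        }

        // Redact the Cookie header from the remaining records
        entry.httpRequest.headers = entry.httpRequest.headers.filter(
            (header) => header.name.toLowerCase() !== 'cookie'
        );

        return {
            recordId: record.recordId,
            result: 'Ok',
            data: Buffer.from(JSON.stringify(entry)).toString('base64'),
        };
    });

    callback(null, { records: output });
};
```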

Cost Considerations

This template sets the Kinesis transformation buffer size to 3MB and buffer interval to 900 seconds (the maximum values) in order to reduce the number of Lambda invocations used to process records. On average, an AWS WAF record is approximately 1-1.5KB. With a buffer size of 3MB, Kinesis will use 1 Lambda invocation per 2000-3000 records. Visit the AWS Lambda website to learn more about pricing.

Step 2: Configure AWS WAF Logging

Now that you have an active Amazon Kinesis Data Firehose delivery stream, you can configure your AWS WAF WebACL to turn on logging.

  1. From the AWS Management Console, open WAF & Shield.
  2. Select the WebACL for which you would like to enable logging.
  3. Select the Logging tab.
  4. Select the Enable Logging button.
  5. Next to Amazon Kinesis Data Firehose, select the stream that was created from the CloudFormation template in Step 1 (for example, aws-waf-logs-kinesis-waf-stream) and select Create.

Congratulations! Your AWS WAF WebACL is now configured to send records of requests inspected by AWS WAF to Kinesis Data Firehose. From there, records that match the default action will be dropped, and the remaining records will be stored in S3 in JSON format.
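
If you prefer to script this step instead of using the console, the same logging configuration can be applied through the AWS SDK. Here’s a minimal sketch using the AWS SDK for JavaScript; the ARNs are placeholders you’d replace with your own web ACL and delivery stream.

```javascript
const AWS = require('aws-sdk');

// Use AWS.WAF for CloudFront-scoped web ACLs; use AWS.WAFRegional for ALB-scoped ones
const waf = new AWS.WAF({ region: 'us-east-1' });

waf.putLoggingConfiguration({
    LoggingConfiguration: {
        // Placeholder ARNs -- substitute your web ACL and aws-waf-logs- stream
        ResourceArn: 'arn:aws:waf::111122223333:webacl/EXAMPLE-WEB-ACL-ID',
        LogDestinationConfigs: [
            'arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-kinesis-waf-stream',
        ],
    },
}).promise()
    .then(() => console.log('AWS WAF logging enabled'))
    .catch(console.error);
```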

Below is a sample of the logs generated from this example. Notice that there are only blocked records in the logs.
 

Figure 3: Sample logs

Conclusion

In this blog, I’ve provided you with a CloudFormation template to generate a Kinesis Data Firehose stream that can be used to log requests blocked by AWS WAF, omitting requests matching the default action. By omitting the default action, I have reduced the number of log records that must be reviewed to identify bad actors, tune new WAF rules, and/or root cause false positives. For unblocked traffic, consider using CloudFront’s access logs with Amazon Athena or CloudWatch Logs Insights to query and analyze the data. To learn more about AWS WAF logs, read our developer guide for AWS WAF.

If you have feedback about this blog post, please submit comments in the Comments section below. If you have issues with AWS WAF, start a thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tino Tran

Tino is a Senior Edge Specialized Solutions Architect based out of Florida. His main focus is to help companies deliver online content in a secure, reliable, and fast way using AWS Edge Services. He is an experienced technologist with a background in software engineering, content delivery networks, and security.

Enabling serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/enabling-serverless-security-analytics-using-aws-waf-full-logs/

Traditionally, analyzing data logs required you to extract, transform, and load your data before using a number of data warehouse and business intelligence tools to derive business intelligence from that data—on top of maintaining the servers that ran behind these tools.

This blog post will show you how to analyze AWS Web Application Firewall (AWS WAF) logs and quickly build multiple dashboards, without booting up any servers. With the new AWS WAF full logs feature, you can now log all traffic inspected by AWS WAF into Amazon Simple Storage Service (Amazon S3) buckets by configuring Amazon Kinesis Data Firehose. In this walkthrough, you’ll create an Amazon Kinesis Data Firehose delivery stream to which AWS WAF full logs can be sent, and you’ll enable AWS WAF logging for a specific web ACL. Then you’ll set up an AWS Glue crawler job and an Amazon Athena table. Finally, you’ll set up Amazon QuickSight dashboards to help you visualize your web application security. You can use these same steps to build additional visualizations to draw insights from AWS WAF rules and the web traffic traversing the AWS WAF layer. Security and operations teams can monitor these dashboards directly, without needing to depend on other teams to analyze the logs.

The following architecture diagram highlights the AWS services used in the solution:

Figure 1: Architecture diagram

AWS WAF is a web application firewall that lets you monitor HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, to Amazon CloudFront, or to an Application Load Balancer. AWS WAF also lets you control access to your content. Based on conditions that you specify—such as the IP addresses from which requests originate, or the values of query strings—API Gateway, CloudFront, or the Application Load Balancer responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked.

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.

AWS Glue can be used to run serverless queries against your Amazon S3 data lake. AWS Glue can catalog your S3 data, making it available for querying with Amazon Athena and Amazon Redshift Spectrum. With crawlers, your metadata stays in sync with the underlying data (more details about crawlers later in this post). Amazon Athena and Amazon Redshift Spectrum can directly query your Amazon S3 data lake by using the AWS Glue Data Catalog. With AWS Glue, you access and analyze data through one unified interface without loading it into multiple data silos.

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Amazon QuickSight is a business analytics service you can use to build visualizations, perform one-off analysis, and get business insights from your data. It can automatically discover AWS data sources and also works with your own data sources. Amazon QuickSight enables organizations to scale to hundreds of thousands of users and delivers responsive performance by using a robust in-memory engine called SPICE.

SPICE stands for Super-fast, Parallel, In-memory Calculation Engine. SPICE supports rich calculations to help you derive insights from your analysis without worrying about provisioning or managing infrastructure. Data in SPICE is persisted until it is explicitly deleted by the user. SPICE also automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wide variety of AWS data sources.

Step one: Set up a new Amazon Kinesis Data Firehose delivery stream

  1. In the AWS Management Console, open the Amazon Kinesis Data Firehose service and choose the button to create a new stream.
    1. In the Delivery stream name field, enter a name for your new stream that starts with aws-waf-logs- as shown in the screenshot below. AWS WAF filters all streams starting with the keyword aws-waf-logs when it displays the delivery streams. Note the name of your stream since you’ll need it again later in the walkthrough.
    2. For Source, choose Direct PUT, since AWS WAF logs will be the source in this walkthrough.

      Figure 2: Select the delivery stream name and source

  2. Next, you have the option to enable AWS Lambda if you need to transform your data before transferring it to your destination. (You can learn more about data transformation in the Amazon Kinesis Data Firehose documentation.) In this walkthrough, there are no transformations that need to be performed, so for Record transformation, choose Disabled.
    Figure 3: Select "Disabled" for record transformations

    1. You’ll have the option to convert the JSON object to Apache Parquet or Apache ORC format for better query performance. In this example, you’ll be reading the AWS WAF logs in JSON format, so for Record format conversion, choose Disabled.

      Figure 4: Choose "Disabled" to not convert the JSON object

  3. On the Select destination screen, for Destination, choose Amazon S3.
    Figure 5: Choose the destination

    1. For the S3 destination, you can either enter the name of an existing S3 bucket or create a new S3 bucket. Note the name of the S3 bucket since you’ll need the bucket name in a later step in this walkthrough.
    2. For Source record S3 backup, choose Disabled, because the destination in this walkthrough is an S3 bucket.

      Figure 6: Enter the S3 bucket name, and select "Disabled" for the source record S3 backup

  4. On the next screen, leave the default conditions for Buffer size, Buffer interval, S3 compression and S3 encryption as they are. However, we recommend that you set Error logging to Enabled initially, for troubleshooting purposes.
    1. For IAM role, select Create new or choose. This opens up a new window that will prompt you to create firehose_delivery_role, as shown in the following screenshot. Choose Allow in this window to accept the role creation. This grants the Kinesis Data Firehose service access to the S3 bucket.

      Figure 7: Select "Allow" to create the IAM role "firehose_delivery_role"

  5. On the last step of configuration, review all the options you’ve chosen, and then select Create delivery stream. This will cause the delivery stream to display as “Creating” under Status. In a couple of minutes, the status will change to “Active,” as shown in the screenshot below.

    Figure 8: Review the options you selected

Step two: Enable AWS WAF logging for a specific Web ACL

  1. From the AWS Management Console, open the AWS WAF service and choose Web ACLs. Open your Web ACL resource, which can either be deployed on a CloudFront distribution or on an Application Load Balancer.
    1. Choose the Web ACL for which you want to enable logging. (In the screenshot below, we’ve selected a Web ACL in the US East Region.)
    2. On the Logging tab, choose Enable Logging.

      Figure 9: Choose "Enable Logging"

  2. The next page displays all the delivery streams that start with aws-waf-logs. Choose the Amazon Kinesis Data Firehose delivery stream that you created for AWS WAF logs at the start of this walkthrough. (In the screenshot below, our example stream name is “aws-waf-logs-us-east-1”.)
    1. You can also choose to redact certain fields that you wish to exclude from being captured in the logs. In this walkthrough, you don’t need to choose any fields to redact.
    2. Select Create.

      Figure 10: Choose your delivery stream, and select "Create"

After a couple of minutes, you’ll be able to inspect the S3 bucket that you defined in the Kinesis Data Firehose delivery stream. The log files are created in directories by year, month, day, and hour.
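
For example, with the default Firehose prefix, a delivered log object lands under a key shaped like the following (the bucket name, stream name, timestamps, and suffix here are illustrative):

```
s3://your-waf-log-bucket/2019/03/28/14/aws-waf-logs-us-east-1-1-2019-03-28-14-05-12-EXAMPLE-uuid
```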

Step three: Set up an AWS Glue crawler job and Amazon Athena table

The purpose of a crawler within your Data Catalog is to traverse your data stores (such as S3) and extract the metadata fields of the files. The output of the crawler consists of one or more metadata tables that are defined in your Data Catalog. When the crawler runs, the first classifier in your list to successfully recognize your data store is used to create a schema for your table. AWS Glue provides built-in classifiers to infer schemas from common files with formats that include JSON, CSV, and Apache Avro.
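
The steps below create this crawler through the console wizard. If you’d rather script the same job, a minimal sketch with the AWS SDK for JavaScript might look like this; the crawler name, role, database, table prefix, and bucket path are placeholders you’d replace with your own values.

```javascript
const AWS = require('aws-sdk');
const glue = new AWS.Glue({ region: 'us-east-1' });

const crawler = {
    Name: 'waf-logs-crawler',                    // placeholder name
    Role: 'AWSGlueServiceRole-WafLogs',          // IAM role with read access to the log bucket
    DatabaseName: 'sampledb',
    TablePrefix: 'jsonwaflogs_',
    Targets: { S3Targets: [{ Path: 's3://your-waf-log-bucket/' }] },
};

glue.createCrawler(crawler).promise()
    .then(() => glue.startCrawler({ Name: crawler.Name }).promise())
    .then(() => console.log('Crawler started'))
    .catch(console.error);
```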

  1. In the AWS Management Console, open the AWS Glue service and choose Crawlers to set up a crawler job.
  2. Choose Add crawler to launch a wizard to set up the crawler job. For Crawler name, enter a relevant name. Then select Next.

    Figure 11: Enter "Crawler name," and select "Next"

  3. For Choose a data store, select S3 and include the path of the S3 bucket that stores your AWS WAF logs, which you made note of in step 1.3. Then choose Next.

    Figure 12: Choose a data store

  4. When you’re given the option to add another data store, choose No.
  5. Then, choose Create an IAM role and enter a name. The role grants the AWS Glue service access to the log files in the S3 bucket.

    Figure 13: Choose "Create an IAM role," and enter a name

  6. Next, set the frequency to Run on demand. You can also schedule the crawler to run periodically to make sure any changes in the file structure are reflected in your data catalog.

    Figure 14: Set the "Frequency" to "Run on demand"

  7. For output, choose the database in which the Athena table is to be created and add a prefix to identify your table name easily. Select Next.

    Figure 15: Choose the database, and enter a prefix

  8. Review all the options you’ve selected for the crawler job and complete the wizard by selecting the Finish button.
  9. Now that the crawler job parameters are set up, on the left panel of the console, choose Crawlers to select your job and then choose Run crawler. The job creates an Amazon Athena table. The duration depends on the size of the log files.

    Figure 16: Choose "Run crawler" to create an Amazon Athena table

  10. To see the Amazon Athena table created by the AWS Glue crawler job, from the AWS Management Console, open the Amazon Athena service. You can filter by your table name prefix.
      1. To view the data, choose Preview table. This displays the table data with certain fields showing data in JSON object structure.
    Figure 17: Choose "Preview table" to view the data

Step four: Create visualizations using Amazon QuickSight

  1. From the AWS Management Console, open Amazon QuickSight.
  2. In the Amazon QuickSight window, in the top left, choose New Analysis. Choose New Data set, and for the data source choose Athena. Enter an appropriate name for the data source and choose Create data source.

    Figure 18: Enter the "Data source name," and choose "Create data source"

  3. Next, choose Use custom SQL to extract all the fields in the JSON object using the following SQL query:
    
        ```
        with d as (select
            waf.timestamp,
            waf.formatversion,
            waf.webaclid,
            waf.terminatingruleid,
            waf.terminatingruletype,
            waf.action,
            waf.httpsourcename,
            waf.httpsourceid,
            waf.HTTPREQUEST.clientip as clientip,
            waf.HTTPREQUEST.country as country,
            waf.HTTPREQUEST.httpMethod as httpMethod,
            map_agg(f.name,f.value) as kv
        from sampledb.jsonwaflogs_useast1 waf,
        UNNEST(waf.httprequest.headers) as t(f)
        group by 1,2,3,4,5,6,7,8,9,10,11)
        select d.timestamp,
            d.formatversion,
            d.webaclid,
            d.terminatingruleid,
            d.terminatingruletype,
            d.action,
            d.httpsourcename,
            d.httpsourceid,
            d.clientip,
            d.country,
            d.httpMethod,
            d.kv['Host'] as host,
            d.kv['User-Agent'] as UA,
            d.kv['Accept'] as Acc,
            d.kv['Accept-Language'] as AccL,
            d.kv['Accept-Encoding'] as AccE,
            d.kv['Upgrade-Insecure-Requests'] as UIR,
            d.kv['Cookie'] as Cookie,
            d.kv['X-IMForwards'] as XIMF,
            d.kv['Referer'] as Referer
        from d;
        ```        
        

  4. To extract individual fields, copy the previous SQL query and paste it in the New custom SQL box, then choose Edit/Preview data.
    Figure 19: Paste the SQL query in "New custom SQL query"

    1. In the Edit/Preview data view, for Data source, choose SPICE, then choose Finish.

      Figure 20: Choose "SPICE" and then "Finish"

  5. Back in the Amazon QuickSight console, under the Fields section, select the drop-down menu and change the data type to Date.

    Figure 21: In the Amazon QuickSight console, change the data type to "Date"

  6. After you see the Date column appear, enter an appropriate name for the visualizations at the top of the page, then choose Save.

    Figure 22: Enter the name for the visualizations, and choose "Save"

  7. You can now create various visualization dashboards with multiple visual types by using the drag-and-drop feature. You can drag and drop combinations of fields such as Action, Client IP, Country, Httpmethod, and User Agents. You can also add filters on Date to view dashboards for a specific timeline. Here are some sample screenshots:
    Figure 23a: Visualization dashboard samples

    Figure 23b: Visualization dashboard samples

    Figure 23c: Visualization dashboard samples

    Figure 23d: Visualization dashboard samples

Conclusion

By configuring Amazon Kinesis Data Firehose, you can deliver AWS WAF logs to Amazon S3 buckets and analyze the logs while they are being streamed. You can further enhance this solution by automating the streaming of data and using AWS Lambda for any data transformations based on your specific requirements. Using Amazon Athena and Amazon QuickSight makes it easy to analyze logs and build visualizations and dashboards for executive leadership teams. Using these solutions, you can go serverless and let AWS do the heavy lifting for you.

Author photo

Umesh Kumar Ramesh

Umesh is a Cloud Infrastructure Architect with Amazon Web Services. He delivers proof-of-concept projects and topical workshops, and leads implementation projects for various AWS customers. He holds a Bachelor’s degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, Umesh enjoys watching documentaries, biking, and practicing meditation.

Author photo

Muralidhar Ramarao

Muralidhar is a Data Engineer with the Amazon Payment Products Machine Learning Team. He has a Bachelor’s degree in Industrial and Production Engineering from the National Institute of Engineering, Mysore, India. Outside of work, he loves to hike. You will find him with his camera or snapping pictures with his phone, and always looking for his next travel destination.

This Is My Architecture: Mobile Cryptocurrency Mining

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/this-is-my-architecture-mobile-cryptocurrency-mining/

In North America, approximately 95% of adults over the age of 25 have a bank account. In the developing world, that number is only about 52%. Cryptocurrencies can provide a platform for millions of unbanked people in the world to achieve financial freedom on a more level financial playing field.

Electroneum, a cryptocurrency company located in England, built its cryptocurrency mobile back end on AWS and is using the power of blockchain to unlock the global digital economy for millions of people in the developing world.

Electroneum’s cryptocurrency mobile app allows Electroneum customers in developing countries to transfer ETN (Electroneum’s own cryptocurrency) and pay for goods using their smartphones. Listen in to the discussion between AWS Solutions Architect Toby Knight and Electroneum CTO Barry Last as they explain how the company built its solution. Electroneum’s app is a web application that uses a feedback loop between its web servers and AWS WAF (a web application firewall) to automatically block malicious actors. The system then uses Athena, with a gamified approach, to provide an additional layer of blocking to prevent DDoS attacks. Finally, Electroneum built a serverless, instant payments system using AWS API Gateway, AWS Lambda, and Amazon DynamoDB to help its customers avoid the usual delays in confirming cryptocurrency transactions.

 

How to use AWS WAF to filter incoming traffic from embargoed countries

Post Syndicated from Rajat Ravinder Varuni original https://aws.amazon.com/blogs/security/how-to-use-aws-waf-to-filter-incoming-traffic-from-embargoed-countries/

AWS WAF provides inline inspection of inbound traffic at the application layer to detect and filter critical web application security flaws and common web exploits that could affect application availability, compromise security, or consume excessive resources. The inbound traffic is inspected against web access control list (web ACL) rules that you can create manually or programmatically—either through AWS WAF Security Automations or through the AWS Marketplace. AWS WAF functions like a typical web application firewall, but with the added reliability and scalability that comes with being an AWS-managed service. It can detect and filter malicious web requests and scale to handle bursts in traffic.

We have customers in public sector and financial services who use AWS WAF to block requests from certain geographical locations, like embargoed countries, by applying geographic match conditions. By using AWS WAF, our customers can create a customized list to easily manage an automated solution for geographic blocking.

In order to reduce the operational burden of maintaining an up-to-date list of rules for geographical location blocking, this blog post provides you with an automated solution that applies geography-based IP (GeoIP) restrictions based on a descriptive JSON file that lists all the locations that you want to block. When you update this file, the automation applies all rules to the specified AWS WAF web ACL. For countries not listed on the geographic match condition (or if you just need to block a subset of IPs from a country), the JSON file also has a section where you can list IP ranges that should be blocked.
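
To make that concrete, the descriptive JSON file might look something like the following. This is a hypothetical illustration rather than the exact schema shipped with the solution, so treat the file deployed to your S3 bucket as the authoritative format; the idea is that it pairs a list of country codes for the geographic match condition with a list of CIDR ranges for the IP match condition.

```json
{
  "embargoedCountries": ["CU", "IR", "KP", "SD", "SY"],
  "blockedIpRanges": ["203.0.113.0/24", "198.51.100.17/32"]
}
```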

If you deploy our solution with the default parameters, it builds the following environment:
 

Figure 1: Solution diagram

As the diagram shows, the solution uses these resources:

  • AWS WAF, which functions like a typical web application firewall, but with the added reliability and scalability that comes with being an AWS-managed service.
  • Two AWS Lambda functions — a Custom Resource function and an Embargoed Countries Parser.
    1. The Custom Resource function helps provision the solution when the AWS WAF conditions, rules, and web ACL are created and configured. It’s also triggered when you upload an initial version of the embargoed countries JSON file to your Amazon Simple Storage Service (Amazon S3) bucket.
    2. The Embargoed Countries Parser function is triggered whenever a new JSON file is uploaded to the S3 bucket. When an upload occurs, the function parses the new file and enforces AWS WAF rules that reflect what the file describes.
  • An Amazon Simple Storage Service (Amazon S3) bucket, where you’ll save the embargoed countries JSON file.
  • An AWS Identity and Access Management (IAM) role that gives the Lambda function access to the following resources:
    1. AWS WAF, to list, create, obtain, and update geographic IP restrictions, conditions, and web ACLs.
    2. Amazon CloudWatch logs, to monitor, store, and access log files generated by AWS Lambda.
    3. Amazon S3, to upload and read the embargoed countries JSON file.

The image below shows a reference architecture where malicious traffic is blocked by AWS WAF rules.
 

Figure 2: AWS WAF integration with Amazon CloudFront / ALB

As a starting point for this walk-through, we created a list of embargoed countries based on information published by the Office of Foreign Assets Control (OFAC) of the US Department of the Treasury.
OFAC sanctions and restrictions vary in scope, and OFAC does not maintain one single list of embargoed countries. OFAC also imposes additional restrictions on doing business with certain individuals and entities that are not covered by the embargoed country sanctions list. For the most up-to-date information about embargoed countries and other OFAC sanctions programs, see the US Department of the Treasury’s Resource Center.

IMPORTANT NOTES:

You’re responsible for updating your list of embargoed countries, based on geographic IP restrictions that you establish and keep up-to-date. Later in the post, we’ll show you how to update and edit your list, but we want to emphasize that ensuring your embargo list is current and comprehensive for your business and compliance needs is a critical part of your responsibility as a customer.

Further, the accuracy of the IP address-to-country lookup database used by AWS WAF varies by region. Based on recent tests, our overall accuracy for the IP address-to-country mapping is 99.8%. We recommend that you work with regulatory compliance experts to decide whether your solution meets your compliance needs.

Deploying the CloudFormation stack

To get started, first make sure you have at least one resource that’s associated with your web ACL. This can be either a CloudFront distribution or an Application Load Balancer (ALB). Then, select the Launch Stack button below to launch an AWS CloudFormation stack in your account. It will take approximately 5 minutes for the CloudFormation stack to complete:
 
Select this image to open a link that starts building the CloudFormation stack

The code for this solution is available on GitHub.

Note: The template will launch in the US East (N. Virginia) Region. To launch the solution in a different AWS Region, use the region selector in the console navigation bar.

  1. On the Select Template page, select Next.
  2. On the Specify Details page, give your solution stack a name.
  3. Under Parameters, review the default parameters for the template and modify the values, if you’d like.

    The following screenshot shows these parameters.
     

    Figure 3: Review and modify parameters

    The template accepts the following parameters:

      • EndpointType (requires input; default: CloudFront): Choose whether the endpoint that needs to be protected by AWS WAF is associated with CloudFront or an ALB.
      • WebAclId: Insert the web ACL ID, or leave it empty to create a new one.
      • RuleAction (allowed values: BLOCK, COUNT; default: BLOCK): Select the action that AWS WAF takes when a web request comes from an embargoed country.
      • RulePriorityIp (default: 100): Specifies the order in which the embargoed IPs rule will be evaluated in a WebACL.
      • RulePriorityGeo (default: 101): Specifies the order in which the embargoed country rule will be evaluated in a WebACL.

     

  4. Select Next.
  5. On the Options page, you can specify tags (key-value pairs) for the resources in your stack, if you’d like. Then select Next.
  6. On the Review page, review and confirm the settings. Be sure to select the box acknowledging that the template will create AWS Identity and Access Management (IAM) resources with custom names.
  7. To deploy the stack, select Create. In approximately two minutes, the stack creation should be complete. You can verify this on the Events tab by finding your stack ID and looking for the CREATE_COMPLETE status:

     

    Figure 4: Look for "CREATE_COMPLETE" as the "Status"

  8. Return to the AWS Management Console, where you’ll see that an additional rule has been added, as shown in the following diagram:
     
    Figure 5: An additional rule has been added

  9. Choose your newly created rule, then go to the Rules details page. You should now see the JSON file that contains our initial list of embargoed countries to filter traffic from. This is a starting point list: it’s your responsibility as a customer to update the embargoed countries list going forward. To update the list of countries, you can edit the JSON file located in the Amazon S3 bucket using the steps in the next section of this post.
     

    Note: Check to make sure that the web ACL is associated with the endpoint you need to protect, or you run the risk of leaving the endpoint unprotected against inbound traffic from the geographic regions you want to block. More information about how to associate an endpoint with a WAF web ACL can be found here.

Updating the list of embargoed countries

  1. To find the S3 bucket, on the completed CloudFormation Template, go to the Resources tab.
  2. Select the Physical ID to see an Amazon S3 bucket with an S3 object called
    embargoed-countries.json. You’ll be directed to the Amazon S3 bucket.
     
    Figure 6: The "embargoed-countries.json" file

  3. Download the embargoed-countries.json file, edit it, and upload the edited file to the same location. Wait for a couple of minutes for the changes to propagate to AWS WAF.
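
If you manage the file from a script rather than through the console, the download-edit-upload round trip might look like the following sketch with the AWS SDK for JavaScript. The bucket name is a placeholder, and the upload is what triggers the parser function.

```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const Bucket = 'your-embargoed-countries-bucket'; // placeholder
const Key = 'embargoed-countries.json';

async function updateEmbargoList(edit) {
    // Download and parse the current list
    const current = await s3.getObject({ Bucket, Key }).promise();
    const list = JSON.parse(current.Body.toString('utf8'));

    // Apply your edits, then upload; the S3 event triggers the parser Lambda
    await s3.putObject({
        Bucket,
        Key,
        Body: JSON.stringify(edit(list), null, 2),
        ContentType: 'application/json',
    }).promise();
}
```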

Conclusion

You now have access to a simple solution to block inbound traffic from specific geographic regions. Using this solution, you can use AWS WAF to help protect your applications from unwanted or unauthorized traffic to your application served by CloudFront or ALB.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Rajat Ravinder Varuni

As a security architect with Amazon Web Services, Rajat provides subject matter expertise in the architecture and deployment of solutions that reduce the likelihood of data leakage, web application and denial-of-service attacks, as well as in the design of data encryption methodologies to secure mission-critical data. Find his other contributions to the AWS Security Blog here.

Author

Heitor Vital

Heitor Vital is a Solutions Builder at Amazon Web Services. His team outlines AWS best practices and provides prescriptive architectural guidance, as well as automated solutions that you can deploy in your AWS account in minutes. He contributes to projects such as AWS WAF Security Automation, Data Lake Solution, and Serverless Bot Framework.

Amazon API Gateway adds support for AWS WAF

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/amazon-api-gateway-adds-support-for-aws-waf/

This post courtesy of Heitor Lessa, AWS Specialist Solutions Architect – Serverless

Today, I’m excited to tell you about the Amazon API Gateway native integration with AWS WAF. Previously, if you wanted to secure your API in Amazon API Gateway with AWS WAF, you had to deploy a Regional API endpoint and use your own Amazon CloudFront distribution. This new feature now enables you to provision any API Gateway endpoint and secure it with AWS WAF without having to configure your own CloudFront distribution to add that capability.

In Part 1 of this series, I described how to protect your API provided by API Gateway using AWS WAF.

In Part 2 of this series, I described how to use API keys as a shared secret between a CloudFront distribution and API Gateway to secure public access to your API in API Gateway. This new AWS WAF integration means that the method described in Part 2 is no longer necessary.

The following image describes methods to secure your API in API Gateway before and after this feature was made available.

Where:

  1. AWS WAF securing CloudFront endpoint only.
  2. AWS WAF securing both CloudFront and API Gateway endpoints natively.
  3. AWS WAF securing Amazon API Gateway endpoints natively where global presence is not needed.

Enabling AWS WAF for an API managed by Amazon API Gateway

For this walkthrough, you can use an existing Pet Store API or any API in API Gateway that you may already have deployed. You create a new AWS WAF web ACL that is later associated with your API Gateway stage.

Follow these steps to create a web ACL:

  1. Open the AWS WAF console.
  2. Choose Create web ACL.
  3. For Web ACL Name, enter ApiGateway-HTTP-Flood-Sample.
  4. For Region, choose US East (N. Virginia).
  5. Choose Next until you reach Step 3: Create rules.
  6. Choose Create rule and enter HTTP Flood Sample.
  7. For Rule type, choose Rate-based rule.
  8. For Rate limit, enter 2000 and choose Create.
  9. For Default action, choose Allow all requests that don’t match any rules.
  10. Choose Review and create.
  11. Confirm that your options look similar to the following image, and then choose Confirm and create.

You can now follow the steps to enable the AWS WAF web ACL for an existing API in API Gateway:

  1. Open the Amazon API Gateway console.
  2. Choose Stages, prod.
  3. Under Web Application Firewall (WAF), choose ApiGateway-HTTP-Flood-Sample (or the web ACL that you just created).
  4. Choose Save Changes.

Testing your API in API Gateway now secured by AWS WAF

AWS WAF provides HTTP flood protection using a rate-based rule. The rate-based rule is automatically triggered when web requests from a client exceed a configurable threshold. The threshold is defined by the maximum number of incoming requests allowed from a single IP address within a five-minute period.

After this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold. For this example, you defined 2000 requests as a threshold for the HTTP flood rate-based rule.
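
For reference, the same rate-based rule can also be created outside the console with the AWS SDK for JavaScript. This is a sketch assuming the names and region used above; WAF classic APIs require fetching a change token before each mutating call.

```javascript
const AWS = require('aws-sdk');
// API Gateway associations use the regional WAF endpoint
const waf = new AWS.WAFRegional({ region: 'us-east-1' });

async function createHttpFloodRule() {
    const { ChangeToken } = await waf.getChangeToken({}).promise();
    const { Rule } = await waf.createRateBasedRule({
        Name: 'HTTP Flood Sample',
        MetricName: 'HTTPFloodSample',
        RateKey: 'IP',      // count requests per originating IP address
        RateLimit: 2000,    // block once an IP exceeds 2000 requests in 5 minutes
        ChangeToken,
    }).promise();
    return Rule;
}

createHttpFloodRule().then(console.log).catch(console.error);
```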

Artillery, an open source modern load testing toolkit, is used to send a large number of requests directly to the API Gateway Invoke URL to test whether your AWS WAF native integration is working correctly.

Firstly, follow these steps to retrieve the correct Invoke URL of your Pet Store API:

  1. Open the API Gateway console.
  2. In the left navigation pane, open the PetStore API.
  3. Choose Stages, select prod, and copy the Invoke URL value.

Secondly, use cURL to query your API and see the output before the rate limit rule is triggered:

$ curl -s INVOKE_URL/pets

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
] 

Then, use Artillery to send a large number of requests in a short period of time to trigger your rate limit rule:

$ artillery quick -n 2000 --count 10 INVOKE_URL/pets

With this command, Artillery sends 2000 requests to your PetStore API from each of 10 concurrent users. By doing so, you trigger the rate limit rule in less than the five-minute window. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try re-running the cURL command. You should no longer see a list of pets:

{"message":"Forbidden"}

As you can see from the output, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after your request rate falls below the threshold.

Conclusion

As you can see, with the AWS WAF native integration with Amazon API Gateway, you no longer have to manage your own Amazon CloudFront distribution in order to secure your API with AWS WAF. The AWS WAF native integration makes this process seamless.

I hope that you found the information in this post helpful. Remember that you can use this integration today with all Amazon API Gateway endpoints (Edge, Regional, and Private). It is available in the following Regions:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • US West (N. California)
  • EU (Ireland)
  • EU (Frankfurt)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)

Using AWS Firewall Manager and WAF to protect your web applications with master rules and application-specific rules

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/using-aws-firewall-manager-and-waf-to-protect-your-web-applications-with-master-rules-and-application-specific-rules/

Jeff Barr’s blog post introducing AWS Firewall Manager describes how you can centrally manage a set of web application firewall rules to protect all the applications in an AWS Organization. This blog post will take you through the specific steps to implement firewall rules using both AWS Web Application Firewall (AWS WAF) and AWS Firewall Manager, including how to use a predefined set of AWS WAF rules like a master rule set that you can enforce on multiple resources.

If you haven’t used these services before, here’s a quick overview:

  • AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront or to an Application Load Balancer. AWS WAF also lets you control access to your content. Based on conditions that you specify, such as the IP addresses from which requests originate or the values of query strings, CloudFront or an Application Load Balancer will respond to requests either with the requested content or with an HTTP 403 status code (Forbidden). You also can configure CloudFront to return a custom error page when a request is blocked.
  • AWS Firewall Manager simplifies your AWS WAF administration and helps you enforce WAF rules on the resources across all the accounts in an AWS Organization by using AWS Config in the background. AWS Firewall Manager also enables you to selectively apply the rules to specific resources.

By using AWS Firewall Manager together with AWS WAF, you can enable developers to code WAF rules while still ensuring that all applications are protected by master rules defined by your organization’s security team. This strategy meets the needs of organizations using a DevOps model of development, where both the security team and the application development teams will build, manage, and deploy security rules. The architecture proposed in this post lets the security team create and enforce a master set of WAF rules using AWS Firewall Manager while still allowing developers to build and manage additional rules.

Customers use AWS WAF and AWS Firewall Manager to meet different requirements. One common scenario is to create different global sets of WAF rules in AWS Firewall Manager and apply these centrally managed rule sets to individual applications using AWS Firewall Manager policies. With AWS Firewall Manager policies, you can filter resources based on tags. Another common use case is to create AWS WAF rules in individual accounts and apply web access control lists (web ACLs) to protect specific resources by associating the application’s CloudFront distribution resource or the application’s Application Load Balancer.

However, with multiple customers adopting DevOps culture for software development, I’ll use this post to dive deep into a design pattern that enables developers to deploy and manage application-specific WAF rules to augment their existing security controls. This lets your organization’s InfoSec team create a master set of WAF rules using AWS Firewall Manager and enforce these rules across all applications, while enabling developers to build and manage additional rules in conjunction.

Prerequisites

AWS Firewall Manager has the following prerequisites:

  • AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. For more information, see Creating an Organization and Enabling All Features in Your Organization.
  • A firewall administrator AWS Account: You must designate one of the AWS accounts in your organization as the administrator for AWS Firewall Manager. This gives the account permission to deploy AWS WAF rules across the organization.
  • AWS Config: You must enable AWS Config for all of the accounts in your organization so that AWS Firewall Manager can detect newly created resources. To enable AWS Config for all of the accounts in your organization, you can use the Enable AWS Config template on the StackSets Sample Templates page. For more information, see Getting Started with AWS Config.

Architecture diagram

The following diagram gives an overview of the process for deploying WAF rules to different applications.
 

Figure 1: Architecture diagram

The diagram illustrates the following steps:

  1. Enable AWS Firewall Manager and designate the account owned by your security team as the AWS Firewall Manager administrator account.
  2. Create a rule group under AWS Firewall Manager. A rule group is simply a logical grouping of WAF rules that you can add to a web ACL or an AWS Firewall Manager policy. You can either create a new set of WAF rules or use an existing set as the master rule set.
  3. Under AWS Firewall Manager, create policies that can be applied to individual application resources (Application Load Balancers or CloudFront distributions) by mapping them to specific application name/value tags. Each policy you create generates its own new web ACL in the backend (a programmatic sketch of this step appears after this list).
  4. Application developers can further build more app-specific WAF rules on the web ACLs created in the previous step.
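
As a rough illustration of step 3, here’s what creating one such policy might look like with the AWS SDK for JavaScript, run from the Firewall Manager administrator account. The policy name, tag, and rule group ID are placeholders, and the ManagedServiceData shape follows the Firewall Manager WAF policy format as I understand it, so verify it against the current API reference before relying on it.

```javascript
const AWS = require('aws-sdk');
const fms = new AWS.FMS({ region: 'us-east-1' });

fms.putPolicy({
    Policy: {
        PolicyName: 'Policy1',                        // one policy per application
        ResourceType: 'AWS::CloudFrontDistribution',  // or AWS::ElasticLoadBalancingV2::LoadBalancer
        RemediationEnabled: true,
        ExcludeResourceTags: false,
        ResourceTags: [{ Key: 'app', Value: 'app1' }],   // hypothetical application tag
        SecurityServicePolicyData: {
            Type: 'WAF',
            ManagedServiceData: JSON.stringify({
                type: 'WAF',
                ruleGroups: [{ id: 'MASTER-RULE-GROUP-ID', overrideAction: { type: 'NONE' } }],
                defaultAction: { type: 'BLOCK' },
            }),
        },
    },
}).promise()
    .then((res) => console.log('Created policy:', res.Policy.PolicyId))
    .catch(console.error);
```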

Deploying the solution

The following steps will walk you through the deployment details.

  1. From the AWS Management Console, select AWS WAF. You can either create your own set of master rules or import a master set of rules. For more information about how to import a pre-defined master set of rules, see the whitepaper Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities. You can also choose to use the AWS CloudFormation template mentioned in the AWS WAF Security Automations implementation guide. As another option, you can choose to import managed rules offered by partners on AWS Marketplace Rule Groups as the master set of rules.
  2. Under AWS Firewall Manager, select Rule groups, and then select Create rule group.
  3. Under Choose how you want to proceed, select Use existing rules for this rule group, and then select Next. If you choose to create your own rules, you can select Create rules and conditions for this rule group.
     
    Figure 2: The "Use existing rules for this rule group" option

  4. Enter a name for your rule group. In my example, I’ve named my rule group “MasterRuleGroup.” Add each rule to your rule group by selecting Add another rule. After adding all of your rules, select the Create button to create the rule group.
     
    Figure 3: Create a rule group

  5. Now, under AWS Firewall Manager, select Security Policies. You’ll create a new policy for each of the applications that you want to manage by selecting the Create policy button and following these steps:
    1. Select the option to Create an AWS Firewall Manager policy and add existing rule groups.
    2. Associate the policy to the rule group you named and created earlier, and then select Next. In my example, you can see that I’ve associated one of my policies (Policy1) with my rule group (MasterRuleGroup).
       
      Figure 4: Associate the policy to the rule group

    3. On the following screen, select the box that says Select accounts to include/exclude from this policy, and then select the option to Include only these accounts in this policy. Add the corresponding accounts by selecting Add, then select the appropriate resource type to protect. Finally, select the checkbox for the option to Use tags to include/exclude resources (optional), select Include, and then add the application’s Name/Value tag.
       
      Figure 5: Define the policy's scope

  6. Repeat step five to create policies for all the applications that you want to manage, using each application’s corresponding application tags.

Each security policy creates a web ACL that is visible in the web ACLs list under AWS WAF. The policies appear with a unique ID starting with FMManagedWebACL:
 

Figure 6: Look for IDs that begin with "FMManagedWebACL"

Each of these WAF web ACLs can be managed by your individual application teams. Developers can add up to nine WAF rules for various scenarios, such as cross-site scripting, SQL injections, and IP blacklisting, while still ensuring that their applications are protected by the master rules defined in the AWS Firewall Manager.

Additionally, master rules are automatically reinstated if any application team member deletes the master rule group. This safeguard is worth highlighting: if a new developer who has taken over developing and managing the WAF rules mistakenly deletes the master rule group of type Group, AWS Firewall Manager in the backend will detect that the rule has gone non-compliant and automatically remediate it by adding the rule back into the web ACL.

Conclusion

This blog post describes how you can create AWS Firewall Manager rules using the web console. You can also create these rules by using the command line interface (CLI), programmatically with the SDK, or by using AWS CloudFormation and other such scripting tools.

Using both AWS WAF and AWS Firewall Manager, you can create a deployment strategy that enables developers to maintain a security mindset and begin coding WAF rules, while at the same time ensuring that all applications are still protected by a set of WAF rules defined by your organization’s InfoSec team.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Umesh Kumar Ramesh

Umesh is a Cloud Infrastructure Architect with AWS who delivers proof-of-concept projects and topical workshops, and leads implementation projects. He holds a Bachelor’s degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, he enjoys watching documentaries, biking, and practicing meditation.

How to analyze AWS WAF logs using Amazon Elasticsearch Service

Post Syndicated from Tom Adamski original https://aws.amazon.com/blogs/security/how-to-analyze-aws-waf-logs-using-amazon-elasticsearch-service/

Log analysis is essential for understanding the effectiveness of any security solution. It can be valuable for day-to-day troubleshooting and also for your long-term understanding of how your security environment is performing.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules.

With the release of access to full AWS WAF logs, you now have the ability to view all of the logs generated by AWS WAF while it’s protecting your web applications. In addition, you can use Amazon Kinesis Data Firehose to forward the logs to Amazon Simple Storage Service (Amazon S3) for archival, and to Amazon Elasticsearch Service to analyze access to your web applications.

In this blog post, I’ll show you how you can analyze AWS WAF logs using Amazon Elasticsearch Service (Amazon ES). I’ll walk you through how to find out in near-real time which AWS WAF rules get triggered, why, and by which request. I’ll also show you how to create a historical view of your web applications’ access trends for long-term analysis.

Architecture overview

When enabled, AWS WAF sends logs to Amazon Kinesis Data Firehose. From there, you can decide what to do with them next. In my architecture, I’ll forward the relevant logs to Amazon ES for analysis and to Amazon S3 for long-term storage.
Figure 1: Architecture overview

In this blog, I will show you how to enable AWS WAF logging to Amazon ES in the following steps:

  1. Create an Amazon Elasticsearch Service domain.
  2. Configure Kinesis Data Firehose for log delivery.
  3. Enable AWS WAF logs on a web access control list (ACL) to send data to Kinesis Data Firehose.
  4. Analyze AWS WAF logs in Amazon ES.

Note: The region in which you enable Kinesis Data Firehose must match the region where your AWS WAF web ACL is deployed. If you’re enabling logging for AWS WAF deployed on Amazon CloudFront, then Kinesis Data Firehose must be created in the N. Virginia (us-east-1) Region.

Preparing Amazon Elasticsearch Service

If you don’t have your Amazon ES environment already running on AWS, you’ll need to create one. Start by navigating to the Elasticsearch Service from your AWS Console and choosing Create a new domain. There, you will be able to define the namespace for your Elasticsearch cluster and choose the version you’d like to use. In my setup, I’ll be using awswaf-logs as my namespace and version 6.3 of Elasticsearch.
Figure 2: Defining Amazon ES Domain

You can also decide on other parameters, like the number of nodes in your cluster or how to access the dashboards. See the Amazon Elasticsearch Service Documentation to find out more details about setting up your environment, or review this previous blog post, which goes deeper into Amazon ES setup.

When your environment is ready, you’ll be able to click on your namespace and find the link to your Kibana dashboard. Kibana is the data visualization plugin for Elasticsearch that you’ll use to analyze your logs.
Figure 3: Identifying the Kibana management URL

While Elasticsearch can automatically classify most of the fields from the AWS WAF logs, you need to inform Elasticsearch how to interpret fields that have specific formatting. Therefore, before you start sending logs to Elasticsearch, you should create an index pattern template so Elasticsearch can distinguish AWS WAF logs and classify the fields correctly.

The pattern template I’m using below will be applied to all indexes beginning with the string awswaf-. That’s because in my setup all AWS WAF log index files created in Elasticsearch will always begin with the character string awswaf- followed by a date.

The pattern template defines two fields from the logs. It indicates to Elasticsearch that the httpRequest.clientIp field uses an IP address format and that the timestamp field is represented in epoch time. All the other log fields will be classified automatically.

In the template, I’m also setting the number of Elasticsearch shards that should be used for the index. Shards help with distributing large amounts of data, and their number depends on the size of the index: the larger the index, the more shards it should be deployed across. I’m going to use only one shard since I expect my index to be less than 30 GB. For recommendations on sizing your shards appropriately, refer to Jon Handler’s blog post on using Amazon Elasticsearch Service.


PUT _template/awswaf-logs
{
  "index_patterns": ["awswaf-*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "waflog": {
      "properties": {
        "httpRequest": {
          "properties": {
            "clientIp": {
              "type": "keyword",
              "fields": {
                "keyword": {
                  "type": "ip"
                }
              }
            }
          }
        },
        "timestamp": {
          "type": "date",
          "format": "epoch_millis"
        }
      }
    }
  }
}

I’m going to apply this template to Elasticsearch through the Kibana developer interface (from the Dev Tools tab in the Kibana dashboard), but it can also be done through an API call to the Amazon ES endpoint.
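If you’d rather apply the template with a direct API call, a request along the following lines should work. This is a sketch: the endpoint is a placeholder, the template body is assumed to be saved in a local file named awswaf-template.json, and if your domain’s access policy requires IAM authentication the request must also be signed (SigV4):

curl -XPUT "https://<your-amazon-es-endpoint>/_template/awswaf-logs" \
  -H "Content-Type: application/json" \
  -d @awswaf-template.json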
Figure 4: Applying the index pattern template in Kibana

The template will be applied to all log indexes starting with the awswaf- prefix. This is the prefix name you’ll use at a later stage in the Kinesis Data Firehose configuration.

Setting up Kinesis for log delivery

With Amazon ES now ready to receive AWS WAF logs, I’ll show you how to use Amazon Kinesis Data Firehose to deliver your logs to Amazon S3 and Amazon ES. This is a prerequisite before you enable logging in AWS WAF.

First, go to the Kinesis service in the AWS Console and under the Data Firehose tab choose Create delivery stream. If the Data Firehose stream is going to be used for WAF logs, its name must start with aws-waf-logs-. I’ll enable logging for the already existing web ACL that I’m using for OWASP top 10 protection, therefore I’ll name my delivery stream aws-waf-logs-owasp.

Keep all the settings at the default until you get to the Select destination page. On this page, you’ll configure Amazon Elasticsearch Service as the destination for your logs. Make sure you enter awswaf as the index name and waflog as the type to match the index pattern template you already created in Elasticsearch earlier.
Figure 5: Setting up Amazon ES as Data Firehose log destination

On this page, you can also set what to back up to Amazon S3—either all records or just the failed ones. You’ll then need to decide on the bucket to use, specify the prefix, and then select Next.
Figure 6: Setting up S3 backup in Data Firehose

On the next page, you’ll have the option to update the buffer conditions—size and interval. These indicate how often Kinesis Data Firehose will send new data to Amazon ES. The lower the values you set, the closer to real time your log delivery will be. In my example, I’ve configured a buffer size of 1 MB and a buffer interval of 60 seconds.
Figure 7: Setting up Data Firehose buffer conditions

Next, you’ll need to specify an IAM role that enables Kinesis Data Firehose to send data to Amazon S3 and Amazon ES. You can select an existing IAM role or create a new one. For more information about creating a role, see Controlling Access with Amazon Kinesis Data Firehose.

Finally, review all the settings, and complete the creation of your Kinesis Data Firehose.

Enabling AWS WAF logging

Now that you have Kinesis Data Firehose prepared, you can enable logging for an existing AWS WAF Web ACL. You can do this in the AWS Console under the AWS WAF & AWS Shield service. Go to the Web ACLs tab and select the Web ACL for which you want to start logging. Then, under the Logging tab, choose Enable Logging.
Figure 8: Enabling logging for AWS WAF web ACL

On the next page, specify the Kinesis Data Firehose that the logs should be delivered to. If you can’t see any options in the drop-down list, make sure that the Kinesis Data Firehose you configured earlier is deployed in the same region as the web ACL you’re enabling logging for and that its name begins with aws-waf-logs-. Remember that for CloudFront web ACL deployments, the Kinesis Data Firehose should be created in the Northern Virginia Region.
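If you prefer to script this step, logging can also be enabled through the AWS WAF API. Here’s a hedged sketch using the classic WAF CLI with placeholder ARNs; use the waf commands for global (CloudFront) web ACLs and the waf-regional commands for regional ones:

aws waf put-logging-configuration --logging-configuration '{
  "ResourceArn": "arn:aws:waf::111122223333:webacl/<web-acl-id>",
  "LogDestinationConfigs": ["arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-owasp"]
}'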

Searching through logs

As soon as you log back into your Kibana dashboard for Amazon ES, you should see a new index appear called awswaf-<date>. You’ll need to define an index pattern that will encompass all the potential awswaf log entries. Configure your index pattern description as awswaf-*, and on the next screen select timestamp as the time filter.
Figure 9: Configuring the Kibana index pattern

Figure 10: Configuring a time filter for the index pattern

After you complete the setup of your index pattern, you’ll be able to run searches through your logs by going into the Discover tab in Kibana. The searches can be based on any fields in the AWS WAF logs. For example, you can look for specific HTTP headers, query strings, or source IP addresses to find out what action has been applied to them. To find out more about what fields are available in the AWS WAF logs, see the AWS WAF Developer Guide.

In the example search that follows, I’m looking for all of the log entries where a web ACL has blocked the traffic and the HTTP arguments were equal to name=badname.
Figure 11: An example ad-hoc search

This approach can help you identify specific requests based on their parameters, such as a cookie, host header, or query string, and understand why they’re being blocked (or allowed) and by which web ACL. It is especially useful for tuning AWS WAF to match your web application.
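For example, typing a Lucene-style query along these lines into the Kibana search bar should reproduce the search shown above. The field names come from the AWS WAF log format; depending on how your fields were indexed, you might need a field’s .keyword variant instead:

action: "BLOCK" AND httpRequest.args: "name=badname"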

Building a dashboard

You can also use Amazon ES to create custom graphs to look at long-term trends—and you can combine your graphs into a dashboard. For example, you may want to see the number of blocked and allowed requests by IP address to identify potentially hostile IP addresses.

To understand the breakdown of the requests per IP address, you should first set up a graph showing the number of requests that were blocked and allowed by AWS WAF per source. To do this, go to the Visualize section of your Kibana dashboard and create a new visualization. You can select from a variety of visualization options, but I’m going to use the horizontal bar chart for the example that follows.
Figure 12: Configuring primary bucket parameters

  1. Select your index and then configure your graph options. For a horizontal bar chart, leave the Y axis to represent the count of requests, but expand the primary bucket (X-Axis) form and change the fields to show the source IP address and action:
    1. From the Aggregation dropdown, select Terms.
    2. From the Field dropdown, select httpRequest.clientIp.
    3. From the Order By dropdown, select metric: Count.
      Figure 13: Configuring bucket series

  2. Next, expand the Split Series section and adjust the following dropdown fields:
    1. From the Sub Aggregation dropdown, select Terms.
    2. From the Field dropdown select action.keyword.
    3. From the Order By dropdown, select metric: Count.
      Figure 14: Sample Kibana graphs

    4. Apply your changes by selecting the play button at the top of the page. You should see the results appear on the right side of the screen.
      Figure 15: Select the “Apply changes” button

    You can combine graphs into dashboards that aggregate relevant information about AWS WAF functionality. In the sample dashboard below, I’m showing multiple visualizations focusing on different dimensions, such as requests per country, requests per IP address, and even which WAF rules get the most hits.
    Figure 16: You can combine graphs

    I’ve made the sample dashboard and sample visualizations available for download. They can be imported to your Kibana instance in the Management/SavedObjects section of the dashboard.

    Conclusion

    The ability to access AWS WAF logs for any web ACL allows you to start analyzing the requests coming to your web applications. In this post, I’ve covered an example of how you can use Amazon Elasticsearch Service as a destination for AWS WAF logs and shown you how to run searches on the log data as well as build graphs and dashboards. Amazon ES is just one of the destination options for native log delivery. To learn more, please refer to the documentation on how to send data directly to Amazon S3, Amazon Redshift, and Splunk.

    If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the WAF forum or contact AWS Support.

    Want more AWS Security news? Follow us on Twitter.

    Tom Adamski

    Tom is a Specialist Solutions Engineer in Networking. He helps customers across the EMEA region design their network environments on AWS. He has over 10 years of experience in the networking and security industries. In his spare time, he’s an avid surfer who’s always on the lookout for cold water surf breaks.

Protecting your API using Amazon API Gateway and AWS WAF — Part 2

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-2/

This post courtesy of Heitor Lessa, AWS Specialist Solutions Architect – Serverless

In Part 1 of this blog, we described how to protect your API provided by Amazon API Gateway using AWS WAF. In this blog, we show how to use API keys between an Amazon CloudFront distribution and API Gateway to secure access to your API in API Gateway in addition to your preferred authorization (AuthZ) mechanism already set up in API Gateway. For more information about AuthZ mechanisms in API Gateway, see Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway.

We also extend the AWS CloudFormation stack previously used to automate the creation of the resources required for this solution.

The following are alternative solutions to using an API key, depending on your security requirements:

  • Using a randomly generated HTTP secret header in CloudFront and verifying it with API Gateway request validation
  • Signing incoming requests with Lambda@Edge and verifying them with API Gateway Lambda authorizers

Requirements

To follow along, you need full permissions to create, update, and delete API Gateway, CloudFront, Lambda, and CloudWatch Events through AWS CloudFormation.

Extending the existing AWS CloudFormation stack

First, click here to download the full template. Then follow these steps to update the existing AWS CloudFormation stack:

  1. Go to the AWS Management Console and open the AWS CloudFormation console.
  2. Select the stack that you created in Part 1, right-click it, and select Update Stack.
  3. For option 2, choose Choose file and select the template that you downloaded.
  4. Fill in the required parameters as shown in the following image.

Here’s more information about these parameters:

  • API Gateway to send traffic to – We use the same API Gateway URL as in Part 1 except without the URL scheme (https://): cxm45444t9a.execute-api.us-east-2.amazonaws.com/prod
  • Rotating API Keys – We define Daily and use 2018-04-03 as the timestamp value to append to the API key name

Continue with the AWS CloudFormation console to complete the operation. It might take a couple of minutes to update the stack, as CloudFront takes time to propagate changes across all points of presence.

Enabling API Keys in the example Pet Store API

While the stack completes in the background, let’s enable the use of API Keys in the API that CloudFront will send traffic to.

  1. Go to the AWS Management Console and open the API Gateway console.
  2. Select the API that you created in Part 1 and choose Resources.
  3. Under /pets, choose GET and then choose Method Request.
  4. For API Key Required, choose the dropdown menu and choose true.
  5. To save this change, select the highlighted check mark as shown in the following image.

Next, we need to deploy these changes so that requests sent to /pets fail if an API key isn’t present.

  1. Choose Actions and select Deploy API.
  2. Choose the Deployment stage dropdown menu and select the stage you created in Part 1.
  3. Add a deployment description such as “Requires API Keys under /pets” and choose Deploy.

When the deployment succeeds, you’re redirected to the API Gateway Stage page. There you can use the Invoke URL to test if the following request fails due to not having an API key.
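For example, calling the stage’s Invoke URL directly (using the same sample API ID as earlier) should now return API Gateway’s standard response for a missing API key:

$ curl -s https://cxm45444t9a.execute-api.us-east-2.amazonaws.com/prod/pets
{"message":"Forbidden"}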

This failure is expected and proves that our deployed changes are working. Next, let’s try to access the same API but this time through our CloudFront distribution.

  1. From the AWS Management Console, open the AWS Cloudformation console.
  2. Select the stack that you created in Part 1 and choose Outputs at the bottom left.
  3. On the CFDistribution line, copy the URL and append ‘/pets’ to it before pasting it into a new browser tab or window.

As opposed to our first attempt without an API key, we receive a JSON response from the PetStore API. This is because CloudFront is injecting an API key before it forwards the request to the PetStore API. The following image demonstrates both of these tests:

  1. Successful request when accessing the API through CloudFront
  2. Unsuccessful request when accessing the API directly through its Invoke URL

This works as a secret between CloudFront and API Gateway, which could be any agreed random secret that can be rotated like an API key. However, it’s important to know that the API key is a feature to track or meter API consumers’ usage. It’s not a secure authorization mechanism and therefore should be used only in conjunction with an API Gateway authorizer.

Rotating API keys

API keys are automatically rotated based on the schedule (e.g., daily or monthly) that you chose when updating the AWS CloudFormation stack. This requires no maintenance or intervention on your part. In this section, we explain how this process works under the hood and what you can do if you want to manually trigger an API key rotation.

The AWS CloudFormation template that we downloaded and used to update our stack does the following in addition to Part 1.

Introduce a Timestamp parameter that is appended to the API key name

Parameters:
  Timestamp:
    Type: String
    Description: Fill in this format <Year>-<Month>-<Day>
    Default: 2018-04-02

Create an API Gateway key, API Gateway usage plan, associate the new key with the API gateway given as a parameter, and configure the CloudFront distribution to send a custom header when forwarding traffic to API Gateway

CFDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Logging:
        IncludeCookies: 'false'
        Bucket: !Sub ${S3BucketAccessLogs}.s3.amazonaws.com
        Prefix: cloudfront-logs
      Enabled: 'true'
      Comment: API Gateway Regional Endpoint Blog post
      Origins:
        -
          Id: APIGWRegional
          DomainName: !Select [0, !Split ['/', !Ref ApiURL]]
          CustomOriginConfig:
            HTTPPort: 443
            OriginProtocolPolicy: https-only
          OriginCustomHeaders:
            - 
              HeaderName: x-api-key
              HeaderValue: !Ref ApiKey
              ...

ApiUsagePlan:
  Type: AWS::ApiGateway::UsagePlan
  Properties:
    Description: CloudFront usage only
    UsagePlanName: CloudFront_only
    ApiStages:
      - 
        ApiId: !Select [0, !Split ['.', !Ref ApiURL]]
        Stage: !Select [1, !Split ['/', !Ref ApiURL]]

ApiKey: 
  Type: "AWS::ApiGateway::ApiKey"
  Properties: 
    Name: !Sub "CloudFront-${Timestamp}"
    Description: !Sub "CloudFormation API Key ${Timestamp}"
    Enabled: true

ApiKeyUsagePlan:
  Type: "AWS::ApiGateway::UsagePlanKey"
  Properties:
    KeyId: !Ref ApiKey
    KeyType: API_KEY
    UsagePlanId: !Ref ApiUsagePlan

As shown in the ApiKey resource, we append the given Timestamp to Name as well as use it in the API Gateway usage plan key resource. This means that whenever the Timestamp parameter changes, AWS CloudFormation triggers a resource replacement and updates every resource that depends on that API key. In this case, that includes the AWS CloudFront configuration and API Gateway usage plan.

But what does the rotation schedule that you chose at the beginning of this blog mean in this example?

Create a scheduled activity to trigger a Lambda function on a given schedule

Parameters:
...
  ApiKeyRotationSchedule: 
    Description: Schedule to rotate API Keys e.g. Daily, Monthly, Bimonthly basis
    Type: String
    Default: Daily
    AllowedValues:
      - Daily
      - Fortnightly
      - Monthly
      - Bimonthly
      - Quarterly
    ConstraintDescription: Must be any of the available options

Mappings: 

  ScheduleMap: 
    CloudwatchEvents: 
      Daily: "rate(1 day)"
      Fortnightly: "rate(14 days)"
      Monthly: "rate(30 days)"
      Bimonthly: "rate(60 days)"
      Quarterly: "rate(90 days)"

Resources:
...
  RotateApiKeysScheduledJob: 
    Type: "AWS::Events::Rule"
    Properties: 
      Description: "ScheduledRule"
      ScheduleExpression: !FindInMap [ScheduleMap, CloudwatchEvents, !Ref ApiKeyRotationSchedule]
      State: "ENABLED"
      Targets: 
        - 
          Arn: !GetAtt RotateApiKeysFunction.Arn
          Id: "RotateApiKeys"

The resource RotateApiKeysScheduledJob shows that the schedule that you selected through a dropdown menu when updating the AWS CloudFormation stack is actually converted to a CloudWatch Events rule. This in turn triggers a Lambda function that is defined in the same template.

RotateApiKeysFunction:
      Type: "AWS::Lambda::Function"
      Properties:
        Handler: "index.lambda_handler"
        Role: !GetAtt RotateApiKeysFunctionRole.Arn
        Runtime: python3.6
        Environment:
          Variables:
            StackName: !Ref "AWS::StackName"
        Code:
          ZipFile: !Sub |
            import datetime
            import os

            import boto3
            from botocore.exceptions import ClientError

            session = boto3.Session()
            cfn = session.client('cloudformation')
            
            timestamp = datetime.date.today()            
            params = {
                'StackName': os.getenv('StackName'),
                'UsePreviousTemplate': True,
                'Capabilities': ["CAPABILITY_IAM"],
                'Parameters': [
                    {
                      'ParameterKey': 'ApiURL',
                      'UsePreviousValue': True
                    },
                    {
                      'ParameterKey': 'ApiKeyRotationSchedule',
                      'UsePreviousValue': True
                    },
                    {
                      'ParameterKey': 'Timestamp',
                      'ParameterValue': str(timestamp)
                    },
                ],                
            }

            def lambda_handler(event, context):
              """ Updates CloudFormation Stack with a new timestamp and returns CloudFormation response"""
              try:
                  response = cfn.update_stack(**params)
              except ClientError as err:
                  if "No updates are to be performed" in err.response['Error']['Message']:
                      return {"message": err.response['Error']['Message']}
                  else:
                      raise Exception("An error happened while updating the stack: {}".format(err))          
  
              return response

All this Lambda function does is trigger an AWS CloudFormation stack update via the API (exactly what you did through the console, but programmatically) and update the Timestamp parameter. As a result, it rotates the API key and updates the CloudFront distribution configuration.

This gives you enough flexibility to change the API key rotation schedule at any time without maintaining or writing any code. You can also manually update the stack and rotate the keys by updating the AWS CloudFormation stack’s Timestamp parameter.
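For example, a manual rotation from the CLI might look like the following sketch, which mirrors the update_stack call the Lambda function makes; the stack name and date are placeholders:

aws cloudformation update-stack \
  --stack-name <your-stack-name> \
  --use-previous-template \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=ApiURL,UsePreviousValue=true \
               ParameterKey=ApiKeyRotationSchedule,UsePreviousValue=true \
               ParameterKey=Timestamp,ParameterValue=2018-04-04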

Next Steps

We hope you found the information in this blog helpful. You can use it to understand how to create a mechanism to allow traffic only from CloudFront to API Gateway and avoid bypassing the AWS WAF rules that Part 1 set up.

Keep the following important notes in mind about this solution:

  • It assumes that you already have a strong AuthZ mechanism, managed by API Gateway, to control access to your API.
  • The API Gateway usage plan and other resources created in this solution work only for APIs created in the same account (the ApiUrl parameter).
  • If you already use API keys for tracking API usage, consider using either of the following solutions as a replacement:
    • Use a random HTTP header value in CloudFront origin configuration and use an API Gateway request model validation to verify it instead of API keys alone.
    • Combine Lambda@Edge and an API Gateway custom authorizer to sign and verify incoming requests using a shared secret known only to the two. This is a more advanced technique.

How to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hosts

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-use-amazon-guardduty-and-aws-web-application-firewall-to-automatically-block-suspicious-hosts/

When you’re implementing security measures across your AWS resources, you should use a holistic approach that incorporates controls across multiple areas. In the Cloud Adoption Framework (CAF) Security perspective whitepaper, we define these controls across four categories.

  • Directive controls. Establish the governance, risk, and compliance models the environment will operate within.
  • Preventive controls. Protect your workloads and mitigate threats and vulnerabilities.
  • Detective controls. Provide full visibility and transparency over the operation of your deployments in AWS.
  • Responsive controls. Drive remediation of potential deviations from your security baselines.

The use of security automation is also a key principle outlined in the whitepaper. It helps reduce operational overhead and create repeatable, predictable approaches to monitoring and responding to events. You can take advantage of AWS services to build powerful solutions for the automated detection and remediation of threats against your AWS environments. For example, you can configure Amazon CloudWatch Events to invoke a Lambda action in response to suspicious or unexpected behavior in your AWS environment detected by Amazon GuardDuty. You can configure automated flows that use both detective and responsive controls and might also feed into preventative controls to help mitigate the threat in the future. Depending on the type of source event, you can automatically invoke specific actions, such as modifying access controls, terminating instances, or revoking credentials.

In this blog post, we’ll show you how to use Amazon GuardDuty to automatically update the AWS Web Application Firewall Web Access Control Lists (WebACLs) and VPC Network Access Control Lists (NACLs) in response to GuardDuty findings. After GuardDuty detects a suspicious activity, the solution updates these resources to block communication from the suspicious host while you perform additional investigation and remediation. Once communication has been blocked, further occurrences of a finding are reduced, allowing security and operations teams to focus more on higher priority tasks.

Amazon GuardDuty is a continuous security monitoring and threat detection service that incorporates threat intelligence, anomaly detection, and machine learning to help protect your AWS resources, including your AWS accounts. Amazon CloudWatch Events delivers a near-real-time stream of system events that describe changes in AWS resources. Amazon GuardDuty sends notifications based on Amazon CloudWatch Events when any change in the findings takes place. In the context of GuardDuty, such changes include newly generated findings and all subsequent occurrences of these existing findings. Using rules that you can quickly set up, you can match CloudWatch events and route them to one or more target actions. This solution routes matched events to AWS Lambda, which then performs updates to AWS Web Application Firewall (WAF) and VPC NACLs. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. It supports both managed rules as well as a powerful rule language for custom rules. A Network Access Control List (NACL) is an optional layer of security for your Amazon Virtual Private Cloud (VPC) that acts as a firewall for controlling traffic in and out of one or more subnets.

Solution overview

The solution assumes that Amazon GuardDuty is enabled in your AWS account. If it isn’t enabled, you can find more info about the free trial and pricing here, and you can follow the steps in the GuardDuty documentation to set up the service and start monitoring your account.

Figure 1 shows how the CloudFormation template creates the sample solution:

Figure 1: How the CloudFormation template works

Here’s how the solution works, as shown in the diagram:

  1. A GuardDuty finding is raised with suspected malicious activity.
  2. A CloudWatch Event is configured to filter for GuardDuty Finding type.
  3. A Lambda function is invoked by the CloudWatch Event and parses the GuardDuty finding.
  4. State data for blocked hosts is stored in an Amazon DynamoDB table. The Lambda function checks the state table for an existing host entry.
  5. The Lambda function creates a Rule inside AWS WAF and in a VPC NACL.
  6. A notification email is sent via Amazon Simple Notification Service (SNS).

A second Lambda function runs on a 5-minute recurring schedule and removes entries that are past the configurable retention period from the WAF IPSets (lists that contain the blacklisted IPs or CIDRs), the VPC NACLs, and the DynamoDB table.

GuardDuty findings referenced in this solution

This solution’s CloudWatch Event Rule pattern is configured to match the following GuardDuty Finding types:

  1. UnauthorizedAccess:EC2/SSHBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to SSH services on Linux-based systems.
  2. UnauthorizedAccess:EC2/RDPBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to RDP services on Windows-based systems.
  3. Recon:EC2/PortProbeUnprotectedPort
    This finding informs you that a port on an EC2 instance in your AWS environment isn’t blocked by a security group, access control list (ACL), or an on-host firewall (for example, Linux IPChains), and known scanners on the internet are actively probing it.
  4. Trojan:EC2/BlackholeTraffic
    This finding informs you that an EC2 instance in your AWS environment might be compromised because it’s trying to communicate with an IP address of a black hole. Black holes refer to places in the network where incoming or outgoing traffic is silently discarded without informing the source that the data didn’t reach its intended recipient.
  5. Backdoor:EC2/XORDDOS
    This finding informs you that an EC2 instance in your AWS environment is attempting to communicate with an IP address that’s associated with XOR DDoS malware. XOR DDoS is Trojan malware that hijacks Linux systems.
  6. UnauthorizedAccess:EC2/TorIPCaller
    This finding informs you that an EC2 instance in your AWS environment is receiving inbound connections from a Tor exit node. Tor is software for enabling anonymous communication. It encrypts and randomly bounces communications through relays between a series of network nodes.
  7. Trojan:EC2/DropPoint
    This finding informs you that an EC2 instance in your AWS environment is trying to communicate with an IP address of a remote host that’s known to hold credentials and other stolen data captured by malware.

When one of these GuardDuty finding types is matched by the CloudWatch Event Rule, an entry is created in the target ACLs to deny the suspicious host, and then a notification is sent to an email address by this solution’s Lambda. Blocking traffic from the suspicious host helps to mitigate the threat while you perform additional investigation and remediation. For more information, see Remediating a Compromised EC2 Instance.
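To give a sense of how this matching works, a CloudWatch Events pattern covering these finding types might look like the sketch below; the exact pattern the solution uses is defined in its CloudFormation template:

{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "type": [
      "UnauthorizedAccess:EC2/SSHBruteForce",
      "UnauthorizedAccess:EC2/RDPBruteForce",
      "Recon:EC2/PortProbeUnprotectedPort",
      "Trojan:EC2/BlackholeTraffic",
      "Backdoor:EC2/XORDDOS",
      "UnauthorizedAccess:EC2/TorIPCaller",
      "Trojan:EC2/DropPoint"
    ]
  }
}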

Solution deployment

This sample solution includes 6 main steps:

  1. Deploy the CloudFormation template.
  2. Create and run a Lambda GuardDuty finding test event.
  3. Confirm the entry in the VPC Network ACL.
  4. Confirm the entry in the AWS WAF IPSets.
  5. Confirm the SNS notification subscription.
  6. Apply the WAF Web ACLs to resources.

Step 1: Deploy the CloudFormation template

For this next step, make sure you deploy the template within the AWS account and region where you want to monitor GuardDuty findings.

  1. Select this link to launch a CloudFormation stack in your account.

    Note: The stack will launch in the N. Virginia (us-east-1) region. It takes approximately 15 minutes for the CloudFormation stack to complete. To deploy this solution into other AWS regions, first upload the solution’s Lambda deployment packages (zip files with code) to an S3 bucket in the selected region. Once you have uploaded the zip files to the target region, update the CloudFormation ArtifactsBucket and ArtifactsPrefix parameters referenced in step 3 below.

  2. In the CloudFormation console, select the Select Template form, and then select Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    • AdminEmail: Email address to receive notifications. Must be a valid email address.
    • Retention: How long to retain IP addresses in the blacklist (in minutes). Default is 12 hours.
    • CloudFrontIPSetId: ID of an existing WAF IPSet on CloudFront. Enter the ID here if there’s an existing WAF IPSet on CloudFront you want to use. Leave set to the default value of False if you want to create a new WebACL and IPSet.
    • ALBIPSetId: ID of an existing WAF IPSet on ALB. Enter the ID if there is an existing WAF IPSet on ALB. Leave set to False to create a new WebACL and IPSet.
    • ArtifactsBucket: S3 bucket with artifact files (Lambda functions, templates, HTML files, etc.). Leave set to the default value for deployment into the N. Virginia region.
    • ArtifactsPrefix: Path in the S3 bucket containing artifact files. Leave set to the default value for deployment into the N. Virginia region.

    Note: AWS WAF is not currently available in all regions. For more information about where it’s available, refer to this page.

    Figure 2 shows an example of values entered on this screen:

    Figure 2: CloudFormation parameters on the “Specify Details” page

  4. Enter values for all of the input parameters, and then select Next.
  5. On the Options page, accept the defaults, and then select Next.
  6. On the Review page, confirm the details, and then select Create.
  7. While the stack is being created, check the email inbox for the value you gave for the AdminEmail address parameter. Look for an email message with the subject “AWS Notification – Subscription Confirmation”. Select the link to confirm the subscription to the SNS topic. You should see a message similar to this:

    Figure 3: Subscription confirmation

Once the Status field for the CloudFormation stack changes to CREATE_COMPLETE, the solution is implemented and is ready for testing.

Figure 4: The “Status” displays “CREATE_COMPLETE”

Step 2: Create and run a Lambda GuardDuty finding test event

Once the CloudFormation stack has completed deployment, you can test the functionality using a Lambda test event.

  1. In the console, select Services > VPC > Subnets and locate a subnet suitable for testing the solution. On the Summary tab, copy the Subnet ID to the clipboard or to a text editor.

    Figure 5: The “Subnet ID” value on the “Summary” tab

  2. In the console, select Services > CloudFormation > GuardDutytoACL stack. In the stack Outputs tab, look for the GuardDutytoACLLambda entry, similar to Figure 6 below:

    Figure 6: The “GuardDutytoACLLambda” entry on the “Outputs” tab

  3. Select the link and you’ll be redirected to the Lambda console, with the Lambda function already open, similar to Figure 7:

    Figure 7: The Lambda function open in the Lambda console

  4. In the top right, select the Select a test event… drop-down list, and then select Configure test events.

    Figure 8: Select “Configure test events” from the drop-down list

  5. To facilitate testing, a test event file has been provided. On the Configure test event page, provide a name for Event name, and then paste the provided test event JSON in the body of the event.
  6. Update the value of subnetId key (line 34) to the value of your Subnet ID from step 2.1, and then select Create.

    Figure 9: Update the value of the “subnetId” key

  7. Select Test to invoke the Lambda with the test event. You should see a message “Execution result: succeeded” similar to below:

    Figure 10: The “Test” button and the “succeeded” message

Step 3: Confirm the entry in the VPC Network ACL (NACL)

In this step, you’ll confirm the DENY entry was created in the NACL. This solution is configured to create up to 10 entries in an ACL ranging between rule numbers 71 and 80. Since NACL rules are processed in order, it’s important that the DENY rule is placed before the ALLOW rule.

  1. In the console, select Services > VPC > Subnets and locate the subnet you provided for the test event.
  2. Select the Network ACL tab and confirm the new entry generated from the test event.
    Figure 11: Check the entry from the test event on the “Network” tab

    Note that VPC NACL entries are created in the rule number range between 71 and 80. Older entries are aged out to create a “sliding window” of blocked hosts.
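For reference, the entry the Lambda function creates is equivalent to a CLI call along these lines; the network ACL ID and source address are placeholders, and protocol -1 means all protocols:

aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 71 \
  --protocol "-1" \
  --rule-action deny \
  --cidr-block 198.51.100.25/32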

Step 4: Confirm the entry in the AWS WAF IPSets

In this step, you’ll verify that the entry was added to the CloudFront WAF IPSet and to the ALB WAF IPSet.

  1. In the console, select Services > WAF & Shield, and then select IP addresses.
  2. For Filter, select Global (CloudFront), and then select the IPSet named GD2ACL CloudFront IPSet for Blacklisted IP addresses.

    Figure 12: Filter the list and then select “GD2ACL CloudFront IPSet for Blacklisted IP addresses”

  3. Confirm the IP address that was added to the list in the IPSet:

    Figure 13: Confirm the IP address was added

  4. In the console, select Services > WAF & Shield, and then select IP addresses.
  5. For Filter, select US East (N. Virginia)–or another region in which you deployed this solution–and then select the IPSet named GD2ACL ALB IPSet for blacklisted IP addresses.
  6. Confirm the IP address added to the ALB IPSet:

    Figure 14: Make sure the IP address was added

There might be specific host addresses that you want to prevent from being added to the blacklist. You can do this within GuardDuty by using a trusted IP list. Trusted IP lists consist of IP addresses that you have whitelisted for secure communication with your AWS infrastructure and applications. GuardDuty doesn’t generate findings for IP addresses on trusted IP lists. For additional information, see Working with Trusted IP Lists and Threat Lists.
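As a sketch, uploading and activating a trusted IP list from the CLI might look like this; the detector ID, bucket, and file name are placeholders, and the list file contains one IP address or CIDR per line:

aws guardduty create-ip-set \
  --detector-id <your-detector-id> \
  --name TrustedHosts \
  --format TXT \
  --location https://s3.amazonaws.com/<your-bucket>/trusted-ips.txt \
  --activate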

Step 5: Confirm the SNS notification subscription

In this step, you’ll view the SNS notification that was sent to the email address you set up.

    1. Review the email inbox for the value you provided for the AdminEmail parameter and look for a message with the subject line “AWS GD2ACL Alert.” The contents of the message from SNS should be similar to this:

      Figure 15: SNS message example

Step 6: Apply the WAF Web ACLs to resources

The final task is to associate the Web ACL with the CloudFront Distributions and Application Load Balancers that you want to automatically update with this solution. To learn how to do this, see Associating or Disassociating a Web ACL with a CloudFront Distribution or an Application Load Balancer.

You can also use AWS Firewall Manager to associate the Web ACLs. AWS Firewall Manager simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. With Firewall Manager, you set up your firewall rules just once. The service automatically applies your rules across your accounts and resources, even as you add new resources.

Summary

You’ve learned how to use Amazon GuardDuty to automatically update AWS Web Application Firewall (AWS WAF) and VPC Network Access Control Lists (ACLs) in response to GuardDuty findings. With just a few steps, you can use this sample solution to help mitigate threats by blocking communication with suspicious hosts. You can explore additional solutions possible using GuardDuty Finding types and CloudWatch Events target actions. This solution’s code is available on GitHub. Feel free to play around with the code to add more GuardDuty findings to this solution and also to build bigger and better solutions!

If you have comments about this blog post, submit them in the Comments section below. If you have questions about using this solution, start a thread in the GuardDuty, WAF, or CloudWatch forums, or contact AWS Support.

Protecting your API using Amazon API Gateway and AWS WAF — Part I

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-i/

This post courtesy of Thiago Morais, AWS Solutions Architect

When you build web applications or expose any data externally, you probably look for a platform where you can build highly scalable, secure, and robust REST APIs. As APIs are publicly exposed, there are a number of best practices for providing a secure mechanism to consumers using your API.

Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

In this post, I show you how to take advantage of the regional API endpoint feature in API Gateway, so that you can create your own Amazon CloudFront distribution and secure your API using AWS WAF.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

As you make your APIs publicly available, you are exposed to attackers trying to exploit your services in several ways. The AWS security team published a whitepaper solution using AWS WAF, How to Mitigate OWASP’s Top 10 Web Application Vulnerabilities.

Regional API endpoints

Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution created and managed by API Gateway. Before the launch of regional API endpoints, this was the default option when creating APIs using API Gateway. It primarily helped to reduce latency for API consumers that were located in different geographical locations than your API.

When API requests predominantly originate from an Amazon EC2 instance or other services within the same AWS Region as the API is deployed, a regional API endpoint typically lowers the latency of connections. It is recommended for such scenarios.

For better control around caching strategies, customers can use their own CloudFront distribution for regional APIs. They also have the ability to use AWS WAF protection, as I describe in this post.

Edge-optimized API endpoint

The following diagram is an illustrated example of the edge-optimized API endpoint where your API clients access your API through a CloudFront distribution created and managed by API Gateway.

Regional API endpoint

For the regional API endpoint, your customers access your API from the same Region in which your REST API is deployed. This helps you to reduce request latency and particularly allows you to add your own content delivery network, as needed.

Walkthrough

In this section, you implement the following steps:

  • Create a regional API using the PetStore sample API.
  • Create a CloudFront distribution for the API.
  • Test the CloudFront distribution.
  • Set up AWS WAF and create a web ACL.
  • Attach the web ACL to the CloudFront distribution.
  • Test AWS WAF protection.

Create the regional API

For this walkthrough, use an existing PetStore API. All new APIs launch by default as the regional endpoint type. To change the endpoint type for your existing API, choose the cog icon on the top right corner:

After you have created the PetStore API on your account, deploy a stage called “prod” for the PetStore API.

On the API Gateway console, select the PetStore API and choose Actions, Deploy API.

For Stage name, type prod and add a stage description.

Choose Deploy and the new API stage is created.

Use the following AWS CLI command to update your API from edge-optimized to regional:

aws apigateway update-rest-api \
--rest-api-id {rest-api-id} \
--patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL

A successful response looks like the following:

{
    "description": "Your first API with Amazon API Gateway. This is a sample API that integrates via HTTP with your demo Pet Store endpoints", 
    "createdDate": 1511525626, 
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }, 
    "id": "{api-id}", 
    "name": "PetStore"
}

After you change your API endpoint to regional, you can now assign your own CloudFront distribution to this API.

Create a CloudFront distribution

To make things easier, I have provided an AWS CloudFormation template to deploy a CloudFront distribution pointing to the API that you just created. Click the button to deploy the template in the us-east-1 Region.

For Stack name, enter RegionalAPI. For APIGWEndpoint, enter your API FQDN in the following format:

{api-id}.execute-api.us-east-1.amazonaws.com

After you fill out the parameters, choose Next to continue the stack deployment. It takes a couple of minutes to finish the deployment. After it finishes, the Output tab lists the following items:

  • A CloudFront domain URL
  • An S3 bucket for CloudFront access logs
Output from CloudFormation

Test the CloudFront distribution

To see if the CloudFront distribution was configured correctly, use a web browser and enter the URL from your distribution, with the following parameters:

https://{your-distribution-url}.cloudfront.net/{api-stage}/pets

You should get the following output:

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Set up AWS WAF and create a web ACL

With the new CloudFront distribution in place, you can now start setting up AWS WAF to protect your API.

For this demo, you deploy the AWS WAF Security Automations solution, which provides fine-grained control over the requests attempting to access your API.

For more information about deployment, see Automated Deployment. If you prefer, you can launch the solution directly into your account using the following button.

For CloudFront Access Log Bucket Name, add the name of the bucket created during the deployment of the CloudFormation stack for your CloudFront distribution.

The solution allows you to adjust thresholds and also choose which automations to enable to protect your API. After you finish configuring these settings, choose Next.

To start the deployment process in your account, follow the creation wizard and choose Create. It takes a few minutes to finish the deployment. You can follow the creation process through the CloudFormation console.

After the deployment finishes, you can see the new web ACL deployed on the AWS WAF console, AWSWAFSecurityAutomations.

Attach the AWS WAF web ACL to the CloudFront distribution

With the solution deployed, you can now attach the AWS WAF web ACL to the CloudFront distribution that you created earlier.

To assign the newly created AWS WAF web ACL, go back to your CloudFront distribution. After you open your distribution for editing, choose General, Edit.

Select the new AWS WAF web ACL that you created earlier, AWSWAFSecurityAutomations.

Save the changes to your CloudFront distribution and wait for the deployment to finish.

Test AWS WAF protection

To validate the AWS WAF Web ACL setup, use Artillery to load test your API and see AWS WAF in action.

To install Artillery on your machine, run the following command:

$ npm install -g artillery

After the installation completes, you can check if Artillery installed successfully by running the following command:

$ artillery -V
$ 1.6.0-12

As of the time of publication, Artillery is on version 1.6.0-12.

One of the WAF web ACL rules that you have set up is a rate-based rule. By default, it is set up to block any requesters that exceed 2000 requests in a 5-minute period. Try this out.
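The Security Automations stack creates this rule for you, but for context, a comparable standalone rate-based rule could be created with the classic WAF CLI along these lines (names are placeholders; classic WAF APIs require a change token for mutations):

CHANGE_TOKEN=$(aws waf get-change-token --query ChangeToken --output text)

aws waf create-rate-based-rule \
  --name IPRateLimit \
  --metric-name IPRateLimit \
  --rate-key IP \
  --rate-limit 2000 \
  --change-token "$CHANGE_TOKEN"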

First, use cURL to query your distribution and see the API output:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets
[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Based on the test above, the result looks good. But what if you max out the 2000 requests in under 5 minutes?

Run the following Artillery command:

artillery quick -n 2000 --count 10  https://{distribution-name}.cloudfront.net/prod/pets

What you are doing is firing 2000 requests to your API from 10 concurrent users. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try to run the cURL request again and see what happens:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Request blocked.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: [removed]
</PRE>
<ADDRESS>
</ADDRESS>
</BODY></HTML>

As you can see from the output above, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after it falls below the request limit rate.

Conclusion

In this first part, you saw how to use the new API Gateway regional API endpoint together with Amazon CloudFront and AWS WAF to secure your API from a series of attacks.

In the second part, I will demonstrate some other techniques to protect your API using API keys and Amazon CloudFront custom headers.

AWS Online Tech Talks – May and Early June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/

AWS Online Tech Talks – May and Early June 2018  

Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – Technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal and Magento with Amazon EFS.

AWS Online Tech Talks – January 2018

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-january-2018/

Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent, including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On, and more!

January 2018 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday January 22

Analytics & Big Data
11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale  Lvl 300

Database
01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

Tuesday, January 23

Artificial Intelligence
9:00 AM – 09:45 AM PT  How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

Containers

11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

Serverless
01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

Wednesday, January 24

DevOps
09:00 AM – 09:45 AM PT Introducing AWS Cloud9  Lvl 200

Analytics & Big Data
11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams Lvl 300

Database
01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

Thursday, January 25

Artificial Intelligence
09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

Mobile
11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

IoT
01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

Monday, January 29

Enterprise
11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices 100 – Achieving Business Value with AWS Lvl 100

Compute
01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

Tuesday, January 30

Security, Identity & Compliance
09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

Storage
11:00 AM – 11:45 AM PT  Improving Backup & DR – AWS Storage Gateway Lvl 300

Compute
01:00 PM – 01:45 PM PT  Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

Wednesday, January 31

Networking
09:00 AM – 09:45 AM PT  Deep Dive on AWS PrivateLink Lvl 300

Enterprise
11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

Compute
01:00 PM – 01:45 PM PT  The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

Thursday, February 1

Security, Identity & Compliance
09:00 AM – 09:45 AM PT  Deep Dive on AWS Single Sign-On Lvl 300

Storage
11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300

The Top 10 Most Downloaded AWS Security and Compliance Documents in 2017

Post Syndicated from Sara Duffer original https://aws.amazon.com/blogs/security/the-top-10-most-downloaded-aws-security-and-compliance-documents-in-2017/

AWS download logo

The following list includes the ten most downloaded AWS security and compliance documents in 2017. Using this list, you can learn about what other AWS customers found most interesting about security and compliance last year.

  1. AWS Security Best Practices – This guide is intended for customers who are designing the security infrastructure and configuration for applications running on AWS. The guide provides security best practices that will help you define your Information Security Management System (ISMS) and build a set of security policies and processes for your organization so that you can protect your data and assets in the AWS Cloud.
  2. AWS: Overview of Security Processes – This whitepaper describes the physical and operational security processes for the AWS managed network and infrastructure, and helps answer questions such as, “How does AWS help me protect my data?”
  3. Architecting for HIPAA Security and Compliance on AWS – This whitepaper describes how to leverage AWS to develop applications that meet HIPAA and HITECH compliance requirements.
  4. Service Organization Controls (SOC) 3 Report – This publicly available report describes internal AWS security controls, availability, processing integrity, confidentiality, and privacy.
  5. Introduction to AWS Security – This document provides an introduction to AWS’s approach to security, including the controls in the AWS environment, and some of the products and features that AWS makes available to customers to meet your security objectives.
  6. AWS Best Practices for DDoS Resiliency – This whitepaper covers techniques to mitigate distributed denial of service (DDoS) attacks.
  7. AWS: Risk and Compliance – This whitepaper provides information to help customers integrate AWS into their existing control framework, including a basic approach for evaluating AWS controls and a description of AWS certifications, programs, reports, and third-party attestations.
  8. Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities – AWS WAF is a web application firewall that helps you protect your websites and web applications against various attack vectors at the HTTP protocol level. This whitepaper outlines how you can use AWS WAF to mitigate the application vulnerabilities that are defined in the Open Web Application Security Project (OWASP) Top 10 list of most common categories of application security flaws.
  9. Introduction to Auditing the Use of AWS – This whitepaper provides information, tools, and approaches for auditors to use when auditing the security of the AWS managed network and infrastructure.
  10. AWS Security and Compliance: Quick Reference Guide – By using AWS, you inherit the many security controls that we operate, thus reducing the number of security controls that you need to maintain. Your own compliance and certification programs are strengthened while at the same time lowering your cost to maintain and run your specific security assurance requirements. Learn more in this quick reference guide.

– Sara

Now Open AWS EU (Paris) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-eu-paris-region/

Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, the new Region enables AWS customers to better serve end users in and around France.

The Details
The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.

There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security sensitive customers.

AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:

Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and supporting their social and economic reintegration into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this is having on their lives.

AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate its innovation process.

AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners – Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners – ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners – Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

Use it Today
The EU (Paris) Region is open for business now and you can start using it today!

Jeff;


How to Enhance the Security of Sensitive Customer Data by Using Amazon CloudFront Field-Level Encryption

Post Syndicated from Alex Tomic original https://aws.amazon.com/blogs/security/how-to-enhance-the-security-of-sensitive-customer-data-by-using-amazon-cloudfront-field-level-encryption/

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content to end users through a worldwide network of edge locations. CloudFront provides a number of benefits and capabilities that can help you secure your applications and content while meeting compliance requirements. For example, you can configure CloudFront to help enforce secure, end-to-end connections using HTTPS SSL/TLS encryption. You also can take advantage of CloudFront integration with AWS Shield for DDoS protection and with AWS WAF (a web application firewall) for protection against application-layer attacks, such as SQL injection and cross-site scripting.

Now, CloudFront field-level encryption helps secure sensitive data such as customer phone numbers by adding another security layer to CloudFront HTTPS. Using this functionality, you can help ensure that sensitive information in a POST request is encrypted at CloudFront edge locations. This information remains encrypted as it flows to and beyond your origin servers that terminate HTTPS connections with CloudFront and throughout the application environment. In this blog post, we demonstrate how you can enhance the security of sensitive data by using CloudFront field-level encryption.

Note: This post assumes that you understand concepts and services such as content delivery networks, HTTP forms, public-key cryptography, CloudFront, AWS Lambda, and the AWS CLI. If necessary, you should familiarize yourself with these concepts and review the solution overview in the next section before proceeding with the deployment of this post’s solution.

How field-level encryption works

Many web applications collect and store data from users as those users interact with the applications. For example, a travel-booking website may ask for your passport number alongside less sensitive data such as your food preferences. This data is transmitted to web servers and might travel among a number of services as tasks are performed. However, only a small subset of these services typically needs access to your sensitive information; most other services do not need to access your data.

User data is often stored in a database for retrieval at a later time. One approach to protecting stored sensitive data is to configure and code each service to protect that sensitive data. For example, you can develop safeguards in logging functionality to ensure sensitive data is masked or removed. However, this can add complexity to your code base and limit performance.

Field-level encryption addresses this problem by ensuring sensitive data is encrypted at CloudFront edge locations. Sensitive data fields in HTTPS form POSTs are automatically encrypted with a user-provided public RSA key. After the data is encrypted, other systems in your architecture see only ciphertext. If this ciphertext unintentionally becomes externally available, the data is cryptographically protected and only designated systems with access to the private RSA key can decrypt the sensitive data.

It is critical to secure private RSA key material to prevent unauthorized access to the protected data. Management of cryptographic key material is a larger topic that is out of scope for this blog post, but it should be carefully considered when implementing encryption in your applications. For example, in this blog post we store private key material as a secure string in the Amazon EC2 Systems Manager Parameter Store. The Parameter Store provides a centralized location for managing configuration data, whether plaintext values (such as database connection strings) or secrets (such as passwords) that are encrypted using AWS Key Management Service (AWS KMS). You may have an existing key management system in place that you can use, or you can use AWS CloudHSM. CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud.

To illustrate field-level encryption, let’s look at a simple form submission where Name and Phone values are sent to a web server using an HTTP POST. A typical form POST would contain data such as the following.

POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 60

Name=Jane+Doe&Phone=404-555-0150

Instead of taking this typical approach, field-level encryption converts this data similar to the following.

POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 1713

Name=Jane+Doe&Phone=AYABeHxZ0ZqWyysqxrB5pEBSYw4AAA...

To further demonstrate field-level encryption in action, this blog post includes a sample serverless application that you can deploy by using a CloudFormation template, which creates an application environment using CloudFront, Amazon API Gateway, and Lambda. The sample application is only intended to demonstrate field-level encryption functionality and is not intended for production use. The following diagram depicts the architecture and data flow of this sample application.

Sample application architecture and data flow

Diagram of the solution's architecture and data flow

Here is how the sample solution works:

  1. An application user submits an HTML form page with sensitive data, generating an HTTPS POST to CloudFront.
  2. Field-level encryption intercepts the form POST and encrypts sensitive data with the public RSA key and replaces fields in the form post with encrypted ciphertext. The form POST ciphertext is then sent to origin servers.
  3. The serverless application accepts the form post data containing ciphertext where sensitive data would normally be. If a malicious user were able to compromise your application and gain access to your data, such as the contents of a form, that user would see encrypted data.
  4. Lambda stores data in a DynamoDB table, leaving sensitive data to remain safely encrypted at rest.
  5. An administrator uses the AWS Management Console and a Lambda function to view the sensitive data.
  6. During the session, the administrator retrieves ciphertext from the DynamoDB table.
  7. The administrator decrypts sensitive data by using private key material stored in the EC2 Systems Manager Parameter Store.
  8. Decrypted sensitive data is transmitted over SSL/TLS via the AWS Management Console to the administrator for review.

Deployment walkthrough

The high-level steps to deploy this solution are as follows:

  1. Stage the required artifacts
    When deployment packages are used with Lambda, the zipped artifacts have to be placed in an S3 bucket in the target AWS Region for deployment. This step is not required if you are deploying in the US East (N. Virginia) Region because the package has already been staged there.
  2. Generate an RSA key pair
    Create a public/private key pair that will be used to perform the encrypt/decrypt functionality.
  3. Upload the public key to CloudFront and associate it with the field-level encryption configuration
    After you create the key pair, the public key is uploaded to CloudFront so that it can be used by field-level encryption.
  4. Launch the CloudFormation stack
    Deploy the sample application for demonstrating field-level encryption by using AWS CloudFormation.
  5. Add the field-level encryption configuration to the CloudFront distribution
    After you have provisioned the application, this step associates the field-level encryption configuration with the CloudFront distribution.
  6. Store the RSA private key in the Parameter Store
    Store the private key in the Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value.

Deploy the solution

1. Stage the required artifacts

(If you are deploying in the US East [N. Virginia] Region, skip to Step 2, “Generate an RSA key pair.”)

Stage the Lambda function deployment package in an Amazon S3 bucket located in the AWS Region you are using for this solution. To do this, download the zipped deployment package and upload it to your in-Region bucket. For additional information about uploading objects to S3, see Uploading Objects into Amazon S3.
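
For example, you could stage the package with the AWS CLI. The following is a minimal sketch only; the local file name, bucket, and prefix are placeholders for your own values, and the location you choose here feeds the ArtifactsBucket and ArtifactsPrefix parameters in Step 4.

$ aws s3 cp <deployment-package>.zip s3://<your-artifacts-bucket>/<artifacts-prefix>/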

2. Generate an RSA key pair

In this section, you will generate an RSA key pair by using OpenSSL:

  1. Confirm access to OpenSSL.
    $ openssl version

    You should see version information similar to the following.

    OpenSSL <version> <date>

  2. Create a private key using the following command.
    $ openssl genrsa -out private_key.pem 2048

    The command results should look similar to the following.

    Generating RSA private key, 2048 bit long modulus
    ................................................................................+++
    ..........................+++
    e is 65537 (0x10001)
  3. Extract the public key from the private key by running the following command.
    $ openssl rsa -pubout -in private_key.pem -out public_key.pem

    You should see output similar to the following.

    writing RSA key
  4. Restrict access to the private key.
    $ chmod 600 private_key.pem

    Note: You will use the public and private key material in Steps 3 and 6 to configure the sample application.
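
Optionally, you can confirm that the two files form a matching pair by comparing their RSA moduli; identical digests indicate a match. This is a standard OpenSSL check, not part of the original key-generation steps.

$ openssl rsa -noout -modulus -in private_key.pem | openssl md5
$ openssl rsa -pubin -noout -modulus -in public_key.pem | openssl md5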

3. Upload the public key to CloudFront and associate it with the field-level encryption configuration

Now that you have created the RSA key pair, you will use the AWS Management Console to upload the public key to CloudFront for use by field-level encryption. Complete the following steps to upload and configure the public key.

Note: Do not include spaces or special characters when providing the configuration values in this section.

  1. From the AWS Management Console, choose Services > CloudFront.
  2. In the navigation pane, choose Public Key and choose Add Public Key.
    Screenshot of adding a public key

Complete the Add Public Key configuration boxes:

  • Key Name: Type a name such as DemoPublicKey.
  • Encoded Key: Paste the contents of the public_key.pem file you created in Step 2c, including the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines.
  • Comment: Optionally add a comment.
  3. Choose Create.
  4. After adding at least one public key to CloudFront, the next step is to create a profile that tells CloudFront which input fields you want encrypted. While still on the CloudFront console, choose Field-level encryption in the navigation pane.
  5. Under Profiles, choose Create profile.
    Screenshot of creating a profile

Complete the Create profile configuration boxes:

  • Name: Type a name such as FLEDemo.
  • Comment: Optionally add a comment.
  • Public key: Select the public key you configured in Step 3b.
  • Provider name: Type a provider name such as FLEDemo.
    This information will be used when the form data is encrypted, and must be provided to applications that need to decrypt the data, along with the appropriate private key.
  • Pattern to match: Type phone. This configures field-level encryption to match form fields named phone.
  6. Choose Save profile.
  7. Configurations include options for whether to block or forward a query to your origin in scenarios where CloudFront can’t encrypt the data. Under Encryption Configurations, choose Create configuration.
    Screenshot of creating a configuration

Complete the Create configuration boxes:

  • Comment: Optionally add a comment.
  • Content type: Enter application/x-www-form-urlencoded. This is a common media type for encoding form data.
  • Default profile ID: Select the profile you added in Step 3e.
  8. Choose Save configuration.
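
If you prefer to script this step instead of using the console, the CloudFront API exposes the same operations. The following is a minimal sketch, not part of the original walkthrough: it uploads the public key with the AWS CLI; the profile and configuration can then be created with the analogous create-field-level-encryption-profile and create-field-level-encryption-config commands. The CallerReference value is an arbitrary idempotency token, and the EncodedKey placeholder must be replaced with the JSON-escaped contents of public_key.pem.

$ cat > public-key-config.json <<'EOF'
{
  "CallerReference": "fle-demo-key-1",
  "Name": "DemoPublicKey",
  "EncodedKey": "<contents of public_key.pem, with newlines escaped as \n>"
}
EOF
$ aws cloudfront create-public-key --public-key-config file://public-key-config.json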

4. Launch the CloudFormation stack

Launch the sample application by using a CloudFormation template that automates the provisioning process.

The template takes the following input parameters:

  • ProviderID – Enter the Provider name you assigned in Step 3e. The ProviderID is used in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
  • PublicKeyName – Enter the Key Name you assigned in Step 3b. This name is assigned to the public key in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
  • PrivateKeySSMPath – Leave as the default: /cloudfront/field-encryption-sample/private-key
  • ArtifactsBucket – The S3 bucket with artifact files (the staged zip file with the app code). Leave as the default if deploying in us-east-1.
  • ArtifactsPrefix – The path in the S3 bucket containing artifact files. Leave as the default if deploying in us-east-1.

To finish creating the CloudFormation stack (an equivalent AWS CLI invocation is sketched after these steps):

  1. Choose Next on the Select Template page, enter the input parameters, and choose Next.
    Note: The Artifacts configuration needs to be updated only if you are deploying outside of us-east-1 (US East [N. Virginia]). See Step 1 for artifact staging instructions.
  2. On the Options page, accept the defaults and choose Next.
  3. On the Review page, confirm the details, choose the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. (The stack will be created in approximately 15 minutes.)
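
Equivalently, you could launch the stack from the AWS CLI. The following is a sketch only: the template URL is a placeholder for wherever you have staged the sample template, and the parameter values echo the examples used earlier in this post.

$ aws cloudformation create-stack \
    --stack-name FLE-Sample-App \
    --template-url https://s3.amazonaws.com/<artifacts-bucket>/<artifacts-prefix>/<template-file> \
    --parameters ParameterKey=ProviderID,ParameterValue=FLEDemo \
                 ParameterKey=PublicKeyName,ParameterValue=DemoPublicKey \
    --capabilities CAPABILITY_IAM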

5. Add the field-level encryption configuration to the CloudFront distribution

With the CloudFormation stack provisioned, associate the field-level encryption configuration with the CloudFront distribution it created:

    1. In the Outputs section of the FLE-Sample-App stack, look for CloudFrontDistribution and click the URL to open the CloudFront console.
    2. Choose Behaviors, choose the Default (*) behavior, and then choose Edit.
    3. For Field-level Encryption Config, choose the configuration you created in Step 3g.
      Screenshot of editing the default cache behavior
    4. Choose Yes, Edit.
    5. While still in the CloudFront distribution configuration, choose the General tab, choose Edit, scroll down to Distribution State, and change it to Enabled.
    6. Choose Yes, Edit.
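
Distribution changes take several minutes to propagate, so you may want to poll the distribution status from the AWS CLI until it reports Deployed. The distribution ID below is a placeholder; you can read the real value from the CloudFront console or the stack outputs.

$ aws cloudfront get-distribution --id <distribution-id> --query 'Distribution.Status' --output text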

6. Store the RSA private key in the Parameter Store

In this step, you store the private key in the EC2 Systems Manager Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value. For more information about AWS KMS, see the AWS Key Management Service Developer Guide. You will need a working installation of the AWS CLI to complete this step.

  1. Store the private key in the Parameter Store with the AWS CLI by running the following command. You will find the <KMSKeyID> value in the KMSKeyID entry of the CloudFormation stack Outputs; substitute it for the placeholder in the following command.
    $ aws ssm put-parameter --type "SecureString" --name /cloudfront/field-encryption-sample/private-key --value file://private_key.pem --key-id "<KMSKeyID>"
    
    ------------------
    |  PutParameter  |
    +----------+-----+
    |  Version |  1  |
    +----------+-----+

  2. Verify the parameter. Your private key material should be returned in the Value field by the following ssm get-parameter command. The key material has been truncated in the following output.
    $ aws ssm get-parameter --name /cloudfront/field-encryption-sample/private-key --with-decryption
    
    -----…
    
    |  Value  |  -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAwGRBGuhacmw+C73kM6Z…

    Notice that we use the --with-decryption argument in this command; it returns the private key as cleartext.

    This completes the sample application deployment. Next, we show you how to see field-level encryption in action.

  3. Delete the private key from local storage. On Linux, for example, you can use the shred command to securely delete the private key material from your workstation, as shown below. You may also wish to store the private key material in AWS CloudHSM or another protected location suitable for your security requirements. For production implementations, you should also implement key rotation policies.
    $ shred -zvu -n  100 private*.pem
    
    shred: private_encrypted_key.pem: pass 1/101 (random)...
    shred: private_encrypted_key.pem: pass 2/101 (dddddd)...
    shred: private_encrypted_key.pem: pass 3/101 (555555)...
    ….

Test the sample application

Use the following steps to test the sample application with field-level encryption:

  1. Open the sample application in your web browser by clicking the ApplicationURL link in the CloudFormation stack Outputs (for example, https://d199xe5izz82ea.cloudfront.net/prod/). Note that it may take several minutes for the CloudFront distribution to reach the Deployed status from the previous step, during which time you may not be able to access the sample application.
  2. Fill out and submit the HTML form on the page:
    1. Complete the three form fields: Full Name, Email Address, and Phone Number.
    2. Choose Submit.
      Screenshot of completing the sample application form
      Notice that the application response includes the form values. The phone number is returned as ciphertext encrypted with your public key. This ciphertext has been stored in DynamoDB.
      Screenshot of the phone number as ciphertext
  3. Execute the Lambda decryption function to download the ciphertext from DynamoDB and decrypt the phone number using the private key (a CLI sketch of the equivalent data retrieval follows these steps):
    1. In the CloudFormation stack Outputs, locate DecryptFunction and click the URL to open the Lambda console.
    2. Configure a test event using the “Hello World” template.
    3. Choose the Test button.
  4. View the encrypted and decrypted phone number data.
    Screenshot of the encrypted and decrypted phone number data
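
If you want to inspect the stored data outside the console, the same retrieval steps can be sketched with the AWS CLI. The DynamoDB table name below is a placeholder (read the actual name from the stack resources); note that decrypting the field-level encryption ciphertext itself is performed by the sample application’s Lambda function.

$ aws dynamodb scan --table-name <sample-data-table>
$ aws ssm get-parameter --name /cloudfront/field-encryption-sample/private-key --with-decryption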

Summary

In this blog post, we showed you how to use CloudFront field-level encryption to encrypt sensitive data at edge locations and help prevent access from unauthorized systems. The source code for this solution is available on GitHub. For additional information about field-level encryption, see the documentation.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the CloudFront forum.

– Alex and Cameron