Tag Archives: AWS security

AWS Security Profile: Byron Cook, Director of the AWS Automated Reasoning Group

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-byron-cook-director-aws-automated-reasoning-group/

Author


Byron Cook leads the AWS Automated Reasoning Group, which automates proof search in mathematical logic and builds tools that provide AWS customers with provable security. Byron has pushed boundaries in this field, delivered real-world applications in the cloud, and fostered a sense of community amongst its practitioners. In recognition of Byron’s contributions to cloud security and automated reasoning, the UK’s Royal Academy of Engineering elected him as one of 7 new Fellows in computing this year.

I recently sat down with Byron to discuss his new Fellowship, the work that it celebrates, and how he and his team continue to use automated reasoning in new ways to provide higher security assurance for customers in the AWS cloud.

Congratulations, Byron! Can you tell us a little bit about the Royal Academy of Engineering, and the significance of being a Fellow?

Thank you. I feel very honored! The Royal Academy of Engineering is focused on engineering in the broad sense; for example, aeronautical, biomedical, materials, etc. I’m one of only 7 Fellows elected this year who specialize in computing or logic, which makes the announcement especially meaningful.

As for what the Royal Academy of Engineering is: the UK has Royal Academies for key disciplines such as music, drama, etc. The Royal Academies focus financial support and recognition on these fields, and give each one a home and a common meeting place. The Royal Academy of Music, for example, is near Regent’s Park in West London. The Royal Academy of Engineering’s building is in Carlton House Terrace, one of the most exclusive locations in central London, near Pall Mall and St. James’s Park. I’ve been to a number of lectures and events in that space. For example, it’s where I spoke ten years ago when I was the recipient of the Roger Needham prize. Some examples of previously elected Fellows include Sir Frank Whittle, who invented the jet engine; radar pioneer Sir George MacFarlane; and Sir Tim Berners-Lee, who developed the World Wide Web.

Can you tell us a little bit about why you were selected for the award?

The letter I received from the Royal Academy says it better than I could say myself:

“Byron Cook is a world-renowned leader in the field of formal verification. For over 20 years Byron has worked to bring this field from academic hypothesis to mechanised industrial reality. Byron has made major research contributions, built influential tools, led teams that operationalised formal verification activities, and helped establish connections between others that have dramatically accelerated growth of the area. Byron’s tools have been applied to a wide array of topics, e.g. biological systems, computer operating systems, programming languages, and security. Byron’s Automated Reasoning Group at Amazon is leading the field to even greater success”.

Formal verification is the one term here that may be foreign to you, so perhaps I should explain. Formal verification is the use of mathematical logic to prove properties of systems. Euclid, for example, used formal verification in ~300 BC to prove that the Pythagorean theorem holds for all possible right-angled triangles. Today we are using formal verification to prove things about all of the possible configurations that a computer program might reach. When I founded Amazon’s Automated Reasoning Group, I named it that because my ambition was to automate all of the reasoning performed during formal verification.

Can you give us a bit of detail about some of the “research contributions and tools” mentioned in the text from Royal Academy of Engineering?

Probably my best-known work before joining Amazon was on the Terminator tool. Terminator was designed to reason at compile-time about what a given computer program would eventually do when running in production. For example, “Will the program eventually halt?” This is the famous “Halting problem,” proved undecidable in the 1930s. The Terminator tool piloted a new approach to the problem which is popular now, based on the idea of incrementally improving the best guess for a proof based on failed proof attempts. This was the first known approach capable of scaling termination proving to industrial problems. My colleagues and I used Terminator to find bugs in device drivers that could cause operating systems to become unresponsive. We found many bugs in device drivers that ran keyboards, mice, network devices, and video cards. The Terminator tool was also the basis of BioModelAnalyzer. It turns out that there’s a connection between diseases like leukemia and the Halting problem: leukemia is a termination bug in the genetic-regulatory pathways in your blood. You can think of it in the same way you think of a device driver that’s stuck in an infinite loop, causing your computer to freeze. My tools helped answer fundamental questions that no tool could solve before. Several pharmaceutical companies use BioModelAnalyzer today to understand disease and find new treatment options. And these days, there is an annual international competition with many termination provers that are much better than the Terminator. I think that this is what the Royal Academy is talking about when they say I moved the area from “academic hypothesis to mechanized industrial reality.”

I have also worked on problems related to the question of P=NP, the most famous open problem in computing theory. From 2000-2006, I built tools that made NP feel equal to P in certain limited circumstances to try and understand the problem better. Then I focused on circumstances that aligned with important industrial problems, like proving the absence of bugs in microprocessors, flight control software, telecommunications systems, and railway control systems. These days the tools in this space are incredibly powerful. You should check out the software tools CVC4 or Z3.

And, of course, there’s my work with the Automated Reasoning Group, where I’ve built a team of domain experts that develop and apply formal verification tools to a wide variety of problems, helping make the cloud more secure. We have built tools that automatically reason about the semantics of policies, networks, cryptography, virtualization, etc. We reason about the implementation of Amazon Web Services (AWS) itself, and we’ve built tools that help customers prove the correctness of their AWS-based implementations.

Could you go into a bit more detail about how this work connects to Amazon and its customers?

AWS provides cloud services globally. Cloud is shorthand for on-demand access to IT resources such as compute, storage, and analytics via the Internet with pay-as-you-go pricing. AWS has a wide variety of customers, ranging from individuals to the largest enterprises, and practically all industries. My group develops mathematical proof tools that help make AWS more secure, and helps AWS customers understand how to build in the cloud more securely.

I first became an AWS customer myself when building BioModelAnalyzer. AWS allowed those of us working on the project to solve major scientific challenges (see this Nature Scientific Report for an example) using very large datacenters, but without having to buy the machines, maintain them, or maintain the rooms they sat in and the A/C systems that kept them cool. I was also able to easily provide our customers with access to the tool via the cloud: I just pointed people to the endpoint on the internet and, presto, they were using the tool. About 5 years before developing BioModelAnalyzer, I was developing proof tools for device drivers and I gave a demo of the tool to my executive leadership. At the end of the demo, I was asked if 5,000 machines would help us do more proofs. Computationally, the answer was an obvious “yes,” but then I thought a minute about the amount of overhead required to manage a fleet of 5,000 machines and reluctantly replied “No, but thank you very much for the offer!” With AWS, it’s not even a question. Anyone with an Amazon account can provision 5,000 machines for practically nothing. In less than 5 minutes, you can be up and running and computing with thousands of machines.

What I love about working at AWS is that I can focus a very small team on proving the correctness of some aspect of AWS (for example, the cryptography) and, because of the size and importance of the customer base, we make much of the world meaningfully more secure. Just to name a few examples: s2n (the Amazon TLS implementation); the AWS Key Management Service (KMS), which allows customers to securely store crypto keys; and networking extensions to the IoT operating system Amazon FreeRTOS, which customers use to link cloud to IoT devices, such as robots in factories. We also focus on delivering service features that help customers prove the correctness of their AWS-based implementations. One example is Tiros, which powers a network reachability feature in Amazon Inspector. Another example is Zelkova, which powers features in services such as Amazon S3, AWS Config, and AWS IoT Device Defender.

When I think of mathematical logic I think of obscure theory and messy blackboards, not practical application. But it sounds like you’ve managed to balance the tension between theory and practical industrial problems?

I think that this is a common theme that great scientists don’t often talk about. Alan Turing, for example, did his best work during the war. John Snow, who made fundamental contributions to our understanding of germs and epidemics, did his greatest work while trying to figure out why people were dying in the streets of London. Christopher Strachey, one of the founders of our field, wrote:

“It has long been my personal view that the separation of practical and theoretical work is artificial and injurious. Much of the practical work done in computing, both in software and in hardware design, is unsound and clumsy because the people who do it have not any clear understanding of the fundamental design principles in their work. Most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing.”

Throughout my career, I’ve been at the intersection of practical and theoretical. In the early days, this was driven by necessity: I had two children during my PhD and, frankly, I needed the money. But I soon realized that my deep connection to real engineering problems was an advantage and not a disadvantage, and I’ve tried through the rest of my career to stay in that hot spot of commercially applicable problems while tackling abstract mathematical topics.

What’s next for you? For the Automated Reasoning Group? For your scientific field?

The Royal Academy of Engineering kindly said that I’ve brought “this field from academic hypothesis to mechanized industrial reality.” That’s perhaps true, but we are very far from done: it’s not yet an industrial standard. The full power of automated reasoning is not yet available to everyone because today’s tools are either difficult to use or weak. The engineering challenge is to make them both powerful and easy to use. Once that happens, I believe they’ll become a key part of every software engineer’s daily routine. What excites me is that I believe Amazon has a lot to teach me about how to operationalize the impossible. That’s what Amazon has done over and over again. That’s why I’m at Amazon today. I want to see these proof techniques operating automatically at Amazon scale.

Links:
Provable security webpage
Lecture: Fundamentals for Provable Security at AWS
Lecture: The evolution of Provable Security at AWS
Lecture: Automating compliance verification using provable security
Lecture: Byron speaks about Terminator at University of Colorado
https://biomodelanalyzer.org/

If you have feedback about this post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Updates to Serverless Architectural Patterns and Best Practices

Post Syndicated from Drew Dennis original https://aws.amazon.com/blogs/architecture/updates-to-serverless-architectural-patterns-and-best-practices/

As we sail past the halfway point between re:Invent 2018 and re:Invent 2019, I’d like to revisit some of the recent serverless announcements we’ve made. These are all complementary to the patterns discussed in the re:Invent architecture track’s Serverless Architectural Patterns and Best Practices session.

AWS Event Fork Pipelines

AWS Event Fork Pipelines was announced in March 2019. Many customers use asynchronous event-driven processing in their serverless applications to decouple application components and address high concurrency needs. And in doing so, they often find themselves needing to backup, search, analyze, or replay these asynchronous events. That is exactly what AWS Event Fork Pipelines aims to achieve. You can plug them into a new or existing SNS topic used by your application and immediately address retention and compliance needs, gain new business insights, or even improve your application’s disaster recovery abilities.

AWS Event Fork Pipelines is a suite of three applications. The first application addresses event storage and backup needs by writing all events to an S3 bucket where they can be queried with services like Amazon Athena. The second is a search and analytics pipeline that delivers events to a new or existing Amazon ES domain, enabling search and analysis of your events. Finally, the third application is an event replay pipeline that can be used to reprocess messages should a downstream failure occur in your application. AWS Event Fork Pipelines is packaged as AWS Serverless Application Model (SAM) templates and is available in the AWS Serverless Application Repository (SAR). Check out our example e-commerce application on GitHub.

Amazon API Gateway Serverless Developer Portal

If you publish APIs for developers allowing them to build new applications and capabilities with your data, you understand the need for a developer portal. Also, in March 2019, we announced some significant upgrades to the API Gateway Serverless Developer Portal. The portal’s front end is written in React and is designed to be fully customizable.

The API Gateway Serverless Developer Portal is also available in GitHub and the AWS SAR. As you can see from the architecture diagram below, it is integrated with Amazon Cognito User Pools to allow developers to sign up, receive an API Key, and register for one or more of your APIs. You can now also enable administrative scenarios from your developer portal by logging in as users belonging to the portal’s Admin group, which is created when the portal is initially deployed to your account. For example, you can control which APIs appear in a customer’s developer portal, enable SDK downloads, solicit developer feedback, and even publish updates for APIs that have been recently revised.

AWS Lambda with Amazon Application Load Balancer (ALB)

Customers have been building serverless microservices with AWS Lambda and Amazon API Gateway for quite a while. At re:Invent 2018, during Dr. Werner Vogels’s keynote, a new approach to serverless microservices was announced: Lambda functions as ALB targets.

ALB’s support for Lambda targets gives customers the ability to deploy serverless code behind an ALB, alongside servers, containers, and IP addresses. With this feature, ALB path and host-based routing can be used to direct incoming requests to Lambda functions. Also, ALB can now provide an entry point for legacy applications to take on new serverless functionality, and enable migration scenarios from monolithic legacy server or container-based applications.

Use cases for Lambda targets for ALB include adding new functionality to an existing application that already sits behind an ALB. This could be request monitoring by sending HTTP headers to Elasticsearch clusters or implementing controls that manage cookies. Check out our demo of this new feature. For additional details, take a look at the feature’s documentation.
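
If you want to wire this up programmatically rather than through the console, the sketch below shows the general flow with boto3: create a Lambda-type target group, give Elastic Load Balancing permission to invoke the function, then register the function and route a path pattern to it. The function ARN, listener ARN, and names are placeholders, not values from this post.

# Rough sketch (boto3): put a Lambda function behind an existing ALB listener.
# The function ARN, listener ARN, and names below are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-alb-handler"
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc123/def456"

# 1. Create a target group whose target type is "lambda".
target_group = elbv2.create_target_group(Name="my-lambda-targets", TargetType="lambda")
target_group_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

# 2. Allow Elastic Load Balancing to invoke the function on behalf of this target group.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="allow-alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=target_group_arn,
)

# 3. Register the function as the target and route a path pattern to it.
elbv2.register_targets(TargetGroupArn=target_group_arn, Targets=[{"Id": function_arn}])
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/serverless/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
)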

Security Overview of AWS Lambda Whitepaper

Finally, I’d be remiss if I didn’t point out the great work many of my colleagues have done in releasing the Security Overview of AWS Lambda Whitepaper. It is a succinct and enlightening read for anyone wishing to better understand the Lambda runtime environment, function isolation, or data paths taken for payloads sent to the Lambda service during synchronous and asynchronous invocations. It also has some great insight into compliance, auditing, monitoring, and configuration management of your Lambda functions. A must read for anyone wishing to better understand the overall security of AWS serverless applications.

I look forward to seeing everyone at re:Invent 2019 for more exciting serverless announcements!

About the author

Drew Dennis is a Global Solutions Architect with AWS based in Dallas, TX. He enjoys all things Serverless and has delivered the Architecture Track’s Serverless Patterns and Best Practices session at re:Invent the past three years. Today, he helps automotive companies with autonomous driving research on AWS, connected car use cases, and electrification.

Provable security podcast: automated reasoning’s past, present, and future with Moshe Vardi

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/provable-security-podcast-automated-reasonings-past-present-and-future-with-moshe-vardi/

AWS just released the first podcast of a new miniseries called Provable Security: Conversations on Next Gen Security. We published a podcast on provable security last fall, and, due to high customer interest, we decided to bring you a regular peek into this AWS initiative. This series will explore the unique intersection between academia and industry in the cloud security space. Specifically, the miniseries will cover how the traditionally academic field of automated reasoning is being applied at AWS at scale to help provide higher assurances for our customers, regulators, and the broader cloud industry. We’ll talk to individuals whose minds helped shape the history of automated reasoning, as well as learn from engineers and scientists who are applying automated reasoning to help solve pressing security and privacy challenges in the cloud.

This first interview is with Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology. Moshe describes the history of logic, automated reasoning, and formal verification. He discusses modern day applications in software and how principles of automated reasoning underlie core aspects of computer science, such as databases. The podcast interview highlights Moshe’s standout contributions to the formal verification and automated reasoning space, as well as the number of awards he’s received for his work. You can learn more about Moshe Vardi here.

Byron Cook, Director of the AWS Automated Reasoning Group, interviews Moshe and will be featured throughout the miniseries. Byron is leading the provable security initiative at AWS, which is a collection of technologies that provide higher security assurance to customers by giving them a deeper understanding of their cloud architecture.

You can listen to or download the podcast above, or visit this link. We’ve also included links below to many of the technologies and references Moshe discusses in his interview.

We hope you enjoy the podcast and this new miniseries! If you have feedback, let us know in the Comments section below.

Automated reasoning public figures:

Automated techniques and algorithms:

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Content Strategist at AWS working with the Automated Reasoning Group.

A cybersecurity strategy to thwart advanced attackers

Post Syndicated from Tim Rains original https://aws.amazon.com/blogs/security/a-cybersecurity-strategy-to-thwart-advanced-attackers/

Today, many Chief Information Security Officers and cybersecurity practitioners are looking for an effective cybersecurity strategy that will help them achieve measurably better security for their organizations. AWS has released two new whitepapers to help customers plan and implement a strategy that has helped many organizations protect, detect, and respond to modern-day attacks.

  • Breaking Intrusion Kill Chains with AWS provides context and shows you, in detail, how to mitigate advanced attackers’ favorite strategies and tactics using the AWS cloud platform. It also offers advice on how to measure the effectiveness of this approach.
  • Breaking Intrusion Kill Chains with AWS Reference Material contains a detailed example of how AWS services, features, functionality, and AWS Partner offerings can be used together to safeguard your organization’s data and cloud infrastructure. This paper will save you time and effort by providing you with a comprehensive AWS security control mapping to each phase of advanced attacks, which you’d otherwise have to do on your own.

    This document provides a list of some of the key AWS security controls, organized in an easy-to-understand format, and it includes a mapping to the AWS Cloud Adoption Framework (CAF). Many organizations use the CAF to build a comprehensive approach to cloud computing across their organization. If your organization uses the CAF, and you decide to implement some or all of the controls described in Breaking Intrusion Kill Chains with AWS, then the reference material in this whitepaper can be used to cross-reference with your other CAF efforts, potentially increasing your ROI.

For a high-level introduction, check out my webinar recording. In it, I discuss cybersecurity strategies, explain the framework that’s used, and talk about how to implement it on AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tim Rains

Tim is the Regional Leader for Security and Compliance in Europe, Africa, and the Middle East for AWS. He helps federal, regional and local governments, in addition to non-profit organizations, education and health care customers with their security and compliance needs. Tim is a frequent speaker at AWS and industry events. Prior to joining AWS, Tim held a variety of executive-level cybersecurity strategy positions at global companies.

How to visualize Amazon GuardDuty findings: serverless edition

Post Syndicated from Ben Romano original https://aws.amazon.com/blogs/security/how-to-visualize-amazon-guardduty-findings-serverless-edition/

Note: This blog provides an alternate solution to Visualizing Amazon GuardDuty Findings, in which the authors describe how to build an Amazon Elasticsearch Service-powered Kibana dashboard to ingest and visualize Amazon GuardDuty findings.

Amazon GuardDuty is a managed threat detection service powered by machine learning that can monitor your AWS environment with just a few clicks. GuardDuty can identify threats such as unusual API calls or potentially unauthorized users attempting to access your servers. Many customers also like to visualize their findings in order to generate additional meaningful insights. For example, you might track resources affected by security threats to see how they evolve over time.

In this post, we provide a solution to ingest, process, and visualize your GuardDuty finding logs in a completely serverless fashion. Serverless applications automatically run and scale in response to events you define, rather than requiring you to provision, scale, and manage servers. Our solution covers how to build a pipeline that ingests findings into Amazon Simple Storage Service (Amazon S3), transforms their nested JSON structure into tabular form using Amazon Athena and AWS Glue, and creates visualizations using Amazon QuickSight. We aim to provide both an easy-to-implement and cost-effective solution for consuming and analyzing your GuardDuty findings, and to more generally showcase a repeatable example for processing and visualizing many types of complex JSON logs.

Many customers already maintain centralized logging solutions using Amazon Elasticsearch Service (Amazon ES). If you want to incorporate GuardDuty findings with an existing solution, we recommend referencing this blog post to get started. If you don’t have an existing solution or previous experience with Amazon ES, if you prefer to use serverless technologies, or if you’re familiar with more traditional business intelligence tools, read on!

Before you get started

To follow along with this post, you’ll need to enable GuardDuty in order to start generating findings. See Setting Up Amazon GuardDuty for details if you haven’t already done so. Once enabled, GuardDuty will automatically generate findings as events occur. If you have public-facing compute resources in the same region in which you’ve enabled GuardDuty, you may soon find that they are being scanned quite often. All the more reason to continue reading!

You’ll also need Amazon QuickSight enabled in your account for the visualization sections of this post. You can find instructions in Setting Up Amazon QuickSight.

Architecture from end to end

 

Figure 1: Complete architecture from findings to visualization

Figure 1 highlights the solution architecture, from finding generation all the way through final visualization. The steps are as follows:

  1. Deliver GuardDuty findings to Amazon CloudWatch Events
  2. Push GuardDuty Events to S3 using Amazon Kinesis Data Firehose
  3. Use AWS Lambda to reorganize S3 folder structure
  4. Catalog your GuardDuty findings using AWS Glue
  5. Configure Views with Amazon Athena
  6. Build a GuardDuty findings dashboard in Amazon QuickSight

Below, we’ve included an AWS CloudFormation template to launch a complete ingest pipeline (Steps 1-4) so that we can focus this post on the steps dedicated to building the actual visualizations (Steps 5-6). We cover steps 1-4 briefly in the next section to provide context, and we provide links to the pertinent pages in the documentation for those of you interested in building your own pipeline.
 
Select this image to open a link that starts building the CloudFormation stack

Ingest (Steps 1-4): Get Amazon GuardDuty findings into Amazon S3 and AWS Glue Data Catalog

 

Figure 2: In this section, we’ll cover the services highlighted in blue

Step 1: Deliver GuardDuty findings to Amazon CloudWatch Events

GuardDuty integrates with Amazon CloudWatch Events and can deliver findings to it. To set this up manually, follow the instructions in Creating a CloudWatch Events Rule and Target for GuardDuty.
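
If you’d rather create this rule from code than in the console, a minimal sketch using boto3 might look like the following. The rule name, delivery stream ARN, and IAM role ARN are placeholders for resources you’ve already created.

# Rough sketch (boto3): a CloudWatch Events rule that matches GuardDuty findings
# and forwards them to a Kinesis Data Firehose delivery stream.
# The rule name, stream ARN, and role ARN are placeholders.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="guardduty-findings-to-firehose",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-findings-to-firehose",
    Targets=[{
        "Id": "firehose-delivery",
        "Arn": "arn:aws:firehose:us-east-1:123456789012:deliverystream/guardduty-findings",
        # IAM role that allows CloudWatch Events to put records into the stream.
        "RoleArn": "arn:aws:iam::123456789012:role/events-to-firehose",
    }],
)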

Step 2: Push GuardDuty events to Amazon S3 using Kinesis Data Firehose

Amazon CloudWatch Events can write to a Kinesis Data Firehose delivery stream to store your GuardDuty events in S3, where you can use AWS Lambda, AWS Glue, and Amazon Athena to build the queries you’ll need to visualize the data. You can create your own delivery stream by following the instructions in Creating a Kinesis Data Firehose Delivery Stream and then adding it as a target for CloudWatch Events.
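
As a rough sketch of what creating that delivery stream could look like in boto3 (the role ARN, bucket ARN, and names are placeholders; the CloudFormation template provisions an equivalent stream for you):

# Rough sketch (boto3): a Kinesis Data Firehose delivery stream that writes
# GuardDuty events to S3. The role ARN, bucket ARN, and names are placeholders.
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="guardduty-findings",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
        "BucketARN": "arn:aws:s3:::my-guardduty-findings-bucket",
        "Prefix": "raw/",
        # Flush to S3 every 5 MB or 300 seconds, whichever comes first.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
        "CompressionFormat": "UNCOMPRESSED",
    },
)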

Step 3: Use AWS Lambda to reorganize Amazon S3 folder structure

Kinesis Data Firehose will automatically create a datetime-based file hierarchy to organize the findings as they come in. Due to the variability of the GuardDuty finding types, we recommend reorganizing the file hierarchy with a folder for each finding type, with separate datetime subfolders for each. This will make it easier to target findings that you want to focus on in your visualization. The provided AWS CloudFormation template utilizes an AWS Lambda function to rewrite the files in a new hierarchy as new files are written to S3. You can use the code provided in it along with Using AWS Lambda with S3 to trigger your own function that reorganizes the data. Once the Lambda function has run, the S3 bucket structure should look similar to the structure we show in figure 3.
 

Figure 3: Sample S3 bucket structure
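
The following is only an illustrative sketch of such a reorganization function, not the code from the CloudFormation template. It assumes the function is triggered by S3 object-created events and that each line of the delivered object is one JSON-encoded GuardDuty CloudWatch event; bucket names and prefixes are placeholders.

# Illustrative sketch only (the CloudFormation template ships its own function):
# copy newly delivered findings into a prefix per finding type.
# Assumes an S3 object-created trigger and newline-delimited JSON records.
import json
import os
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        grouped = {}  # new key -> findings of that type from this object
        for line in filter(None, body.splitlines()):
            finding = json.loads(line)
            # e.g. "Recon:EC2/PortProbeUnprotectedPort" -> "recon_ec2_portprobeunprotectedport"
            finding_type = finding["detail"]["type"].replace(":", "_").replace("/", "_").lower()
            date_path = finding["time"][:10].replace("-", "/")  # e.g. "2019/07/01"
            new_key = "findings/{}/{}/{}".format(finding_type, date_path, os.path.basename(key))
            grouped.setdefault(new_key, []).append(line)

        for new_key, lines in grouped.items():
            s3.put_object(Bucket=bucket, Key=new_key, Body="\n".join(lines).encode("utf-8"))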

Step 4: Catalog the GuardDuty findings using AWS Glue

With the reorganized findings stored in S3, use an AWS Glue crawler to scan and catalog each finding type. The CloudFormation template we provided schedules the crawler to run once a day. You can also run it on demand as needed. To build your own crawler, refer to Cataloging Tables with a Crawler. Assuming GuardDuty has generated findings in your account, you can navigate to the GuardDuty findings database in the AWS Glue Data Catalog. It should look something like figure 4:
 

Figure 4: List of finding type tables in the AWS Glue Catalog

Note: Because AWS Glue crawlers will attempt to combine similar data into one table, you might need to generate sample findings to ensure enough variability for each finding type to have its own table. If you only intend to build your dashboard from a small subset of finding types, you can opt to just edit the crawler to have multiple data sources and specify the folder path for each desired finding type.
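
If you decide to build the crawler yourself instead of using the template, a minimal boto3 sketch could look like this; the role, database, bucket path, and schedule are placeholders.

# Rough sketch (boto3): create and run a crawler over the reorganized findings.
# The role, database, bucket path, and schedule are placeholders.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="guardduty-findings-crawler",
    Role="arn:aws:iam::123456789012:role/glue-guardduty-crawler",
    DatabaseName="guardduty_findings",
    Targets={"S3Targets": [{"Path": "s3://my-guardduty-findings-bucket/findings/"}]},
    Schedule="cron(0 1 * * ? *)",  # once a day
)

glue.start_crawler(Name="guardduty-findings-crawler")  # or run it on demand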

Explore the table structure

Before moving on to the next step, take some time to explore the schema structure of the tables. Selecting one of the tables will bring you to a page that looks like what’s shown in figure 5.
 

Figure 5: Schema information for a single finding table

You should see that most of the columns contain basic information about each finding, but there’s a column named detail that is of type struct. Select it to expand, as shown in figure 6.
 

Figure 6: The “detail” column expanded

Ah, this is where the interesting information is tucked away! The tables for each finding may differ slightly, but in all cases the detail column will hold the bulk of the information you’ll want to visualize. See GuardDuty Active Finding Types for information on what you should expect to find in the logs for each finding type. In the next step, we’ll focus on unpacking detail to prepare it for visualization!

Process (Step 5): Unpack nested JSON and configure views with Amazon Athena

 

Figure 7: In this section, we’ll cover the services highlighted in blue

Note: This step picks up where the CloudFormation template finishes

Explore the table structure (again) in the Amazon Athena console

Begin by navigating to Athena from the AWS Management Console. Once there, you should see a drop-down menu with a list of databases. These are the same databases that are available in the AWS Glue Data Catalog. Choose the database with your GuardDuty findings and expand a table.
 

Figure 8: Expanded table in the Athena console

This should look very similar to the table information you explored in step 4, including the detail struct!

You’ll need a method to unpack the struct in order to effectively visualize the data. There are many methods and tools to approach this problem. One that we recommend (and will show) is to use SQL queries within Athena to construct tabular views. This approach will allow you to push the bulk of the processing work to Athena. It will also allow you to simplify building visualizations when using Amazon QuickSight by providing a more conventional tabular format.

Extract details for use in visualization using SQL

The following examples contain SQL statements that will extract the fields you need from the detail struct of the Recon:EC2/PortProbeUnprotectedPort finding to build the Amazon QuickSight dashboard we showcase in the next section. The examples also cover most of the operations you’ll need to work with the elements found in GuardDuty findings (such as deeply nested data with lists), and they serve as a good starting point for constructing your own custom queries. In general, you’ll want to traverse the nested layers (e.g. root.detail.service.count) and create new records for each item in an embedded list that you want to target using the UNNEST function. See this blog for even more examples of constructing queries on complex JSON data using Amazon Athena.

Simply copy the SQL statements that you want into the Athena query field to build the port_probe_geo and affected_instances views.

Note: If your account has yet to generate Recon:EC2/PortProbeUnprotectedPort findings, you can generate sample findings to follow along.
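
If you want to generate those sample findings from code, a small boto3 sketch like the following should do it (it assumes a single GuardDuty detector exists in the current region):

# Rough sketch (boto3): generate sample findings of the type used below.
# Assumes a single GuardDuty detector exists in the current region.
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

guardduty.create_sample_findings(
    DetectorId=detector_id,
    FindingTypes=["Recon:EC2/PortProbeUnprotectedPort"],
)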


CREATE OR REPLACE VIEW "port_probe_geo" AS

WITH getportdetails AS (
    SELECT id, portdetails
    FROM by_finding_type
    CROSS JOIN UNNEST(detail.service.action.portProbeAction.portProbeDetails) WITH ORDINALITY AS p (portdetails, portdetailsindex)
)

SELECT 
    root.id AS id,
    root.region AS region,
    root.time AS time,
    root.detail.type AS type,
    root.detail.service.count AS count, 
    portdetails.localportdetails.port AS localport, 
    portdetails.localportdetails.portname AS localportname, 
    portdetails.remoteipdetails.geolocation.lon AS longitude, 
    portdetails.remoteipdetails.geolocation.lat AS latitude, 
    portdetails.remoteipdetails.country.countryname AS country, 
    portdetails.remoteipdetails.city.cityname AS city 

FROM 
    by_finding_type  as root, getPortDetails
    
WHERE 
    root.id = getportdetails.id;

CREATE OR REPLACE VIEW "affected_instances" AS

SELECT 
    max(root.detail.service.count) AS count,
    date_parse(root.time,'%Y-%m-%dT%H:%i:%sZ') as time,
    root.detail.resource.instancedetails.instanceid

FROM 
    recon_ec2_portprobeunprotectedport  AS root

GROUP BY  
    root.detail.resource.instancedetails.instanceid, 
    time
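
The views (and any follow-up queries) can also be run through the Athena API instead of the console. Here’s a minimal boto3 sketch that queries the port_probe_geo view defined above; the database name and query-results bucket are placeholders, and the same call can run the CREATE OR REPLACE VIEW statements themselves.

# Rough sketch (boto3): query the port_probe_geo view through the Athena API.
# The database name and query-results bucket are placeholders; the same call
# can run the CREATE OR REPLACE VIEW statements above.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT country, count(*) AS probes FROM port_probe_geo GROUP BY country",
    QueryExecutionContext={"Database": "guardduty_findings"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-query-results/"},
)
print("Query execution id:", response["QueryExecutionId"])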

Visualize (Step 6): Build a GuardDuty findings dashboard in Amazon QuickSight

 

Figure 9: In this section we will cover the services highlighted in blue

Now that you’ve created tabular views using Athena, you can jump into Amazon QuickSight from the AWS Management Console and begin visualizing! If you haven’t already done so, enable Amazon QuickSight in your account by following the instructions for Setting Up Amazon QuickSight.

For this example, we’ll leverage the port_probe_geo view to build a geographic visualization and see the locations from which nefarious actors are launching port probes.

Creating an analysis

In the upper left-hand corner of the Amazon QuickSight console select New analysis and then New data set.
 

Figure 10: Create a new analysis

To utilize the views you built in the previous step, select Athena as the data source. Give your data source a name (in our example, we use “port probe geo”), and select the database that contains the views you created in the previous section. Then select Visualize.
 

Figure 11: Available data sources in Amazon QuickSight. Be sure to choose Athena!

 

Figure 12: Select the “port probe geo” data source you created in step 5

Viz time!

From the Visual types menu in the bottom left corner, select the globe icon to create a map. Then select the latitude and longitude geospatial coordinates. Choose count (with a max aggregation) for size. Finally, select localportname to break the data down by color.
 

Figure 13: A visual containing a map of port probe scans in Amazon QuickSight

Voila! A detailed map of your environment’s attackers!

Build out a dashboard

Once you like how everything looks, you can move on to adding more visuals to create a full monitoring dashboard.

To add another visual to the analysis, select Add and then Add visual.
 

Figure 14: Add another visual using the ‘Add’ option from the Amazon QuickSight menu bar

If the new visual will use the same dataset, then you can immediately start selecting fields to build it. If you want to create a visual from a different data set (our example dashboard below adds the affected_instances view), follow the Creating Data Sets guide to add a new data set. Then return to the current analysis and associate the data set with the analysis by selecting the pencil icon shown below and selecting Add data set.
 

Figure 15: Adding a new data set to your Amazon QuickSight analysis

Repeat this process until you’ve built out everything you need in your monitoring dashboard. Once it’s completed, you can publish the dashboard by selecting Share and then Publish dashboard.
 

Figure 16: Publish your dashboard using the “Share” option of the Amazon QuickSight menu

Here’s an example of a dashboard we created using the port_probe_geo and affected_instances views:
 

Figure 17: An example dashboard created using the “port_probe_geo” and “affected_instances” views

What does something like this cost?

To get an idea of the scale of the cost, we’ve provided a small pricing example (accurate as of the writing of this blog) that assumes 10,000 GuardDuty findings per month with an average payload size of 5KB.

Service | Pricing structure | Amount consumed | Total cost
Amazon CloudWatch Events | $1 per million events | 10,000 events | $0.01
Amazon Kinesis Data Firehose | $0.029 per GB ingested | 0.05 GB ingested | $0.00145
Amazon S3 | $0.023 per GB stored per month | 0.1 GB stored | $0.00230
AWS Lambda | First million invocations free | ~200 invocations | $0
Amazon Athena | $5 per TB scanned | 0.003 TB scanned (assumes 2 full data scans per day to refresh views) | $0.015
AWS Glue | $0.44 per DPU-hour (2 DPU minimum, 10 minute minimum) = $0.15 per crawler run | 30 crawler runs | $4.50
Total processing cost |  |  | $4.53

Oh, the joys of a consumption-based model: Less than five dollars per month for all of that processing!

From here, all that remains are your visualization costs using Amazon QuickSight. This pricing is highly dependent upon your number of users and their respective usage patterns. See the Amazon QuickSight pricing page for more specific details.

Summary

In this post, we demonstrated how you can ingest your GuardDuty findings into S3, process them with AWS Glue and Amazon Athena, and visualize with Amazon QuickSight. All serverless! Each portion of what we showed can be used in tandem or on its own for this or many other data sets. Go launch the template and get started monitoring your AWS environment!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Romano

Ben is a Solutions Architect in AWS supporting customers in their journey to the cloud with a focus on big data solutions. Ben loves to delight customers by diving deep on AWS technologies and helping them achieve their business and technology objectives.

Author

Jimmy Boyle

Jimmy is a Solutions Architect in AWS with a background in software development. He enjoys working with all things serverless because he doesn’t have to maintain infrastructure. Jimmy enjoys delighting customers to drive their business forward and design solutions that will scale as their business grows.

How to analyze AWS WAF logs using Amazon Elasticsearch Service

Post Syndicated from Tom Adamski original https://aws.amazon.com/blogs/security/how-to-analyze-aws-waf-logs-using-amazon-elasticsearch-service/

Log analysis is essential for understanding the effectiveness of any security solution. It can be valuable for day-to-day troubleshooting and also for your long-term understanding of how your security environment is performing.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules.

With the release of access to full AWS WAF logs, you now have the ability to view all of the logs generated by AWS WAF while it’s protecting your web applications. In addition, you can use Amazon Kinesis Data Firehose to forward the logs to Amazon Simple Storage Service (Amazon S3) for archival, and to Amazon Elasticsearch Service to analyze access to your web applications.

In this blog post, I’ll show you how you can analyze AWS WAF logs using Amazon Elasticsearch Service (Amazon ES). I’ll walk you through how to find out in near-real time which AWS WAF rules get triggered, why, and by which request. I’ll also show you how to create a historical view of your web applications’ access trends for long-term analysis.

Architecture overview

When enabled, AWS WAF sends logs to Amazon Kinesis Data Firehose. From there, you can decide what to do with them next. In my architecture, I’ll forward the relevant logs to Amazon ES for analysis and to Amazon S3 for long-term storage.
 

Figure 1: Architecture overview

In this blog, I will show you how to enable AWS WAF logging to Amazon ES in the following steps:

  1. Create an Amazon Elasticsearch Service domain.
  2. Configure Kinesis Data Firehose for log delivery.
  3. Enable AWS WAF logs on a web access control list (ACL) to send data to Kinesis Data Firehose.
  4. Analyze AWS WAF logs in Amazon ES.

Note: The region in which you should enable Kinesis Data Firehose must match the region where your AWS WAF web ACL is deployed. If you’re enabling logging for AWS WAF deployed on Amazon CloudFront, then Kinesis Data Firehose must be created in the Northern Virginia AWS Region.

Preparing Amazon Elasticsearch Service

If you don’t have your Amazon ES environment already running on AWS, you’ll need to create one. Start by navigating to the Elasticsearch Service from your AWS Console and choosing Create a new domain. There, you will be able to define the namespace for your Elasticsearch cluster and choose the version you’d like to use. In my setup, I’ll be using awswaf-logs as my namespace and version 6.3 of Elasticsearch.
 

Figure 2: Defining Amazon ES Domain

You can also decide on other parameters, like the number of nodes in your cluster or how to access the dashboards. See the Amazon Elasticsearch Service Documentation to find out more details about setting up your environment, or review this previous blog post, which goes deeper into Amazon ES setup.

When your environment is ready, you’ll be able to click on your namespace and find the link to your Kibana dashboard. Kibana is the data visualisation plugin for Elasticsearch that you’ll use to analyze your logs.
 

Figure 3: Identifying the Kibana management URL

While Elasticsearch can automatically classify most of the fields from the AWS WAF logs, you need to inform Elasticsearch how to interpret fields that have specific formatting. Therefore, before you start sending logs to Elasticsearch, you should create an index pattern template so Elasticsearch can distinguish AWS WAF logs and classify the fields correctly.

The pattern template I’m using below will be applied to all indexes beginning with the string awswaf-. That’s because in my setup all AWS WAF log index files created in Elasticsearch will always begin with the character string awswaf- followed by a date.

The pattern template is defining two fields from the logs. It will indicate to Elasticsearch that the httpRequest.clientIp field is using an IP address format and that the timestamp field is represented in epoch time. All the other log fields will be classified automatically.

In the template, I’m also setting the number of Elasticsearch shards that should be used for the index. Shards help with distributing large amount of data and their number depends on the size of the index—the larger the index the more shards it should be deployed across. I’m going to use only one shard since I expect my index to be less than 30 GB. For recommendations on sizing your shards appropriately, refer to Jon Handler’s blog post on using Amazon Elasticsearch Service.


PUT  _template/awswaf-logs
{
    "index_patterns": ["awswaf-*"],
    "settings": {
    "number_of_shards": 1
    },
    "mappings": {
      "waflog": {
        "properties": {
          "httpRequest": {
            "properties": {
              "clientIp": {
                "type": "keyword",
                "fields": {
                  "keyword": {
                    "type": "ip"
                  }
                }
              }
            }
          },
          "timestamp": {
            "type": "date",
            "format": "epoch_millis"
          }
      }
    }
  }
}

I’m going to apply this template to Elasticsearch through the Kibana developer interface (from the Dev Tools tab in the Kibana dashboard), but it can also be done through an API call to the Amazon ES endpoint.
 

Figure 4: Applying the index pattern template in Kibana
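
If you prefer the direct API call over Kibana Dev Tools, a small Python sketch like the one below sends the same template to the domain endpoint. The endpoint URL is a placeholder, and if your domain’s access policy requires IAM-signed requests you’ll also need to add SigV4 signing (for example, with the requests-aws4auth package).

# Rough sketch (Python): apply the same index template with a direct HTTP call.
# The endpoint is a placeholder; add SigV4 signing (e.g. requests-aws4auth) if
# your domain's access policy requires signed requests.
import json
import requests

es_endpoint = "https://search-awswaf-logs-xxxxxxxx.eu-west-1.es.amazonaws.com"

template = {
    "index_patterns": ["awswaf-*"],
    "settings": {"number_of_shards": 1},
    "mappings": {
        "waflog": {
            "properties": {
                "httpRequest": {
                    "properties": {
                        "clientIp": {
                            "type": "keyword",
                            "fields": {"keyword": {"type": "ip"}},
                        }
                    }
                },
                "timestamp": {"type": "date", "format": "epoch_millis"},
            }
        }
    },
}

response = requests.put(
    es_endpoint + "/_template/awswaf-logs",
    data=json.dumps(template),
    headers={"Content-Type": "application/json"},
)
print(response.status_code, response.text)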

The template will be applied to all log indexes starting with the awswaf- prefix. This is the prefix name you’ll use at a later stage in the Kinesis Data Firehose configuration.

Setting up Kinesis for log delivery

With Amazon ES now ready to receive AWS WAF logs, I’ll show you how to use Amazon Kinesis Data Firehose to deliver your logs to Amazon S3 and Amazon ES. This is a prerequisite before you enable logging in AWS WAF.

First, go to the Kinesis service in the AWS Console and under the Data Firehose tab choose Create delivery stream. If the Data Firehose stream is going to be used for WAF logs, its name must start with aws-waf-logs-. I’ll enable logging for the already existing web ACL that I’m using for OWASP top 10 protection, therefore I’ll name my delivery stream aws-waf-logs-owasp.

Keep all the settings at the default until you get to the Select destination page. On this page, you’ll configure Amazon Elasticsearch Service as the destination for your logs. Make sure you enter awswaf as the index name and waflog as the type to match the index pattern template you already created in Elasticsearch earlier.
 

Figure 5: Setting up Amazon ES as Data Firehose log destination

On this page, you can also set what to back up to Amazon S3—either all records or just the failed ones. You’ll then need to decide on the bucket to use, specify the prefix, and then select Next.
 

Figure 6: Setting up S3 backup in Data Firehose

On the next page, you’ll have the option to update the buffer conditions—size and interval. These indicate how often Kinesis Data Firehose will send new data to Amazon ES. The lower the values you set, the closer to real time your log delivery will be. In my example, I’ve configured a buffer size of 1 MB and a buffer interval of 60 seconds.
 

Figure 7: Setting up Data Firehose buffer conditions

Next, you’ll need to specify an IAM role that enables Kinesis Data Firehose to send data to Amazon S3 and Amazon ES. You can select an existing IAM role or create a new one. For more information about creating a role, see Controlling Access with Amazon Kinesis Data Firehose.

Finally, review all the settings, and complete the creation of your Kinesis Data Firehose.
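
If you’d rather script the delivery stream than click through the console, a rough boto3 sketch is shown below. It reuses the index name, type name, and buffer settings from this walkthrough; the role, domain, and backup bucket ARNs are placeholders.

# Rough sketch (boto3): the same delivery stream created from code.
# Role, domain, and backup bucket ARNs are placeholders.
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="aws-waf-logs-owasp",
    DeliveryStreamType="DirectPut",
    ElasticsearchDestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-es",
        "DomainARN": "arn:aws:es:eu-west-1:123456789012:domain/awswaf-logs",
        "IndexName": "awswaf",
        "TypeName": "waflog",
        "IndexRotationPeriod": "OneDay",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 1},
        "S3BackupMode": "FailedDocumentsOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-es",
            "BucketARN": "arn:aws:s3:::my-waf-log-backup-bucket",
        },
    },
)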

Enabling AWS WAF logging

Now that you have Kinesis Data Firehose prepared, you can enable logging for an existing AWS WAF Web ACL. You can do this in the AWS Console under the AWS WAF & AWS Shield service. Go to the Web ACLs tab and select the Web ACL for which you want to start logging. Then, under the Logging tab, choose Enable Logging.
 

Figure 8: Enabling logging for AWS WAF web ACL

On the next page, specify the Kinesis Data Firehose that the logs should be delivered to. If you can’t see any options in the drop-down list, make sure that the Kinesis Data Firehose you configured earlier is deployed in the same region as the web ACL you’re enabling logging for and that its name begins with aws-waf-logs-. Remember that for CloudFront web ACL deployments, the Kinesis Data Firehose should be created in the Northern Virginia Region.
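
Logging can also be enabled from code. The sketch below assumes a regional (classic) web ACL and uses placeholder ARNs; for a CloudFront web ACL you would use the global waf client and a delivery stream in the Northern Virginia Region.

# Rough sketch (boto3): enable logging on an existing regional web ACL.
# ARNs are placeholders; use the "waf" client for CloudFront web ACLs.
import boto3

waf = boto3.client("waf-regional")

waf.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": "arn:aws:waf-regional:eu-west-1:123456789012:webacl/aaaa-bbbb-cccc",
        "LogDestinationConfigs": [
            "arn:aws:firehose:eu-west-1:123456789012:deliverystream/aws-waf-logs-owasp"
        ],
    }
)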

Searching through logs

As soon as you log back into your Kibana dashboard for Amazon ES, you should see a new index appear called awswaf-<date>. You’ll need to define an index pattern that will encompass all the potential awswaf log entries. Configure your index pattern description as awswaf-*, and on the next screen select timestamp as the time filter.
 

Figure 9: Configuring the Kibana index pattern

 

Figure 10: Configuring a time filter for the index pattern

After you complete the setup of your index pattern, you’ll be able to run searches through your logs by going into the Discover tab in Kibana. The searches can be based on any fields in the AWS WAF logs. For example, you can look for specific HTTP headers, query strings, or source IP addresses to find out what action has been applied to them. To find out more about what fields are available in the AWS WAF logs, see the AWS WAF Developer Guide.

In the example search that follows, I’m looking for all of the log entries where a web ACL has blocked the traffic and the HTTP arguments were equal to name=badname.
 

Figure 11: An example ad-hoc search

This approach can help you identify specific requests based on their parameters, such as a cookie, host header, or query string, and understand why they are being blocked (or allowed) and by which web ACL. It is especially useful for tuning AWS WAF to match your web application.

Building a dashboard

You can also use Amazon ES to create custom graphs to look at long-term trends—and you can combine your graphs into a dashboard. For example, you may want to see the number of blocked and allowed requests by IP address to identify potentially hostile IP addresses.

To understand the breakdown of the requests per IP address, you should first set up a graph showing the number of requests that were blocked and allowed by AWS WAF per source. To do this, go to the Visualize section of your Kibana dashboard and create a new visualization. You can select from a variety of visualization options, but I’m going to use the horizontal bar chart for the example that follows.
 

Figure 12: Configuring primary bucket parameters

  1. Select your index and then configure your graph options. For a horizontal bar chart, leave the Y axis to represent the count of requests, but expand the primary bucket (X-Axis) form and change the fields to show the source IP address and action:
    1. From the Aggregation dropdown, select Terms.
    2. From the Field dropdown, select httpRequest.clientIp.
    3. From the Order By dropdown, select metric: Count.

      Figure 13: Configuring bucket series

  2. Next, expand the Split Series section and adjust the following dropdown fields:
    1. From the Sub Aggregation dropdown, select Terms.
    2. From the Field dropdown, select action.keyword.
    3. From the Order By dropdown, select metric: Count.

      Figure 14: Sample Kibana graphs

    4. Apply your changes by selecting the play button at the top of the page. You should see the results appear on the right side of the screen.

      Figure 15: Select the “Apply changes” button

You can combine graphs into dashboards that aggregate relevant information about AWS WAF functionality. In the sample dashboard below, I’m showing multiple visualizations focusing on different dimensions like requests per country, requests per IP address, and even which AWS WAF rules get the most hits.

Figure 16: You can combine graphs

I’ve made the sample dashboard and sample visualizations available for download. They can be imported to your Kibana instance in the Management/Saved Objects section of the dashboard.

Conclusion

The ability to access AWS WAF logs for any web ACL allows you to start analyzing the requests coming to your web applications. In this post, I’ve covered an example of how you can use Amazon Elasticsearch Service as a destination for AWS WAF logs and shown you how to run searches on the log data as well as build graphs and dashboards. Amazon ES is just one of the destination options for native log delivery. To learn more, please refer to the documentation on how to send data directly to Amazon S3, Amazon Redshift, and Splunk.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the WAF forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Tom Adamski

Tom is a Specialist Solutions Engineer in Networking. He helps customers across the EMEA region design their network environments on AWS. He has over 10 years of experience in the networking and security industries. In his spare time, he’s an avid surfer who’s always on the lookout for cold water surf breaks.

Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/setting-the-record-straight-on-bloomberg-businessweeks-erroneous-article/

Today, Bloomberg BusinessWeek published a story claiming that AWS was aware of modified hardware or malicious chips in SuperMicro motherboards in Elemental Media’s hardware at the time Amazon acquired Elemental in 2015, and that Amazon was aware of modified hardware or chips in AWS’s China Region.

As we shared with Bloomberg BusinessWeek multiple times over the last couple months, this is untrue. At no time, past or present, have we ever found any issues relating to modified hardware or malicious chips in SuperMicro motherboards in any Elemental or Amazon systems. Nor have we engaged in an investigation with the government.

There are so many inaccuracies in this article as it relates to Amazon that they’re hard to count. We will name only a few of them here. First, when Amazon was considering acquiring Elemental, we did a lot of due diligence with our own security team, and also commissioned a single external security company to do a security assessment for us as well. That report did not identify any issues with modified chips or hardware. As is typical with most of these audits, it offered some recommended areas to remediate, and we fixed all critical issues before the acquisition closed. This was the sole external security report commissioned. Bloomberg has admittedly never seen our commissioned security report nor any other (and refused to share any details of any purported other report with us).

The article also claims that after learning of hardware modifications and malicious chips in Elemental servers, we conducted a network-wide audit of SuperMicro motherboards and discovered the malicious chips in a Beijing data center. This claim is similarly untrue. The first and most obvious reason is that we never found modified hardware or malicious chips in Elemental servers. Aside from that, we never found modified hardware or malicious chips in servers in any of our data centers. And, this notion that we sold off the hardware and datacenter in China to our partner Sinnet because we wanted to rid ourselves of SuperMicro servers is absurd. Sinnet had been running these data centers since we launched in China, they owned these data centers from the start, and the hardware we “sold” to them was a transfer-of-assets agreement mandated by new China regulations for non-Chinese cloud providers to continue to operate in China.

Amazon employs stringent security standards across our supply chain – investigating all hardware and software prior to going into production and performing regular security audits internally and with our supply chain partners. We further strengthen our security posture by implementing our own hardware designs for critical components such as processors, servers, storage systems, and networking equipment.

Security will always be our top priority. AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting their security above all else. We are constantly vigilant about potential threats to our customers, and we take swift and decisive action to address them whenever they are identified.

– Steve Schmidt, Chief Information Security Officer

Visualizing Amazon GuardDuty findings

Post Syndicated from Mike Fortuna original https://aws.amazon.com/blogs/security/visualizing-amazon-guardduty-findings/

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help protect your AWS accounts and workloads. Enable GuardDuty and it begins monitoring for:

  • Anomalous API activity
  • Potentially unauthorized deployments and compromised instances
  • Reconnaissance by attackers

GuardDuty analyzes and processes VPC flow log, AWS CloudTrail event log, and DNS log data sources. You don’t need to manually manage these data sources because the data is automatically leveraged and analyzed when you activate GuardDuty. For example, GuardDuty consumes VPC Flow Log events directly from the VPC Flow Logs feature through an independent and duplicative stream of flow logs. As a result, you don’t incur any operational burden on existing workloads.
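
Later in this walkthrough you’ll enable GuardDuty and generate sample findings from the console. If you’d rather script those two steps, a minimal boto3 sketch looks like the following; the region is an assumption, so substitute the region where you deploy the pipeline:

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# A detector is the per-region GuardDuty resource; enabling it starts monitoring.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Generate sample findings so the rest of the pipeline has data to visualize.
guardduty.create_sample_findings(DetectorId=detector_id)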

GuardDuty helps find potential threats in your AWS environment by producing security findings that you can view in the GuardDuty console or consume through Amazon CloudWatch Events, which is a service that makes alerts actionable and easier to integrate into existing event management and workflow systems. One common question we hear from customers is “how do I visualize these findings to generate meaningful insights?” In this post, we’re going to show you how to create a dashboard that includes visualizations like this:
 

Figure 1: Example visualization

You’ll learn how you can use AWS services to create a pipeline for your GuardDuty findings so you can log and visualize them. The services include Amazon CloudWatch Events, Amazon Kinesis Data Firehose, Amazon Elasticsearch Service (with its built-in Kibana), Amazon S3, Amazon SNS, and Amazon Cognito.

Architecture

The architectural diagram below illustrates the pipeline we’ll create.
 

Figure 2: Architectural diagram

We’ll walk through the data flow to explain the architecture and highlight the additional customizations available to you.

  1. Amazon GuardDuty is enabled in an account and begins monitoring CloudTrail logs, VPC flow logs, and DNS query logs. If a threat is detected, GuardDuty forwards a finding to CloudWatch Events. For a newly generated finding, GuardDuty sends a notification based on its CloudWatch event within 5 minutes of the finding. CloudWatch Events allows you to send upstream notifications to various services filtered on your configured event patterns. We’ll configure an event pattern that only forwards events coming from the GuardDuty service.
  2. We define two targets in our CloudWatch Event Rule. The first target is a Kinesis Firehose stream for delivery into an Elasticsearch domain and an S3 bucket. The second target is an SNS Topic for Email/SMS notification of findings. We’ll send all findings to our targets; however, you can filter and format the findings you send by using a Lambda function (or by event pattern matching with a CloudWatch Event Rule). For example, you could send only high-severity alarms (that is, findings with detail.severity > 7). A minimal sketch of the rule and its targets appears after this list.
  3. The Firehose stream delivers findings to Amazon Elasticsearch, which provides visualization and analysis for our event findings. The stream also delivers findings to an S3 bucket. The S3 bucket is used for long term archiving. This data can augment your data lake and you can use services such as Amazon Athena to perform advanced analytics.
  4. We’ll search, explore, and visualize the GuardDuty findings using Kibana and the Elasticsearch query Domain Specific Language (DSL) to gain valuable insights. Amazon Elasticsearch has a built-in Kibana plugin to visualize the data and perform operational analyses.
  5. To provide a simplified and secure authentication method, we use Amazon Cognito User Pools to authenticate users to Kibana. This provides improved security compared to traditional IP whitelists or proxy infrastructure.
  6. Our second CloudWatch Event target is SNS, which has subscribed email endpoint(s) that allow your operations teams to receive email (or SMS messages) when a new GuardDuty Event is received.
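
Here’s the sketch referenced in step 2: a boto3 version of the CloudWatch Event Rule and its two targets. The rule name, ARNs, and IAM role are placeholders rather than resources created by the CloudFormation template in this post:

import json
import boto3

events = boto3.client("events")

# Forward only events emitted by the GuardDuty service.
events.put_rule(
    Name="GuardDutyFindingsToPipeline",
    EventPattern=json.dumps({"source": ["aws.guardduty"]}),
    State="ENABLED",
)

# Two targets: the Kinesis Data Firehose delivery stream and the SNS topic.
# Firehose targets require an IAM role that CloudWatch Events can assume.
events.put_targets(
    Rule="GuardDutyFindingsToPipeline",
    Targets=[
        {
            "Id": "firehose-delivery",
            "Arn": "arn:aws:firehose:us-east-1:111122223333:deliverystream/guardduty-findings",
            "RoleArn": "arn:aws:iam::111122223333:role/CWEventsToFirehoseRole",
        },
        {
            "Id": "sns-notification",
            "Arn": "arn:aws:sns:us-east-1:111122223333:guardduty-findings",
        },
    ],
)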

If you would like to centralize your findings from multiple regions into a single S3 bucket, you can adapt this pipeline. You would deploy the frontend of the pipeline by configuring Kinesis Firehose in the remote regions to point to the S3 bucket in the centralized region. You can leverage prefixes in the Kinesis Firehose configuration to identify the source region. For example, you would configure a prefix of us-west-1 for events originating from the us-west-1 region. Analytic queries from tools such as Athena can then selectively target the desired region.
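
For example, a Firehose delivery stream in us-west-1 could deliver into the central bucket under a region prefix. A sketch with boto3 follows; the stream name, role, and bucket ARNs are placeholders, not resources created by this post’s template:

import boto3

firehose = boto3.client("firehose", region_name="us-west-1")

firehose.create_delivery_stream(
    DeliveryStreamName="guardduty-findings-us-west-1",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/FirehoseToCentralBucket",
        "BucketARN": "arn:aws:s3:::central-guardduty-findings",
        # The prefix identifies the source region for later Athena queries.
        "Prefix": "us-west-1/",
    },
)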

Deployment Steps

This CloudFormation template will install the pipeline and components required for GuardDuty visualization:
 
Select button to launch stack

When you start the stack creation process you will be prompted for the following information:

  • Stack name — This is the name of the stack you will create
  • EmailAddress — This email address is used to create a username in Cognito and a subscriber to the SNS topic.
  • ESDomainName — This will be the name given to the Elasticsearch Domain.
  • IndexName — This will be the Index created by Firehose to load data into Elasticsearch.

 

Figure 3: The "Create stack" interface

Once the infrastructure is installed, you’ll follow two main steps, each of which is described in detail later:

  1. Add Cognito authentication to Kibana, which is hosted on the Elasticsearch domain. At the time of writing, this can’t be done natively in CloudFormation. We’ll also confirm the SNS subscription so we can start to receive GuardDuty Findings via email.
  2. Configure Kibana with the index, the appropriate scripted fields, and the dashboard to provide the visualizations. We’ll also enable GuardDuty to start monitoring your account and send sample findings to test the pipeline.

Step 1: Enable Cognito authentication in Kibana

To enable user authentication to your dashboards hosted in Kibana, you need to enable the integration from the Elasticsearch domain that was created within the Cloudformation template.

  1. Open the AWS Console and select the Cognito service. Select Manage User Pools to access the User Pool that was created in Cloudformation. Select the user pool beginning with the name VisualizeGuardDutyUserPool and, under the App Integration menu item, select Domain name.
     
     Figure 4: The "Domain name" interface

  2. You need to create a unique domain prefix to allow Kibana to authenticate using Cognito. Enter a unique domain prefix (it can only contain lowercase letters, numbers, and hyphens). After entering the prefix, select the Check Availability button to ensure it's available in the region. If it's available, select the Save changes button.
  3. From the AWS Console, select the Cloudformation service.
  4. Select the template you created for your pipeline, select the Outputs tab, and then, under Value, copy the value of ESCognitoRole. You’ll use this role when you enable Cognito authentication of Elasticsearch.
     
     Figure 5: The "Outputs" tab and the "ESCognitoRole" key

  5. Next, browse to the Elasticsearch Service, select the domain you created from the CloudFormation template, and select the Configure cluster button:
     
     Figure 6: The "Configure cluster" button

  6. Under the Kibana authentication section, select the Enable Amazon Cognito for authentication checkbox. You’ll be presented with several fields you need to configure, including: Cognito User Pool (the name of the user pool should start with VisualizeGuardDutyUserPool), Cognito Identity Pool (the name of the identity pool should start with VisualizeGuardDutyIDPool), and IAM Role Name (this was copied in step 4 earlier). A Cognito User Pool is a user directory in Amazon Cognito; we use it to create a user account that provides authentication to Kibana. Amazon Cognito Identity Pools (federated identities) enable you to create unique identities for your users and federate them with identity providers; in our case, the identity pool provides federated access to Kibana. After you provide values for these fields, select the Submit button.
     
     Figure 7: The "Kibana authentication" interface

  7. The cluster reconfiguration will take several minutes to complete processing. When you see Domain status as Active, you can proceed.
  8. Finally, confirm the subscription email you received from SNS. Look for an email from: AWS Notifications <[email protected]>, open the message and select Confirm subscription to allow SNS to send you email when the SNS Topic receives a notification for new GuardDuty findings.

Step 2: Set up the Kibana dashboard and enable GuardDuty

Now, you can set up the Kibana dashboard with custom visualizations.

  1. Open the CloudFormation service page and select the stack you created earlier.
  2. Under the Outputs section, copy the Kibana URL.
     
     Figure 8: Copy the Kibana URL

  3. Paste the Kibana URL in a new browser window.
  4. Check your email client. You should have an email containing the temporary password from Cognito. Copy the temporary password and use it to log in to Cognito. If you haven't received the email, check your email junk folder. You can also create additional users in the Cognito User Pool that was created by the CloudFormation stack to give other users access to Kibana.
     
     Figure 9: Example email with temporary password

  5. At the login prompt, enter the email address and password for the Cognito user the CloudFormation template created. A prompt to change your password will appear. Change your password to proceed. The Cognito User Pool requires a password that contains uppercase letters, lowercase letters, numbers, and special characters, with a minimum length of 8 characters.
  6. It’s time to add mapping information to your index to instruct Kibana that some of the fields are delivered as geopoints. This allows these fields to be properly visualized with a Coordinate Map. Select Dev Tools in the menu on the left side:
  7.  

     Figure 10: Select "Dev tools"

  8. Paste the following API call in the text box to provide the appropriate mappings for the networkConnectionAction and portProbeAction geolocation fields. This calls the Elasticsearch API and updates the geolocation mapping for those fields:
    
    PUT _template/gdt
    {
      "template": "gdt*",
      "settings": {},
      "mappings": {
        "_default_": {
          "properties": {
            "detail.service.action.portProbeAction.portProbeDetails.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            },
            "detail.service.action.networkConnectionAction.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            }        
          }
        }
      }
    }
    

  9. After you paste the API call, be sure to remove any whitespace after the closing brace, and then select the green arrow to execute it. You should receive a message confirming that the call was successful.
     
     Figure 11: Paste the API call

  10. Next, enable GuardDuty and send sample findings so you can create the Kibana Dashboard with data present. Find the GuardDuty service in the AWS Console and select the Get started button.
  11. From the Welcome to GuardDuty page, select the Enable GuardDuty button.
  12. Next, send some sample events. From the GuardDuty service, select the Settings menu on the left-hand menu, and then select Generate sample findings as shown here:
     
     Figure 12: The "Generate sample findings" button

  13. Optionally, if you want to test with real GuardDuty findings, you can leverage the Amazon GuardDuty Tester. This AWS CloudFormation template creates an isolated environment with a bastion host, a tester EC2 instance, and two target EC2 instances to simulate five types of common attacks that GuardDuty is built to detect and notify you with generated findings. Once deployed, you would use the tester EC2 instance to execute a shell script to generate GuardDuty findings. Additional detail about this option can be found in the GuardDuty documentation.
  14. On the Kibana landing page, in the menu on the left side, create the Index by selecting Management.
  15. On the Management page, select Index Patterns.
  16. On the Create index pattern page, under Index patterns, enter gdt-* (if you used a different IndexName in the Cloudformation template, use that here), and then select Next Step.

    Note: It takes several minutes for the GuardDuty findings to generate a CloudWatch Event, work through the pipeline, and create the index in Elasticsearch. If the index doesn’t appear initially, please wait a few minutes and try again.

     

     Figure 13: The "Create index pattern" page

  17. Under Time Filter field name, select time from the drop-down list, and then select Create index pattern.
     
     Figure 14: The "Time Filter field name" list

Create scripted fields

With the Index defined, we will now create two scripted fields that your dashboard visualizations will use.

Define the severity level

  1. Select the Index you just created, and then select scripted fields.
     
     Figure 15: The "scripted fields" tab

  2. Select Add Scripted Field, and enter the following information:
    • Name — sevLevel
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — copy and paste this script into the text-entry field:
      
       // Map the GuardDuty numeric severity to a named level:
       // Low is 0.1 to 3.9, Medium is 4.0 to 6.9, High is 7.0 and above.
       if (doc['detail.severity'].value <= 3.9) {
           return "Low";
       } else if (doc['detail.severity'].value <= 6.9) {
           return "Medium";
       } else {
           return "High";
       }
      

  3. After entering the information, select Create Field.

The sevLevel field provides a value-to-level mapping as defined by GuardDuty Severity Levels. This allows you to visualize the severity levels in a more user-friendly format (High, Medium, and Low) instead of a cryptic numerical value. To generate sevLevel, we used Kibana painless scripting, which allows custom field creation.

Define the attack type

  1. Now create a second scripted field for typeCategory. The typeCategory field extracts the finding attack type. Enter the following information:
    • Name — typeCategory
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — Copy and paste this script into the text-entry field:
      
       // Return the portion of the finding type before the first colon,
       // for example "Recon" from "Recon:EC2/PortProbeUnprotectedPort".
       def path = doc['detail.type.keyword'].value;
       if (path != null) {
           int firstColon = path.indexOf(":");
           if (firstColon > 0) {
               return path.substring(0, firstColon);
           }
       }
       return "";
      

  2. After entering the information, select Create Field.

The typeCategory field is used to define the broad category “attack type.” The source field (detail.type.keyword) provides a lot of detailed information (for example: Recon:EC2/PortProbeUnprotectedPort), but we want to visualize the category of “attack type” in the high-level dashboard (that is, only Recon). We can still visualize on a more granular level, if necessary.

Create the Kibana Dashboard

  1. Create the Kibana dashboard by importing a JSON file containing its definition. To do this, download the Kibana dashboard and visualizations definition JSON file from here.
  2. Select Management in the menu on the left, and then select Saved Objects. On the right, select Import.
  3. Select the JSON file you downloaded and select Open. This imports the GuardDuty dashboard and visualizations. Select Yes, overwrite all objects.
  4. In the Index Pattern Conflicts section, under New index pattern, select gdt-*, and then select Confirm all changes.

Dashboard in action

  1. Select Dashboard in the menu on the left.
  2. Select the Guard Duty Summary link.

Your GuardDuty Dashboard will look like this:
 

Figure 16: The GuardDuty dashboard with callouts

The dashboard provides the following visualizations:

  1. This filter allows you to filter sample findings from real findings. If you generate sample findings from the GuardDuty AWS console, this filter allows you to remove the sample findings from the dashboard.
  2. The GuardDuty — Affected Instances chart shows which EC2 instances have associated findings. This visualization allows you to filter specific instances from display by selecting them in the graphic.
  3. The Guard Duty — Threat Type chart allows you to filter on the general attack type (inner circle) as well as the specific attack type (outer circle).
  4. The Guard Duty — Events Per Day graph allows you to visualize and filter on a specific time or date to show findings for that specific time, as well as search for temporal patterns in findings.
  5. GuardDuty — Top10 Findings provides a list of the top 10 findings by count.
  6. GuardDuty — Total Events provides the total number of events based on the criteria chosen. This value will change based on the filters defined.
  7. The GuardDuty — Heatmap — Port Probe Source Countries chart visualizes the countries from which port probes are issued. This is a Coordinate Map visualization that allows you to see the source and volume of the port probes targeting your instances.
  8. The GuardDuty — Network Connection Source Countries visualizes where brute force attacks are coming from. This is a Region Map visualization that allows you to highlight the country the brute force attacks are sourced from.
  9. GuardDuty — Severity Levels is a pie chart that shows findings by severity level (High, Med, Low), and you can filter by a specific level (that is, only show high-severity findings). This visualization uses the scripted field we created earlier for simplified visualization.
  10. The All-GuardDuty table includes the raw findings for all events. This provides complete raw event detail and the ability to filter at very granular levels.

In a previous blog, we saw how you can create a Kibana dashboard to visualize your network security posture by visualizing your VPC flow logs. This GuardDuty dashboard augments that dashboard. You can use a single Elasticsearch cluster to host both of these dashboards, in addition to other data sources you want to analyze and report on.

Conclusion

We’ve outlined an approach to rapidly build a pipeline to help you archive, analyze, and visualize your GuardDuty findings for rapid insight and actionable intelligence. You can extend this solution in a number of ways, including:

  • Modifying the alert email to send a structured message instead of raw JSON (see the sketch after this list)
  • Adding additional visualizations, such as heat maps or time-series charts
  • Extending the solution across AWS accounts or regions
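
As an example of the first item, here’s a sketch of a Lambda function that could sit between the CloudWatch Event Rule and SNS to publish a short, readable summary instead of the raw JSON finding. The topic ARN is a placeholder, and the fields shown are only a small subset of what a GuardDuty finding contains:

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:guardduty-findings"

def handler(event, context):
    # The CloudWatch event wraps the GuardDuty finding in its "detail" field.
    detail = event.get("detail", {})
    message = (
        f"GuardDuty finding: {detail.get('title', 'unknown')}\n"
        f"Type: {detail.get('type', 'unknown')}\n"
        f"Severity: {detail.get('severity', 'unknown')}\n"
        f"Account: {detail.get('accountId', 'unknown')}, "
        f"Region: {detail.get('region', 'unknown')}"
    )
    sns.publish(TopicArn=TOPIC_ARN, Subject="New GuardDuty finding", Message=message)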

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

Michael Fortuna

Michael is a Solutions Architect in AWS supporting enterprise customers and their journey to the cloud. Prior to his work on AWS and cloud technologies, Michael’s areas of focus included software-defined networking, security, collaboration, and virtualization technologies. He’s very excited to work as an SA because it allows him to dive deep on technology while helping customers.

Ravi Sakaria

Ravi is a Senior Solutions Architect at AWS based in New York. He works with enterprise customers as they transform their business and journey to the cloud. He enjoys the culture of innovation at Amazon because it’s similar to his prior experiences building startup companies. Outside of work, Ravi enjoys spending time with his family, cooking, and watching the New Jersey Devils.

AWS Compliance Center for financial services now available

Post Syndicated from Frank Fallon original https://aws.amazon.com/blogs/security/aws-compliance-center-financial-services/

On Tuesday, September 4, AWS announced the launch of an AWS Compliance Center for our Financial Services (FS) customers. This addition to our compliance offerings gives you a central location to research cloud-related regulatory requirements that impact the financial services industry. Prior to the launch of the AWS Compliance Center, customers preparing to adopt AWS for their FS workloads typically had to browse multiple in-depth sources to understand the expectations of regulatory agencies in each country.

The AWS Compliance Center is designed to make this process easier. It aggregates any given country’s regulatory position regarding the adoption and operation of cloud services. Key components of the FS industry—including regulatory approvals, data privacy, and data protection—are explained, along with the steps you must take throughout your adoption of AWS services to help satisfy regulatory requirements. You can browse the information in the portal and export it as printable documents.

We expect the AWS Compliance Center to evolve as our customers’ compliance needs change and as regulators begin to address the challenges and opportunities that cloud services create in the FS industry. The AWS Compliance Center covers 13 countries, and we’ll continue to enhance it with additional countries and information based on your needs.

New guide helps financial services customers in Brazil navigate cloud requirements

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/new-guide-helps-financial-services-customers-in-brazil-navigate-cloud-requirements/

We have a new resource to help our financial services customers in Brazil navigate regulatory requirements for using the cloud. The AWS User Guide to Financial Services Regulations in Brazil is a deep dive into the Brazilian National Monetary Council’s Resolution No. 4,658. The cybersecurity cloud resolution is the first of its kind by regulators in Brazil. The guide details how our services may be able to assist you in achieving these security expectations.

The resolution covers topics such as implementing a cybersecurity policy, incident response, entering into agreements with cloud service providers, subcontracting, business continuity, and notification requirements. Our guide addresses each of these issues and provides specific guidance on how you can use AWS to satisfy requirements.

The AWS User Guide to Financial Services Regulations in Brazil is part of a series of publications that seek to facilitate customer compliance. It’s available in English and Portuguese. We’ll continue to monitor the regulatory environment in Brazil and around the world and to publish additional resources.

If you have any questions, please contact your account executive.

Amazon sponsors R00tz at DEF CON 2018

Post Syndicated from Patrick McCanna original https://aws.amazon.com/blogs/security/amazon-sponsors-rootz-at-def-con-2018/

It’s early August, and we’re quickly approaching Hacker summer camp (AKA DEF CON). The Black Hat Briefings start August 8, DEF CON starts August 9, and many people will be closely following the latest security presentations at both conferences. But there’s another, exclusive conference happening at DEF CON that Amazon is excited to be a part of: we’re sponsoring R00tz Asylum. R00tz is a conference dedicated to teaching kids ages 8-18 how to become white-hat hackers.

Kids who attend will learn how hackers break into computers and networks and how cybersecurity experts defend against hackers. They’ll get hands-on experience soldering and disassembling computers. They’ll learn about lock-picking and how to use 3D printers, and they’ll even get to compete in a security-oriented “Capture the Flag” event where competitors are rewarded for discovering secrets embedded in servers and encrypted files. And throughout the conference, they’ll hear talks from top-notch presenters on topics that include hacking web apps, hacking elections, and hacking conference badges covered in blinky LEDs.

Many of Amazon’s best security experts started young, with limited access to mentors. We learned how to keep hackers from breaking into computers and networks before cybersecurity became the industry it is today. Amazon’s support for R00tz is our chance to give back to the next generation of cyber-security professionals. Kids who are interested in learning about security will get a safe environment and access to mentors. If you’re in Las Vegas for DEF CON, head over to the R00tz FAQs to learn more about the event, and be sure to check out the conference schedule.

How AWS uses automated reasoning to help you achieve security at scale

Post Syndicated from Andrew Gacek original https://aws.amazon.com/blogs/security/protect-sensitive-data-in-the-cloud-with-automated-reasoning-zelkova/

At AWS, we focus on achieving security at scale to diminish risks to your business. Fundamental to this approach is ensuring your policies are configured in a way that helps protect your data, and the Automated Reasoning Group (ARG), an advanced innovation team at AWS, is using automated reasoning to do it.

What is automated reasoning, you ask? It’s a method of formal verification that automatically generates and checks mathematical proofs which help to prove the correctness of systems; that is, fancy math that proves things are working as expected. If you want a deeper understanding of automated reasoning, check out this re:Invent session. While the applications of this methodology are vast, in this post I’ll explore one specific aspect: analyzing policies using an internal Amazon service named Zelkova.

What is Zelkova? How will it help me?

Zelkova uses automated reasoning to analyze policies and the future consequences of policies. This includes AWS Identity and Access Management (IAM) policies, Amazon Simple Storage Service (S3) policies, and other resource policies. These policies dictate who can (or can’t) do what to which resources. Because Zelkova uses automated reasoning, you no longer need to think about what questions you need to ask about your policies. Using fancy math, as mentioned above, Zelkova will automatically derive the questions and answers you need to be asking about your policies, improving confidence in your security configuration(s).

How does it work?

Zelkova translates policies into precise mathematical language and then uses automated reasoning tools to check properties of the policies. These tools include automated reasoners called Satisfiability Modulo Theories (SMT) solvers, which use a mix of numbers, strings, regular expressions, dates, and IP addresses to prove and disprove logical formulas. Zelkova has a deep understanding of the semantics of the IAM policy language and builds upon a solid mathematical foundation. While tools like the IAM Policy Simulator let you test individual requests, Zelkova is able to use mathematics to talk about all possible requests. Other techniques guess and check, but Zelkova knows.

You may have noticed, as an example, the new “Public / Not public” checks in S3. These are powered by Zelkova:
 

Figure 1: The "Public/Not public" checks in S3

S3 uses Zelkova to check each bucket policy and warns you if an unauthorized user is able to read or write to your bucket. When a bucket is flagged as “Public”, there are some public requests that are allowed to access the bucket. However, when a bucket is flagged as “Not public”, all public requests are denied. Zelkova is able to make such statements because it has a precise mathematical representation of IAM policies. In fact, it creates a formula for each policy and proves a theorem about that formula.

Consider the following S3 bucket policy statement where my goal is to disallow a certain principal from accessing the bucket:


{
    "Effect": "Allow",
    "NotPrincipal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

Unfortunately, this policy statement does not capture my intentions. Instead, it allows access for everybody in the world who is not the given principal. This means almost everybody now has access to my bucket, including anonymous unauthorized users. Fortunately, as soon as I attach this policy, S3 flags my bucket as “Public”—warning me that there’s something wrong with the policy I wrote. How did it know?

Zelkova translates this policy into a mathematical formula:

(Resource = “arn:aws:s3:::test-bucket”) ∧ (Principal ≠ 111122223333)

Here, ∧ is the mathematical symbol for “and” which is true only when both its left and right side are true. Resource and Principal are variables just like you would use x and y in algebra class. The above formula is true exactly when my policy allows a request. The precise meaning of my policy has now been defined in the universal language of mathematics. The next step is to decide if this policy formula allows public access, but this is a hard problem. Now Zelkova really goes to work.

A counterintuitive trick sometimes used by mathematicians is to make a problem harder in order to make finding a solution easier. That is, solving a more difficult problem can sometimes lead to a simpler solution. In this case, Zelkova solves the harder problem of comparing two policies against each other to decide which is more permissive. If P1 and P2 are policy formulas, then suppose formula P1 ⇒ P2 is true. This arrow symbol is an implication that means whenever P1 is true, P2 must also be true. So, whenever policy 1 accepts a request, policy 2 must also accept the request. Thus, policy 2 is at least as permissive as policy 1. Suppose also that the converse formula P2 ⇒ P1 is not true. That means there’s a request which makes P2 true and P1 false. This request is allowed by policy 2 and is denied by policy 1. Combining all these results, policy 1 is strictly less permissive than policy 2.

How does this solve the “Public / Not public” problem? Zelkova has a special policy that allows anonymous, unauthorized users to access an S3 resource. It compares your policy against this policy. If your policy is more permissive, then Zelkova says your policy allows public access. If you restrict access—for example, based on source VPC endpoint (aws:SourceVpce) or source IP address (aws:SourceIp)—then your policy is not more permissive than the special policy, and Zelkova says your policy does not allow public access.
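
To make the comparison concrete, here’s a toy version of the permissiveness check using the Z3 SMT solver’s Python bindings. The two formulas are drastically simplified stand-ins for real policy encodings (actual requests have many more variables), so treat this as an illustration of the P1 ⇒ P2 technique rather than Zelkova’s actual encoding:

from z3 import String, StringVal, And, Not, Solver, unsat

principal = String("principal")
resource = String("resource")

# P1: allow only one specific principal to access the bucket.
p1 = And(resource == StringVal("arn:aws:s3:::test-bucket"),
         principal == StringVal("111122223333"))

# P2: the special "public" policy -- any principal may access the bucket.
p2 = resource == StringVal("arn:aws:s3:::test-bucket")

def implies(lhs, rhs):
    # lhs => rhs holds for all requests iff (lhs and not rhs) is unsatisfiable.
    solver = Solver()
    solver.add(And(lhs, Not(rhs)))
    return solver.check() == unsat

print(implies(p1, p2))  # True: every request P1 allows, P2 also allows.
print(implies(p2, p1))  # False: P2 allows requests from other principals that P1 does not.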

For all this to work, Zelkova uses SMT solvers. Using mathematical language, these tools take a formula and either prove it is true for all possible values of the variables, or they return a counterexample that makes the formula false.

To understand SMT solvers better, you can play with them yourself. For example, if asked to prove x+y > x, an SMT solver will quickly find a counterexample such as x=5,y=-1. To fix this, you could strengthen your formula to assume that y is positive:

(y > 0) ⇒ (x + y > x)

The SMT solver will now respond that your formula is true for all values of the variables x and y. It does this using the rules of algebra and logic. This same idea carries over into theories like strings. You can ask the SMT solver to prove the formula length(append(a,b)) > length(a) where a and b are string variables. It will find a counterexample such as a=”hello” and b=”” where b is the empty string. This time, you could fix your formula by changing from greater-than to greater-than-or-equal-to:

length(append(a, b)) ≥ length(a)

The SMT solver will now respond that the formula is true for all values of the variables a and b. Here, the solver has combined reasoning about strings (length, append) with reasoning about numbers (greater-than-or-equal-to). SMT solvers are designed for exactly this sort of theory composition.
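
If you want to experiment yourself, the snippet below reproduces the examples above with the Z3 SMT solver’s Python bindings (installable with pip install z3-solver). Z3 is just one freely available SMT solver; the post doesn’t say which solvers Zelkova uses under the hood:

from z3 import Ints, Strings, Length, Concat, Implies, prove

x, y = Ints("x y")

# Falsifiable: Z3 prints a counterexample (for instance, y = -1).
prove(x + y > x)

# Strengthened with the assumption y > 0: Z3 reports "proved".
prove(Implies(y > 0, x + y > x))

# The same idea in the theory of strings: appending never shortens a string.
a, b = Strings("a b")
prove(Length(Concat(a, b)) >= Length(a))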

What about my original policy? Once I see that my bucket is public, I can fix my policy using an explicit “Deny”:


{
    "Effect": "Deny",
    "Principal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

With this policy statement attached, S3 correctly reports my bucket as “Not public”. Zelkova has translated this policy into a mathematical formula, compared it against a special policy, and proved that my policy is less permissive. Fancy math has proved that things are working (or in this case, not working) as expected.

Where else is Zelkova being used?

In addition to S3, several other AWS services are using Zelkova.

We have also engaged with a number of enterprise and regulated customers who have adopted Zelkova for their use cases:

“Bridgewater, like many other security-conscious AWS customers, needs to quickly reason about the security posture of our AWS infrastructure, and an integral part of that posture is IAM policies. These govern permissions on everything from individual users, to S3 buckets, KMS keys, and even VPC endpoints, among many others. Bridgewater uses Zelkova to verify and provide assurances that our policies do not allow data exfiltration, misconfigurations, and many other malicious and accidental undesirable behaviors. Zelkova allows our security experts to encode their understanding once and then mechanically apply it to any relevant policies, avoiding error-prone and slow human reviews, while at the same time providing us high confidence in the correctness and security of our IAM policies.”
Dan Peebles, Lead Cloud Security Architect at Bridgewater Associates

Summary

AWS services such as S3 use Zelkova to precisely represent policies and prove that they are secure—improving confidence in your security configurations. Zelkova can make broad statements about all resource requests because it’s based on mathematics and proofs instead of heuristics, pattern matching, or simulation. The ubiquity of policies in AWS means that the value of Zelkova and its benefits will continue to grow as it serves to protect more customers every day.

Want more AWS Security news? Follow us on Twitter.

AWS Resources Addressing Argentina’s Personal Data Protection Law and Disposition No. 11/2006

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/aws-and-resources-addressing-argentinas-personal-data-protection-law-and-disposition-no-112006/

We have two new resources to help customers address their data protection requirements in Argentina. These resources specifically address the needs outlined under the Personal Data Protection Law No. 25.326, as supplemented by Regulatory Decree No. 1558/2001 (“PDPL”), including Disposition No. 11/2006. For context, the PDPL is an Argentine federal law that applies to the protection of personal data, including during transfer and processing.

A new webpage focused on data privacy in Argentina features FAQs, helpful links, and whitepapers that provide an overview of PDPL considerations, as well as our security assurance frameworks and international certifications, including ISO 27001, ISO 27017, and ISO 27018. You’ll also find details about our Information Request Report and the high bar of security at AWS data centers.

Additionally, we’ve released a new workbook that offers a detailed mapping as to how customers can operate securely under the Shared Responsibility Model while also aligning with Disposition No. 11/2006. The AWS Disposition 11/2006 Workbook can be downloaded from the Argentina Data Privacy page or directly from this link. Both resources are also available in Spanish from the Privacidad de los datos en Argentina page.

Want more AWS Security news? Follow us on Twitter.

 

Protecting your API using Amazon API Gateway and AWS WAF — Part I

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-i/

This post courtesy of Thiago Morais, AWS Solutions Architect

When you build web applications or expose any data externally, you probably look for a platform where you can build highly scalable, secure, and robust REST APIs. As APIs are publicly exposed, there are a number of best practices for providing a secure mechanism to consumers using your API.

Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

In this post, I show you how to take advantage of the regional API endpoint feature in API Gateway, so that you can create your own Amazon CloudFront distribution and secure your API using AWS WAF.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

As you make your APIs publicly available, you are exposed to attackers trying to exploit your services in several ways. The AWS security team published a whitepaper solution using AWS WAF, How to Mitigate OWASP’s Top 10 Web Application Vulnerabilities.

Regional API endpoints

Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution created and managed by API Gateway. Before the launch of regional API endpoints, this was the default option when creating APIs using API Gateway. It primarily helped to reduce latency for API consumers that were located in different geographical locations than your API.

When API requests predominantly originate from an Amazon EC2 instance or other services within the same AWS Region as the API is deployed, a regional API endpoint typically lowers the latency of connections. It is recommended for such scenarios.

For better control around caching strategies, customers can use their own CloudFront distribution for regional APIs. They also have the ability to use AWS WAF protection, as I describe in this post.

Edge-optimized API endpoint

The following diagram is an illustrated example of the edge-optimized API endpoint where your API clients access your API through a CloudFront distribution created and managed by API Gateway.

Regional API endpoint

For the regional API endpoint, your customers access your API from the same Region in which your REST API is deployed. This helps you to reduce request latency and particularly allows you to add your own content delivery network, as needed.

Walkthrough

In this section, you implement the following steps:

  • Create a regional API using the PetStore sample API.
  • Create a CloudFront distribution for the API.
  • Test the CloudFront distribution.
  • Set up AWS WAF and create a web ACL.
  • Attach the web ACL to the CloudFront distribution.
  • Test AWS WAF protection.

Create the regional API

For this walkthrough, use an existing PetStore API. All new APIs launch by default as the regional endpoint type. To change the endpoint type for your existing API, choose the cog icon in the top-right corner of the API Gateway console.

After you have created the PetStore API on your account, deploy a stage called “prod” for the PetStore API.

On the API Gateway console, select the PetStore API and choose Actions, Deploy API.

For Stage name, type prod and add a stage description.

Choose Deploy and the new API stage is created.

Use the following AWS CLI command to update your API from edge-optimized to regional:

aws apigateway update-rest-api \
--rest-api-id {rest-api-id} \
--patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL

A successful response looks like the following:

{
    "description": "Your first API with Amazon API Gateway. This is a sample API that integrates via HTTP with your demo Pet Store endpoints", 
    "createdDate": 1511525626, 
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }, 
    "id": "{api-id}", 
    "name": "PetStore"
}

After you change your API endpoint to regional, you can now assign your own CloudFront distribution to this API.

Create a CloudFront distribution

To make things easier, I have provided an AWS CloudFormation template to deploy a CloudFront distribution pointing to the API that you just created. Click the button to deploy the template in the us-east-1 Region.

For Stack name, enter RegionalAPI. For APIGWEndpoint, enter your API FQDN in the following format:

{api-id}.execute-api.us-east-1.amazonaws.com

After you fill out the parameters, choose Next to continue the stack deployment. It takes a couple of minutes to finish the deployment. After it finishes, the Output tab lists the following items:

  • A CloudFront domain URL
  • An S3 bucket for CloudFront access logs

Output from CloudFormation

Test the CloudFront distribution

To see if the CloudFront distribution was configured correctly, use a web browser and enter the URL from your distribution, with the following parameters:

https://{your-distribution-url}.cloudfront.net/{api-stage}/pets

You should get the following output:

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Set up AWS WAF and create a web ACL

With the new CloudFront distribution in place, you can now start setting up AWS WAF to protect your API.

For this demo, you deploy the AWS WAF Security Automations solution, which provides fine-grained control over the requests attempting to access your API.

For more information about deployment, see Automated Deployment. If you prefer, you can launch the solution directly into your account using the following button.

For CloudFront Access Log Bucket Name, add the name of the bucket created during the deployment of the CloudFormation stack for your CloudFront distribution.

The solution allows you to adjust thresholds and also choose which automations to enable to protect your API. After you finish configuring these settings, choose Next.

To start the deployment process in your account, follow the creation wizard and choose Create. It takes a few minutes to finish the deployment. You can follow the creation process through the CloudFormation console.

After the deployment finishes, you can see the new web ACL deployed on the AWS WAF console, AWSWAFSecurityAutomations.

Attach the AWS WAF web ACL to the CloudFront distribution

With the solution deployed, you can now attach the AWS WAF web ACL to the CloudFront distribution that you created earlier.

To assign the newly created AWS WAF web ACL, go back to your CloudFront distribution. After you open your distribution for editing, choose General, Edit.

Select the new AWS WAF web ACL that you created earlier, AWSWAFSecurityAutomations.

Save the changes to your CloudFront distribution and wait for the deployment to finish.

Test AWS WAF protection

To validate the AWS WAF Web ACL setup, use Artillery to load test your API and see AWS WAF in action.

To install Artillery on your machine, run the following command:

$ npm install -g artillery

After the installation completes, you can check if Artillery installed successfully by running the following command:

$ artillery -V
1.6.0-12

At the time of publication, Artillery is on version 1.6.0-12.

One of the WAF web ACL rules that you have set up is a rate-based rule. By default, it is set up to block any requester that exceeds 2000 requests in a 5-minute period. Try this out.
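
If you’re curious what such a rule looks like outside the solution’s template, here’s a sketch using boto3 and the classic (global) AWS WAF API that CloudFront uses; the rule and metric names are placeholders, and in this walkthrough the AWS WAF Security Automations template, not this code, is what actually creates the rule:

import boto3

waf = boto3.client("waf")  # the global WAF API used with CloudFront distributions

# Classic WAF write operations require a change token.
token = waf.get_change_token()["ChangeToken"]

rule = waf.create_rate_based_rule(
    Name="HTTPFloodRateRule",
    MetricName="HTTPFloodRateRule",
    RateKey="IP",      # count requests per originating IP address
    RateLimit=2000,    # block IPs exceeding 2000 requests in a 5-minute period
    ChangeToken=token,
)
print(rule["Rule"]["RuleId"])  # the rule would then be added to the web ACL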

First, use cURL to query your distribution and see the API output:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets
[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Based on the test above, the result looks good. But what if you max out the 2000 requests in under 5 minutes?

Run the following Artillery command:

artillery quick -n 2000 --count 10  https://{distribution-name}.cloudfront.net/prod/pets

This command uses 10 concurrent virtual users to each fire 2000 requests at your API, which quickly exceeds the rate-based rule’s threshold. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try to run the cURL request again and see what happens:

 

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Request blocked.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: [removed]
</PRE>
<ADDRESS>
</ADDRESS>
</BODY></HTML>

As you can see from the output above, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after its request rate falls below the limit.

Conclusion

In this first part, you saw how to use the new API Gateway regional API endpoint together with Amazon CloudFront and AWS WAF to secure your API from a series of attacks.

In the second part, I will demonstrate some other techniques to protect your API using API keys and Amazon CloudFront custom headers.

AWS GDPR Data Processing Addendum – Now Part of Service Terms

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-gdpr-data-processing-addendum/

Today, we’re happy to announce that the AWS GDPR Data Processing Addendum (GDPR DPA) is now part of our online Service Terms. This means all AWS customers globally can rely on the terms of the AWS GDPR DPA which will apply automatically from May 25, 2018, whenever they use AWS services to process personal data under the GDPR. The AWS GDPR DPA also includes EU Model Clauses, which were approved by the European Union (EU) data protection authorities, known as the Article 29 Working Party. This means that AWS customers wishing to transfer personal data from the European Economic Area (EEA) to other countries can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA.

As we approach the GDPR enforcement date this week, this announcement is an important GDPR compliance component for us, our customers, and our partners. All customers that are using cloud services to process personal data will need to have a data processing agreement in place between them and their cloud services provider if they are to comply with GDPR. As early as April 2017, AWS announced that it had a GDPR-ready DPA available for its customers. In this way, we started offering our GDPR DPA to customers over a year before the May 25, 2018 enforcement date. Now, with the DPA terms included in our online service terms, there is no extra engagement needed by our customers and partners to be compliant with the GDPR requirement for data processing terms.

The AWS GDPR DPA also provides our customers with a number of other important assurances, such as the following:

  • AWS will process customer data only in accordance with customer instructions.
  • AWS has implemented and will maintain robust technical and organizational measures for the AWS network.
  • AWS will notify its customers of a security incident without undue delay after becoming aware of the security incident.
  • AWS will make available certificates issued in relation to the ISO 27001 certification, the ISO 27017 certification, and the ISO 27018 certification to further help customers and partners in their own GDPR compliance activities.

Customers who have already signed an offline version of the AWS GDPR DPA can continue to rely on that GDPR DPA. By incorporating our GDPR DPA into the AWS Service Terms, we are simply extending the terms of our GDPR DPA to all customers globally who will require it under GDPR.

AWS GDPR DPA is only part of the story, however. We are continuing to work alongside our customers and partners to help them on their journey towards GDPR compliance.

If you have any questions about the GDPR or the AWS GDPR DPA, please contact your account representative, or visit the AWS GDPR Center at: https://aws.amazon.com/compliance/gdpr-center/

-Chad

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

Spring 2018 AWS SOC Reports are Now Available with 11 Services Added in Scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2018-aws-soc-reports-are-now-available-with-11-services-added-in-scope/

Since our last System and Organization Control (SOC) audit, our service and compliance teams have been working to increase the number of AWS services in scope, prioritized based on customer requests. Today, we’re happy to report that 11 services are newly SOC compliant, which is a 21 percent increase in the last six months.

With the addition of the following 11 new services, you can now select from a total of 62 SOC-compliant services. To see the full list, go to our Services in Scope by Compliance Program page:

• Amazon Athena
• Amazon QuickSight
• Amazon WorkDocs
• AWS Batch
• AWS CodeBuild
• AWS Config
• AWS OpsWorks Stacks
• AWS Snowball
• AWS Snowball Edge
• AWS Snowmobile
• AWS X-Ray

Our latest SOC 1, 2, and 3 reports covering the period from October 1, 2017 to March 31, 2018 are now available. The SOC 1 and 2 reports are available on-demand through AWS Artifact by logging into the AWS Management Console. The SOC 3 report can be downloaded here.

Finally, prospective customers can read our SOC 1 and 2 reports by reaching out to AWS Compliance.

Want more AWS Security news? Follow us on Twitter.