Tag Archives: Customer stories

Serverless @ re:Invent 2017

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/serverless-reinvent-2017/

At re:Invent 2014, we announced AWS Lambda, what is now the center of the serverless platform at AWS, and helped ignite the trend of companies building serverless applications.

This year, at re:Invent 2017, the topic of serverless was everywhere. We were incredibly excited to see the energy from everyone attending 7 workshops, 15 chalk talks, 20 skills sessions and 27 breakout sessions. Many of these sessions were repeated due to high demand, so we are happy to summarize and provide links to the recordings and slides of these sessions.

Over the course of the week leading up to and then the week of re:Invent, we also had over 15 new features and capabilities across a number of serverless services, including AWS Lambda, Amazon API Gateway, AWS Lambda@Edge, AWS SAM, and the newly announced AWS Serverless Application Repository!

AWS Lambda

Amazon API Gateway

  • Amazon API Gateway Supports Endpoint Integrations with Private VPCs – You can now provide access to HTTP(S) resources within your VPC without exposing them directly to the public internet. This includes resources available over a VPN or Direct Connect connection!
  • Amazon API Gateway Supports Canary Release Deployments – You can now use canary release deployments to gradually roll out new APIs. This helps you more safely roll out API changes and limit the blast radius of new deployments.
  • Amazon API Gateway Supports Access Logging – The access logging feature lets you generate access logs in different formats such as CLF (Common Log Format), JSON, XML, and CSV. The access logs can be fed into your existing analytics or log processing tools so you can perform more in-depth analysis or take action in response to the log data.
  • Amazon API Gateway Customize Integration Timeouts – You can now set a custom timeout for your API calls as low as 50 ms and as high as 29 seconds (the default is 29 seconds).
  • Amazon API Gateway Supports Generating SDK in Ruby – This is in addition to support for SDKs in Java, JavaScript, Android and iOS (Swift and Objective-C). The SDKs that Amazon API Gateway generates save you development time and come with a number of prebuilt capabilities, such as working with API keys, exponential backoff, and exception handling.
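
To make the canary release and custom timeout items above concrete, here is a minimal sketch using the AWS SDK for Python (boto3); the REST API, resource, and stage identifiers are placeholders you would replace with your own:

import boto3

apigw = boto3.client('apigateway')

# Shorten the integration timeout for a latency-sensitive backend call
# (value is in milliseconds; 50–29,000 ms is the allowed range).
apigw.put_integration(
    restApiId='a1b2c3d4e5',            # placeholder REST API ID
    resourceId='res123',               # placeholder resource ID
    httpMethod='GET',
    type='HTTP_PROXY',
    integrationHttpMethod='GET',
    uri='https://internal.example.com/orders',
    timeoutInMillis=5000,
)

# Create a deployment as a canary that receives 10% of traffic on the prod stage.
apigw.create_deployment(
    restApiId='a1b2c3d4e5',
    stageName='prod',
    canarySettings={'percentTraffic': 10.0, 'useStageCache': False},
)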

AWS Serverless Application Repository

Serverless Application Repository is a new service (currently in preview) that aids in the publication, discovery, and deployment of serverless applications. With it you’ll be able to find shared serverless applications that you can launch in your account, while also sharing ones that you’ve created for others to do the same.

AWS Lambda@Edge

Lambda@Edge now supports content-based dynamic origin selection, network calls from viewer events, and advanced response generation. This combination of capabilities greatly increases the use cases for Lambda@Edge, such as allowing you to send requests to different origins based on request information, showing selective content based on authentication, and dynamically watermarking images for each viewer.
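
As a rough illustration of content-based origin selection, here is a minimal sketch of an origin-request handler (shown in Python for consistency with the other examples in this archive; Lambda@Edge launched with Node.js support). The domain name is hypothetical, and the sketch assumes the distribution is already configured with a custom origin:

def handler(event, context):
    request = event['Records'][0]['cf']['request']

    # Route image requests to a dedicated origin; all other requests
    # continue to the origin configured on the distribution.
    if request['uri'].startswith('/images/'):
        domain = 'images.example.com'  # hypothetical alternate origin
        request['origin']['custom']['domainName'] = domain
        request['headers']['host'] = [{'key': 'Host', 'value': domain}]

    return request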

AWS SAM

Twitch Launchpad live announcements

Other service announcements

Here are some of the other highlights that you might have missed. We think these could help you make great applications:

AWS re:Invent 2017 sessions

Coming up with the right mix of talks for an event like this can be quite a challenge. The Product, Marketing, and Developer Advocacy teams for Serverless at AWS spent weeks reading through dozens of talk ideas to boil it down to the final list.

From feedback at other AWS events and webinars, we knew that customers were looking for talks that focused on concrete examples of solving problems with serverless, on how to perform common tasks such as deployment, CI/CD, monitoring, and troubleshooting, and on customer and partner examples of solving real-world problems. To that end, we tried to settle on a good mix based on attendee experience and to provide a track full of rich content.

Below are the recordings and slides of breakout sessions from re:Invent 2017. We’ve organized them for those getting started, those who are already beginning to build serverless applications, and the experts out there already running them at scale. Some of the videos and slides haven’t been posted yet, and so we will update this list as they become available.

Find the entire Serverless Track playlist on YouTube.

Talks for people new to Serverless

Advanced topics

Expert mode

Talks for specific use cases

Talks from AWS customers & partners

Looking to get hands-on with Serverless?

At re:Invent, we delivered instructor-led skills sessions to help attendees new to serverless applications get started quickly. The content from these sessions is already online and you can do the hands-on labs yourself!
Build a Serverless web application

Still looking for more?

We also recently completely overhauled the main Serverless landing page for AWS. This includes a new Resources page containing case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

AWS HIPAA Eligibility Update (October 2017) – Sixteen Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-post-update-october-2017-sixteen-additional-services/

Our Health Customer Stories page lists just a few of the many customers that are building and running healthcare and life sciences applications that run on AWS. Customers like Verge Health, Care Cloud, and Orion Health trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

Sixteen More Services
In my last HIPAA Eligibility Update I shared the news that we added eight additional services to our list of HIPAA eligible services. Today I am happy to let you know that we have added another sixteen services to the list, bringing the total up to 46. Here are the newest additions, along with some short descriptions and links to some of my blog posts to jog your memory:

Amazon Aurora with PostgreSQL Compatibility – This brand-new addition to Amazon Aurora allows you to encrypt your relational databases using keys that you create and manage through AWS Key Management Service (KMS). When you enable encryption for an Amazon Aurora database, the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Read New – Encryption at Rest for Amazon Aurora to learn more.
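
For example, a minimal boto3 sketch of launching an encrypted Aurora PostgreSQL cluster might look like the following (the identifiers, credentials, and KMS key alias are placeholders):

import boto3

rds = boto3.client('rds')

rds.create_db_cluster(
    DBClusterIdentifier='example-cluster',      # placeholder
    Engine='aurora-postgresql',
    MasterUsername='dbadmin',                   # placeholder
    MasterUserPassword='choose-a-strong-password',
    StorageEncrypted=True,                      # encrypts storage, backups, read replicas, and snapshots
    KmsKeyId='alias/my-aurora-key',             # a key you create and manage in AWS KMS
)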

Amazon CloudWatch Logs – You can use the logs to monitor and troubleshoot your systems and applications. You can monitor your existing system, application, and custom log files in near real-time, watching for specific phrases, values, or patterns. Log data can be stored durably and at low cost, for as long as needed. To learn more, read Store and Monitor OS & Application Log Files with Amazon CloudWatch and Improvements to CloudWatch Logs and Dashboards.
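
As a small sketch of watching for a specific phrase, the following boto3 call (with a hypothetical log group and namespace) creates a metric filter that counts occurrences of the word ERROR so you can alarm on it:

import boto3

logs = boto3.client('logs')

logs.put_metric_filter(
    logGroupName='/app/web',                 # hypothetical log group
    filterName='error-count',
    filterPattern='ERROR',                   # match log events containing this term
    metricTransformations=[{
        'metricName': 'WebAppErrors',
        'metricNamespace': 'MyApp',
        'metricValue': '1',                  # emit 1 per matching event
    }],
)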

Amazon Connect – This self-service, cloud-based contact center makes it easy for you to deliver better customer service at a lower cost. You can use the visual designer to set up your contact flows, manage agents, and track performance, all without specialized skills. Read Amazon Connect – Customer Contact Center in the Cloud and New – Amazon Connect and Amazon Lex Integration to learn more.

Amazon ElastiCache for Redis – This service lets you deploy, operate, and scale an in-memory data store or cache that you can use to improve the performance of your applications. Each ElastiCache for Redis cluster publishes key performance metrics to Amazon CloudWatch. To learn more, read Caching in the Cloud with Amazon ElastiCache and Amazon ElastiCache – Now With a Dash of Redis.

Amazon Kinesis Streams – This service allows you to build applications that process or analyze streaming data such as website clickstreams, financial transactions, social media feeds, and location-tracking events. To learn more, read Amazon Kinesis – Real-Time Processing of Streaming Big Data and New: Server-Side Encryption for Amazon Kinesis Streams.
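
For instance, a producer might push clickstream events into a stream with a few lines of boto3 (the stream name and payload are hypothetical):

import json
import boto3

kinesis = boto3.client('kinesis')

kinesis.put_record(
    StreamName='clickstream',           # hypothetical stream
    PartitionKey='user-1234',           # keeps one user's events on the same shard
    Data=json.dumps({'page': '/checkout', 'ts': '2017-12-01T12:00:00Z'}).encode(),
)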

Amazon RDS for MariaDB – This service lets you set up scalable, managed MariaDB instances in minutes, and offers high performance, high availability, and a simplified security model that makes it easy for you to encrypt data at rest and in transit. Read Amazon RDS Update – MariaDB is Now Available to learn more.

Amazon RDS for SQL Server – This service lets you set up scalable, managed Microsoft SQL Server instances in minutes, and also offers high performance, high availability, and a simplified security model. To learn more, read Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk and Amazon RDS for Microsoft SQL Server – Transparent Data Encryption (TDE).

Amazon Route 53 – This is a highly available Domain Name System (DNS) web service. It translates names like www.example.com into IP addresses. To learn more, read Moving Ahead with Amazon Route 53.

AWS Batch – This service lets you run large-scale batch computing jobs on AWS. You don’t need to install or maintain specialized batch software or build your own server clusters. Read AWS Batch – Run Batch Computing Jobs on AWS to learn more.

AWS CloudHSM – A cloud-based Hardware Security Module (HSM) for key storage and management at cloud scale. Designed for sensitive workloads, CloudHSM lets you manage your own keys using FIPS 140-2 Level 3 validated HSMs. To learn more, read AWS CloudHSM – Secure Key Storage and Cryptographic Operations and AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads.

AWS Key Management Service – This service makes it easy for you to create and control the encryption keys used to encrypt your data. It uses HSMs to protect your keys, and is integrated with AWS CloudTrail in order to provide you with a log of all key usage. Read New AWS Key Management Service (KMS) to learn more.
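
A minimal boto3 sketch of the create/encrypt/decrypt round trip (the key description and plaintext are placeholders):

import boto3

kms = boto3.client('kms')

key = kms.create_key(Description='example key for application data')
key_id = key['KeyMetadata']['KeyId']

# Encrypt a small piece of data directly with the key; each call is logged in CloudTrail.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b'example secret')['CiphertextBlob']

# Decrypt; KMS identifies the key from the ciphertext itself.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)['Plaintext']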

AWS Lambda – This service lets you run event-driven application or backend code without thinking about or managing servers. To learn more, read AWS Lambda – Run Code in the Cloud, AWS Lambda – A Look Back at 2016, and AWS Lambda – In Full Production with New Features for Mobile Devs.

Lambda@Edge – You can use this new feature of AWS Lambda to run Node.js functions across the global network of AWS locations without having to provision or manage servers, in order to deliver rich, personalized content to your users with low latency. Read Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge to learn more.

AWS Snowball Edge – This is a data transfer device with 100 terabytes of on-board storage as well as compute capabilities. You can use it to move large amounts of data into or out of AWS, as a temporary storage tier, or to support workloads in remote or offline locations. To learn more, read AWS Snowball Edge – More Storage, Local Endpoints, Lambda Functions.

AWS Snowmobile – This is an exabyte-scale data transfer service. Pulled by a semi-trailer truck, each Snowmobile packs 100 petabytes of storage into a ruggedized 45-foot long shipping container. Read AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks to learn more (and to see some of my finest LEGO work).

AWS Storage Gateway – This hybrid storage service lets your on-premises applications use AWS cloud storage (Amazon Simple Storage Service (S3), Amazon Glacier, and Amazon Elastic File System) in a simple and seamless way, with storage for volumes, files, and virtual tapes. To learn more, read The AWS Storage Gateway – Integrate Your Existing On-Premises Applications with AWS Cloud Storage and File Interface to AWS Storage Gateway.

And there you go! Check out my earlier post for a list of resources that will help you to build applications that comply with HIPAA and HITECH.

Jeff;

 

AWS HIPAA Eligibility Update (July 2017) – Eight Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-update-july-2017-eight-additional-services/

It is time for an update on our on-going effort to make AWS a great host for healthcare and life sciences applications. As you can see from our Health Customer Stories page, Philips, VergeHealth, and Cambia (to choose a few) trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

In May we announced that we added Amazon API Gateway, AWS Direct Connect, AWS Database Migration Service, and Amazon Simple Queue Service (SQS) to our list of HIPAA eligible services and discussed how our customers and partners are putting them to use.

Eight More Eligible Services
Today I am happy to share the news that we are adding another eight services to the list:

Amazon CloudFront can now be utilized to enhance the delivery and transfer of Protected Health Information data to applications on the Internet. By providing a completely secure and encryptable pathway, CloudFront can now be used as a part of applications that need to cache PHI. This includes applications for viewing lab results or imaging data, and those that transfer PHI from Healthcare Information Exchanges (HIEs).

AWS WAF can now be used to protect applications running on AWS which operate on PHI such as patient care portals, patient scheduling systems, and HIEs. Requests and responses containing encrypted PHI and PII can now pass through AWS WAF.

AWS Shield can now be used to protect web applications such as patient care portals and scheduling systems that operate on encrypted PHI from DDoS attacks.

Amazon S3 Transfer Acceleration can now be used to accelerate the bulk transfer of large amounts of research, genetics, informatics, insurance, or payer/payment data containing PHI/PII information. Transfers can take place between a pair of AWS Regions or from an on-premises system and an AWS Region.

Amazon WorkSpaces can now be used by researchers, informaticists, hospital administrators and other users to analyze, visualize or process PHI/PII data using on-demand Windows virtual desktops.

AWS Directory Service can now be used to connect the authentication and authorization systems of organizations that use or process PHI/PII to their resources in the AWS Cloud. For example, healthcare providers operating hybrid cloud environments can now use AWS Directory Services to allow their users to easily transition between cloud and on-premises resources.

Amazon Simple Notification Service (SNS) can now be used to send notifications containing encrypted PHI/PII as part of patient care, payment processing, and mobile applications.

Amazon Cognito can now be used to authenticate users into mobile patient portal and payment processing applications that use PHI/PII identifiers for accounts.

Additional HIPAA Resources
Here are some additional resources that will help you to build applications that comply with HIPAA and HITECH:

Keep in Touch
In order to make use of any AWS service in any manner that involves PHI, you must first enter into an AWS Business Associate Addendum (BAA). You can contact us to start the process.

Jeff;

Amazon EC2 Container Service – Launch Recap, Customer Stories, and Code

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ec2-container-service-launch-recap-customer-stories-and-code/

Today seems like a good time to recap some of the features that we have added to Amazon EC2 Container Service over the last year or so, and to share some customer success stories and code with you! The service makes it easy for you to run any number of Docker containers across a managed cluster of EC2 instances, with full console, API, CloudFormation, CLI, and PowerShell support. You can store your Linux and Windows Docker images in the EC2 Container Registry for easy access.

Launch Recap
Let’s start by taking a look at some of the newest ECS features and some helpful how-to blog posts that will show you how to use them:

Application Load Balancing – We added support for the application load balancer last year. This high-performance load balancing option runs at the application level and allows you to define content-based routing rules. It provides support for dynamic ports and can be shared across multiple services, making it easier for you to run microservices in containers. To learn more, read about Service Load Balancing.

IAM Roles for Tasks – You can secure your infrastructure by assigning IAM roles to ECS tasks. This allows you to grant permissions on a fine-grained, per-task basis, customizing the permissions to the needs of each task. Read IAM Roles for Tasks to learn more.
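
A minimal sketch of attaching a task role when registering a task definition with boto3 (the role ARN, image, and family name are placeholders):

import boto3

ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='orders-service',
    taskRoleArn='arn:aws:iam::123456789012:role/orders-task-role',  # permissions granted per task
    containerDefinitions=[{
        'name': 'orders',
        'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest',
        'memory': 512,
    }],
)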

Service Auto Scaling – You can define scaling policies that scale your services (tasks) up and down in response to changes in demand. You set the desired minimum and maximum number of tasks, create one or more scaling policies, and Service Auto Scaling will take care of the rest. The documentation for Service Auto Scaling will help you to make use of this feature.
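	
For example, a rough boto3 sketch of target-tracking scaling for a service (the cluster and service names are placeholders) looks like this:

import boto3

autoscaling = boto3.client('application-autoscaling')

# Allow the service's desired count to range between 2 and 10 tasks.
autoscaling.register_scalable_target(
    ServiceNamespace='ecs',
    ResourceId='service/my-cluster/my-service',
    ScalableDimension='ecs:service:DesiredCount',
    MinCapacity=2,
    MaxCapacity=10,
)

# Track 60% average CPU across the service's tasks.
autoscaling.put_scaling_policy(
    PolicyName='cpu-target-tracking',
    ServiceNamespace='ecs',
    ResourceId='service/my-cluster/my-service',
    ScalableDimension='ecs:service:DesiredCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 60.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ECSServiceAverageCPUUtilization',
        },
    },
)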

Blox – Scheduling, in a container-based environment, is the process of assigning tasks to instances. ECS gives you three options: automated (via the built-in Service Scheduler), manual (via the RunTask function), and custom (via a scheduler that you provide). Blox is an open source scheduler that supports a one-task-per-host model, with room to accommodate other models in the future. It monitors the state of the cluster and is well-suited to running monitoring agents, log collectors, and other daemon-style tasks.

Windows – We launched ECS with support for Linux containers and followed up with support for running Windows Server 2016 Base with Containers.

Container Instance Draining – From time to time you may need to remove an instance from a running cluster in order to scale the cluster down or to perform a system update. Earlier this year we added a set of lifecycle hooks that allow you to better manage the state of the instances. Read the blog post How to Automate Container Instance Draining in Amazon ECS to see how to use the lifecycle hooks and a Lambda function to automate the process of draining existing work from an instance while preventing new work from being scheduled for it.

CI/CD Pipeline with Code* – Containers simplify software deployment and are an ideal target for a CI/CD (Continuous Integration / Continuous Deployment) pipeline. The post Continuous Deployment to Amazon ECS using AWS CodePipeline, AWS CodeBuild, Amazon ECR, and AWS CloudFormation shows you how to build and operate a CI/CD pipeline using multiple AWS services.

CloudWatch Logs Integration – This launch gave you the ability to configure the containers that run your tasks to send log information to CloudWatch Logs for centralized storage and analysis. You simply install the Amazon ECS Container Agent and enable the awslogs log driver.
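
A rough sketch of the awslogs configuration in a task definition registered with boto3 (the log group, region, and image are placeholders):

import boto3

ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='orders-service',
    containerDefinitions=[{
        'name': 'orders',
        'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest',
        'memory': 512,
        'logConfiguration': {
            'logDriver': 'awslogs',                 # send container stdout/stderr to CloudWatch Logs
            'options': {
                'awslogs-group': '/ecs/orders',     # hypothetical log group (create it first)
                'awslogs-region': 'us-east-1',
                'awslogs-stream-prefix': 'orders',
            },
        },
    }],
)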

CloudWatch Events – ECS generates CloudWatch Events when the state of a task or a container instance changes. These events allow you to monitor the state of the cluster using a Lambda function. To learn how to capture the events and store them in an Elasticsearch cluster, read Monitor Cluster State with Amazon ECS Event Stream.

Task Placement Policies – This launch provided you with fine-grained control over the placement of tasks on container instances within clusters. It allows you to construct policies that include cluster constraints, custom constraints (location, instance type, AMI, and attribute), placement strategies (spread or bin pack) and to use them without writing any code. Read Introducing Amazon ECS Task Placement Policies to see how to do this!
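
As a sketch, the following boto3 call (the cluster, task definition, and attribute expression are placeholders) spreads tasks across Availability Zones and then bin-packs on memory, while constraining placement to a given instance family:

import boto3

ecs = boto3.client('ecs')

ecs.run_task(
    cluster='my-cluster',
    taskDefinition='orders-service:1',
    count=4,
    placementConstraints=[
        {'type': 'memberOf', 'expression': 'attribute:ecs.instance-type =~ m4.*'},
    ],
    placementStrategy=[
        {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},
        {'type': 'binpack', 'field': 'memory'},
    ],
)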

EC2 Container Service in Action
Many of our customers, from large enterprises to hot startups, across industries such as financial services, hospitality, and consumer electronics, are using Amazon ECS to run their microservices applications in production. Companies such as Capital One, Expedia, Okta, Riot Games, and Viacom rely on Amazon ECS.

Mapbox is a platform for designing and publishing custom maps. The company uses ECS to power its entire batch processing architecture, which collects and processes over 100 million miles of sensor data per day to power its maps. It also optimizes that batch processing architecture on ECS using Spot Instances. The Mapbox platform powers over 5,000 apps and reaches more than 200 million users each month. Its backend runs on ECS, allowing it to serve more than 1.3 billion requests per day. To learn more about their recent migration to ECS, read their recent blog post, We Switched to Amazon ECS, and You Won’t Believe What Happened Next.

Travel company Expedia designed their backends with a microservices architecture. As Docker gained popularity, they decided to adopt it for its faster deployments and environment portability. They chose ECS to orchestrate all of their containers because of its tight integration with the AWS platform, everything from ALB to IAM roles to VPC integration, which made ECS easy to use with their existing AWS infrastructure. ECS greatly reduced the heavy lifting of deploying and running containerized applications. Expedia runs 75% of all its apps on AWS in ECS, allowing it to process 4 billion requests per hour. Read Kuldeep Chowhan’s blog post, How Expedia Runs Hundreds of Applications in Production Using Amazon ECS, to learn more.

Realtor.com provides home buyers and sellers with a comprehensive database of properties that are currently for sale. Their move to AWS and ECS has helped them to support business growth that now numbers 50 million unique monthly users who drive up to 250,000 requests per second at peak times. ECS has helped them to deploy their code more quickly while increasing utilization of their cloud infrastructure. Read the Realtor.com Case Study to learn more about how they use ECS, Kinesis, and other AWS services.

Instacart talks about how they use ECS to power their same-day grocery delivery service:

Capital One talks about how they use ECS to automate their operations and their infrastructure management:

Code
Clever developers are using ECS as a base for their own work. For example:

Rack is an open source PaaS (Platform as a Service). It focuses on infrastructure automation, runs in an isolated VPC, and uses a single-tenant build service for security.

Empire is also an open source PaaS. It provides a Heroku-like workflow and is targeted at small and medium sized startups, with an emphasis on microservices.

Cloud Container Cluster Visualizer (c3vis) helps to visualize resource utilization within ECS clusters:

Stay Tuned
We have plenty of new features in the works for ECS, so stay tuned!

Jeff;

 

DevOps and Continuous Delivery at re:Invent 2016 – Wrap-up

Post Syndicated from Frank Li original https://aws.amazon.com/blogs/devops/devops-and-continuous-delivery-at-reinvent-2016-wrap-up/

The AWS re:Invent 2016 conference was packed with some exciting announcements and sessions around DevOps and Continuous Delivery. We launched AWS CodeBuild, a fully managed build service that eliminates the need to provision, manage, and scale your own build servers. You now have the ability to run your continuous integration and continuous delivery process entirely on AWS by plugging AWS CodeBuild into AWS CodePipeline, which automates building, testing, and deploying code each time you push a change to your source repository. If you are interested in learning more about AWS CodeBuild, you can sign up for the webinar on January 20th here.

The DevOps track had over 30 different breakout sessions ranging from customer stories to deep dive talks to best practices. If you weren’t able to attend the conference or missed a specific session, here is a link to the entire playlist.

 

There were a number of talks that can help you get started with your own DevOps practices for rapid software delivery. Here are some introductory sessions to give you the proper background:
DEV201: Accelerating Software Delivery with AWS Developer Tools
DEV211: Automated DevOps and Continuous Delivery

After you understand the big picture, you can dive into automating your software delivery. Here are some sessions on how to deploy your applications:
DEV310: Choosing the Right Software Deployment Technique
DEV403: Advanced Continuous Delivery Techniques
DEV404: Develop, Build, Deploy, and Manage Services and Applications

Finally, to maximize your DevOps efficiency, you’ll want to automate the provisioning of your infrastructure. Here are a couple sessions on how to manage your infrastructure:
DEV313: Infrastructure Continuous Delivery Using AWS CloudFormation
DEV319: Automating Cloud Management & Deployment

If you’re a Lambda developer, be sure to watch this session and read this documentation on how to practice continuous delivery for your serverless applications:
SVR307: Application Lifecycle Management in a Serverless World

For all 30+ DevOps sessions, click here.

AWS Week in Review – September 5, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-5-2016/

This is the third community-driven edition of the AWS Week in Review. Special thanks are due to the 15 internal and external contributors who helped to make this happen. If you would like to contribute, please take a look at the AWS Week in Review on GitHub.

Monday

September 5

Tuesday

September 6

Wednesday

September 7

Thursday

September 8

Friday

September 9

Saturday

September 10

Sunday

September 11

New & Notable Open Source

  • s3logs-cloudwatch is a Lambda function parsing S3 server access log files and putting extra bucket metrics in CloudWatch.
  • README.md is a curated list of AWS resources used to prepare for AWS certifications.
  • RedEye is a utility to monitor Redshift performance.
  • Dockerfile will build a Docker image, push it to the EC2 Container Registry, and deploy it to Elastic Beanstalk.
  • lambda-contact-form supports contact form posts from static websites hosted on S3/CloudFront.
  • dust is an SSH cluster shell for EC2.
  • aws-ssh-scp-connector is a utility to help connect to EC2 instances.
  • lambda-comments is a blog commenting system built with Lambda.

New SlideShare Presentations

New YouTube Videos

New Customer Stories

  • MYOB uses AWS to scale its infrastructure to support demand for new services and saves up to 30 percent by shutting down unused capacity and using Reserved Amazon EC2 Instances. MYOB provides business management software to about 1.2 million organizations in Australia and New Zealand. MYOB uses a wide range of AWS services, including Amazon Machine Learning to build smart applications incorporating predictive analytics and AWS CloudFormation scripts to create new AWS environments in the event of a disaster.
  • PATI Games needed IT solutions that would guarantee the stability and scalability of their game services for global market penetration, and AWS provided them with the most safe and cost-efficient solution. PATI Games is a Korean company primarily engaged in the development of games based on SNS platforms. AWS services including Amazon EC2, Amazon RDS (Aurora), and Amazon CloudFront enable PATI Games to maintain high reliability, decrease latency, and eventually boost customer satisfaction.
  • Rabbi Interactive scales to support a live-broadcast, second-screen app and voting system for hundreds of thousands of users, gives home television viewers real-time interactive capabilities, and reduces monthly operating costs by 60 percent by using AWS. Based in Israel, the company provides digital experiences such as second-screen apps used to interact with popular television shows such as “Rising Star” and “Big Brother.” Rabbi Interactive worked with AWS partner CloudZone to develop an interactive second-screen platform.

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Watch the AWS Summit – New York Keynote on August 11

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1CZBVD57LTTD8/Watch-the-AWS-Summit-New-York-Keynote-on-August-11

Join us online Thursday, August 11, at 10:00 A.M. Eastern Time for the AWS Summit – New York Keynote! This keynote presentation, given by Dr. Werner Vogels, Amazon CTO and Vice President, will highlight the newest AWS services and features as well as select customer stories. Don’t miss it!

If you are in the New York area and would like to attend the Summit in person, register now to attend.

Learn more about the sessions offered at AWS Summit – New York, in addition to the hands-on training opportunities in five, full-day paid training bootcamps and free hands-on labs.

– Craig

P.S. If you have Summit questions, please contact [email protected]

Register Now for AWS Summit – New York

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/TxUPB50NPUS6JM/Register-Now-for-AWS-Summit-New-York

The AWS Summit – New York is just around the corner! If you are planning to attend August 10-11, register now because seats are limited. This year’s keynote speaker is Dr. Werner Vogels, Amazon CTO and Vice President, and he will highlight the newest AWS features, services, and customer stories.

Choose one of our full-day bootcamps to get the most out of your Summit experience:

  • AWS Technical Essentials (Introductory Level)
  • Securing Cloud Workloads with DevOps Automation (Expert Level)
  • Build a Serverless, Location-Aware, Search & Recommendations-Enabled Application (Expert Level)
  • Taking AWS Operations to the Next Level (Expert Level)

Stay connected: Join the conversation on Twitter using #AWSSummit, and on Facebook.

We look forward to seeing you in New York!

– Craig

Watch the AWS Summit – Santa Clara Keynote in Real Time on July 13

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1UMV1L79BHDWJ/Watch-the-AWS-Summit-Santa-Clara-Keynote-in-Real-Time-on-July-13

Join us online Wednesday, July 13, at 10:00 A.M. Pacific Time for the AWS Summit – Santa Clara Livestream! This keynote presentation, given by Dr. Matt Wood, AWS General Manager of Product Strategy, will highlight the newest AWS features and services, and select customer stories. Don’t miss this live presentation!

Join us in person at the Santa Clara Convention Center
If you are in the Santa Clara area and would like to attend the free Summit, you still have time. Register now to attend.

The Summit includes:

  • More than 50 technical sessions, including these security-related sessions:

    • Automating Security Operations in AWS (Deep Dive)
    • Securing Cloud Workloads with DevOps Automation
    • Deep Dive on AWS IoT
    • Getting Started with AWS Security (Intro)
    • Network Security and Access Control within AWS (Intro)
  • Training opportunities in Hands-on Labs.
  • Full-day training bootcamps. Registration is $600.
  • The opportunity to learn best practices and get questions answered from AWS engineers, expert customers, and partners.
  • Networking opportunities with your cloud and IT peers.

– Craig 

P.S. Can’t make the Santa Clara event? Check out our other AWS Summit locations. If you have summit questions, please contact us at [email protected]

How to Prevent Uploads of Unencrypted Objects to Amazon S3

Post Syndicated from Michael St. Onge original https://blogs.aws.amazon.com/security/post/Tx2R0GFOXFYEDM5/How-to-Prevent-Uploads-of-Unencrypted-Objects-to-Amazon-S3

There are many use cases for preventing uploads of unencrypted objects to an Amazon S3 bucket, but the underlying objective is the same: to protect the confidentiality and integrity of the objects stored in that bucket. AWS provides several services that help make this process easier, such as AWS Identity and Access Management (IAM) and AWS Key Management Service (KMS). By using an S3 bucket policy, you can enforce the encryption requirement when users upload objects, instead of assigning a restrictive IAM policy to all users.

In this blog post, I will show you how to create an S3 bucket policy that prevents users from uploading unencrypted objects, unless they are using server-side encryption with S3–managed encryption keys (SSE-S3) or server-side encryption with AWS KMS–managed keys (SSE-KMS).

Encryption primer

When thinking about S3 and encryption, remember that you do not “encrypt S3” or “encrypt an S3 bucket.” Instead, S3 encrypts your data at the object level as it writes to disks in AWS data centers, and decrypts it for you when you access it. You can encrypt objects by using client-side encryption or server-side encryption. Client-side encryption occurs when an object is encrypted before you upload it to S3, and the keys are not managed by AWS. With server-side encryption, the encryption keys are handled in one of three ways:

  • Server-side encryption with customer-provided encryption keys (SSE-C).
  • SSE-S3.
  • SSE-KMS.

Server-side encryption is about data encryption at rest—that is, S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects.

S3 uses a concept called envelope encryption to protect data at rest. Each object is encrypted with a unique key employing strong multi-factor encryption. As an additional safeguard, Amazon encrypts the key itself with a master key. S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data. The following diagram illustrates the encryption solutions discussed in this blog post.

Solution overview

To upload an object to S3, you use a Put request, regardless of whether it is made via the console, CLI, or SDK. The Put request looks similar to the following.

PUT /example-object HTTP/1.1
Host: myBucket.s3.amazonaws.com
Date: Wed, 8 Jun 2016 17:50:00 GMT
Authorization: authorization string
Content-Type: text/plain
Content-Length: 11434
x-amz-meta-author: Janet
Expect: 100-continue
[11434 bytes of object data]

To encrypt an object at the time of upload, you need to add a header called x-amz-server-side-encryption to the request to tell S3 to encrypt the object using SSE-C, SSE-S3, or SSE-KMS. The following code example shows a Put request using SSE-S3.

PUT /example-object HTTP/1.1
Host: myBucket.s3.amazonaws.com
Date: Wed, 8 Jun 2016 17:50:00 GMT
Authorization: authorization string  
Content-Type: text/plain
Content-Length: 11434
x-amz-meta-author: Janet
Expect: 100-continue
x-amz-server-side-encryption: AES256
[11434 bytes of object data]

In order to enforce object encryption, create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption header. There are two possible values for the x-amz-server-side-encryption header: AES256, which tells S3 to use S3-managed keys, and aws:kms, which tells S3 to use AWS KMS–managed keys.
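
For reference, here is a minimal boto3 sketch of what compliant and non-compliant uploads look like once such a policy is in place (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')

# Allowed under the SSE-S3 policy: the header is sent with the value AES256.
s3.put_object(Bucket='<bucket_name>', Key='example-object',
              Body=b'example data', ServerSideEncryption='AES256')

# Denied once the policy is attached: no x-amz-server-side-encryption header is sent,
# so S3 returns an AccessDenied error.
s3.put_object(Bucket='<bucket_name>', Key='example-object', Body=b'example data')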

Background for use case #1: Using SSE-S3 managed keys

In the first use case, you enforce the use of SSE-S3, which allows S3 to manage the master keys:

  1. As each object is uploaded, a data encryption key is generated and the object is encrypted with the data encryption key using the AES256 block cipher.
  2. The data encryption key is encrypted with a master key maintained by Amazon.
  3. The encrypted object and the encrypted data encryption key are stored together on S3, and the master key is stored separately by Amazon.
  4. The object is protected because the object can only be decrypted using the data encryption key, which is itself encrypted with the master key. Amazon regularly rotates the master key for additional security.

SSE-S3 is a good solution to protect data when you are not required to manage the master key. A sample S3 bucket policy that implements the solution is shown in the following implementation section. The policy needs two conditions in order to deny unencrypted uploads. The first condition denies the upload if the s3:x-amz-server-side-encryption key is present with any value other than AES256. The second condition denies the upload if the s3:x-amz-server-side-encryption key is missing (a Null value).

Implementing use case #1: Using SSE-S3 managed keys

To implement this policy, navigate to the S3 console and follow these steps:

  1. Choose the target bucket in the left pane.
  2. Expand Permissions in the right pane, and choose Edit bucket policy.
  3. Copy the following policy, paste it in that bucket policy box, and then click Save. (Throughout this solution, be sure to replace <bucket_name> with the actual name of your bucket.)
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": true
                }
            }
        }
    ]
}

You have now created an S3 bucket policy that will deny any Put requests that do not include a header to encrypt the object using SSE-S3.

Background for use case #2: Using SSE-KMS managed keys

In the second use case, you need not only to force object encryption, but also to manage the lifecycle of the encryption keys. You use KMS to create encryption keys centrally, define the policies that control how keys can be used, and audit key usage to prove keys are being used correctly. You can use these keys to protect your data in S3 buckets. The first time you add an SSE-KMS–encrypted object to a bucket in a region, a default customer master key (CMK) is created for you automatically. The CMK is used for SSE-KMS encryption, unless you select a CMK that you created separately using KMS. Creating your own CMK can give you more flexibility, including the ability to create, rotate, disable, and define access controls, and to audit the encryption keys used to protect your data.

KMS provides the key management infrastructure that you can use to generate and manage your own encryption keys:

  1. As each object is uploaded, a data encryption key is generated and the object is encrypted with that data encryption key by using the AES256 block cipher.
  2. The data encryption key is then encrypted with a KMS CMK that is managed by you via KMS, and the encrypted object and the encrypted data encryption key are stored together.
  3. The object is protected because the object can only be decrypted using the data encryption key, which is itself encrypted with the CMK. You will be responsible for maintaining the lifecycle of your CMK, including key rotation. 
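
A minimal boto3 sketch of an upload that satisfies the SSE-KMS policy follows (the key alias is a placeholder; omit SSEKMSKeyId to use the default CMK for S3 in your account):

import boto3

s3 = boto3.client('s3')

s3.put_object(
    Bucket='<bucket_name>',
    Key='example-object',
    Body=b'example data',
    ServerSideEncryption='aws:kms',      # sets x-amz-server-side-encryption: aws:kms
    SSEKMSKeyId='alias/my-app-key',      # a CMK you created and manage in KMS
)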

Implementing use case #2: Using SSE-KMS managed keys

To implement use case #2, follow the steps described in use case #1 and substitute the following bucket policy. The logic behind the bucket policy solution for our second use case is similar to the first use case, but instead of looking for the x-amz-server-side-encryption header with the value of AES256, you will look for the same header with the value of aws:kms, as shown in the following policy.

{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": true
                }
            }
        }
    ]
}

Testing the solutions

To test the solutions, you can use the IAM policy simulator to ensure each policy in this post works as intended. Because the IAM policy simulator includes support for AWS CLI and AWS SDK, you can automate the testing process. For a more detailed overview of the IAM policy simulator and how to test resource policies, see Testing IAM Policies with the IAM Policy Simulator and Verify Resource-Based Permissions Using the IAM Policy Simulator.
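
As a sketch of that automation, the following boto3 call simulates a PutObject against the bucket policy with the encryption header set to AES256 (the user ARN and bucket name are placeholders, and the bucket policy from the previous section is assumed to be saved in a local file):

import boto3

iam = boto3.client('iam')
bucket_policy_json = open('bucket_policy.json').read()   # the bucket policy from the previous section

response = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:user/test-user',   # placeholder user
    ActionNames=['s3:PutObject'],
    ResourceArns=['arn:aws:s3:::<bucket_name>/example-object'],
    ResourcePolicy=bucket_policy_json,
    ContextEntries=[{
        'ContextKeyName': 's3:x-amz-server-side-encryption',
        'ContextKeyValues': ['AES256'],
        'ContextKeyType': 'string',
    }],
)

print(response['EvaluationResults'][0]['EvalDecision'])   # 'allowed', 'explicitDeny', or 'implicitDeny'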

To access the IAM policy simulator, navigate to the IAM console and select Policy Simulator under Additional Information on the right side of the console. To set up the IAM policy simulator for testing:

  1. Choose a user from the left pane (see the following screenshot). 
  2. Ensure Mode is set to Existing Policies.

  3. Ensure the user has an IAM policy like the following attached, and that you have selected the Policy check box.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<bucket_name>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::<bucket_name>/*",
        }
    ]
}
  4. Select S3 from the list of services, as shown in the following screenshot.
  5. Select PutObject from the Actions drop-down list.
  6. Expand Amazon S3 under Action Settings and Results to expose the ARN field, as shown in the following screenshot.
  7. Type the ARN of the S3 bucket.
  8. Be sure to select the check box for Include Resource Policy.

You are now ready to begin testing the solutions. For each policy, you need to test three possible variations on an S3 Put request. The following table outlines the test conditions and expected results.  

Bucket policy   Service   Action      ARN                           s3:x-amz-server-side-encryption   Expected results
Force SSE-S3    S3        PutObject   arn:aws:s3:::<bucket_name>/   Blank                             Denied
Force SSE-S3    S3        PutObject   arn:aws:s3:::<bucket_name>/   AES256                            Allowed
Force SSE-S3    S3        PutObject   arn:aws:s3:::<bucket_name>/   aws:kms                           Denied
Force SSE-KMS   S3        PutObject   arn:aws:s3:::<bucket_name>/   Blank                             Denied
Force SSE-KMS   S3        PutObject   arn:aws:s3:::<bucket_name>/   AES256                            Denied
Force SSE-KMS   S3        PutObject   arn:aws:s3:::<bucket_name>/   aws:kms                           Allowed
  1. Simulate making a PutObject call without supplying a value for s3:x-amz-server-side-encryption.
  2. Click Run Simulation.
  3. As expected by looking at the preceding table, the result is denied, as shown in the following screenshot.
  4. Simulate a value of AES256 for SSE-S3.
  5. Click Run Simulation to see that the result is allowed, as shown in the following screenshot.

  6. Last, simulate a value of aws:kms for SSE-KMS.
  7. Click Run Simulation and see that the result is denied.

The results should align with the preceding conditions table. To dive deeper into why the Action was allowed or denied, click the Show statement link (highlighted in the following screenshot) to see which policy allowed or denied the action.

Repeat each test condition to validate that the bucket policies match the expected results for each use case. The IAM policy simulator helps you understand, test, and validate how your resource-based policies and IAM policies work together to grant or deny access to AWS resources. For more information about troubleshooting policies, see Troubleshoot IAM Policies.

Conclusion

In this post, I have demonstrated how to create an S3 bucket policy that prevents unencrypted objects from being uploaded unless they are using SSE-S3 or SSE-KMS. Most importantly, I showed how to test this S3 bucket policy by using the IAM policy simulator to validate the policy.

If you have comments, submit them in the “Comments” section below. If you have questions, start a new thread on the IAM forum.

– Michael

Month in Review: March 2016

Post Syndicated from Andy Werth original https://blogs.aws.amazon.com/bigdata/post/Tx2AHF9X25N6M7U/Month-in-Review-March-2016

March provided another full slate of big data solutions on the AWS Big Data Blog! Take a look at the summaries below for something that catches your interest and share with anyone who’s interested in big data.

Will Spark Power the Data behind Precision Medicine?
Spark is already known for being a major player in big data analysis, but it is also uniquely capable of advancing genomics algorithms, given the complex nature of genomics research. This post introduces gene analysis using Spark on EMR and ADAM, for those new to precision medicine.

Crunching Statistics at Scale with SparkR on Amazon EMR
SparkR is an R package that allows you to integrate complex statistical analysis with large datasets. In this post, we introduce you to running R with the Apache SparkR project on Amazon EMR.

AWS Big Data Meetup March 31 in San Francisco: Intro to SparkR and breakout discussions
The guest speaker was Cory Dolphin from Twitter, who talked about Answers, Fabric’s real-time analytics product, which processes billions of events in real time using Twitter’s new stream processing engine, Heron. Chris Crosbie, a Solutions Architect with AWS and a statistician by training, talked about how easy and interactive cloud computing is with SparkR on Amazon EMR.

Anomaly Detection Using PySpark, Hive, and Hue on Amazon EMR
We are surrounded by more and more sensors – some of which we’re not even consciously aware. As sensors become cheaper and easier to connect, they create an increasing flood of data that’s getting cheaper and easier to store and process. This post walks through the three major steps of anomaly detection: clustering the data, choosing the number of clusters, and detecting probable anomalies.

Import Zeppelin notes from GitHub or JSON in Zeppelin 0.5.6 on Amazon EMR
With the latest Zeppelin release (0.5.6) included on Amazon EMR release 4.4.0, you can now import notes using links to S3 JSON files, raw file URLs in GitHub, or local files.

Analyze a Time Series in Real Time with AWS Lambda, Amazon Kinesis and Amazon DynamoDB Streams
As more devices, sensors and web servers continuously collect real-time streaming data, there is a growing need to analyze, understand and react to events as they occur, rather than waiting for a report that is generated the next day. This post explains how to perform time-series analysis on a stream of Amazon Kinesis records, without the need for any servers or clusters, using AWS Lambda, Amazon Kinesis Streams, Amazon DynamoDB and Amazon CloudWatch.

Big Data Website Gets a Big Makeover at AWS
We have completely redesigned the pages and updated them with some of the most common use cases, tutorials, and resources to get you started, along with customer stories and videos so that you can learn from what other organizations are doing.

Analyze Your Data on Amazon DynamoDB with Apache Spark
Every day, tons of customer data is generated, such as website logs, gaming data, advertising data, and streaming videos. Many companies capture this information as it is generated and process it in real time to understand their customers. This post shows you how to use Apache Spark to process customer data in Amazon DynamoDB.

FROM THE ARCHIVE

Querying Amazon Kinesis Streams Directly with SQL and Spark Streaming (January 14, 2016)

———————————————–

Want to learn more about Big Data or Streaming Data? Check out our Big Data and Streaming data educational pages.

Big Data Website Gets a Big Makeover at AWS

Post Syndicated from Jorge A. Lopez original https://blogs.aws.amazon.com/bigdata/post/Tx2M06SI81S7TG8/Big-Data-Website-Gets-a-Big-Makeover-at-AWS

Jorge A. Lopez is responsible for Big Data Solutions Marketing at AWS

The big data ecosystem is evolving at a tremendous pace, giving rise to a plethora of tools, use cases, and applications. The new AWS Big Data website is now the ideal starting point to learn about new and existing capabilities, and the services you can leverage to build and deploy your big data applications.

We have completely redesigned the pages and updated them with some of the most common use cases, tutorials, and resources to get you started, along with customer stories and videos so that you can learn from what other organizations are doing.

If you haven’t visited the AWS Big Data website recently, check it out. I hope you find it helpful as a reference to all things big data on AWS. Share it with colleagues and customers to help spread the word and don’t forget to send us your feedback to help make it even better.

 

——————————————–

Looking to learn about Streaming Data? Check out our Streaming data educational page.

Related

Big Data Analytics Options on AWS: Updated White Paper

In Case You Missed These: AWS Security Blog Posts from January and February

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1J9OK26Z1WA3L/In-Case-You-Missed-These-AWS-Security-Blog-Posts-from-January-and-February

In case you missed any of the AWS Security Blog posts from January and February, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from using AWS WAF to automating HIPAA compliance.

February

February 29, AWS Compliance Announcement: Announcing Industry Best Practices for Securing AWS Resources
We are happy to announce that the Center for Internet Security (CIS) has published the CIS AWS Foundations Benchmark, a set of security configuration best practices for AWS. These industry-accepted best practices go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment procedures. This is the first time CIS has issued a set of security best practices specific to an individual cloud service provider.

February 24, AWS WAF How-To: How to Use AWS WAF to Block IP Addresses That Generate Bad Requests
In this blog post, I show you how to create an AWS Lambda function that automatically parses Amazon CloudFront access logs as they are delivered to Amazon S3, counts the number of bad requests from unique sources (IP addresses), and updates AWS WAF to block further requests from those IP addresses. I also provide a CloudFormation template that creates the web access control list (ACL), rule sets, Lambda function, and logging S3 bucket so that you can try this yourself.

February 23, Automating HIPAA Compliance How-To: How to Use AWS Config to Help with Required HIPAA Audit Controls: Part 4 of the Automating HIPAA Compliance Series
In today’s final post of this series, I am going to complete the explanation of the DevSecOps architecture by highlighting ways you can use AWS Config to help meet audit controls required by HIPAA. Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications. This Config output, along with other audit trails, gives you the types of information you can use to meet your HIPAA auditing obligations.  

February 22, March Webinar Announcement: Register for and Attend This March 2 Webinar—Using AWS WAF and Lambda for Automatic Protection
AWS WAF Software Development Manager Nathan Dye will share Lambda scripts you can use to automate security with AWS WAF and write dynamic rules that can prevent HTTP floods, protect against badly behaving IPs, and maintain IP reputation lists. You can also learn how Brazilian retailer Magazine Luiza leveraged AWS WAF and Lambda to protect its site and run an operationally smooth Black Friday.

February 22, Automating HIPAA Compliance How-To: How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series
In my previous post, I walked through the setup of a DevSecOps environment that gives healthcare developers the ability to launch their own healthcare web server. At the heart of the architecture is AWS CloudFormation, a JSON representation of your architecture that allows security administrators to provision AWS resources according to the compliance standards they define. In today’s post, I will share examples that provide a Top 10 List of CloudFormation code snippets that you can consider when trying to map the requirements of the AWS Business Associates Agreement (BAA) to CloudFormation templates.

February 17, AWS Partner Network: New AWS Partner Network Blog Post: Securely Accessing Customers’ AWS Accounts with Cross-Account IAM Roles
Building off AWS Identity and Access Management (IAM) best practices, the AWS Partner Network (APN) Blog this week published a blog post called, Securely Accessing Customer AWS Accounts with Cross-Account IAM Roles. Written by AWS Partner Solutions Architect David Rocamora, this post addresses how best practices can be applied when working with APN Partners, and describes the potential drawbacks with APN Partners having access to their customers’ AWS resources.

February 16, AWS Summit in Chicago: Register for the Free AWS Summit – Chicago, April 2016
Registration for the 2016 AWS Summit – Chicago is now open. This free event will educate you about the AWS platform and offer information about architecture best practices and new cloud services. Register today to reserve your seat to hear keynote speaker Matt Wood, AWS General Manager of Product Strategy, highlight the latest AWS services and customer stories.

February 16, Automating HIPAA Compliance How-To: How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series
In my previous blog post, I discussed the idea of using the cloud to protect the cloud and improving healthcare IT by applying DevSecOps methods. In Part 2 today, I will show an architecture composed of AWS services that gives healthcare security administrators necessary controls, allows healthcare developers to interact with the system using familiar tools (such as Git), and leverages AWS managed services without the need for advanced coding or complex configuration.

February 15, Automating HIPAA Compliance How-To: How to Automate HIPAA Compliance (Part 1): Use the Cloud to Protect the Cloud
In a series of blog posts on the AWS Security Blog this month, I will provide prescriptive advice and code samples to developers, system administrators, and security specialists who wish to improve their healthcare IT by applying the DevSecOps methods that the cloud enables. I will also demonstrate AWS services that can help customers meet their AWS Business Associate Agreement obligations in an automated fashion. Consider this series a getting started guide for DevSecOps strategies you can implement as you migrate your own compliance frameworks and controls to the cloud. 

February 9, AWS WAF How-To: How to Configure Rate-Based Blacklisting with AWS WAF and AWS Lambda
One security challenge you may have faced is how to prevent your web servers from being flooded by unwanted requests, or scanning tools such as bots and crawlers that don’t respect the crawl-delay directivevalue. The main objective of this kind of distributed denial of service (DDoS) attack, commonly called an HTTP flood, is to overburden system resources and make them unavailable to your real users or customers (as shown in the following illustration). In this blog post, I will show you how to provision a solution that automatically detects unwanted traffic based on request rate, and then updates configurations of AWS WAF (a web application firewall that protects any application deployed on the Amazon CloudFront content delivery service) to block subsequent requests from those users.

February 3, AWS Compliance Pilot Program: AWS FedRAMP-Trusted Internet Connection (TIC) Overlay Pilot Program
I’m pleased to announce a newly created resource for usage of the Federal Cloud—after successfully completing the testing phase of the FedRAMP-Trusted Internet Connection (TIC) Overlay pilot program, we’ve developed Guidance for TIC Readiness on AWS. This new way of architecting cloud solutions that address TIC capabilities (in a FedRAMP moderate baseline) comes as the result of our relationships with the FedRAMP Program Management Office (PMO), Department of Homeland Security (DHS) TIC PMO, GSA 18F, and FedRAMP third-party assessment organization (3PAO), Veris Group. Ultimately, this approach will provide US Government agencies and contractors with information assisting in the development of “TIC Ready” architectures on AWS.

February 2, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Microsoft Active Directory
In my previous post, I showed how to use Simple AD to forward DNS requests originating from on-premises networks to an Amazon Route 53 private hosted zone. Today, I will show how you can use Microsoft Active Directory (also provisioned with AWS Directory Service) to provide the same DNS resolution with some additional forwarding capabilities.

February 1, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53
As you establish private connectivity between your on-premises networks and your AWS Virtual Private Cloud (VPC) environments, the need for Domain Name System (DNS) resolution across these environments grows in importance. One common approach used to address this need is to run DNS servers on Amazon EC2 across multiple Availability Zones (AZs) and integrate them with private on-premises DNS domains. In many cases, though, a managed private DNS service (accessible outside of a VPC) with less administrative overhead is advantageous. In this blog post, I will show you two approaches that use Amazon Route 53 and AWS Directory Service to provide DNS resolution between on-premises networks and AWS VPC environments.

 

January

January 26, DNS Filtering How-To: How to Add DNS Filtering to Your NAT Instance with Squid
In this post, I discuss and give an example of how Squid, a leading open-source proxy, can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains, while being fully transparent for instances in the private subnet. First, I explain briefly how to create the infrastructure resources required for this approach. Then, I provide step-by-step instructions to install, configure, and test Squid as a transparent proxy.

 

January 25, AWS KMS How-To: How to Help Protect Sensitive Data with AWS KMS
One question AWS KMS customers frequently ask is about how how to encrypt Primary Account Number (PAN) data within AWS because PCI DSS sections 3.5 and 3.6 require the encryption of credit card data at rest and has stringent requirements around the management of encryption keys. One KMS encryption option is to encrypt your PAN data using customer data keys (CDKs) that are exportable out of KMS. Alternatively, you also can use KMS to directly encrypt PAN data by using a customer master key (CMK). In this blog post, I will show you how to help protect sensitive PAN data by using KMS CMKs.

January 21, AWS Certificate Manager Announcement: Now Available: AWS Certificate Manager
Launched today, AWS Certificate Manager (ACM) is designed to simplify and automate many of the tasks traditionally associated with provisioning and managing SSL/TLS certificates. ACM takes care of the complexity surrounding the provisioning, deployment, and renewal of digital certificates—all at no extra cost!

January 19, AWS Compliance Announcement: Introducing GxP Compliance on AWS
We’re happy to announce that customers now are enabled to bring the next generation of medical, health, and wellness solutions to their GxP systems by using AWS for their processing and storage needs. Compliance with healthcare and life sciences requirements is a key priority for us, and we are pleased to announce the availability of new compliance enablers for customers with GxP requirements.

January 19, AWS Config How-To: How to Record and Govern Your IAM Resource Configurations Using AWS Config
Using Config Rules on IAM resources, you can codify your best practices for using IAM and assess the compliance state of these rules regularly. In this blog post, I will show how to start recording the configuration of IAM resources, and author an example rule that checks whether all IAM users in the account are using a sample managed policy, MyIAMUserPolicy. I will also describe examples of other rules customers have authored to assess their organizations’ compliance with their own standards.

January 15, AWS Summits: Mark Your Calendar for AWS Summits in 2016
Are you ready for AWS Summits in 2016? This year we have created even more information-packed Summits that will take place across the globe, each designed to accelerate your cloud journey and help you get the most out of AWS services.

January 13, AWS IAM Announcement: The IAM Console Now Helps Prevent You from Accidentally Deleting In-Use Resources
Starting today, the IAM console shows service last accessed data as part of the process of deleting an IAM user or role. Now you have additional data that shows you when a resource was last active so that you can make a more informed decision about whether or not to delete it.

January 6, IAM Best Practices: Adhere to IAM Best Practices in 2016
As another new year begins, we encourage you to review our recommended IAM best practices. Following these best practices can help you maintain the security of your AWS resources. You can learn more by watching the IAM Best Practices to Live By presentation that Anders Samuelsson gave at AWS re:Invent 2015, or you can click the following links that will take you to IAM documentation, blog posts, and videos. 

If you have comments  about any of these posts, please add your comments in the "Comments" section of the appropriate post. If you have questions about or issues implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

In Case You Missed These: AWS Security Blog Posts from January and February

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1J9OK26Z1WA3L/In-Case-You-Missed-These-AWS-Security-Blog-Posts-from-January-and-February

In case you missed any of the AWS Security Blog posts from January and February, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from using AWS WAF to automating HIPAA compliance.

February

February 29, AWS Compliance Announcement: Announcing Industry Best Practices for Securing AWS Resources
We are happy to announce that the Center for Internet Security (CIS) has published the CIS AWS Foundations Benchmark, a set of security configuration best practices for AWS. These industry-accepted best practices go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment procedures. This is the first time CIS has issued a set of security best practices specific to an individual cloud service provider.

February 24, AWS WAF How-To: How to Use AWS WAF to Block IP Addresses That Generate Bad Requests
In this blog post, I show you how to create an AWS Lambda function that automatically parses Amazon CloudFront access logs as they are delivered to Amazon S3, counts the number of bad requests from unique sources (IP addresses), and updates AWS WAF to block further requests from those IP addresses. I also provide a CloudFormation template that creates the web access control list (ACL), rule sets, Lambda function, and logging S3 bucket so that you can try this yourself.
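(For a rough idea of the mechanism described in that post, the core AWS WAF call that blocks an address looks something like the following sketch; the IP set ID and the address are placeholder values, not taken from the original post.)

import boto3

waf = boto3.client('waf')

# Every AWS WAF write operation requires a fresh change token.
token = waf.get_change_token()['ChangeToken']

# Insert the offending address (as a /32 CIDR) into an IP set that is
# referenced by a blocking rule in your web ACL.
waf.update_ip_set(
    IPSetId='example-ip-set-id',
    ChangeToken=token,
    Updates=[{
        'Action': 'INSERT',
        'IPSetDescriptor': {'Type': 'IPV4', 'Value': '192.0.2.44/32'}
    }]
)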

February 23, Automating HIPAA Compliance How-To: How to Use AWS Config to Help with Required HIPAA Audit Controls: Part 4 of the Automating HIPAA Compliance Series
In today’s final post of this series, I am going to complete the explanation of the DevSecOps architecture by highlighting ways you can use AWS Config to help meet audit controls required by HIPAA. Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications. This Config output, along with other audit trails, gives you the types of information you can use to meet your HIPAA auditing obligations.  

February 22, March Webinar Announcement: Register for and Attend This March 2 Webinar—Using AWS WAF and Lambda for Automatic Protection
AWS WAF Software Development Manager Nathan Dye will share Lambda scripts you can use to automate security with AWS WAF and write dynamic rules that can prevent HTTP floods, protect against badly behaving IPs, and maintain IP reputation lists. You can also learn how Brazilian retailer, Magazine Luiza, leveraged AWS WAF and Lambda to protect its site and run an operationally smooth Black Friday.

February 22, Automating HIPAA Compliance How-To: How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series
In my previous post, I walked through the setup of a DevSecOps environment that gives healthcare developers the ability to launch their own healthcare web server. At the heart of the architecture is AWS CloudFormation, a JSON representation of your architecture that allows security administrators to provision AWS resources according to the compliance standards they define. In today’s post, I will share a top 10 list of CloudFormation code snippets that you can consider when mapping the requirements of the AWS Business Associate Agreement (BAA) to CloudFormation templates.

February 17, AWS Partner Network: New AWS Partner Network Blog Post: Securely Accessing Customers’ AWS Accounts with Cross-Account IAM Roles
Building off AWS Identity and Access Management (IAM) best practices, the AWS Partner Network (APN) Blog this week published a blog post called, Securely Accessing Customer AWS Accounts with Cross-Account IAM Roles. Written by AWS Partner Solutions Architect David Rocamora, this post addresses how best practices can be applied when working with APN Partners, and describes the potential drawbacks with APN Partners having access to their customers’ AWS resources.

February 16, AWS Summit in Chicago: Register for the Free AWS Summit – Chicago, April 2016
Registration for the 2016 AWS Summit – Chicago is now open. This free event will educate you about the AWS platform and offer information about architecture best practices and new cloud services. Register today to reserve your seat to hear keynote speaker Matt Wood, AWS General Manager of Product Strategy, highlighting the latest AWS services and customer stories.

February 16, Automating HIPAA Compliance How-To: How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series
In my previous blog post, I discussed the idea of using the cloud to protect the cloud and improving healthcare IT by applying DevSecOps methods. In Part 2 today, I will show an architecture composed of AWS services that gives healthcare security administrators necessary controls, allows healthcare developers to interact with the system using familiar tools (such as Git), and leverages AWS managed services without the need for advanced coding or complex configuration.

February 15, Automating HIPAA Compliance How-To: How to Automate HIPAA Compliance (Part 1): Use the Cloud to Protect the Cloud
In a series of blog posts on the AWS Security Blog this month, I will provide prescriptive advice and code samples to developers, system administrators, and security specialists who wish to improve their healthcare IT by applying the DevSecOps methods that the cloud enables. I will also demonstrate AWS services that can help customers meet their AWS Business Associate Agreement obligations in an automated fashion. Consider this series a getting started guide for DevSecOps strategies you can implement as you migrate your own compliance frameworks and controls to the cloud. 

February 9, AWS WAF How-To: How to Configure Rate-Based Blacklisting with AWS WAF and AWS Lambda
One security challenge you may have faced is how to prevent your web servers from being flooded by unwanted requests, or by scanning tools such as bots and crawlers that don’t respect the crawl-delay directive value. The main objective of this kind of distributed denial of service (DDoS) attack, commonly called an HTTP flood, is to overburden system resources and make them unavailable to your real users or customers. In this blog post, I will show you how to provision a solution that automatically detects unwanted traffic based on request rate, and then updates configurations of AWS WAF (a web application firewall that protects any application deployed on the Amazon CloudFront content delivery service) to block subsequent requests from those users.

February 3, AWS Compliance Pilot Program: AWS FedRAMP-Trusted Internet Connection (TIC) Overlay Pilot Program
I’m pleased to announce a newly created resource for usage of the Federal Cloud—after successfully completing the testing phase of the FedRAMP-Trusted Internet Connection (TIC) Overlay pilot program, we’ve developed Guidance for TIC Readiness on AWS. This new way of architecting cloud solutions that address TIC capabilities (in a FedRAMP moderate baseline) comes as the result of our relationships with the FedRAMP Program Management Office (PMO), Department of Homeland Security (DHS) TIC PMO, GSA 18F, and FedRAMP third-party assessment organization (3PAO), Veris Group. Ultimately, this approach will provide US Government agencies and contractors with information assisting in the development of “TIC Ready” architectures on AWS.

February 2, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Microsoft Active Directory
In my previous post, I showed how to use Simple AD to forward DNS requests originating from on-premises networks to an Amazon Route 53 private hosted zone. Today, I will show how you can use Microsoft Active Directory (also provisioned with AWS Directory Service) to provide the same DNS resolution with some additional forwarding capabilities.

February 1, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53
As you establish private connectivity between your on-premises networks and your AWS Virtual Private Cloud (VPC) environments, the need for Domain Name System (DNS) resolution across these environments grows in importance. One common approach used to address this need is to run DNS servers on Amazon EC2 across multiple Availability Zones (AZs) and integrate them with private on-premises DNS domains. In many cases, though, a managed private DNS service (accessible outside of a VPC) with less administrative overhead is advantageous. In this blog post, I will show you two approaches that use Amazon Route 53 and AWS Directory Service to provide DNS resolution between on-premises networks and AWS VPC environments.
 

January

January 26, DNS Filtering How-To: How to Add DNS Filtering to Your NAT Instance with Squid
In this post, I discuss and give an example of how Squid, a leading open-source proxy, can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains, while being fully transparent for instances in the private subnet. First, I explain briefly how to create the infrastructure resources required for this approach. Then, I provide step-by-step instructions to install, configure, and test Squid as a transparent proxy.
 

January 25, AWS KMS How-To: How to Help Protect Sensitive Data with AWS KMS
One question AWS KMS customers frequently ask is how to encrypt Primary Account Number (PAN) data within AWS, because PCI DSS sections 3.5 and 3.6 require the encryption of credit card data at rest and impose stringent requirements on the management of encryption keys. One KMS encryption option is to encrypt your PAN data using customer data keys (CDKs) that are exportable out of KMS. Alternatively, you can use KMS to directly encrypt PAN data by using a customer master key (CMK). In this blog post, I will show you how to help protect sensitive PAN data by using KMS CMKs.
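(As a minimal sketch of the direct-encryption option, assuming a CMK alias of your own and the Visa test PAN; neither value comes from the original post.)

import boto3

kms = boto3.client('kms')

# KMS Encrypt accepts up to 4 KB of plaintext, which is ample for a PAN.
ciphertext = kms.encrypt(
    KeyId='alias/pan-data-key',      # placeholder CMK alias
    Plaintext='4111111111111111'     # Visa test PAN, never a real card number
)['CiphertextBlob']

# Decrypt does not need the key ID; it is embedded in the ciphertext blob.
pan = kms.decrypt(CiphertextBlob=ciphertext)['Plaintext']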

January 21, AWS Certificate Manager Announcement: Now Available: AWS Certificate Manager
Launched today, AWS Certificate Manager (ACM) is designed to simplify and automate many of the tasks traditionally associated with provisioning and managing SSL/TLS certificates. ACM takes care of the complexity surrounding the provisioning, deployment, and renewal of digital certificates—all at no extra cost!

January 19, AWS Compliance Announcement: Introducing GxP Compliance on AWS
We’re happy to announce that customers now are enabled to bring the next generation of medical, health, and wellness solutions to their GxP systems by using AWS for their processing and storage needs. Compliance with healthcare and life sciences requirements is a key priority for us, and we are pleased to announce the availability of new compliance enablers for customers with GxP requirements.

January 19, AWS Config How-To: How to Record and Govern Your IAM Resource Configurations Using AWS Config
Using Config Rules on IAM resources, you can codify your best practices for using IAM and assess the compliance state of these rules regularly. In this blog post, I will show how to start recording the configuration of IAM resources, and author an example rule that checks whether all IAM users in the account are using a sample managed policy, MyIAMUserPolicy. I will also describe examples of other rules customers have authored to assess their organizations’ compliance with their own standards.
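(A short sketch of the first step, turning on recording for global IAM resource types with boto3; the role ARN is a placeholder, and a delivery channel must already be configured before recording can start.)

import boto3

config = boto3.client('config')

# Record all supported resource types, including global ones such as
# IAM users, groups, roles, and managed policies.
config.put_configuration_recorder(ConfigurationRecorder={
    'name': 'default',
    'roleARN': 'arn:aws:iam::123456789012:role/config-recorder-role',  # placeholder
    'recordingGroup': {
        'allSupported': True,
        'includeGlobalResourceTypes': True
    }
})
config.start_configuration_recorder(ConfigurationRecorderName='default')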

January 15, AWS Summits: Mark Your Calendar for AWS Summits in 2016
Are you ready for AWS Summits in 2016? This year we have created even more information-packed Summits that will take place across the globe, each designed to accelerate your cloud journey and help you get the most out of AWS services.

January 13, AWS IAM Announcement: The IAM Console Now Helps Prevent You from Accidentally Deleting In-Use Resources
Starting today, the IAM console shows service last accessed data as part of the process of deleting an IAM user or role. Now you have additional data that shows you when a resource was last active so that you can make a more informed decision about whether or not to delete it.

January 6, IAM Best Practices: Adhere to IAM Best Practices in 2016
As another new year begins, we encourage you to review our recommended IAM best practices. Following these best practices can help you maintain the security of your AWS resources. You can learn more by watching the IAM Best Practices to Live By presentation that Anders Samuelsson gave at AWS re:Invent 2015, or you can click the following links that will take you to IAM documentation, blog posts, and videos. 

If you have comments about any of these posts, please add your comments in the "Comments" section of the appropriate post. If you have questions about or issues implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

Register for the Free AWS Summit – Chicago, April 2016

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1L8SH2RN283QP/Register-for-the-Free-AWS-Summit-Chicago-April-2016

Registration for the 2016 AWS Summit – Chicago is now open. This free event will educate you about the AWS platform and offer information about architecture best practices and new cloud services. Register today to reserve your seat to hear keynote speaker Matt Wood, AWS General Manager of Product Strategy, highlighting the latest AWS services and customer stories.

Here’s what to expect:

More than 50 technical sessions, including deep-dive technical sessions about new services and key topics such as security, architecture, DevOps, and big data.

The ever-popular AWS GameDay, in which teams compete in real-world role playing to deliver highly available, highly scalable solutions in the cloud.

Hands-on training opportunities in paid training bootcamps and free hands-on labs. Choose from five full-day training bootcamps, such as:

Securing Next-Generation Workloads at Cloud Scale (Expert Level) – This bootcamp looks at the design considerations for operating high-assurance workloads on top of the AWS platform. 

AWS engineers, expert customers, and partners answering your questions and explaining best practices.

Networking opportunities with your cloud and IT peers from across the Midwest.

Seats are limited, so tell your friends and colleagues, and register today. You can also join the conversation on Twitter with the hashtag #AWSSummit, and on Facebook.

We look forward to seeing you in April!

– Craig

How to Automatically Update Your Security Groups for Amazon CloudFront and AWS WAF by Using AWS Lambda

Post Syndicated from Travis Brown original https://blogs.aws.amazon.com/security/post/Tx1LPI2H6Q6S5KC/How-to-Automatically-Update-Your-Security-Groups-for-Amazon-CloudFront-and-AWS-W

Amazon CloudFront can help you increase the performance of your web applications and significantly lower the latency of delivering content to your customers. Recently announced, AWS WAF (a web application firewall) gives you control over which traffic to allow or block by defining customizable web security rules. In conjunction with AWS WAF, CloudFront now can also help you secure your web applications. This blog post will show you how to create an AWS Lambda function to automatically update VPC security groups with the published AWS service IP ranges to ensure that AWS WAF and CloudFront cannot be bypassed.

When using AWS WAF to secure your web applications, it’s important to ensure that only CloudFront can access your origin; otherwise, someone could bypass AWS WAF itself. If your origin is an Elastic Load Balancing load balancer or an Amazon EC2 instance, you can use VPC security groups to allow only CloudFront to access your applications. You can accomplish this by creating a security group that only allows the specific IP ranges of CloudFront. AWS publishes these IP ranges in JSON format so that you can create networking configurations that use them. These ranges are separated by service and region, which means you’ll only need to allow IP ranges that correspond to CloudFront.
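To get a feel for that data, here is a minimal sketch (in the same Python 2.7 style as the function later in this post) that downloads the published ranges and prints only the CloudFront prefixes:

import json
import urllib2

# The canonical, publicly published list of AWS IP ranges.
response = urllib2.urlopen('https://ip-ranges.amazonaws.com/ip-ranges.json')
ip_ranges = json.loads(response.read())

# Keep only the prefixes that belong to the CloudFront service.
cloudfront_prefixes = [p['ip_prefix'] for p in ip_ranges['prefixes']
                       if p['service'] == 'CLOUDFRONT']
print(cloudfront_prefixes)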

In the past, you would use these IP ranges to manually create a security group rule in the AWS Management Console and supply only the prefixes marked for CloudFront. But what would you do if the IP ranges changed? One solution was to poll the IP ranges endpoint periodically with a simple cron job to make sure they were current. This meant you needed infrastructure to support the task: another host to manage, complete with the typical patching, deployment, and monitoring. As you can see, a small task could quickly become more complicated than the problem it aimed to solve.

An Amazon Simple Notification Service (SNS) notification is published to a public SNS topic whenever the AWS IP ranges change. Therefore, you can build an event-driven, zero-infrastructure solution using a Lambda function that is triggered in response to the SNS notification. Let’s get started!

Create a security group

The first thing you need to do is create a security group. This security group will allow only traffic from CloudFront and AWS WAF into your Elastic Load Balancing load balancers or EC2 instances.

In the EC2 console:

Click Security Groups > Create Security Group.

Give your security group a meaningful name and description.

Next, view the security group you just created, and add two tags that our Lambda function will use to identify security groups it needs to update: set Name to cloudfront and AutoUpdate to true. Any security groups with these tags will automatically get their ingress permissions updated with CloudFront’s IP ranges.
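If you prefer to script this step, a rough boto3 equivalent might look like the following (the group name, description, and VPC ID are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Create the security group in the VPC that hosts your origin.
group_id = ec2.create_security_group(
    GroupName='cloudfront-ingress',
    Description='Allow ingress only from CloudFront IP ranges',
    VpcId='vpc-0123456789abcdef0'
)['GroupId']

# Tag it so the Lambda function can find it and update its rules.
ec2.create_tags(Resources=[group_id], Tags=[
    {'Key': 'Name', 'Value': 'cloudfront'},
    {'Key': 'AutoUpdate', 'Value': 'true'}
])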

Create an IAM policy and execution role for the Lambda function

When creating a Lambda function, it’s important to understand and properly define the security context to which the Lambda function is subject. Using IAM, you will create the Lambda execution role that determines the AWS service calls that the function is authorized to complete. (Learn more about the Lambda permissions model.)

Before you can create the IAM role, you need to create an IAM policy to attach to it. In the IAM console, click Policies > Create Policy, and then click Select next to Create Your Own Policy.

Supply a name for your policy, and then copy and paste the following policy document into the Policy Document box.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSecurityGroups",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": "*"
    }
  ]
}

To explain what this policy allows, let’s look closely at both statements in the policy. The first statement allows the Lambda function to write to Amazon CloudWatch Logs, which is vital for debugging and monitoring our function. The second statement allows the function to get information about existing security groups and to authorize and revoke ingress permissions. It’s an important best practice that your IAM policies be as granular as possible, to observe the principle of least privilege.

Now that you have created your policy, you can create your Lambda execution role using that policy:

In the IAM console, click Roles > Create New Role, and then name your role.

To select a role type, select AWS Service Roles > AWS Lambda.

Attach the policy you just created.

After confirming your selections, click Create Role.
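The same console steps can be scripted; the following sketch shows the equivalent API calls (the names are placeholders, and lambda-policy.json is assumed to contain the policy document shown above):

import json
import boto3

iam = boto3.client('iam')

# Trust policy that allows Lambda to assume the role.
trust_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'lambda.amazonaws.com'},
        'Action': 'sts:AssumeRole'
    }]
}
iam.create_role(
    RoleName='update-cloudfront-sg-role',
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)

# The policy document from the previous section, saved locally.
with open('lambda-policy.json') as f:
    policy_document = f.read()

policy = iam.create_policy(
    PolicyName='update-cloudfront-sg-policy',
    PolicyDocument=policy_document
)
iam.attach_role_policy(
    RoleName='update-cloudfront-sg-role',
    PolicyArn=policy['Policy']['Arn']
)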

Create your Lambda function

Now that you have created your Lambda execution role, you are ready to create your Lambda function:

Go to the Lambda console and select Create a Lambda function. (Because I’ll be providing the code for your Lambda function, you can skip the blueprint step, but for other functions, blueprints can be a great way to get started.)

Give your Lambda function a name and description, and select Python 2.7 from the Runtime menu.

Paste the following Lambda function code. (You can also download this Lambda function from the aws-cloudfront-samples GitHub repository.)

'''
Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/
or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
'''
 
import boto3
import hashlib
import json
import urllib2
 
# Name of the service, as seen in the ip-groups.json file, to extract information for
SERVICE = "CLOUDFRONT"
# Ports your application uses that need inbound permissions from the service
INGRESS_PORTS = [ 80, 443 ]
# Tags which identify the security groups you want to update
SECURITY_GROUP_TAGS = { 'Name': 'cloudfront', 'AutoUpdate': 'true' }
 
def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))
    message = json.loads(event['Records'][0]['Sns']['Message'])

    # Load the ip ranges from the url
    ip_ranges = json.loads(get_ip_groups_json(message['url'], message['md5']))

    # Extract the service ranges
    cf_ranges = get_ranges_for_service(ip_ranges, SERVICE)

    # Update the security groups
    result = update_security_groups(cf_ranges)

    return result

def get_ip_groups_json(url, expected_hash):
    print("Updating from " + url)

    response = urllib2.urlopen(url)
    ip_json = response.read()

    m = hashlib.md5()
    m.update(ip_json)
    hash = m.hexdigest()

    if hash != expected_hash:
        raise Exception('MD5 Mismatch: got ' + hash + ' expected ' + expected_hash)

    return ip_json

def get_ranges_for_service(ranges, service):
    service_ranges = list()
    for prefix in ranges['prefixes']:
        if prefix['service'] == service:
            print('Found ' + service + ' range: ' + prefix['ip_prefix'])
            service_ranges.append(prefix['ip_prefix'])

    return service_ranges

def update_security_groups(new_ranges):
    client = boto3.client('ec2')

    groups = get_security_groups_for_update(client)
    print('Found ' + str(len(groups)) + ' SecurityGroups to update')

    result = list()
    updated = 0

    for group in groups:
        if update_security_group(client, group, new_ranges):
            updated += 1
            result.append('Updated ' + group['GroupId'])

    result.append('Updated ' + str(updated) + ' of ' + str(len(groups)) + ' SecurityGroups')

    return result

def update_security_group(client, group, new_ranges):
    added = 0
    removed = 0

    if len(group['IpPermissions']) > 0:
        for permission in group['IpPermissions']:
            if INGRESS_PORTS.count(permission['ToPort']) > 0:
                old_prefixes = list()
                to_revoke = list()
                to_add = list()
                for range in permission['IpRanges']:
                    cidr = range['CidrIp']
                    old_prefixes.append(cidr)
                    if new_ranges.count(cidr) == 0:
                        to_revoke.append(range)
                        print(group['GroupId'] + ": Revoking " + cidr + ":" + str(permission['ToPort']))

                for range in new_ranges:
                    if old_prefixes.count(range) == 0:
                        to_add.append({ 'CidrIp': range })
                        print(group['GroupId'] + ": Adding " + range + ":" + str(permission['ToPort']))

                removed += revoke_permissions(client, group, permission, to_revoke)
                added += add_permissions(client, group, permission, to_add)
    else:
        for port in INGRESS_PORTS:
            to_add = list()
            for range in new_ranges:
                to_add.append({ 'CidrIp': range })
                print(group['GroupId'] + ": Adding " + range + ":" + str(port))
            permission = { 'ToPort': port, 'FromPort': port, 'IpProtocol': 'tcp' }
            added += add_permissions(client, group, permission, to_add)

    print(group['GroupId'] + ": Added " + str(added) + ", Revoked " + str(removed))
    return (added > 0 or removed > 0)

def revoke_permissions(client, group, permission, to_revoke):
    if len(to_revoke) > 0:
        revoke_params = {
            'ToPort': permission['ToPort'],
            'FromPort': permission['FromPort'],
            'IpRanges': to_revoke,
            'IpProtocol': permission['IpProtocol']
        }

        client.revoke_security_group_ingress(GroupId=group['GroupId'], IpPermissions=[revoke_params])

    return len(to_revoke)

def add_permissions(client, group, permission, to_add):
    if len(to_add) > 0:
        add_params = {
            'ToPort': permission['ToPort'],
            'FromPort': permission['FromPort'],
            'IpRanges': to_add,
            'IpProtocol': permission['IpProtocol']
        }

        client.authorize_security_group_ingress(GroupId=group['GroupId'], IpPermissions=[add_params])

    return len(to_add)

def get_security_groups_for_update(client):
    filters = list()
    for key, value in SECURITY_GROUP_TAGS.iteritems():
        filters.extend(
            [
                { 'Name': "tag-key", 'Values': [ key ] },
                { 'Name': "tag-value", 'Values': [ value ] }
            ]
        )

    response = client.describe_security_groups(Filters=filters)

    return response['SecurityGroups']

'''
Sample event from SNS:
{
  "Records": [
    {
      "EventVersion": "1.0",
      "EventSubscriptionArn": "arn:aws:sns:EXAMPLE",
      "EventSource": "aws:sns",
      "Sns": {
        "SignatureVersion": "1",
        "Timestamp": "1970-01-01T00:00:00.000Z",
        "Signature": "EXAMPLE",
        "SigningCertUrl": "EXAMPLE",
        "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
        "Message": "{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", \"synctoken\": \"0123456789\", \"md5\": \"03a8199d0c03ddfec0e542f8bf650ee7\", \"url\": \"https://ip-ranges.amazonaws.com/ip-ranges.json\"}",
        "Type": "Notification",
        "UnsubscribeUrl": "EXAMPLE",
        "TopicArn": "arn:aws:sns:EXAMPLE",
        "Subject": "TestInvoke"
      }
    }
  ]
}
'''

Below the code window, under Lambda function handler and role, select the execution role you created earlier, and then click Next.

After confirming your settings are correct, click Create function.

Test your Lambda function

Now that you have created your function, it’s time to test it and initialize your security group:

In the Lambda console, select your function, select Actions, and then Configure sample event.

Enter the following as your sample event, which will represent an SNS notification.

{
  "Records": [
    {
      "EventVersion": "1.0",
      "EventSubscriptionArn": "arn:aws:sns:EXAMPLE",
      "EventSource": "aws:sns",
      "Sns": {
        "SignatureVersion": "1",
        "Timestamp": "1970-01-01T00:00:00.000Z",
        "Signature": "EXAMPLE",
        "SigningCertUrl": "EXAMPLE",
        "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
        "Message": "{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", \"synctoken\": \"0123456789\", \"md5\": \"7fd59f5c7f5cf643036cbd4443ad3e4b\", \"url\": \"https://ip-ranges.amazonaws.com/ip-ranges.json\"}",
        "Type": "Notification",
        "UnsubscribeUrl": "EXAMPLE",
        "TopicArn": "arn:aws:sns:EXAMPLE",
        "Subject": "TestInvoke"
      }
    }
  ]
}

After you’ve added the sample event, click Save and test. Your Lambda function will be invoked, and you should see log output at the bottom of the console similar to the following.

Updating from https://ip-ranges.amazonaws.com/ip-ranges.json
MD5 Mismatch: got 2e967e943cf98ae998efeec05d4f351c expected 7fd59f5c7f5cf643036cbd4443ad3e4b: Exception
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 29, in lambda_handler
    ip_ranges = json.loads(get_ip_groups_json(message['url'], message['md5']))
  File "/var/task/lambda_function.py", line 50, in get_ip_groups_json
    raise Exception('MD5 Mismatch: got ' + hash + ' expected ' + expected_hash)
Exception: MD5 Mismatch: got 2e967e943cf98ae998efeec05d4f351c expected 7fd59f5c7f5cf643036cbd4443ad3e4b
 

You will see a message indicating there was a hash mismatch. Normally, a real SNS notification from the IP Ranges SNS topic will include the right hash, but because our sample event is a test case representing the event, you will need to update the sample event manually to have the expected hash.

Edit the sample event again, and this time change the md5 value in the Message field to the first hash shown in the log output. In this example, we would update the sample event with the hash "2e967e943cf98ae998efeec05d4f351c".
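(If you would rather compute the expected hash yourself than copy it from the error output, the following one-off snippet does the same calculation the function performs internally.)

import hashlib
import urllib2

body = urllib2.urlopen('https://ip-ranges.amazonaws.com/ip-ranges.json').read()
print(hashlib.md5(body).hexdigest())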

Click Save and test, and your Lambda function will be invoked.

This time, you should see output indicating your security group was properly updated. If you go back to the EC2 console and view the security group you created, you will now see all the CloudFront IP ranges added as allowed points of ingress. If your log output is different, it should help you identify the issue.

Configure your Lambda function’s event source

After you have validated that your function is executing properly, it’s time to connect the SNS topic:

On the Event sources tab, click Add event source.

Select the event source type of SNS, and in the field labeled SNS topic, type the following ARN:

arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged

Click Submit, and your Lambda function will now be invoked whenever AWS publishes new IP ranges!
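If you script your deployments instead of using the console, the equivalent wiring might look roughly like this (the function name and function ARN are placeholders; the topic ARN is the one shown above):

import boto3

lambda_client = boto3.client('lambda')
sns = boto3.client('sns', region_name='us-east-1')

topic_arn = 'arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged'
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:update-cloudfront-sg'

# Allow the SNS topic to invoke the function...
lambda_client.add_permission(
    FunctionName='update-cloudfront-sg',
    StatementId='sns-amazon-ip-space-changed',
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn=topic_arn
)

# ...and subscribe the function to the topic.
sns.subscribe(TopicArn=topic_arn, Protocol='lambda', Endpoint=function_arn)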

Summary

By following this blog post, you created a security group and a Lambda function to update the security group’s rules dynamically whenever AWS publishes new service IP ranges. This solution has several advantages:

Because it is event driven rather than a periodic poll, the solution executes only when it needs to.

It is automatic, so you don’t need to update security groups manually.

It is simple because you have no extra infrastructure to maintain.

It is cost effective. Because the Lambda function fires only when necessary and only runs for a few seconds, this solution only costs pennies to operate.

And this is just the tip of the iceberg for AWS WAF. In the coming year, we hope to provide you with additional blog posts about how to use AWS WAF.

If you have any questions or comments, please add them in the comments section below or on the Lambda forum. If you have any other use cases for using Lambda functions to dynamically update security groups or even other networking configurations such as VPC route tables or ACLs, we’d love to hear about them as well!

– Travis

AWS OpsWorks at re:Invent 2015

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx3B33V56JTM4B2/AWS-OpsWorks-at-re-Invent-2015

re:Invent 2015 is right around the corner. Here’s an overview of the AWS OpsWorks breakout sessions and bootcamp.

DVO301 – AWS OpsWorks Under the Hood

AWS OpsWorks helps you deploy and operate applications of all shapes and sizes. With OpsWorks, you can create your application stack with layers that define the building blocks of your application: load balancers, application servers, databases, etc. But did you know that you can also use OpsWorks to run commands or scripts on your instances? Whether you need to perform a specific task or install a new software package, AWS OpsWorks gives you the tools to install and configure your instances consistently and help them evolve in an automated and predictable fashion. In this session, we explain how lifecycle events work, how to create custom layers and a runtime system for your operational tooling, and how to develop and test locally.

DVO310 – Benefit from DevOps When Moving to AWS for Windows

In this session, we discuss DevOps patterns of success that favor automation and drive consistency from the start of your cloud journey. We explore two key concepts that you need to understand when moving to AWS: pushing and running code. We look at Windows-specific features of services like AWS CodeDeploy, AWS CloudFormation, AWS OpsWorks, and AWS Elastic Beanstalk, and supporting technologies like Chef, PowerShell, and Visual Studio. We also share customer stories about fleets of Microsoft Windows Server that successfully operate at scale in AWS.

Taking AWS Operations to the Next Level Bootcamp

This full-day bootcamp is designed to teach solutions architects, SysOps administrators, and other technical end users how to leverage AWS CloudFormation, AWS OpsWorks, and AWS Service Catalog to automate provisioning and configuring AWS infrastructure resources and applications. In this bootcamp, we build and deploy an end-to-end automation system that provides hands-off failure recovery for key systems.

re:Invent is a great opportunity to talk with AWS teams. As in previous years, you will find OpsWorks team members at the Application Management booth. Drop by and ask for a demo!

Didn’t register before the conference sold out? All sessions will be recorded and posted on YouTube after the conference, and all slide decks will be posted on SlideShare.net.