Tag Archives: AWS CloudFormation

AWS Week in Review – March 21, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-21-2016/

Let’s take a quick look at what happened in AWS-land last week:

Monday, March 21

Tuesday, March 22

Wednesday, March 23

Thursday, March 24

Friday, March 25

Saturday, March 26

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

Explore Continuous Delivery in AWS with the Pipeline Starter Kit

Post Syndicated from David Nasi original http://blogs.aws.amazon.com/application-management/post/Tx2CIB02ZO05ZII/Explore-Continuous-Delivery-in-AWS-with-the-Pipeline-Starter-Kit

By Chris Munns, David Nasi, Shankar Sivadasan, and Susan Ferrell

Continuous delivery, automating your software delivery process from code to build to deployment, is a powerful development technique and the ultimate goal for many development teams. AWS provides services, including AWS CodePipeline (a continuous delivery service) and AWS CodeDeploy (an automated application deployment service), to help you reach this goal. With AWS CodePipeline, any time a change to the code occurs, that change runs automatically through the delivery process you’ve defined. If you’ve ever wanted to try these services but haven’t wanted to set up the resources yourself, we’ve created a starter kit you can use. This starter kit sets up a complete pipeline that builds and deploys a sample application in just a few steps. The starter kit includes an AWS CloudFormation template to create the pipeline and all of its resources in the US East (N. Virginia) Region. Specifically, the CloudFormation template creates:

An Amazon Virtual Private Cloud (VPC), including all the necessary routing tables and routes, an Internet gateway, and network ACLs for EC2 instances to be launched into.

An Amazon EC2 instance that hosts a Jenkins server (also installed and configured for you).

Two AWS CodeDeploy applications, each of which contains a deployment group that deploys to a single Amazon EC2 instance.

All IAM service and instance roles required to run the resources.

A pipeline in AWS CodePipeline that builds the sample application and deploys it. This includes creating an Amazon S3 bucket to use as the artifact store for this pipeline.

What you’ll need:

An AWS account. (Sign up for one here if you don’t have one already.)

An Amazon EC2 key pair in the US East (N. Virginia) Region. (Learn how to create one here if you don’t have one.)

Administrator-level permissions in IAM, AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3. (Not sure how to set permissions in these services? See the sample policy in Troubleshooting Problems with the Starter Kit.)

Optionally, a GitHub account so you can fork the repository for the sample application. Alternatively, if you do not want to create a GitHub account, you can use the Amazon S3 bucket configured in the starter kit template, but you will not be able to edit the application or see your changes automatically run through the pipeline.

That’s it! The starter kit will create everything else for you.

Note: The resources created by the starter kit exceed what’s included in the AWS Free Tier, so using the kit will result in charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

Let’s get started.

Decide how you want to source the provided sample application. AWS CodePipeline currently allows you to use either an Amazon S3 bucket or a GitHub repository as the source location for your application. The CloudFormation template allows you to choose either of these methods. If you choose to use a GitHub repository, you will have a little more setup work to do, but you will be able to easily test modifying the application and seeing the changes run automatically through the pipeline. If you choose to use the Amazon S3 bucket already configured as the source in the starter kit, setup is simpler, but you won’t be able to modify the application.

Follow the steps for your choice:

GitHub:

Sign in to GitHub and fork the sample application repository at https://github.com/awslabs/aws-codedeploy-sample-tomcat.

Navigate to https://github.com/settings/tokens and generate a token to use with the starter kit. The token requires the permissions needed to integrate with AWS CodePipeline: repo and admin:repo_hook. For more information, see the AWS CodePipeline User Guide. Make sure you copy the token after you create it.

Amazon S3:

If you’re using the bucket configured in the starter kit, there’s nothing else for you to do but continue on to step 3. If you want to use your own bucket, see Troubleshooting Problems with the Starter Kit.

Choose the launch button to open the starter kit template directly in the AWS CloudFormation console. Make sure that you are in the US East (N. Virginia) Region.

Note: If you want to download the template to your own computer and then upload it directly to AWS CloudFormation, you can do so from this Amazon S3 bucket. Save the aws-codedeploy-codepipeline-starter-kit.template file to a location on your computer that’s easy to remember.

Choose Next.

On the Specify Details page, do the following:

In Stack name, type a name for the stack. Choose something short and simple for easy reference.

In AppName, you can leave the default as-is, or you can type a name of no more than 15 characters (for example, starterkit-demo). The name has the following restrictions:

The only allowed characters are lower-case letters, numbers, periods, and hyphens.

The name must be unique in your AWS account, so be sure to choose a new name each time you use the starter kit.

In AppSourceType, choose S3 or GitHub, depending on your preference for a source location, and then do the following:

If you want to use the preconfigured Amazon S3 bucket as the source for your starter kit, leave all the default information as-is. (If you want to use your own Amazon S3 bucket, see Troubleshooting Problems with the Starter Kit.)

If you want to use a GitHub repo as the source for your starter kit, in Application Source – GitHub, type the name of your user account in GitHubUser. In GitHubToken, paste the token you created earlier. In GitHubRepoName, type the name of the forked repo. In GitHubBranchName, type the name of the branch (by default, master).

In Key Name, choose the name of your Amazon EC2 key pair.

In YourIP, type the IP address from which you will access the resources created by this starter kit. This is a recommended security best practice.

Choose Next.

(Optional) On the Options page, in Key, type Name. In Value, type a name that will help you easily identify the resources created for the starter kit. This name will be used to tag all of the resources created by the starter kit. Although this step is optional, it’s a good idea, particularly if you want to use or modify these resources later on. Choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. (It will.) Review the other settings, and then choose Create.
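If you prefer to launch the starter kit from a script rather than the console, the equivalent call can be made with the AWS SDK. The following is a minimal boto3 sketch, not part of the starter kit itself: the template URL, key pair, and IP value are placeholders, and the parameter keys (AppName, AppSourceType, KeyName, YourIP) are assumed from the fields described above, so verify them against the template’s Parameters section before using it.

import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

cfn.create_stack(
    StackName='starterkit-demo-stack',
    # Placeholder: point this at wherever you saved or copied the starter kit template.
    TemplateURL='https://s3.amazonaws.com/YOUR-BUCKET/aws-codedeploy-codepipeline-starter-kit.template',
    Parameters=[
        # Parameter keys assumed from the walkthrough above; check them against the template.
        {'ParameterKey': 'AppName',       'ParameterValue': 'starterkit-demo'},
        {'ParameterKey': 'AppSourceType', 'ParameterValue': 'S3'},
        {'ParameterKey': 'KeyName',       'ParameterValue': 'my-ec2-key-pair'},
        {'ParameterKey': 'YourIP',        'ParameterValue': '203.0.113.10'},
    ],
    Capabilities=['CAPABILITY_IAM'],   # the template creates IAM roles and instance profiles
    Tags=[{'Key': 'Name', 'Value': 'starterkit-demo'}],
)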

It will take several minutes for CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console.
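The same progress information is available from the CloudFormation API. Here is a small boto3 sketch (the stack name is a placeholder) that prints the stack events and then waits for creation to finish:

import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

# Print recent events, oldest first, while the stack is being created.
events = cfn.describe_stack_events(StackName='starterkit-demo-stack')['StackEvents']
for event in reversed(events):
    print(event['Timestamp'], event['LogicalResourceId'], event['ResourceStatus'])

# Block until CloudFormation reports CREATE_COMPLETE (raises if creation fails).
cfn.get_waiter('stack_create_complete').wait(StackName='starterkit-demo-stack')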

When the stack has been created, you will see a CREATE_COMPLETE message in the Status column of the console and on the Overview tab.

Congratulations! You’ve created your first pipeline, complete with all required resources. The pipeline has four stages, each with a single action. The pipeline will start automatically as soon as it is created.

(If CloudFormation fails to create your resources and pipeline, it will roll back all resource creation automatically. The most common reason for failure is that you specified a stack name that is allowed in CloudFormation but not allowed in Amazon S3, and you chose Amazon S3 for your source location. For more information, see the Troubleshooting problems with the starter kit section at the end of this post.)

To view your pipeline, open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline. On the dashboard page, choose the name of your new pipeline (for example, StarterKitDemo-Pipeline). Your pipeline, which might or might not have started its first run, will appear on the view pipeline page.

You can watch the progress of your pipeline as it completes the action configured for each of its four stages (a source stage, a build stage, and two deployment stages).
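If you would rather check progress from a script than watch the console, the CodePipeline API reports the same stage-by-stage status. A short boto3 sketch (the pipeline name below is the example name used in this post; yours may differ):

import boto3

cp = boto3.client('codepipeline', region_name='us-east-1')
state = cp.get_pipeline_state(name='StarterKitDemo-Pipeline')

# Print the latest status of each of the four stages.
for stage in state['stageStates']:
    latest = stage.get('latestExecution', {})
    print(stage['stageName'], latest.get('status', 'not started'))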

The pipeline flows as follows:

The source stage contains an action that retrieves the application from the source location (the Amazon S3 bucket created for you to store the app or the GitHub repo you specified).

The build stage contains an action that builds the app in Jenkins, which is hosted on an Amazon EC2 instance.

The first deploy stage contains an action that uses AWS CodeDeploy to deploy the app to a beta website on an Amazon EC2 instance.

The second deploy stage contains an action that again uses AWS CodeDeploy to deploy the app, this time to a separate, production website on a different Amazon EC2 instance.

When each stage is complete, it turns from blue (in progress) to green (success).

You can view the details of any stage except the source stage by choosing the Details link for that stage. For example, choosing the Details link for the Jenkins build action in the build stage opens the status page for that Jenkins build:

Note: The first time the pipeline runs, the link to the build will point to Build #2. Build #1 is a failed build left over from the initial instance and Jenkins configuration process in AWS CloudFormation.

To view the details of the build, choose the link to the log file. To view the Maven project created in Jenkins to build the application, choose Back to Project.

While you’re in Jenkins, we strongly encourage you to consider securing it if you’re going to keep the resource for any length of time. From the Jenkins dashboard, choose Manage Jenkins, choose Setup Security, and choose the security options that are best for your organization. For more information about Jenkins security, see Standard Security Setup.

When Succeeded is displayed for the pipeline status, you can view the application you built and deployed:

In the status area for the ProdDeploy action in the Prod stage, choose Details. The details of the deployment will appear in the AWS CodeDeploy console.

In the Deployment Details section, in Instance ID, choose the instance ID of the successfully deployed instance.

In the Amazon EC2 console, on the Description tab, in Public DNS, copy the address, and then paste it into the address bar of your web browser. The web page opens on the application you built:

Tip: You can also find the IP addresses of each instance in AWS CloudFormation on the Outputs tab of the stack.
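The stack outputs can also be read programmatically. For example, a brief boto3 sketch (the stack name is a placeholder) lists every output key and value:

import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')
stack = cfn.describe_stacks(StackName='starterkit-demo-stack')['Stacks'][0]

for output in stack.get('Outputs', []):
    print(output['OutputKey'], '=', output['OutputValue'])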

Now that you have a pipeline, try experimenting with it. You can release a change, disable and enable transitions, edit the pipeline to add more actions or change the existing ones – whatever you want to do, you can do it. It’s yours to play with. You can make changes to the source in your GitHub repository (if you chose GitHub as your source location) and watch those pushed changes build and deploy automatically. You can also explore the links to the resources used by the pipeline, such as the application and deployment groups in AWS CodeDeploy and the Jenkins server.

What to Do Next

After you’ve finished exploring your pipeline and its associated resources, you can do one of two things:

Delete the stack in AWS CloudFormation, which deletes the pipeline, its resources, and the stack itself. This is the option to choose if you no longer want to use the pipeline or any of its resources. Cleaning up resources you’re no longer using is important, because you don’t want to be charged for them.

To delete the stack:

Delete the Amazon S3 bucket used as the artifact store in AWS CodePipeline. Although this bucket was created as part of the CloudFormation stack, Amazon S3 does not allow CloudFormation to delete buckets that contain objects. To delete this bucket, open the Amazon S3 console, select the bucket whose name starts with demo and ends with the name you chose for your stack, and then delete it. For more information, see Delete or Empty a Bucket.

Follow the steps in Delete the stack.
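If you want to script this cleanup, the sketch below (using boto3, with a placeholder bucket and stack name) empties the artifact bucket and then deletes the stack:

import boto3

# Placeholder: replace with the name of the artifact bucket created by your stack
# (it starts with demo and ends with the name you chose for your stack).
bucket = boto3.resource('s3').Bucket('demo-artifact-starterkit-demo')
bucket.objects.all().delete()   # a versioned bucket would also need its object versions removed

cfn = boto3.client('cloudformation', region_name='us-east-1')
cfn.delete_stack(StackName='starterkit-demo-stack')
cfn.get_waiter('stack_delete_complete').wait(StackName='starterkit-demo-stack')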

Change the pipeline and its resources to start building applications you actually care about. Maybe you’re not ready to get into the business of creating bespoke suits for dogs. (We understand that dogs can be difficult clients to dress well, and that not everyone wants to be paid in dog treats.) However, perhaps you do have an application or two that you would like to set up for continuous delivery with AWS CodePipeline. AWS CodePipeline integrates with other services you might already be using for your software development, as well as GitHub. You can edit the pipeline to remove the actions or stages and add new actions and stages that more accurately reflect the delivery process for your applications. You can even create your own custom actions, if you want to integrate your own solutions.

If you decide to keep the pipeline and some or all of its resources, here are some things to consider:

Review the IAM policies and roles created for you by the starter kit and modify their permissions, if necessary.

Use one of the managed policies provided by AWS to grant AWS CodePipeline permissions to the IAM users and groups that you want to be able to access the pipeline.

If you plan to continue using AWS CodeDeploy, add or remove permissions for AWS CodeDeploy, review the IAM instance profile and service role that were created for you, and review or change the deployment group settings.

If you plan to continue using the Jenkins server or the Amazon EC2 instance it is hosted on, make sure you secure both Jenkins and the EC2 instance appropriately.

We hope you’ve enjoyed the starter kit and this blog post. If you have any feedback or questions, feel free to get in touch with us on the AWS CodePipeline forum.

Troubleshooting Problems with the Starter Kit

You can use the events on the Events tab of the CloudFormation stack to help you troubleshoot problems if the stack fails to complete creation or deletion.

Problem: The stack creation fails when trying to create the custom action in AWS CodePipeline.

Possible Solution: You or someone who shares your AWS account number might have used the starter kit once and chosen the same name for the application. Custom actions must have unique names within an AWS account. Another possibility is that you or someone else then deleted the resources, including the custom action. You cannot create a custom action using the name of a deleted custom action. In either case, delete the failed stack, and then try to create the stack again using a different application name.

Problem: The stack creation fails in AWS CloudFormation without any error messages.

Possible Solution: You’re probably missing one or more required permissions. Creating resources with the template in AWS CloudFormation requires the following policy or its equivalent permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:*",
                "codedeploy:*",
                "codepipeline:*",
                "ec2:*",
                "iam:AddRoleToInstanceProfile",
                "iam:CreateInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteInstanceProfile",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:GetRole",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}

Problem: Deleting the stack fails when trying to delete the Amazon S3 bucket created by the stack.

Possible solution: One or more files or folders might be left in the bucket created by the stack. To delete this bucket, follow the instructions in Delete or Empty a Bucket, and then delete the stack in AWS CloudFormation.

Problem: I want to use my own Amazon S3 bucket as the source location for a pipeline, not the bucket pre-configured in the template.

Possible solution: Create your own bucket, following these steps:

Download the sample application from GitHub at https://github.com/awslabs/aws-codedeploy-sample-tomcat and upload the suitsfordogs.zip application to an Amazon S3 bucket created in the US East (N. Virginia) Region.

Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3.

Choose your bucket from the list of buckets available, and on the Properties tab for the bucket, choose to add or edit the bucket policy.

Make sure that your bucket has the following permissions set to Allow:

s3:PutObject

s3:List*

s3:Get*

For more information, see Editing Bucket Permissions.

When configuring details in CloudFormation, on the Specify Details page, in AppSourceType, choose S3, but then replace the information in Application Source – S3 with the details of your bucket and object.
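The download-and-upload portion of that procedure can also be scripted. A minimal boto3 sketch (the bucket name and local file path are placeholders, and the bucket permissions listed above still need to be applied):

import boto3

s3 = boto3.client('s3', region_name='us-east-1')

bucket_name = 'my-starter-kit-source'   # placeholder; bucket names must be globally unique
s3.create_bucket(Bucket=bucket_name)    # buckets in us-east-1 need no LocationConstraint

# Placeholder path to the zip file downloaded from the sample repository.
s3.upload_file('suitsfordogs.zip', bucket_name, 'suitsfordogs.zip')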

Experiment that Discovered the Higgs Boson Uses AWS to Probe Nature

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/experiment-that-discovered-the-higgs-boson-uses-aws-to-probe-nature/

My colleague Sanjay Padhi is part of the AWS Scientific Computing team. He wrote the guest post below to share the story of how AWS provided computational resources that aided in an important scientific discovery. —
Jeff;

The Higgs boson (sometimes referred to as the God Particle), responsible for providing insight into the origin of mass, was discovered in 2012 by the world’s largest experiments, ATLAS and CMS, at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. The theorists behind this discovery were awarded the 2013 Nobel Prize in Physics.
Deep underground on the border between France and Switzerland, the LHC is the world’s largest (17 miles in circumference) and highest-energy particle accelerator. It explores nature on smaller scales than any human invention has ever explored before.
From Experiment to Raw Data The high energy particle collisions turn mass into energy, which then turns back into mass, creating new particles that are observed in the CMS detector. This detector is 69 feet long, 49 feet wide and 49 feet high, and sits in a cavern 328 feet underground near the village of Cessy in France. The raw data from the CMS is recorded every 25 nanoseconds at a rate of approximately 1 petabyte per second.
After online and offline processing of the raw data at the CERN Tier 0 data center, the datasets are distributed to 7 large Tier 1 data centers across the world within 48 hours, ready for further processing and analysis by scientists (the CMS collaboration, one of the largest in the world, consists of more than 3,000 participating members from over 180 institutes and universities in 43 countries).
Processing at Fermilab Fermilab is one of 16 National Laboratories operated by the United States Department of Energy. Located just outside Batavia, Illinois, Fermilab serves as one of the Tier 1 data centers for CERN’s CMS experiment.
With the increase in LHC collision energy last year, the demand for data assimilation, event simulations, and large-scale computing increased as well. With this increase came a desire to maximize cost efficiency by dynamically provisioning resources on an as-needed basis.
In order to address this issue, the Fermilab Scientific Computing Division launched the HEP (High Energy Physics) Cloud project in June of 2015. They planned to develop a virtual facility that would provide a common interface to access a variety of computing resources including commercial clouds. Using AWS, the HEP Cloud project successfully demonstrated the ability to add 58,000 cores elastically to their on-premises facility for the CMS experiment.
The image below depicts one of the simulations that was run on AWS. It shows how the collision of two protons creates energy that then becomes new particles.

The additional 58,000 cores represent a 4x increase in Fermilab’s computational capacity, all of which is dedicated to the CMS experiment in order to generate and reconstruct Monte Carlo simulation events. More than 500 million events were fully simulated in 10 days using 2.9 million jobs. Without help from AWS, this work would have taken 6 weeks to complete using the on-premises compute resources at Fermilab.
This simulation was done in preparation for one of the major international high energy physics conferences, Rencontres de Moriond. Physicists across the world will use these simulations to probe nature in detail and will share their findings with their international colleagues during the conference.
Saving Money with HEP Cloud The HEP Cloud project aims to minimize the costs of computation. The R&D and demonstration effort was supported by an award from the AWS Cloud Credit for Research.
HEP Cloud’s decision engine, the brain of the facility, has several duties. It oversees EC2 Spot Market price fluctuations using tools and techniques provided by Amazon’s Spot team, initializes Amazon EC2 instances using HTCondor, tracks the DNS names of the instances using Amazon Route 53, and makes use of AWS CloudFormation templates for infrastructure as code.
While on the road to success, the project team had to overcome several challenges, ranging from fine-tuning configurations to optimizing their use of Amazon S3 and other resources. For example, they devised a strategy to distribute the auxiliary data across multiple AWS Regions in order to minimize storage costs and data-access latency.
Automatic Scaling into AWS The figure below shows elastic, automatic expansion of Fermilab’s Computing Facility into the AWS Cloud using Spot instances for CMS workflows. Monitoring of the resources was done using open source software provided by Grafana with custom modifications provided by the HEP Cloud.

Panagiotis Spentzouris (head of the Scientific Computing Division at Fermilab), told me:
Modern HEP experiments require massive computing resources in irregular cycles, so it is imperative for the success of our program that our computing facilities can rapidly expand and contract resources to match demand. Using commercial clouds is an important ingredient for achieving this goal, and our work with AWS on the CMS experiment’s workloads through HEPCloud was a great success in demonstrating the value of this approach.
I hope that you enjoyed this brief insight into the ways in which AWS is helping to explore the frontiers of physics!
Sanjay Padhi, Ph.D, AWS Scientific Computing

New – Change Sets for AWS CloudFormation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-change-sets-for-aws-cloudformation/

AWS CloudFormation lets you create, manage, and update a collection of AWS resources (a “stack”) in a controlled, predictable manner. Every day, customers use CloudFormation to perform hundreds of thousands of updates to the stacks that support their production workloads. They define an initial template and then revise it as their requirements change.
This model, commonly known as infrastructure as code, gives developers, architects, and operations teams detailed control of the provisioning and configuration of their AWS resources. This detailed level of control and accountability is one of the most visible benefits that you get when you use CloudFormation. However, there are several others that are less visible but equally important:
Consistency – The CloudFormation team works with the AWS teams to make sure that newly added resource models have consistent semantics for creating, updating, and deleting resources. They take care to account for retries, idempotency, and management of related resources such as KMS keys for encrypting EBS or RDS volumes.
Stability – In any distributed system, issues related to eventual consistency often arise and must be dealt with. CloudFormation is intimately aware of these issues and automatically waits for any necessary propagation to complete before proceeding. In many cases they work with the service teams to ensure that their APIs and success signals are properly tuned for use with CloudFormation.
Uniformity – CloudFormation will choose between in-place updates and resource replacement when you make updates to your stacks.
All of this work takes time, and some of it cannot be completely tested until the relevant services have been launched or updated.
Improved Support for Updates As I mentioned earlier, many AWS customers use CloudFormation to manage updates to their production stacks. They edit their existing template (or create a new one) and then use CloudFormation’s Update Stack operation to activate the changes.
Many of our customers have asked us for additional insight into the changes that CloudFormation is planning to perform when it updates a stack in accord with the more recent template and/or parameter values. They want to be able to preview the changes, verify that they are in line with their expectations, and proceed with the update.
In order to support this important CloudFormation use case, we are introducing the concept of a change set. You create a change set by submitting changes against the stack you want to update. CloudFormation compares the stack to the new template and/or parameter values and produces a change set that you can review and then choose to apply (execute).
Besides providing more insight into potential changes, this new model also opens the door to additional control over updates. You can use IAM to control access to specific CloudFormation functions such as UpdateStack, CreateChangeSet, DescribeChangeSet, and ExecuteChangeSet. You could allow a large group of developers to create and preview change sets, and restrict execution to a smaller and more experienced group. With some additional automation, you could raise alerts or seek additional approvals for changes to key resources such as database servers or networks.
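The same create, review, and execute flow is available through the API. The following is a minimal boto3 sketch, with placeholder stack, change set, and template file names, rather than a definitive implementation:

import boto3

cfn = boto3.client('cloudformation')
stack, change_set = 'my-lamp-stack', 'preview-new-architecture'

# Create a change set from a revised template.
with open('new-template.yaml') as f:
    cfn.create_change_set(
        StackName=stack,
        ChangeSetName=change_set,
        TemplateBody=f.read(),
        Capabilities=['CAPABILITY_IAM'],   # only needed if the template touches IAM resources
    )

cfn.get_waiter('change_set_create_complete').wait(StackName=stack, ChangeSetName=change_set)

# Review the proposed changes before deciding to execute them.
for change in cfn.describe_change_set(StackName=stack, ChangeSetName=change_set)['Changes']:
    rc = change['ResourceChange']
    print(rc['Action'], rc['LogicalResourceId'], rc['ResourceType'])

cfn.execute_change_set(StackName=stack, ChangeSetName=change_set)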
Using Change Sets Let’s walk through the steps involved in working with change sets. As usual, you can get to the same functions using the AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, and the CloudFormation API.
I started by creating a stack that runs a LAMP stack on a single EC2 instance. Here are the resources that it created:

Then I decided to step up to a more complex architecture. One of my colleagues shared a suitable template with me. Using the “trust but verify” model, I created a change set in order to see what would happen were I to use the template. I clicked on Create Change Set:

Then I uploaded the new template and assigned a name to the change set. If the template made use of parameters, I could have entered values for them at this point.

At this point I had the option to modify the existing tags and to add new ones. I also had the option to set up advanced options for the stack (none of these will apply until I actually execute the change set, of course):

After another click or two to confirm my intent, the console analyzed the template, checked the results against the stack, and displayed the list of changes:

At this point I can click on Execute to effect the changes. I can also leave the change set as-is, or create several others in order to explore some alternate paths forward. When I am ready to go, I can locate the change set and execute it:

CloudFormation springs to action and implements the changes per the change set:

A few minutes later my new stack configuration was in place and fully operational:

And there you have it! As I mentioned earlier, I can create and inspect multiple change sets before choosing the one that I would like to execute. When I do this, the other change sets are no longer meaningful and are discarded.
Managing Rollbacks If a stack update fails, CloudFormation does its best to put things back the way they were before the update. The rollback operation can fail on occasion; in many cases this is due to a change that was made outside of CloudFormation’s purview. We recently launched a new option that gives you additional control over what happens next. To learn more about this option, read Continue Rolling Back an Update for AWS CloudFormation stacks in the UPDATE_ROLLBACK_FAILED state.
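That option is also exposed through the API; for example, a one-line boto3 call (with a placeholder stack name) resumes a stuck rollback:

import boto3

# Resume a rollback for a stack in the UPDATE_ROLLBACK_FAILED state.
boto3.client('cloudformation').continue_update_rollback(StackName='my-lamp-stack')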
Available Now This functionality is available now and you can start using it today! —
Jeff;

New – Change Sets for AWS CloudFormation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-change-sets-for-aws-cloudformation/

AWS CloudFormation lets you create, manage, and update a collection of AWS resources (a “stack”) in a controlled, predictable manner. Every day, customers use CloudFormation to perform hundreds of thousands of updates to the stacks that support their production workloads. They define an initial template and then revise it as their requirements change.
This model, commonly known as infrastructure as code, gives developers, architects, and operations teams detailed control of the provisioning and configuration of their AWS resources. This detailed level of control and accountability is one of the most visible benefits that you get when you use CloudFormation. However, there are several others that are less visible but equally important:
Consistency – The CloudFormation team works with the AWS teams to make sure that newly added resource models have consistent semantics for creating, updating, and deleting resources. They take care to account for retries, idempotency, and management of related resources such as KMS keys for encrypting EBS or RDS volumes.
Stability – In any distributed system, issues related to eventual consistency often arise and must be dealt with. CloudFormation is intimately aware of these issues and automatically waits for any necessary propagation to complete before proceeding. In many cases they work with the service teams to ensure that their APIs and success signals are properly tuned for use with CloudFormation.
Uniformity – CloudFormation will choose between in-place updates and resource replacement when you make updates to your stacks.
All of this work takes time, and some of it cannot be completely tested until the relevant services have been launched or updated.
Improved Support for Updates As I mentioned earlier, many AWS customers use CloudFormation to manage updates to their production stacks. They edit their existing template (or create a new one) and then use CloudFormation’s Update Stack operation to activate the changes.
Many of our customers have asked us for additional insight into the changes that CloudFormation is planning to perform when it updates a stack in accord with the more recent template and/or parameter values. They want to be able to preview the changes, verify that they are in line with their expectations, and proceed with the update.
In order to support this important CloudFormation use case, we are introducing the concept of a change set. You create a change set by submitting changes against the stack you want to update. CloudFormation compares the stack to the new template and/or parameter values and produces a change set that you can review and then choose to apply (execute).
In addition to additional insight into potential changes, this new model also opens the door to additional control over updates. You can use IAM to control access to specific CloudFormation functions such as UpdateStack, CreateChangeSet, DescribeChangeSet, and ExecuteChangeSet. You could allow a large group developers to create and preview change sets, and restrict execution to a smaller and more experienced group. With some additional automation, you could raise alerts or seek additional approvals for changes to key resources such as database servers or networks.
Using Change Sets Let’s walk through the steps involved in working with change sets. As usual, you can get to the same functions using the AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, and the CloudFormation API.
I started by creating a stack that runs a LAMP stack on a single EC2 instance. Here are the resources that it created:

Then I decided to step up to a more complex architecture. One of my colleagues shared a suitable template with me. Using the “trust but verify” model, I created a change set in order to see what would happen were I to use the template. I clicked on Create Change Set:

Then I uploaded the new template and assigned a name to the change set. If the template made use of parameters, I could have entered values for them at this point.

At this point I had the option to modify the existing tags and to add new ones. I also had the option to set up advanced options for the stack (none of these will apply until I actually execute the change set, of course):

After another click or two to confirm my intent, the console analyzed the template, checks the results against the stack, and displayed the list of changes:

At this point I can click on Execute to effect the changes. I can also leave the change set as-is, or create several others in order to explore some alternate paths forward. When I am ready to go, I can locate the change set and execute it:

CloudFormation springs to action and implements the changes per the change set:

A few minutes later my new stack configuration was in place and fully operational:

And there you have it! As I mentioned earlier, I can create and inspect multiple change sets before choosing the one that I would like to execute. When I do this, the other change sets are no longer meaningful and are discarded.
Managing Rollbacks If a stack update fails, CloudFormation does its best to put things back the way there were before the update. The rollback operation can fail on occasion; in many cases this is due to a change that was made outside of CloudFormation’s purview. We recently launched a new option that gives you additional control over what happens next. To learn more about this option, read Continue Rolling Back an Update for AWS CloudFormation stacks in the UPDATE_ROLLBACK_FAILED state.
Available Now
This functionality is available now and you can start using it today!
—
Jeff;

Register for and Attend This March 30 Webinar—Best Practices for Managing Security Operations in AWS

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx2KQ8JF65GILZO/Register-for-and-Attend-This-March-30-Webinar-Best-Practices-for-Managing-Securi

As part of the AWS Webinar Series, AWS will present Best Practices for Managing Security Operations in AWS on Wednesday, March 30. This webinar will start at 10:30 A.M. and end at 11:30 A.M. Pacific Time (UTC-7).

AWS Security Solutions Architect Henrik Johansson will share different ways you can use AWS Identity and Access Management (IAM) to control access to your AWS services and integrate your existing authentication system with AWS IAM. You will learn how you can deploy and control your AWS infrastructure as code by using templates, including change management policies with AWS CloudFormation. In addition, you will explore different options for managing both your AWS access logs and your Amazon Elastic Compute Cloud (EC2) system logs using AWS CloudTrail and Amazon CloudWatch Logs. You will also learn how to implement an audit and compliance validation process using AWS Config and Amazon Inspector.

You will:

  • Better understand the AWS Shared Responsibility Model.
  • Better understand AWS account and identity management options and configuration.
  • Learn the concept of infrastructure as code and change management using CloudFormation.
  • Learn how to audit and log your AWS service usage.
  • Learn about AWS services to add automatic compliance checks to your AWS infrastructure.

The webinar is free, but space is limited and registration is required. Register today.

– Craig

AWS Week in Review – March 14, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-14-2016/

Let’s take a quick look at what happened in AWS-land last week:

Monday
March 14

We announced that the Developer Preview of AWS SDK for C++ is Now Available.
We celebrated Ten Years in the AWS Cloud.
We launched Amazon EMR 4.4.0 with Sqoop, HCatalog, Java 8, and More.
The AWS Compute Blog announced the Launch of AWS Lambda and Amazon API Gateway in the EU (Frankfurt) Region.
The Amazon Simple Email Service Blog announced that Amazon SES Now Supports Custom Email From Domains.
The AWS Java Blog talked about Using Amazon SQS with Spring Boot and Spring JMS.
The AWS Partner Network Blog urged you to Take Advantage of AWS Self-Paced Labs.
The AWS Windows and .NET Developer Blog showed you how to Retrieve Request Metrics from the AWS SDK for .NET.
The AWS Government, Education, & Nonprofits Blog announced the New Amazon-Busan Cloud Innovation and Technology Center.
We announced Lumberyard Beta 1.1 is Now Available.
Bometric shared AWS Security Best Practices: Network Security.
CloudCheckr listed 5 AWS Security Traps You Might be Missing.
Serverless Code announced that ServerlessConf is Here!
Cloud Academy launched 2 New AWS Courses – (Advanced Techniques for AWS Monitoring, Metrics and Logging and Advanced Deployment Techniques on AWS).
Cloudonaut reminded you to Avoid Sharing Key Pairs for EC2.
8KMiles talked about How Cloud Computing Can Address Healthcare Industry Challenges.
Evident discussed the CIS Foundations Benchmark for AWS Security.
Talkin’ Cloud shared 10 Facts About AWS as it Celebrates 10 Years.
The Next Platform reviewed Ten Years of AWS And a Status Check for HPC Clouds.
ZephyCloud is AWS-powered Wind Farm Design Software.

Tuesday
March 15

We announced the AWS Database Migration Service.
We announced that AWS CloudFormation Now Supports Amazon GameLift.
The AWS Partner Network Blog reminded everyone that Friends Don’t Let Friends Build Data Centers.
The Amazon GameDev Blog talked about Using Autoscaling to Control Costs While Delivering Great Player Experiences.
We updated the AWS SDK for JavaScript, the AWS SDK for Ruby, and the AWS SDK for Go.
Calorious talked about Uploading Images into Amazon S3.
Serverless Code showed you How to Use LXML in Lambda.
The Acquia Developer Center talked about Open-Sourcing Moonshot.
Concurrency Labs encouraged you to Hatch a Swarm of AWS IoT Things Using Locust, EC2 and Get Your IoT Application Ready for Prime Time.

Wednesday
March 16

We announced an S3 Lifecycle Management Update with Support for Multipart Upload and Delete Markers.
We announced that the EC2 Container Service is Now Available in the US West (Oregon) Region.
We announced that Amazon ElastiCache now supports the R3 node family in AWS China (Beijing) and AWS South America (Sao Paulo) Regions.
We announced that AWS IoT Now Integrates with Amazon Elasticsearch Service and CloudWatch.
We published the Puppet on the AWS Cloud: Quick Start Reference Deployment.
We announced that Amazon RDS Enhanced Monitoring is now available in the Asia Pacific (Seoul) Region.
I wrote about Additional Failover Control for Amazon Aurora (this feature was launched earlier in the year).
The AWS Security Blog showed you How to Set Up Uninterrupted, Federated User Access to AWS Using AD FS.
The AWS Java Blog talked about Migrating Your Databases Using AWS Database Migration Service.
We updated the AWS SDK for Java and the AWS CLI.
CloudWedge asked Cloud Computing: Cost Saver or Additional Expense?
Gathering Clouds reviewed New 2016 AWS Services: Certificate Manager, Lambda, Dev SecOps.

Thursday
March 17

We announced the new Marketplace Metering Service for 3rd Party Sellers.
We announced Amazon VPC Endpoints for Amazon S3 in South America (Sao Paulo) and Asia Pacific (Seoul).
We announced AWS CloudTrail Support for Kinesis Firehose.
The AWS Big Data Blog showed you How to Analyze a Time Series in Real Time with AWS Lambda, Amazon Kinesis and Amazon DynamoDB Streams.
The AWS Enterprise Blog showed you How to Create a Cloud Center of Excellence in your Enterprise, and then talked about Staffing Your Enterprise’s Cloud Center of Excellence.
The AWS Mobile Development Blog showed you How to Analyze Device-Generated Data with AWS IoT and Amazon Elasticsearch Service.
Stelligent initiated a series on Serverless Delivery.
CloudHealth Academy talked about Modeling RDS Reservations.
N2W Software talked about How to Pre-Warm Your EBS Volumes on AWS.
ParkMyCloud explained How to Save Money on AWS With ParkMyCloud.

Friday
March 18

The AWS Government, Education, & Nonprofits Blog told you how AWS GovCloud (US) Helps ASD Cut Costs by 50% While Dramatically Improving Security.
The Amazon GameDev Blog discussed Code Archeology: Crafting Lumberyard.
Calorious talked about Importing JSON into DynamoDB.
DZone Cloud Zone talked about Graceful Shutdown Using AWS AutoScaling Groups and Terraform.

Saturday
March 19

DZone Cloud Zone wants to honor some Trailblazing Women in the Cloud.

Sunday
March 20

Cloudability talked about How Atlassian Nailed the Reserved Instance Buying Process.
DZone Cloud Zone talked about Serverless Delivery Architectures.
Gorillastack explained Why the Cloud is THE Key Technology Enabler for Digital Transformation.

New & Notable Open Source

Tumbless is a blogging platform based only on S3 and your browser.
aws-amicleaner cleans up old, unused AMIs and related snapshots.
alexa-aws-administration helps you to do various administration tasks in your AWS account using an Amazon Echo.
aws-s3-zipper takes an S3 bucket folder and zips it for streaming.
aws-lambda-helper is a collection of helper methods for Lambda.
CloudSeed lets you describe a list of AWS stack components, then configure and build a custom stack.
aws-ses-sns-dashboard is a Go-based dashboard with SES and SNS notifications.
snowplow-scala-analytics-sdk is a Scala SDK for working with Snowplow-enriched events in Spark using Lambda.
StackFormation is a lightweight CloudFormation stack manager.
aws-keychain-util is a command-line utility to manage AWS credentials in the OS X keychain.

New SlideShare Presentations

Account Separation and Mandatory Access Control on AWS.
Crypto Options in AWS.
Security Day IAM Recommended Practices.
What’s Nearly New.

New Customer Success Stories

AdiMap measures online advertising spend, app financials, and salary data. Using AWS, AdiMap builds predictive financial models without spending millions on compute resources and hardware, providing scalable financial intelligence and reducing time to market for new products.
Change.org is the world’s largest and fastest growing social change platform, with more than 125 million users in 196 countries starting campaigns and mobilizing support for local causes and global issues. The organization runs its website and business intelligence cluster on AWS, and runs its continuous integration and testing on Solano CI from APN member Solano Labs.
Flatiron Health has been able to reach 230 cancer clinics and 2,200 clinicians across the United States with a solution that captures and organizes oncology data, helping to support cancer treatments. Flatiron moved its solution to AWS to improve speed to market and to minimize the time and expense that the startup company needs to devote to its IT infrastructure.
Global Red specializes in lifecycle marketing, including strategy, data, analytics, and execution across all digital channels. By re-architecting and migrating its data platform and related applications to AWS, Global Red reduced the time to onboard new customers for its advertising trading desk and marketing automation platforms by 50 percent.
GMobi primarily sells its products and services to Original Design Manufacturers and Original Equipment Manufacturers in emerging markets. By running its “over the air” firmware updates, mobile billing, and advertising software development kits in an AWS infrastructure, GMobi has grown to support 120 million users while maintaining more than 99.9 percent availability.
Time Inc.’s new chief technology officer joined the renowned media organization in early 2014 and promised big changes. With AWS, Time Inc. can leverage security features and functionality that mirror the benefits of cloud computing, including rich tools, best-in-class industry standards and protocols, and lower costs.
Seaco Global is one of the world’s largest shipping companies. By using AWS to run SAP applications, the company reduced the time needed to complete monthly business processes from four days to just one.

New YouTube Videos

AWS Database Migration Service.
Introduction to Amazon WorkSpaces.
AWS Pop-up Loft.
Save the Date – AWS re:Invent 2016.

Upcoming Events

March 22nd – Live Event (Seattle, Washington) – AWS Big Data Meetup – Intro to SparkR.
March 22nd – Live Broadcast – VoiceOps: Commanding and Controlling Your AWS environments using Amazon Echo and Lambda.
March 23rd – Live Event (Atlanta, Georgia) – AWS Key Management Service & AWS Storage Services for a Hybrid Cloud (Atlanta AWS Community).
April 6th – Live Event (Boston, Massachusetts) AWS at Bio-IT World.
April 18th & 19th – Live Event (Chicago, Illinois) – AWS Summit – Chicago.
April 20th – Live Event (Melbourne, Australia) – Inaugural Melbourne Serverless Meetup.
April 26th – Live Event (Sydney, Australia) – AWS Partner Summit.
April 26th – Live Event (Sydney, Australia) – Inaugural Sydney Serverless Meetup.
ParkMyCloud 2016 AWS Cost-Reduction Roadshow.
AWS Loft – San Francisco.
AWS Loft – New York.
AWS Loft – Tel Aviv.
AWS Zombie Microservices Roadshow.
AWS Public Sector Events.
AWS Global Summit Series.

Help Wanted

AWS Careers.

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.
Jeff;

How to Reduce Security Threats and Operating Costs Using AWS WAF and Amazon CloudFront

Post Syndicated from Vlad Vlasceanu original https://blogs.aws.amazon.com/security/post/Tx1G747SE1R2ZWE/How-to-Reduce-Security-Threats-and-Operating-Costs-Using-AWS-WAF-and-Amazon-Clou

Some Internet operations trust that clients are “well behaved.” As an operator of a publicly accessible web application, for example, you have to trust that the clients accessing your content identify themselves accurately, or that they only use your services in the manner you expect. However, some clients are bad actors. These bad actors are typically automated processes: some might try to scrape your content for their own profit (content scrapers), and others might misrepresent who they are to bypass restrictions (bad bots). For example, they might use a fake user agent.

Successfully blocking bad actors can help reduce security threats to your systems. In addition, you can lower your overall costs, because you no longer have to serve traffic to unintended audiences. In this blog post, I will show you how you can realize these benefits by building a process to help detect content scrapers and bad bots, and then use Amazon CloudFront with AWS WAF (a web application firewall [WAF]) to help block bad actors’ access to your content.

WAFs give you back some control. For example, with AWS WAF you can filter traffic, look for bad actors, and block their access. This is no small feat because bad actors change methods continually to mask their actions, forcing you to adapt your detection methods frequently. Because AWS is fully programmable using RESTful APIs, you can integrate it into your existing DevOps workflows, and build automations around it to react dynamically to the changing methods of bad actors.

AWS WAF works by allowing you to define a set of rules, called a web access control list (web ACL). Each rule in the list contains a set of conditions and an action. Requests received by CloudFront are handed over to AWS WAF for inspection. Individual rules are checked in order. If the request matches the conditions specified in a rule, the indicated action is taken; if not, the default action of the web ACL is taken. Actions can allow the request to be serviced, block the request, or simply count the request for later analysis. Conditions offer a range of options to match traffic based on patterns, such as the source IP address, SQL injection attempts, size of the request, or strings of text. These constructs offer a wide range of capabilities to filter unwanted traffic.

Let’s get started with the AWS services involved and an overview of the solution itself. Because AWS WAF integrates with Amazon CloudFront, your website or web application must be fronted by a CloudFront distribution for the solution to work.

How AWS services help to make this solution work

The following AWS services work together to help block content scrapers and bad bots:

  • As I already mentioned, AWS WAF helps protect your web applications from common web exploits that can affect their availability, compromise security, or consume excessive resources.
  • CloudFront is a content delivery web service. It integrates with other AWS products to give you an easy way to distribute content to end users with low latency and high data-transfer speeds.
  • AWS Lambda enables you to run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or back-end service.
  • Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. You can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as code running on Lambda or any web application.
  • AWS CloudFormation gives you an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Solution overview

Blocking content scrapers and bad bots involves two main actions:

  1. Detect an inbound request from a content scraper or bad bot.
  2. Block any subsequent requests from that content scraper or bad bot.

For the solution in today’s post to be effective, your web application must employ both of these actions. The following architecture diagram shows how you can implement this solution by using AWS services.

These are the key elements of the diagram:

  1. A bad bot requests a specifically disallowed URL on your web application. This URL is implemented outside your web application in the blocking solution.
  2. The URL invocation triggers a Lambda function that captures the IP address of the requestor (source address).
  3. The function adds the source address to an AWS WAF block list.
  4. The function also issues a notification to an Amazon SNS topic, informing recipients that a bad bot was blocked.

CloudFront will block additional requests from the source address of the bad bot by checking the AWS WAF block list.

In the remainder of this post, I describe in detail how this solution works.

Detecting content scrapers and bad bots

To detect an inbound request from a content scraper or bad bot, set up a honeypot. This is usually a piece of content that good actors know they are not supposed to access (and don’t). First, embed a link in your content pointing to the honeypot. You should hide this link from your regular human users, as shown in the following code.

<a href="/v1/honeypot/" style="display: none" aria-hidden="true">honeypot link</a>

Note: In production, do not call the link honeypot. Use a name that is similar to the content in your application. For example, if you are operating an online store with a product catalog, you might use a fake product name or something similar.

Next, instruct good content scrapers and bots to ignore this embedded link. Use the robots exclusion standard (a robots.txt file in the root of your website) to specify which portions of your site are off limits, and to which content scrapers and bots. Conforming content scrapers and bots, such as Google’s web-crawling bot Googlebot, will actively look for this file first, download it, and refrain from indexing any content you disallow in the file. However, because this protocol relies on trust, content scrapers and bots can ignore your robots.txt file, which is often the case with malware bots that scan for security vulnerabilities and scrape email addresses.

The following is a robots.txt example file, which disallows access to the honeypot URL described previously.

User-agent: *
Disallow: /v1/honeypot/

Between the embedded link and the robots.txt file, it is likely that any requests made to the honeypot URL do not come from a legitimate user. This is what forms the basis of the detection process.

Blocking content scrapers and bad bots

Next, set up a script that is triggered when the honeypot URL is requested. As mentioned previously, AWS WAF uses a set of rules and conditions to match traffic and trigger actions. In this case, you will use an AWS WAF IPSet filter condition to create a block list, which is a list of disallowed source IP addresses. The script captures the IP address of the requestor and adds it to the block list. Then, when CloudFront passes an inbound request over to AWS WAF for inspection, the rule is triggered if the source IP address appears in the block list, and AWS WAF instructs CloudFront to block the request. In other words, once a source IP address has requested the honeypot URL, any subsequent requests for your content from that address will be blocked.

Note: IPSet filter lists can store up to 1,000 IP addresses or ranges expressed in Classless Inter-Domain Routing (CIDR) format. If you expect the block list to exceed this number, consider using multiple IPSet filter lists and rules. For more details on service limits, see the AWS WAF Limits documentation.
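To illustrate what the blocking script does behind the scenes, here is a minimal CLI sketch that inserts an offending address into an IPSet; the IPSet ID and the IP address are placeholders, and the Lambda function in this solution makes the equivalent API calls for you:

$ TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
$ aws waf update-ip-set \
    --ip-set-id 11111111-2222-3333-4444-555555555555 \
    --change-token "$TOKEN" \
    --updates 'Action=INSERT,IPSetDescriptor={Type=IPV4,Value=203.0.113.7/32}'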

In the remainder of this post, I show you how to implement the honeypot trap using Lambda and Amazon API Gateway. The trap is a minimal microservice that enables you to implement it without having to manage compute capacity and scaling.

Solution implementation and deployment

All resources for this solution are also available for download from our GitHub repository to enable you to inspect the code and change it as needed.

Step 1: Create a RESTful API

To start, you’ll need to create a RESTful API using API Gateway. Using the AWS CLI tools, run the following command and make note of the API ID returned by the call. (For details about how to install and configure the AWS CLI tools, see Getting Set Up with the AWS Command Line Interface.)

$ aws apigateway create-rest-api --name myBotBlockingApi

The output will look like this (the "id" value is the API ID you will need later):

{
    "name": "myFirstApi",
    "id": "xxxxxxxxxx",
    "createdDate": 1454978163
}

Note: We recommend that you deploy all resources in the same region. Because this solution uses API Gateway and Lambda, see the AWS Global Infrastructure Region Table to check which AWS regions support these services.

Step 2: Deploy the CloudFormation stack

Download this CloudFormation template and run it in your AWS account in the desired region. For detailed steps about how to create a CloudFormation stack based on a template, see this walkthrough.

You must provide two parameters:

  1. The Base Resource Name you want to use for the created resources.
  2. The RESTful API ID of the API created in Step 1 earlier in this post.

The CloudFormation – Create Stack page looks like what is shown in the following screenshot.

CloudFormation will create a web ACL, a rule, and an empty IPSet filter condition. Additionally, it will create an Amazon Simple Notification Service (SNS) topic to which you can subscribe so that you receive notifications when new IP addresses are added to the list. CloudFormation will also create a Lambda function and an IAM execution role for the Lambda function, authorizing the function to change the IPSet. The service will also add a permission allowing the RESTful API to invoke the function.
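After the stack reaches CREATE_COMPLETE, you can retrieve the generated resource names (including the Lambda function name you will need in the next step) from the stack outputs. A minimal sketch, assuming you named the stack wafBadBotBlocker:

$ aws cloudformation describe-stacks \
    --stack-name wafBadBotBlocker \
    --query 'Stacks[0].Outputs'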

Step 3: Set up API Gateway

We also provide a convenient Swagger template that you can use to set up API Gateway after the relevant resources have been created using CloudFormation. Swagger is a specification and complete framework implementation for representing RESTful web services, allowing for deployment of easily reproducible APIs. Use the Swagger importer tool to set up API Gateway, but first modify the downloaded Swagger template (in JSON format) by updating all occurrences of the placeholders listed below.

  • [[region]]: The desired region (example: us-east-1)
  • [[account-id]]: The account ID where the resources are created (example: 012345678901)
  • [[honeypot-uri]]: The name of the honeypot URI endpoint (example: honeypot)
  • [[lambda-function-name]]: The name of the Lambda function created by CloudFormation; check the Outputs section of the stack (example: wafBadBotBlocker-rLambdaFunction-XXXXXXXXXXXXX)
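One quick way to make these substitutions is with sed. The following is a sketch only; it assumes you saved the template as template.json, you are using the example values shown above, and you are on a system with GNU sed:

$ sed -i \
    -e 's/\[\[region\]\]/us-east-1/g' \
    -e 's/\[\[account-id\]\]/012345678901/g' \
    -e 's/\[\[honeypot-uri\]\]/honeypot/g' \
    -e 's/\[\[lambda-function-name\]\]/wafBadBotBlocker-rLambdaFunction-XXXXXXXXXXXXX/g' \
    template.json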

Clone the Swagger import tool from GitHub and follow the tool’s readme file to build the import tool using Apache Maven, as shown in the following command.

$ git clone https://github.com/awslabs/aws-apigateway-importer.git aws-apigateway-importer && cd aws-apigateway-importer

Import the customized template (make sure you use the same region as for the CloudFormation resources), and replace [api-id] with the ID from Step 1 earlier in this post, and replace [basepath] with your desired URL segment (such as v1).

$ ./aws-api-import.sh --update [api-id] --deploy [basepath] /path/to/swagger/template.json

In API Gateway terminology, our [basepath] URL segment is called a stage, and defines the path through which an API is accessible.

Step 4: Finish the configuration

Finish the configuration by connecting API Gateway to the CloudFront distribution (a quick end-to-end verification sketch follows these steps):

  1. Create an API key, which will be used to ensure that only requests originating from CloudFront will be authorized by API Gateway.
  2. Associate the newly created API key with the deployed API stage. The following image shows an example console page with the API key selected and the recommended API Stage Association values.


     

  3. Find the API Gateway endpoint created by the Swagger import script. You will need this endpoint for the custom origin. Find the endpoint on the API Gateway console by clicking the name of the deployed stage, as highlighted in the following image.

  4. Create a new custom origin in your CloudFront distribution, using the API Gateway endpoint. The details screen in the AWS Management Console for your existing CloudFront distribution will look similar to the following image, which already contains a few distinct origins. Click Create Origin.


     

  5. As shown in the following screenshot, use the API Gateway endpoint as the Origin Domain Name. Make sure the Origin Protocol Policy is set to HTTPS Only and add the API key in the Origin Custom Headers box. Then click Create.

  6. Add a cache behavior that matches your base path (API Gateway stage) and honeypot URL segment. This will point traffic to the newly created custom origin. The following screenshot shows an example console screen that lists CloudFront distribution behaviors. Click Create Behavior.

  7. Use the value of your base path and honeypot URL to set the Path Pattern field. The honeypot URL must match the value in the robots.txt file you deploy and the API Gateway method specified. Select the Custom Origin you just created and configure additional settings, as illustrated in the following screenshot:
  • Though whitelist headers are not strictly required, creating them to match the following screenshot would provide additional identification for your blocked IP notifications.
  • I recommend that you customize the Object Caching policy to not cache responses from the honeypot. Set the values of Minimum TTL, Maximum TTL, and Default TTL to 0 (zero), as shown in the following screenshot.

  8. Register the AWS WAF web ACL with your CloudFront distribution. The General tab of your distribution (see the following screenshot) contains settings affecting the configuration of your content delivery network. Click Edit.


     

  9. Find the AWS WAF Web ACL drop-down list (see the following screenshot) and choose the correct web ACL from the list. The name of the web ACL will start with the name you assigned as the Base Resource Name when you launched the CloudFormation template earlier.

  10. To receive notifications when an IP address gets blocked, subscribe to the SNS topic created by CloudFormation. You can receive emails or even text messages, and you can use that opportunity to validate the blocking action and remove the IP address from the block list, if it was blocked in error. For more information about how to subscribe to SNS topics, see Subscribe to a Topic.
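Once everything is in place, you can verify the trap end to end by requesting the honeypot URL through your distribution and then confirming that your address was added to the IPSet. A sketch with placeholder values for the distribution domain and IPSet ID:

$ curl -s https://d1234abcdefgh.cloudfront.net/v1/honeypot/
$ aws waf get-ip-set --ip-set-id 11111111-2222-3333-4444-555555555555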

Summary

The solution explained in this blog post helps detect content scrapers and bad bots. In most production deployments, though, this is just one component of a more comprehensive web traffic filtering strategy. AWS WAF is highly customizable, and you can interact with it programmatically to react faster to changing threats.

If you have comments about this blog post, please submit them in the “Comments” section below. If you have questions about or issues deploying this solution, start a new thread on the AWS WAF forum.

– Vlad

How to Automate Restricting Access to a VPC by Using AWS IAM and AWS CloudFormation

Post Syndicated from Chris Craig original https://blogs.aws.amazon.com/security/post/Tx2Q3KHYPNJBBRX/How-to-Automate-Restricting-Access-to-a-VPC-by-Using-AWS-IAM-and-AWS-CloudFormat

Back in September, I wrote about How to Help Lock Down a User’s Amazon EC2 Capabilities to a Single VPC. In that blog post, I highlighted what I have found to be an effective approach to the virtual private cloud (VPC) lockdown scenario. Since that time, I have worked on making the related information easier to implement in your environment. As a result, I have developed an AWS CloudFormation template that automates the creation of the resources necessary to lock down AWS Identity and Access Management (IAM) entities (users, groups, and roles) to a VPC. In this blog post, I explain this CloudFormation template in detail and describe its individual sections in order to help you better understand what happens when you create a CloudFormation stack from the template.

This CloudFormation template creates a stack (related resources managed as a single unit). This stack generates an IAM role and instance profile in your account. Use the instance profile—a container for the IAM role that you use to pass role information—when launching your instances. The template also creates a managed policy with the role name, account ID, and region populated for you within the policy document, and attaches the policy to the IAM users, groups, or roles that you specify. Because you can establish a VPC in a single region only and the managed policy that is created is specific to the region and VPC, you must create a CloudFormation stack for each region to which you want to allow access.
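For example, once the stack is created, launching an EC2 instance that uses the generated instance profile might look like the following CLI sketch; the AMI, subnet, and instance profile name are placeholders, so substitute the values from your own account and stack:

$ aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type t2.micro \
    --subnet-id subnet-12345678 \
    --iam-instance-profile Name=VPCLockDown-InstanceProfile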

Explaining the CloudFormation template

Parameters section

The first section of the template (see the following code block) includes the required parameters that define the user input for the CloudFormation template. In this template, you must specify the VPC to which your users will have access, as well as the IAM users, groups, or roles to which you want to attach the managed policy. The IAMUsers, IAMRoles, and IAMGroups parameters must use the CommaDelimitedList type, which allows you to pass a single value or a list of values that the template can refer to later. For VPCId, use the AWS-specific parameter type, AWS::EC2::VPC::Id. This parameter allows you to select a VPC ID during the creation of the CloudFormation stack and makes it available as a string that you can refer to elsewhere in the template.

"Parameters":{
            "VPCId":{
                  "Description" : "Select VPC to which to grant access",
                  "Type" : "AWS::EC2::VPC::Id"
                 
            },
            "IAMUsers":{
                  "Description" : "List the IAM users to which you want to apply the VPCLockDown policy, separated by a comma",
                  "Default": "",
                  "Type" : "CommaDelimitedList"
            },
            "IAMRoles":{
                  "Description" : "List the IAM roles to which you want to apply the VPCLockDown policy, separated by a comma",
                  "Default": "",
                  "Type" : "CommaDelimitedList"
            },
            "IAMGroups":{
                  "Description" : "List the IAM groups to which you want to apply the VPCLockDown policy, separated by a comma",
                  "Default": "",
                  "Type" : "CommaDelimitedList"
            }
      }

Conditions section

The next section of the CloudFormation template is the Conditions section (see the following code block), which includes statements that define when a resource is created or a property is defined. This section is needed so that the template does not fail if one of the parameters from the previous code block—IAMUsers, IAMGroups, or IAMRoles—is left blank. This allows you to specify the users, groups, or roles to which you want to attach the policy, while still allowing any of those fields to be left blank. Each condition’s logic is the same and is based on the value read from the corresponding parameter when you run the template.

The condition checks the value of the referenced parameter to see if the first comma-delimited field has a value. If it does, the condition passes the full value to the rest of the template. If the condition finds nothing in the first value of the comma-delimited output, it evaluates to False and allows the template to continue without a value for that parameter. These parameters and conditions are used to determine to whom the managed policy is applied. Because these values are not required to create a managed policy, use the condition to enable support for an empty or False value.

"Conditions" : {
            "IAMUserNames" : {"Fn::Not": [{"Fn::Equals" : [{"Fn::Select": [0, {"Ref" : "IAMUsers" }]}, ""]}]},
            "IAMRoleNames" : {"Fn::Not": [{"Fn::Equals" : [{"Fn::Select": [0, {"Ref" : "IAMRoles" }]}, ""]}]},
            "IAMGroupNames": {"Fn::Not": [{"Fn::Equals" : [{"Fn::Select": [0, {"Ref" : "IAMGroups"}]}, ""]}]}
    }
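
To make that evaluation concrete, here is a small Python sketch of the logic each condition implements, assuming the parameter value arrives as a single comma-delimited string.

def condition_has_names(comma_delimited_value):
    # CommaDelimitedList parameters are split on commas; Fn::Select [0, ...]
    # returns the first element, and Fn::Not/Fn::Equals check it against "".
    first = comma_delimited_value.split(",")[0]
    return first != ""

print(condition_has_names("alice,bob"))  # True  -> the Users property is populated
print(condition_has_names(""))           # False -> AWS::NoValue is used instead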

Resources section

The next section of the CloudFormation template is the Resources section (see the following code block), which defines all of the resources that are created. First, you must define the IAM role and instance profile that you are using with these instances. This is the same set of resources that is created when you create an EC2 service role in the console. Because you are creating the IAM role through CloudFormation, though, the IAM instance profile is not automatically generated for you. Later in this post I will show you how to create this instance profile and link it to your role so that the role will be usable by the EC2 instances that you launch.

Next, you create the IAM managed policy that references the IAM role and instance profile, and attach it to the IAM users, groups, or roles that you defined in the user input section when creating the CloudFormation stack. CloudFormation resolves dependencies first; because the managed policy depends on the IAM role and instance profile, CloudFormation creates the managed policy last.

Looking at each resource in more depth, you define the IAM role first, along with the trust policy that specifies which principal is allowed to assume the role. For this template, you use the ec2.amazonaws.com service principal so that the role can be assumed by the EC2 instances to which it is applied. Each supported CloudFormation resource has its own Type whose properties you also need to define. Some properties are required, but others are optional. For example, the AssumeRolePolicyDocument property is required to create the IAM role. IAM roles do not require an attached policy to function, and because this role is attached to instances only as a placeholder, I will not attach a policy to it.

If you find that the instances that you launch in this manner require IAM permissions, you can simply add a policy to the role with which you launched the instances. Keep in mind that if you do this, it will give the same level of permissions to all of the instances that were launched with this role. For more information about properties for the AWS::IAM::Role resource type, see AWS::IAM::Role.

"VPCLockDownRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "Service": [
                                    "ec2.amazonaws.com"
                                ]
                            },
                            "Action": [
                                "sts:AssumeRole"
                            ]
                        }
                    ]
                }
            }
       }
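
If you later decide that these instances do need permissions, a boto3 sketch like the following attaches an AWS managed policy to the role after the stack has been created; the role name is a placeholder, because CloudFormation generates it for you (as noted below).

import boto3

iam = boto3.client("iam")

# Hypothetical generated name; look up the actual name in the stack's Resources tab.
role_name = "VPCLockDown-VPCLockDownRole-EXAMPLE123"

# Grants read-only Amazon S3 access to every instance launched with this role.
iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)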

Next, you must create the IAM instance profile as shown in the following code block. Use a reference to the output of the VPCLockDownRole to properly map the instance profile to the role. As with the IAM role, some properties are required and some are optional. The required property for AWS::IAM::InstanceProfile is the Roles property, for which I do a straight Ref to the output.

If you were not using the Ref option, you would place the role’s resource name as a string in the value of Roles. For more information about the properties of the AWS::IAM::InstanceProfile resource, see AWS::IAM::InstanceProfile.

"VpcLockDownInstanceProfile":{
                  "Type": "AWS::IAM::InstanceProfile",
                        "Properties": {
                              "Path": "/",
                              "Roles": [{ "Ref" : "VPCLockDownRole" }]
                        }
                  }

The Ref in the preceding code block looks at the AWS::IAM::Role resource’s output, and places it as a string to define the role to which the AWS::IAM::InstanceProfile is attached. Because the AWS::IAM::Role resource does not have a property for RoleName, CloudFormation automatically assigns this name during creation. This Ref allows the instance profile to be created dynamically, depending on the output of the role creation earlier in the template.

Last, you define the VPCLockDownPolicy resource, as shown in the following code block. I go into detail in my previous post about what exactly this policy does, so in today’s post, I will just highlight how it is used in the template. Create an AWS::IAM::ManagedPolicy resource, which allows you to define the policy as well as the IAM users, roles, or groups to which you want to attach it. These variables were defined in the Parameters section of the template and passed into the Conditions section for evaluation. The Fn::If condition evaluates whether there is data with which to populate the Users, Roles, or Groups fields. If there is a value for the condition, the template references the parameter that was defined. If the condition evaluates to False, it passes the AWS::NoValue pseudo parameter for the given field.

In this resource, you also leverage the ability of CloudFormation to join strings together and to reference pseudo parameters, such as AWS::Region and AWS::AccountId, as well as user-defined parameters, such as VPCId. This means that if you run this template in the us-east-1 Region, that region is placed in the ARN string automatically. This is why you must create a stack in each region that you want to control in this manner. Also, when you run the template in a region, you will be prompted only for VPCs in that region.

"VPCLockDownPolicy" :{
                  "Type" : "AWS::IAM::ManagedPolicy",
                  "Properties" : {
                        "Description" : "Policy for locking down to a VPC",
                        "Users" : {
                              "Fn::If" : [
                                    "IAMUserNames",
                                    { "Ref" : "IAMUsers" },
                                    { "Ref" : "AWS::NoValue" }
                                    ]},
                        "Roles" : {
                              "Fn::If" : [
                                    "IAMRoleNames",
                                    { "Ref" : "IAMRoles" },
                                    { "Ref" : "AWS::NoValue" }
                                    ]},
                        "Groups" : {
                              "Fn::If" : [
                                    "IAMGroupNames",
                                    { "Ref" : "IAMGroups" },
                                    { "Ref" : "AWS::NoValue" }
                                    ]},
                        "PolicyDocument" : {
                            "Version": "2012-10-17",
                            "Statement": [
                                {
                                    "Sid": "NonResourceBasedReadOnlyPermissions",
                                    "Action": [
                                        "ec2:Describe*",
                                        "ec2:CreateKeyPair",
                                        "ec2:CreateSecurityGroup",
                                        "iam:GetInstanceProfiles",
                                        "iam:ListInstanceProfiles"
                                    ],
                                    "Effect": "Allow",
                                    "Resource": "*"
                                },
                                {
                                    "Sid": "IAMPassroleToInstance",
                                    "Action": [
                                        "iam:PassRole"
                                    ],
                                    "Effect": "Allow",
                                    "Resource": {"Fn::Join":["", [ "arn:aws:iam::",{ "Ref" : "AWS::AccountId" },":role/", { "Ref" : "VPCLockDownRole" }]]}
                                },
                                {
                                    "Sid": "AllowInstanceActions",
                                    "Effect": "Allow",
                                    "Action": [
                                        "ec2:RebootInstances",
                                        "ec2:StopInstances",
                                        "ec2:TerminateInstances",
                                        "ec2:StartInstances",
                                        "ec2:AttachVolume",
                                        "ec2:DetachVolume"
                                    ],
                                    "Resource": {"Fn::Join":["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },":instance/*"]]},
                                    "Condition": {
                                        "StringEquals": {
                                            "ec2:InstanceProfile": {"Fn::Join" : ["",[ "arn:aws:iam::",{ "Ref" : "AWS::AccountId" },":instance-profile/", { "Ref" : "VpcLockDownInstanceProfile" }]]}
                                        }
                                    }
                                },
                                {
                                    "Sid": "EC2RunInstances",
                                    "Effect": "Allow",
                                    "Action": "ec2:RunInstances",
                                    "Resource": {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },":instance/*"]]},
                                    "Condition": {
                                        "StringEquals": {
                                            "ec2:InstanceProfile": {"Fn::Join" : ["",[ "arn:aws:iam::",{ "Ref" : "AWS::AccountId" },":instance-profile/", { "Ref" : "VpcLockDownInstanceProfile" }]]}
                                        }
                                    }
                                },
                                {
                                    "Sid": "EC2RunInstancesSubnet",
                                    "Effect": "Allow",
                                    "Action": "ec2:RunInstances",
                                    "Resource": {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },":subnet/*"]]},
                                    "Condition": {
                                        "StringEquals": {
                                            "ec2:vpc": {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },"vpc/",{ "Ref" : "VPCId" },"" ]]}
                                        }
                                    }
                                },
                                {
                                    "Sid": "RemainingRunInstancePermissions",
                                    "Effect": "Allow",
                                    "Action": "ec2:RunInstances",
                                    "Resource": [
                                        {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },":volume/*"]]},
                                        {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },"::image/*"]]},
                                        {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },"::snapshot/*"]]},
                                        {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },":network-interface/*"]]},
                                        {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },":key-pair/*"]]},
                                        {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },":security-group/*"]]}
                                    ]
                                },
                                {
                                    "Sid": "EC2VpcNonresourceSpecificActions",
                                    "Effect": "Allow",
                                    "Action": [
                                        "ec2:DeleteNetworkAcl",
                                        "ec2:DeleteNetworkAclEntry",
                                        "ec2:DeleteRoute",
                                        "ec2:DeleteRouteTable",
                                        "ec2:AuthorizeSecurityGroupEgress",
                                        "ec2:AuthorizeSecurityGroupIngress",
                                        "ec2:RevokeSecurityGroupEgress",
                                        "ec2:RevokeSecurityGroupIngress",
                                        "ec2:DeleteSecurityGroup"
                                    ],
                                    "Resource": "*",
                                    "Condition": {
                                        "StringEquals": {
                                            "ec2:vpc": {"Fn::Join" : ["",[ "arn:aws:ec2:",{ "Ref" : "AWS::Region" },":",{ "Ref" : "AWS::AccountId" },":vpc/", { "Ref" : "VPCId" },""]]}
                                        }
                                    }
                                }
                            ]
                                    }
                  }
            }

After you have created a CloudFormation stack from this template in the desired region, an IAM policy is created and applied to the IAM entities that you specified. This policy requires users to launch instances in the VPC that you specified and requires that they also create the EC2 instance with the IAM instance profile that you created with this template. This approach means you don’t have to know where to apply the policy, and as a result, this approach should streamline deployment within your account.
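
One way to verify the lockdown is to launch a test instance as one of the restricted users; the following boto3 sketch uses placeholder AMI, subnet, and instance profile values, and the call should succeed only when the subnet belongs to the allowed VPC and the instance profile created by the stack is specified.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All values below are placeholders; use a subnet from the allowed VPC and the
# instance profile name that CloudFormation generated for the stack.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-11111111",
    IamInstanceProfile={"Name": "VPCLockDown-VpcLockDownInstanceProfile-EXAMPLE"},
)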

If you have comments about this post, add them to the “Comments” section below. If you have questions about or issues implementing this solution, please open a new thread on the IAM forum.

– Chris

In Case You Missed These: AWS Security Blog Posts from January and February

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1J9OK26Z1WA3L/In-Case-You-Missed-These-AWS-Security-Blog-Posts-from-January-and-February

In case you missed any of the AWS Security Blog posts from January and February, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from using AWS WAF to automating HIPAA compliance.

February

February 29, AWS Compliance Announcement: Announcing Industry Best Practices for Securing AWS Resources
We are happy to announce that the Center for Internet Security (CIS) has published the CIS AWS Foundations Benchmark, a set of security configuration best practices for AWS. These industry-accepted best practices go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment procedures. This is the first time CIS has issued a set of security best practices specific to an individual cloud service provider.

February 24, AWS WAF How-To: How to Use AWS WAF to Block IP Addresses That Generate Bad Requests
In this blog post, I show you how to create an AWS Lambda function that automatically parses Amazon CloudFront access logs as they are delivered to Amazon S3, counts the number of bad requests from unique sources (IP addresses), and updates AWS WAF to block further requests from those IP addresses. I also provide a CloudFormation template that creates the web access control list (ACL), rule sets, Lambda function, and logging S3 bucket so that you can try this yourself.

February 23, Automating HIPAA Compliance How-To: How to Use AWS Config to Help with Required HIPAA Audit Controls: Part 4 of the Automating HIPAA Compliance Series
In today’s final post of this series, I am going to complete the explanation of the DevSecOps architecture by highlighting ways you can use AWS Config to help meet audit controls required by HIPAA. Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications. This Config output, along with other audit trails, gives you the types of information you can use to meet your HIPAA auditing obligations.  

February 22, March Webinar Announcement: Register for and Attend This March 2 Webinar—Using AWS WAF and Lambda for Automatic Protection
AWS WAF Software Development Manager Nathan Dye will share Lambda scripts you can use to automate security with AWS WAF and write dynamic rules that can prevent HTTP floods, protect against badly behaving IPs, and maintain IP reputation lists. You can also learn how Brazilian retailer Magazine Luiza leveraged AWS WAF and Lambda to protect its site and run an operationally smooth Black Friday.

February 22, Automating HIPAA Compliance How-To: How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series
In my previous post, I walked through the setup of a DevSecOps environment that gives healthcare developers the ability to launch their own healthcare web server. At the heart of the architecture is AWS CloudFormation, a JSON representation of your architecture that allows security administrators to provision AWS resources according to the compliance standards they define. In today’s post, I will share examples that provide a Top 10 List of CloudFormation code snippets that you can consider when trying to map the requirements of the AWS Business Associates Agreement (BAA) to CloudFormation templates.

February 17, AWS Partner Network: New AWS Partner Network Blog Post: Securely Accessing Customers’ AWS Accounts with Cross-Account IAM Roles
Building off AWS Identity and Access Management (IAM) best practices, the AWS Partner Network (APN) Blog this week published a blog post called, Securely Accessing Customer AWS Accounts with Cross-Account IAM Roles. Written by AWS Partner Solutions Architect David Rocamora, this post addresses how best practices can be applied when working with APN Partners, and describes the potential drawbacks with APN Partners having access to their customers’ AWS resources.

February 16, AWS Summit in Chicago: Register for the Free AWS Summit – Chicago, April 2016
Registration for the 2016 AWS Summit – Chicago is now open. This free event will educate you about the AWS platform and offer information about architecture best practices and new cloud services. Register today to reserve your seat to hear keynote speaker Matt Wood, AWS General Manager of Product Strategy, highlight the latest AWS services and customer stories.

February 16, Automating HIPAA Compliance How-To: How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series
In my previous blog post, I discussed the idea of using the cloud to protect the cloud and improving healthcare IT by applying DevSecOps methods. In Part 2 today, I will show an architecture composed of AWS services that gives healthcare security administrators necessary controls, allows healthcare developers to interact with the system using familiar tools (such as Git), and leverages AWS managed services without the need for advanced coding or complex configuration.

February 15, Automating HIPAA Compliance How-To: How to Automate HIPAA Compliance (Part 1): Use the Cloud to Protect the Cloud
In a series of blog posts on the AWS Security Blog this month, I will provide prescriptive advice and code samples to developers, system administrators, and security specialists who wish to improve their healthcare IT by applying the DevSecOps methods that the cloud enables. I will also demonstrate AWS services that can help customers meet their AWS Business Associate Agreement obligations in an automated fashion. Consider this series a getting started guide for DevSecOps strategies you can implement as you migrate your own compliance frameworks and controls to the cloud. 

February 9, AWS WAF How-To: How to Configure Rate-Based Blacklisting with AWS WAF and AWS Lambda
One security challenge you may have faced is how to prevent your web servers from being flooded by unwanted requests, or by scanning tools such as bots and crawlers that don’t respect the crawl-delay directive value. The main objective of this kind of distributed denial of service (DDoS) attack, commonly called an HTTP flood, is to overburden system resources and make them unavailable to your real users or customers (as shown in the following illustration). In this blog post, I will show you how to provision a solution that automatically detects unwanted traffic based on request rate, and then updates configurations of AWS WAF (a web application firewall that protects any application deployed on the Amazon CloudFront content delivery service) to block subsequent requests from those users.

February 3, AWS Compliance Pilot Program: AWS FedRAMP-Trusted Internet Connection (TIC) Overlay Pilot Program
I’m pleased to announce a newly created resource for usage of the Federal Cloud—after successfully completing the testing phase of the FedRAMP-Trusted Internet Connection (TIC) Overlay pilot program, we’ve developed Guidance for TIC Readiness on AWS. This new way of architecting cloud solutions that address TIC capabilities (in a FedRAMP moderate baseline) comes as the result of our relationships with the FedRAMP Program Management Office (PMO), Department of Homeland Security (DHS) TIC PMO, GSA 18F, and FedRAMP third-party assessment organization (3PAO), Veris Group. Ultimately, this approach will provide US Government agencies and contractors with information assisting in the development of “TIC Ready” architectures on AWS.

February 2, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Microsoft Active Directory
In my previous post, I showed how to use Simple AD to forward DNS requests originating from on-premises networks to an Amazon Route 53 private hosted zone. Today, I will show how you can use Microsoft Active Directory (also provisioned with AWS Directory Service) to provide the same DNS resolution with some additional forwarding capabilities.

February 1, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53
As you establish private connectivity between your on-premises networks and your AWS Virtual Private Cloud (VPC) environments, the need for Domain Name System (DNS) resolution across these environments grows in importance. One common approach used to address this need is to run DNS servers on Amazon EC2 across multiple Availability Zones (AZs) and integrate them with private on-premises DNS domains. In many cases, though, a managed private DNS service (accessible outside of a VPC) with less administrative overhead is advantageous. In this blog post, I will show you two approaches that use Amazon Route 53 and AWS Directory Service to provide DNS resolution between on-premises networks and AWS VPC environments.
 

January

January 26, DNS Filtering How-To: How to Add DNS Filtering to Your NAT Instance with Squid
In this post, I discuss and give an example of how Squid, a leading open-source proxy, can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains, while being fully transparent for instances in the private subnet. First, I explain briefly how to create the infrastructure resources required for this approach. Then, I provide step-by-step instructions to install, configure, and test Squid as a transparent proxy.
 

January 25, AWS KMS How-To: How to Help Protect Sensitive Data with AWS KMS
One question AWS KMS customers frequently ask is how to encrypt Primary Account Number (PAN) data within AWS, because PCI DSS sections 3.5 and 3.6 require the encryption of credit card data at rest and have stringent requirements around the management of encryption keys. One KMS encryption option is to encrypt your PAN data using customer data keys (CDKs) that are exportable out of KMS. Alternatively, you also can use KMS to directly encrypt PAN data by using a customer master key (CMK). In this blog post, I will show you how to help protect sensitive PAN data by using KMS CMKs.

January 21, AWS Certificate Manager Announcement: Now Available: AWS Certificate Manager
Launched today, AWS Certificate Manager (ACM) is designed to simplify and automate many of the tasks traditionally associated with provisioning and managing SSL/TLS certificates. ACM takes care of the complexity surrounding the provisioning, deployment, and renewal of digital certificates—all at no extra cost!

January 19, AWS Compliance Announcement: Introducing GxP Compliance on AWS
We’re happy to announce that customers are now able to bring the next generation of medical, health, and wellness solutions to their GxP systems by using AWS for their processing and storage needs. Compliance with healthcare and life sciences requirements is a key priority for us, and we are pleased to announce the availability of new compliance enablers for customers with GxP requirements.

January 19, AWS Config How-To: How to Record and Govern Your IAM Resource Configurations Using AWS Config
Using Config Rules on IAM resources, you can codify your best practices for using IAM and assess the compliance state of these rules regularly. In this blog post, I will show how to start recording the configuration of IAM resources, and author an example rule that checks whether all IAM users in the account are using a sample managed policy, MyIAMUserPolicy. I will also describe examples of other rules customers have authored to assess their organizations’ compliance with their own standards.

January 15, AWS Summits: Mark Your Calendar for AWS Summits in 2016
Are you ready for AWS Summits in 2016? This year we have created even more information-packed Summits that will take place across the globe, each designed to accelerate your cloud journey and help you get the most out of AWS services.

January 13, AWS IAM Announcement: The IAM Console Now Helps Prevent You from Accidentally Deleting In-Use Resources
Starting today, the IAM console shows service last accessed data as part of the process of deleting an IAM user or role. Now you have additional data that shows you when a resource was last active so that you can make a more informed decision about whether or not to delete it.

January 6, IAM Best Practices: Adhere to IAM Best Practices in 2016
As another new year begins, we encourage you to review our recommended IAM best practices. Following these best practices can help you maintain the security of your AWS resources. You can learn more by watching the IAM Best Practices to Live By presentation that Anders Samuelsson gave at AWS re:Invent 2015, or you can click the following links that will take you to IAM documentation, blog posts, and videos. 

If you have comments about any of these posts, please add your comments in the "Comments" section of the appropriate post. If you have questions about or issues implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

In Case You Missed These: AWS Security Blog Posts from January and February

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1J9OK26Z1WA3L/In-Case-You-Missed-These-AWS-Security-Blog-Posts-from-January-and-February

In case you missed any of the AWS Security Blog posts from January and February, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from using AWS WAF to automating HIPAA compliance.

February

February 29, AWS Compliance Announcement: Announcing Industry Best Practices for Securing AWS Resources
We are happy to announce that the Center for Internet Security (CIS) has published the CIS AWS Foundations Benchmark, a set of security configuration best practices for AWS. These industry-accepted best practices go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment procedures. This is the first time CIS has issued a set of security best practices specific to an individual cloud service provider.

February 24, AWS WAF How-To: How to Use AWS WAF to Block IP Addresses That Generate Bad Requests
In this blog post, I show you how to create an AWS Lambda function that automatically parses Amazon CloudFront access logs as they are delivered to Amazon S3, counts the number of bad requests from unique sources (IP addresses), and updates AWS WAF to block further requests from those IP addresses. I also provide a CloudFormation template that creates the web access control list (ACL), rule sets, Lambda function, and logging S3 bucket so that you can try this yourself.

February 23, Automating HIPAA Compliance How-To: How to Use AWS Config to Help with Required HIPAA Audit Controls: Part 4 of the Automating HIPAA Compliance Series
In today’s final post of this series, I am going to complete the explanation of the DevSecOps architecture by highlighting ways you can use AWS Config to help meet audit controls required by HIPAA. Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications. This Config output, along with other audit trails, gives you the types of information you can use to meet your HIPAA auditing obligations.  

February 22, March Webinar Announcement: Register for and Attend This March 2 Webinar—Using AWS WAF and Lambda for Automatic Protection
AWS WAF Software Development Manager Nathan Dye will share  Lambda scripts you can use to automate security with AWS WAF and write dynamic rules that can prevent HTTP floods, protect against badly behaving IPs, and maintain IP reputation lists. You can also learn how Brazilian retailer, Magazine Luiza, leveraged AWS WAF and Lambda to protect its site and run an operationally smooth Black Friday.

February 22, Automating HIPAA Compliance How-To: How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series
In my previous post, I walked through the setup of a DevSecOps environment that gives healthcare developers the ability to launch their own healthcare web server. At the heart of the architecture is AWS CloudFormation, a JSON representation of your architecture that allows security administrators to provision AWS resources according to the compliance standards they define. In today’s post, I will share examples that provide a Top 10 List of CloudFormation code snippets that you can consider when trying to map the requirements of the AWS Business Associates Agreement (BAA) to CloudFormation templates.

February 17, AWS Partner Network: New AWS Partner Network Blog Post: Securely Accessing Customers’ AWS Accounts with Cross-Account IAM Roles
Building off AWS Identity and Access Management (IAM) best practices, the AWS Partner Network (APN) Blog this week published a blog post called, Securely Accessing Customer AWS Accounts with Cross-Account IAM Roles. Written by AWS Partner Solutions Architect David Rocamora, this post addresses how best practices can be applied when working with APN Partners, and describes the potential drawbacks with APN Partners having access to their customers’ AWS resources.

February 16, AWS Summit in Chicago: Register for the Free AWS Summit – Chicago, April 2016
Registration for the 2016 AWS Summit – Chicago is now open. This free event will educate you about the AWS platform and offer information about architecture best practices and new cloud services. Register todayto reserve your seat to hear keynote speaker Matt Wood, AWS General Manager of Product Strategy, highlighting the latest AWS services and customer stories.

February 16, Automating HIPAA Compliance How-To: How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series
In my previous blog post, I discussed the idea of using the cloud to protect the cloud and improving healthcare IT by applying DevSecOps methods. In Part 2 today, I will show an architecture composed of AWS services that gives healthcare security administrators necessary controls, allows healthcare developers to interact with the system using familiar tools (such as Git), and leverages AWS managed services without the need for advanced coding or complex configuration.

February 15, Automating HIPAA Compliance How-To: How to Automate HIPAA Compliance (Part 1): Use the Cloud to Protect the Cloud
In a series of blog posts on the AWS Security Blog this month, I will provide prescriptive advice and code samples to developers, system administrators, and security specialists who wish to improve their healthcare IT by applying the DevSecOps methods that the cloud enables. I will also demonstrate AWS services that can help customers meet their AWS Business Associate Agreement obligations in an automated fashion. Consider this series a getting started guide for DevSecOps strategies you can implement as you migrate your own compliance frameworks and controls to the cloud. 

February 9, AWS WAF How-To: How to Configure Rate-Based Blacklisting with AWS WAF and AWS Lambda
One security challenge you may have faced is how to prevent your web servers from being flooded by unwanted requests, or scanning tools such as bots and crawlers that don’t respect the crawl-delay directivevalue. The main objective of this kind of distributed denial of service (DDoS) attack, commonly called an HTTP flood, is to overburden system resources and make them unavailable to your real users or customers (as shown in the following illustration). In this blog post, I will show you how to provision a solution that automatically detects unwanted traffic based on request rate, and then updates configurations of AWS WAF (a web application firewall that protects any application deployed on the Amazon CloudFront content delivery service) to block subsequent requests from those users.

February 3, AWS Compliance Pilot Program: AWS FedRAMP-Trusted Internet Connection (TIC) Overlay Pilot Program
I’m pleased to announce a newly created resource for usage of the Federal Cloud—after successfully completing the testing phase of the FedRAMP-Trusted Internet Connection (TIC) Overlay pilot program, we’ve developed Guidance for TIC Readiness on AWS. This new way of architecting cloud solutions that address TIC capabilities (in a FedRAMP moderate baseline) comes as the result of our relationships with the FedRAMP Program Management Office (PMO), Department of Homeland Security (DHS) TIC PMO, GSA 18F, and FedRAMP third-party assessment organization (3PAO), Veris Group. Ultimately, this approach will provide US Government agencies and contractors with information assisting in the development of “TIC Ready” architectures on AWS.

February 2, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Microsoft Active Directory
In my previous post, I showed how to use Simple AD to forward DNS requests originating from on-premises networks to an Amazon Route 53 private hosted zone. Today, I will show how you can use Microsoft Active Directory (also provisioned with AWS Directory Service) to provide the same DNS resolution with some additional forwarding capabilities.

February 1, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53
As you establish private connectivity between your on-premises networks and your AWS Virtual Private Cloud (VPC) environments, the need for Domain Name System (DNS) resolution across these environments grows in importance. One common approach used to address this need is to run DNS servers on Amazon EC2 across multiple Availability Zones (AZs) and integrate them with private on-premises DNS domains. In many cases, though, a managed private DNS service (accessible outside of a VPC) with less administrative overhead is advantageous. In this blog post, I will show you two approaches that use Amazon Route 53 and AWS Directory Service to provide DNS resolution between on-premises networks and AWS VPC environments.

 

January

January 26, DNS Filtering How-To: How to Add DNS Filtering to Your NAT Instance with Squid
In this post, I discuss and give an example of how Squid, a leading open-source proxy, can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains, while being fully transparent for instances in the private subnet. First, I explain briefly how to create the infrastructure resources required for this approach. Then, I provide step-by-step instructions to install, configure, and test Squid as a transparent proxy.

 

January 25, AWS KMS How-To: How to Help Protect Sensitive Data with AWS KMS
One question AWS KMS customers frequently ask is about how how to encrypt Primary Account Number (PAN) data within AWS because PCI DSS sections 3.5 and 3.6 require the encryption of credit card data at rest and has stringent requirements around the management of encryption keys. One KMS encryption option is to encrypt your PAN data using customer data keys (CDKs) that are exportable out of KMS. Alternatively, you also can use KMS to directly encrypt PAN data by using a customer master key (CMK). In this blog post, I will show you how to help protect sensitive PAN data by using KMS CMKs.

January 21, AWS Certificate Manager Announcement: Now Available: AWS Certificate Manager
Launched today, AWS Certificate Manager (ACM) is designed to simplify and automate many of the tasks traditionally associated with provisioning and managing SSL/TLS certificates. ACM takes care of the complexity surrounding the provisioning, deployment, and renewal of digital certificates—all at no extra cost!

January 19, AWS Compliance Announcement: Introducing GxP Compliance on AWS
We’re happy to announce that customers now are enabled to bring the next generation of medical, health, and wellness solutions to their GxP systems by using AWS for their processing and storage needs. Compliance with healthcare and life sciences requirements is a key priority for us, and we are pleased to announce the availability of new compliance enablers for customers with GxP requirements.

January 19, AWS Config How-To: How to Record and Govern Your IAM Resource Configurations Using AWS Config
Using Config Rules on IAM resources, you can codify your best practices for using IAM and assess the compliance state of these rules regularly. In this blog post, I will show how to start recording the configuration of IAM resources, and author an example rule that checks whether all IAM users in the account are using a sample managed policy, MyIAMUserPolicy. I will also describe examples of other rules customers have authored to assess their organizations’ compliance with their own standards.

January 15, AWS Summits: Mark Your Calendar for AWS Summits in 2016
Are you ready for AWS Summits in 2016? This year we have created even more information-packed Summits that will take place across the globe, each designed to accelerate your cloud journey and help you get the most out of AWS services.

January 13, AWS IAM Announcement: The IAM Console Now Helps Prevent You from Accidentally Deleting In-Use Resources
Starting today, the IAM console shows service last accessed data as part of the process of deleting an IAM user or role. Now you have additional data that shows you when a resource was last active so that you can make a more informed decision about whether or not to delete it.

January 6, IAM Best Practices: Adhere to IAM Best Practices in 2016
As another new year begins, we encourage you to review our recommended IAM best practices. Following these best practices can help you maintain the security of your AWS resources. You can learn more by watching the IAM Best Practices to Live By presentation that Anders Samuelsson gave at AWS re:Invent 2015, or you can click the following links that will take you to IAM documentation, blog posts, and videos. 

If you have comments about any of these posts, please add your comments in the "Comments" section of the appropriate post. If you have questions about or issues implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

How to Use AWS Config to Help with Required HIPAA Audit Controls: Part 4 of the Automating HIPAA Compliance Series

Post Syndicated from Chris Crosbie original https://blogs.aws.amazon.com/security/post/Tx27GJDUUTHKRRJ/How-to-Use-AWS-Config-to-Help-with-Required-HIPAA-Audit-Controls-Part-4-of-the-A

In my previous posts in this series, I explained how to get started with the DevSecOps environment for HIPAA that is depicted in the following architecture diagram. In my second post in this series, I gave you guidance about how to set up AWS Service Catalog (#4 in the following diagram) to allow developers a way to launch healthcare web servers and release source code without the need for administrator intervention. In my third post in this series, I advised healthcare security administrators about defining AWS CloudFormation templates (#1 in the diagram) for infrastructure that must comply with the AWS Business Associate Agreement (BAA).

In today’s final post of this series, I am going to complete the explanation of the DevSecOps architecture depicted in the preceding diagram by highlighting ways you can use AWS Config (#9 in the diagram) to help meet audit controls required by HIPAA. Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications. This Config output, along with other audit trails, gives you the types of information you can use to meet your HIPAA auditing obligations. 

Auditing and monitoring are essential to HIPAA security. Auditing controls are a Technical Safeguard that must be addressed through the use of technical controls by anyone who wishes to store, process, or transmit electronic patient data. However, because there are no standard implementation specifications within the HIPAA law and regulations, AWS Config gives you the flexibility to address audit controls by using the cloud to protect the cloud.

Because Config currently targets only AWS infrastructure configuration changes, it is unlikely that Config alone will be able to meet all of the audit control requirements laid out in Technical Safeguard 164.312, the section of the HIPAA regulations that discusses the technical safeguards such as audit controls. However, Config is a cloud-native auditing service that you should evaluate as an alternative to traditional on-premises compliance tools and procedures.

The audit controls standard found in 164.312(b) of the HIPAA regulations says: “Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic health information.” Config helps achieve this because it monitors the activity of both running and deleted AWS resources across time. In a DevSecOps environment in which developers have the power to turn on and turn off infrastructure in a self-service manner, using a cloud-native monitoring tool such as Config will help ensure that you can meet your auditing requirements. Understanding what a configuration looked like and who had access to it at a point in the past is something that you will need to do in a typical HIPAA audit, and Config provides this functionality.

For more about the topic of auditing HIPAA infrastructure in the cloud, the AWS re:Invent 2015 session, Architecting for HIPAA Compliance on AWS, gives additional pointers. To supplement the monitoring provided by Config, review and evaluate the easily deployable monitoring software found in the AWS Marketplace.

Get started with AWS Config

From the AWS Management Console, under Management Tools:

Click Config.

If this is your first time using Config, click Get started.

From the Set up AWS Config page, choose which types of resources you want to track.

Config is designed to track the interaction among various AWS services. At the time of this post, you can choose to track your accounts in AWS Identity and Access Management (IAM), Amazon EC2–related services (such as Amazon Elastic Block Store, elastic network interfaces, and virtual private cloud [VPC]), and AWS CloudTrail.

All the information collected across these services is normalized into a standard format so that auditors or your compliance team do not need to understand the underlying details of how to audit each AWS service; they can simply review the Config console to ensure that your healthcare privacy standards are being met.

Because the infrastructure described in this post is designed for storing protected health information (PHI), I am going to select the check box next to All resources, as shown in the following screenshot. By choosing this option, I can ensure that not only will all the resources available for tracking be included, but also as new resource types get added to Config, they will automatically be added to my tracking as well.  

Also, be sure to select the Include global resources check box if you would like to use Config to record and govern your IAM resource configurations.
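If you would rather capture this setup as code than click through the console, the same recorder settings can also be expressed in a CloudFormation template. The following is a minimal, illustrative sketch only; the logical names ConfigRecorder and ConfigRole are placeholders, and ConfigRole is assumed to be an IAM role, defined elsewhere in the template, that Config is allowed to assume.

"ConfigRecorder": {
  "Type": "AWS::Config::ConfigurationRecorder",
  "Properties": {
    "RoleARN": { "Fn::GetAtt": [ "ConfigRole", "Arn" ] },
    "RecordingGroup": {
      "AllSupported": true,
      "IncludeGlobalResourceTypes": true
    }
  }
}

Setting AllSupported and IncludeGlobalResourceTypes to true mirrors the All resources and Include global resources choices described above.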

Specify where the configuration history file should be stored

Amazon S3 buckets have global naming, which makes it possible to aggregate the configuration history files across regions or send the files to a separate AWS account with limited privileges. The same consolidation can be configured for Amazon Simple Notification Service (SNS) topics, if you want to programmatically extend the information coming from Config or be immediately alerted of compliance risks.

For this example, I create a new bucket in my account and turn off the Amazon SNS topic notifications (as shown in the following screenshot), and click Continue.  
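The delivery settings can be modeled in CloudFormation as well. A minimal sketch, assuming a new bucket created in the same template and no SNS topic, might look like the following; ConfigBucket and ConfigDeliveryChannel are illustrative names, and in practice the bucket also needs a policy that allows Config to write to it.

"ConfigBucket": {
  "Type": "AWS::S3::Bucket"
},
"ConfigDeliveryChannel": {
  "Type": "AWS::Config::DeliveryChannel",
  "Properties": {
    "S3BucketName": { "Ref": "ConfigBucket" },
    "ConfigSnapshotDeliveryProperties": { "DeliveryFrequency": "Twelve_Hours" }
  }
}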

On the next page, create a new IAM role in your AWS account so that the Config service has the ability to read your infrastructure’s information. You can review the permissions that will be associated with this IAM role by clicking the arrow next to View Policy Document.

After you have verified the policy, click Allow. You should now be taken to the Resource inventory page. On the right side of the page, you should see that Recording is on and that inventory is being taken about your infrastructure. When the Taking inventory label (shown in the following image) is no longer visible, you can start reviewing your healthcare infrastructure.

Review your healthcare server

For the rest of this post, I use Config to review the healthcare web server that I created with AWS Service Catalog in How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series.

From the Resource inventory page, you can search based on types of resources, such as IAM user, network access control list (ACL), VPC, and instance. A resource tag is a way to categorize AWS resources, and you can search by those tags in Config. Because I used CloudFormation to enforce tagging, I can quickly find the type of resources I am interested in by setting up search for these tags.

As an example of why this is useful, consider employee turnover. Most healthcare organizations need to have processes and procedures to deal with employee turnover in a regulated environment. Because our CloudFormation template forced developers to populate a tag with their email addresses, you can easily use Config to find all of the resources an employee was using if that employee leaves the organization (or even while they are still with the company).

Search on the Resource inventory page for the employee’s email address along with the tag, InstanceOwnerEmail, and then click Look up, as shown in the following screenshot.

Click the link under Resource identifier to see the Config timeline that shows the most recent configuration recorded for the instance as well as previously recorded configurations. This timeline will show not only the configuration details of the instance itself, but also will provide the relationships to other AWS services and an easy-to-interpret Changes section. This section provides your auditing and compliance teams the ability to quickly review and interpret changes from a single interface without needing to understand the underlying AWS services in detail or jump between multiple AWS service pages.

Clicking View Details, as shown in the following image, will produce a JSON representation of the configuration, which you may consider including as evidence in the event of an audit.

The details contained in this JSON text will help you understand the structure of the configuration objects passed to AWS Lambda, which you interact with when writing your own Config rules. I discuss this in more detail later in this blog post.
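To give you a feel for that structure before we get there, the following is an abbreviated, illustrative configuration item for an EC2 instance. The field names follow the Config configuration item format, and the values shown are placeholders only:

{
  "configurationItem": {
    "resourceType": "AWS::EC2::Instance",
    "resourceId": "i-0123456789abcdef0",
    "configurationItemStatus": "OK",
    "configurationItemCaptureTime": "2016-02-01T00:00:00.000Z",
    "tags": { "PHI": "YES", "Environment": "PROD" },
    "configuration": {
      "instanceType": "m4.large",
      "placement": { "tenancy": "dedicated" }
    },
    "relationships": [
      { "resourceType": "AWS::EC2::SecurityGroup", "resourceId": "sg-11111111" }
    ]
  }
}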

Let’s walk through a quick example of one of the many ways an auditor or administrator might use Config. Let’s say that there was an emergency production issue. The issue required an administrator to add SSH access to production web servers temporarily so that he or she could log in and manually install a software patch. The patches were then installed, and SSH access was revoked from all the security groups except for one instance’s security group, which was mistakenly forgotten. In Config, the compliance team is able to review the last change to any resource type by reviewing the Config timeline (as shown in the following screenshot) and clicking Change to verify exactly what was changed.

It is clear from the following screenshot that the opening of SSH on port 22 was the last change captured, so we need to close the port on this security group to block remote access to this server.

Extend healthcare-specific compliance with Config Rules

Though the SSH configuration I just walked through provided context about how Config works, in a healthcare environment we would ideally want to automate this process. This is what AWS Config Rules can do for us.

Config Rules is a powerful rule system that can target resources and have those resources evaluated when they are created or changed, or on a periodic basis (hourly, daily, and so forth).

Let’s look at how we could have used Config Rules to identify the same improperly opened SSH port discussed previously in this post.

At the time of this post, AWS Config Rules is available only in the US East (N. Virginia) Region, so to follow along, be sure you have the AWS Management Console set to that region. From the same Config service that we have been using, click Rules in the left pane and then click Add Rule.

You can choose from available managed rules. One of those rules is restricted-common-ports, which will fit our use case. I modify this rule to be limited to only those security groups I have tagged as PROD in the Trigger section, as shown in the following screenshot.

I then override the default ports of this rule and specify my own port under Rule parameters, which is 22.
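If you prefer to define the same check as code, Config rules can also be declared as CloudFormation resources. The following is a minimal sketch of this rule under those assumptions; the logical name and rule name are illustrative, the tag key and value are assumptions based on the PROD tagging described above, and RESTRICTED_INCOMING_TRAFFIC is the identifier behind the restricted-common-ports managed rule.

"ProdRestrictedPortsRule": {
  "Type": "AWS::Config::ConfigRule",
  "Properties": {
    "ConfigRuleName": "prod-restricted-common-ports",
    "Scope": {
      "TagKey": "Environment",
      "TagValue": "PROD"
    },
    "InputParameters": { "blockedPort1": "22" },
    "Source": {
      "Owner": "AWS",
      "SourceIdentifier": "RESTRICTED_INCOMING_TRAFFIC"
    }
  }
}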

Click Save and you will be taken back to the Rules page to have the rule run on your infrastructure. While the rule is running, you will see an Evaluating status, as shown in the following image.

When I return to my Resource inventory by clicking Resources in the left pane, I again search for all of my PROD environment resources. However, with AWS Config rules, I can quickly find which resources are noncompliant with the rule I just created. The following screenshot shows the Resource type and Resource identifier of the resource that is noncompliant with this rule.

In addition to this SSH production check, for a regulated healthcare environment you should consider implementing all of the managed AWS Config rules to ensure your AWS infrastructure is meeting basic compliance requirements set by your organization. A few examples follow; a sketch of one of these rules expressed as a CloudFormation resource appears after the list.

Use the encrypted-volumes rule to ensure that volumes tagged as PHI=”Yes” are encrypted.

Ensure that you are always logging API activity by using the cloudtrail-enabled rule.

Ensure you do not have orphaned Elastic IP addresses with eip-attached.

Verify that all development machines can only be accessed with SSH from the development VPC by changing the defaults in restricted-ssh.

Use required-tags to ensure that you have the information you need for healthcare audits.

Ensure that only PROD resources that are hardened for exposure to the public Internet are in a VPC that has an Internet gateway attached by taking advantage of the managed rule ec2-instances-in-vpc.
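As a concrete illustration of the first item in the list, the following sketch scopes the encrypted-volumes managed rule to volumes tagged PHI=YES. The logical name and rule name are placeholders, the tag value is an assumption based on the PHI parameter used earlier in this series, and ENCRYPTED_VOLUMES is the managed-rule identifier.

"PhiVolumesEncryptedRule": {
  "Type": "AWS::Config::ConfigRule",
  "Properties": {
    "ConfigRuleName": "phi-volumes-encrypted",
    "Scope": {
      "TagKey": "PHI",
      "TagValue": "YES"
    },
    "Source": {
      "Owner": "AWS",
      "SourceIdentifier": "ENCRYPTED_VOLUMES"
    }
  }
}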

Create your own healthcare rules with Lambda

The managed rules just discussed will give you a jump-start to make sure your environment is meeting some of the minimum compliance requirements shared across many compliance frameworks. These rules can be configured quickly to make sure you are meeting some of the basic checks in an automated manner.

However, for deep visibility into your healthcare-compliant architecture, you might want to consider developing your own custom rules to help meet your HIPAA obligations. As a trivial yet important example of something you might want to check to be sure you are staying compliant with the AWS Business Associate Agreement, you could create a custom AWS Config rule that checks that all of your EC2 instances are set to dedicated tenancy. You can do this by creating a new rule as shown previously in this post, except this time clicking Add custom rule at the top of the Config Rules page.

You are then taken to the custom rule page where you name your rule and then click Create AWS Lambda function (as shown in the following screenshot) to be taken to Lambda.

On the landing page to which you are taken (see following screenshot), choose a predefined blueprint with the name config-rule-change-triggered, which provides a sample function that is triggered when AWS resource configurations change.

Within the code blueprint provided, customize the evaluateCompliance function by changing the line

if ('AWS::EC2::Instance' !== configurationItem.resourceType)

to

if ("dedicated" === configurationItem.configuration.placement.tenancy)

This will change the function to return COMPLIANT if the EC2 instance is dedicated tenancy instead of returning COMPLIANT if the resource type is simply an EC2 instance, as shown in the following screenshot.

After you have modified the Lambda function, create a role that has permission to interact with Config. By default, Lambda will suggest that you create a role named AWS Config role. You can follow all the default advice suggested in the AWS console to create a role that contains the appropriate permissions.

After you have created the new role, click Next. On the next page, review the Lambda function you are about to create, and then click Create function. Now that you have created the function, copy the function’s Amazon Resource Name (ARN) from the Lambda page and return to your Config Rules setup page. Paste the ARN of the Lambda function you just created into the AWS Lambda function ARN* box.

From the Trigger options, choose Configuration changes under Trigger type, because this is the Lambda blueprint that you used. Set the Scope of changes to whichever resources you would like this rule to evaluate. In this sample, I will apply the rule to All changes.
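If you would rather wire up this custom rule as code instead of through the console, a CloudFormation sketch of the same configuration might look like the following. DedicatedTenancyFunction is an assumed Lambda resource containing the modified blueprint, the Lambda permission grants Config the right to invoke it, and all logical names and the rule name are illustrative only.

"ConfigPermissionToCallLambda": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "FunctionName": { "Fn::GetAtt": [ "DedicatedTenancyFunction", "Arn" ] },
    "Action": "lambda:InvokeFunction",
    "Principal": "config.amazonaws.com"
  }
},
"DedicatedTenancyRule": {
  "Type": "AWS::Config::ConfigRule",
  "DependsOn": "ConfigPermissionToCallLambda",
  "Properties": {
    "ConfigRuleName": "ec2-dedicated-tenancy",
    "Source": {
      "Owner": "CUSTOM_LAMBDA",
      "SourceIdentifier": { "Fn::GetAtt": [ "DedicatedTenancyFunction", "Arn" ] },
      "SourceDetails": [ {
        "EventSource": "aws.config",
        "MessageType": "ConfigurationItemChangeNotification"
      } ]
    }
  }
}

Omitting the Scope property corresponds to evaluating All changes, as in the console walkthrough above.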

After a few minutes, this rule will evaluate your infrastructure, and you can use the rule to easily audit your infrastructure to display the EC2 instances that are Compliant (in this case, that are using dedicated tenancy), as shown in the following screenshot.

For more details about working with Config Rules, see the AWS Config Developer Guide to learn how to develop your own rules.

In addition to digging deeper into the documentation, you may also want to explore the AWS Config Partners who have developed Config rules that you can simply take and use for your own AWS infrastructure. For companies that have HIPAA expertise and are interested in partnering with AWS to develop HIPAA-specific Config rules, feel free to email me or leave a comment in the “Comments” section below to discuss more.

Conclusion

In this blog post, I have completed my explanation of a DevSecOps architecture for the healthcare sector by looking at AWS Config Rules. I hope you have learned how compliance and auditing teams can use Config Rules to track the rapid, self-service changes developers make to cloud infrastructure, as well as how you can extend Config with customized compliance rules that allow auditing and compliance groups to gain deep visibility into a developer-centric AWS environment.

– Chris

Register for and Attend This March 2 Webinar—Using AWS WAF and Lambda for Automatic Protection

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx273MQOP5UGJWO/Register-for-and-Attend-This-March-2-Webinar-Using-AWS-WAF-and-Lambda-for-Automa

As part of the AWS Webinar Series, AWS will present Using AWS WAF and Lambda for Automatic Protection on Wednesday, March 2. This webinar will start at 10:00 A.M. and end at 11:00 A.M. Pacific Time (UTC-8).

AWS WAF Software Development Manager Nathan Dye will share AWS Lambda scripts that you can use to automate security with AWS WAF and write dynamic rules that can prevent HTTP floods, protect against badly behaving IPs, and maintain IP reputation lists. You will also learn how the Brazilian retailer Magazine Luiza leveraged AWS WAF and Lambda to protect its site and run an operationally smooth Black Friday.

You will:

Learn how to use AWS WAF and Lambda together to automate security responses.

Get the Lambda scripts and AWS CloudFormation templates that prevent HTTP floods, automatically block bad-behaving IPs and bad-behaving bots, and allow you to import and maintain publicly available IP reputation lists.

Gain an understanding of strategies for protecting your web applications using AWS WAF, Amazon CloudFront, and Lambda.

The webinar is free, but space is limited and registration is required. Register today.

– Craig

How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series

Post Syndicated from Chris Crosbie original https://blogs.aws.amazon.com/security/post/Tx2X8A35ONJYE2V/How-to-Translate-HIPAA-Controls-to-AWS-CloudFormation-Templates-Part-3-of-the-Au

In my previous post, I walked through the setup of a DevSecOps environment that gives healthcare developers the ability to launch their own healthcare web server. At the heart of the architecture is AWS CloudFormation, a JSON representation of your architecture that allows security administrators to provision AWS resources according to the compliance standards they define. In today’s post, I will share a Top 10 List of CloudFormation code snippets that you can consider when trying to map the requirements of the AWS Business Associate Agreement (BAA) to CloudFormation templates.

The CloudFormation template I use as an example in today’s post is the same template I used in my previous post to define a healthcare product in AWS Service Catalog. The template creates a healthcare web server that follows many of the contractual obligations outlined in the AWS BAA. The template also allows healthcare developers to customize their web server according to the following parameters:

FriendlyName – The name with which you tag your server.

CodeCommitRepo – The cloneUrlHttp field for the Git repository that you would like to release on the web server.

Environment – A choice between PROD and TEST. TEST will create a security group with several secure ports open, including SSH, from within a Classless Inter-Domain Routing (CIDR) block range. Choosing PROD will create a security group with HTTPS that is only accessible from the public Internet. (Exposing production web servers directly to the public Internet is not a best practice and is shown for example purposes only).

PHI – If you need to store protected health information (PHI) on the server. Choosing YES will create an encrypted EBS volume and attach it to the web server.

WebDirectory – This is the name of your website. For example, DNS-NAME/WebDirectory.

InstanceType – This is the Amazon EC2 instance type on which the code will be deployed. Because the AWS BAA requires PHI to be processed on dedicated instances, the choices here are limited to those EC2 instance types that are offered in dedicated tenancy mode. (A sketch of how this restriction can be expressed as a template parameter appears after this list.)
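A minimal sketch of that InstanceType parameter follows. The default and the allowed values shown here are an illustrative subset of instance types that support dedicated tenancy, not the exact list from the actual template.

"InstanceType":
{
  "Description": "EC2 instance type (dedicated tenancy capable types only)",
  "Default": "m4.large",
  "Type": "String",
  "AllowedValues": [
    "m4.large",
    "m4.xlarge",
    "c4.large",
    "c4.xlarge"
  ]
}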

I will forgo CloudFormation tutorials in this post because an abundance of material for learning CloudFormation is easily accessible in AWS documentation. Instead, I will jump right in to share the Top 10 List of CloudFormation code snippets. If you are new to CloudFormation, you might find value in first understanding the capabilities it offers. qwikLABS is a great resource for learning AWS technology and offers multiple CloudFormation labs to bring you up to speed quickly. The qwikLABS site offers entry-level CloudFormation labs at no cost.

It’s important to note that the example CloudFormation template from which the following 10 snippets are taken is only an example and does not guarantee HIPAA or AWS BAA compliance. The template is meant as a starting point for developing your own templates that not only help you meet your AWS BAA obligations, but also provide general guidance as you expand beyond a single web server and start to utilize DevSecOps methods for other HIPAA-driven compliance needs.

Without further ado, here is a Top 10 List of CloudFormation compliance snippets that you should consider when building your own CloudFormation templates. In each section, I highlight the code I refer to in the associated description.

1. Set tenancy to dedicated.

To run a web server, you need an EC2 instance on which to install it. This can be accomplished in CloudFormation by adding it as a resource in the template. However, you also want to make sure that the EC2 instance meets your AWS BAA obligations by running in dedicated tenancy mode (in other words, your instance runs on single-tenant hardware).

To enforce this, in the EC2 instance change the tenancy property of the instance to dedicated.

    "EC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
          "Tenancy" : "dedicated",
}}

2. Turn on detailed monitoring.

Detailed monitoring provides data about your EC2 instance over 1-minute periods. You can enable this in CloudFormation by adding the Monitoring property to your EC2Instance resource.

When you turn on detailed monitoring, the data is then available for the instance in AWS Management Console graphs or through the API. Because there is an upcharge for detailed monitoring, you might want to turn this on only in your production environments. Having data each minute could be critical to recognizing failures and triggering responses to these failures.

On the other hand, turning on detailed monitoring in your development environments as well could help you diagnose issues and prevent you from inadvertently moving such issues to production.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"Tenancy" : "dedicated",
"Monitoring": "true",
}}
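To act on that 1-minute data, you could pair the instance with a CloudWatch alarm. The following sketch is illustrative only: the metric, threshold, period, and logical name are assumptions, and a real template would also define an alarm action such as an SNS topic for notifications.

"HighCpuAlarm": {
  "Type": "AWS::CloudWatch::Alarm",
  "Properties": {
    "AlarmDescription": "Example alarm on the 1-minute CPU data that detailed monitoring provides",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [ { "Name": "InstanceId", "Value": { "Ref": "EC2Instance" } } ],
    "Statistic": "Average",
    "Period": "60",
    "EvaluationPeriods": "5",
    "Threshold": "90",
    "ComparisonOperator": "GreaterThanThreshold"
  }
}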

3. Define security group rules based on environment.

CloudFormation allows you to modify the firewall rules on your EC2 instance based on input parameters given to the template when it runs. This is done with AWS security groups and is very useful when you want to enforce certain compliance measures you define, such as disabling SSH access to production web servers or restricting development web servers from being accessed by the public Internet.

To do this, change security group settings based on whether your instance is targeted at test, QA, or production environments. You can do this by using conditions and the intrinsic Fn::If function. Intrinsic functions help you modify the security groups between environments according to your compliance standards while still maintaining consistent infrastructure between environments.

"Conditions" : {
"CreatePRODResources" : {"Fn::Equals" : [{"Ref" : "Environment"}, "PROD"]}
},
"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"Tenancy" : "dedicated",
"Monitoring": "true"
}},
"SecurityGroups": [{
"Fn::If": [
"CreateTESTResources",
{"Ref": "InstanceSecurityGroupTEST"},
{"Ref": "InstanceSecurityGroupPROD"}
]
}],
"InstanceSecurityGroupTEST": {
"Type": "AWS::EC2::SecurityGroup",
"Condition" : "CreateTESTResources",
"Properties": {
"GroupDescription": "Enable access only from secure protocols",
"SecurityGroupIngress": [
{ "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "10.0.0.0/24" },
{ "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "10.0.0.0/24" },
{ "IpProtocol" : "tcp", "FromPort" : "143", "ToPort" : "143", "CidrIp" : "10.0.0.0/24" },
{ "IpProtocol" : "tcp", "FromPort" : "465", "ToPort" : "465", "CidrIp" : "10.0.0.0/24" },
{ "IpProtocol" : "icmp", "FromPort" : "8", "ToPort" : "-1", "CidrIp" : "10.0.0.0/24" }
]
}}

4. Force instance tagging.

EC2 tagging is a common way for auditors and security professionals to understand why EC2 instances were launched and for what purpose. You can require the developer launching the template to enter information that you need for EC2 instance tagging by using CloudFormation parameters.

By using parameter properties such as AllowedValues and MinLength, you can maintain consistent tagging mechanisms by requiring that the developer enter a tag from a predetermined list of options (AllowedValues), or simply enter a text value meeting a certain length (MinLength).

In the following snippet, I use an AllowedValues list of YES and NO to make the developer tag the instance with information about whether or not the EC2 instance will be used to store PHI. I also use the MinLength to make the developer tag the EC2 instance with their email address so that we know who to contact if there is an issue with the instance.

"Parameters": {
"PHI":
{
"Description": "Will this instance need to store protected health information?",
"Default": "YES",
"Type": "String",
"AllowedValues": [
"YES",
"NO"
]
},
"Environment":
{
"Description": "Please specify the target environment",
"Default": "TEST",
"Type": "String",
"AllowedValues": [
"TEST",
"PROD",
“QA”
]
},
},
"InstanceOwnerEmail":
{
"Description": "Please enter the email address of the developer taking responsblity for this server",
"Default": "@mycompany.com",
"Type": "String"
},
"FriendlyName":
{
"Description": "Please enter a friendly name for the server",
"Type": "String",
"MinLength": 3,
"ConstraintDescription": "Must enter a friendy name for the server that is at least three characters long."
},
"Resources": {
"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"Tags":[
{ "Key" : "PHI", "Value" : {"Ref": "PHI"} },
{ "Key" : "Name", "Value" : {"Ref": "FriendlyName"} },
{ "Key" : "Environment", "Value" : {"Ref": "Environment"} },
{ "Key" : "InstanceOwnerEmail", "Value" : {"Ref": "InstanceOwnerEmail"} }
}}

5. Use IAM roles for EC2.

Applications must sign their API requests with AWS credentials. IAM roles are designed so that applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. In this example, I give the EC2 instance permission to perform a Git clone from our AWS CodeCommit repositories and push log data to Amazon CloudWatch.

"Resources": {

"HealthcareWebRole":
{
"Type": "AWS::IAM::Role",
"Properties":
{
"AssumeRolePolicyDocument":
{
"Version" : "2012-10-17",
"Statement":
[ {
"Effect": "Allow",
"Principal":
{
"Service": [ "ec2.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
} ]
},
"Path": "/",
"ManagedPolicyArns": ["arn:aws:iam::aws:policy/AWSCodeCommitReadOnly", "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"]
}
},
"HealthcareWebInstanceProfile":
{
"Type": "AWS::IAM::InstanceProfile",
"Properties":
{
"Path": "/",
"Roles": [ { "Ref": "HealthcareWebRole" } ]
}
},
"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
IamInstanceProfile": {"Ref": "HealthcareWebInstanceProfile"}
}}

6. Add encrypted storage if you need to store PHI.

Applications that need to store PHI must encrypt the data at rest to meet the AWS BAA requirements. Amazon EBS encryption is one way to do this. The highlighted portion of the following snippet will add an encrypted EBS volume if the developer answers YES to the question, “Will this instance need to store protected health information?”

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sdm",
"Ebs" : {
"VolumeType" : "io1",
"Iops" : "200",
"DeleteOnTermination" : "false",
"VolumeSize" : "10",
"Encrypted": {
"Fn::If" : [
"ContainsPHI",
"true",
"false"
]
}
}
},
{
"DeviceName" : "/dev/sdk",
"NoDevice" : {}
}
]}

7. Turn on CloudWatch Logs.

On each instance, install the AWS CloudWatch Logs agent, which uses CloudWatch Logs to monitor, store, and access your log files from EC2 instances. You can then retrieve the associated log data from a centralized logging repository that can be segregated from the application development team. 

After you turn on the CloudWatch Logs agent, logs from /var/log/messages are sent to CloudWatch by default. This file stores valuable, nondebug, noncritical messages and should be considered the general system activity log, which is a good place to start ensuring that you have the highest level of audit logging. However, you most likely will want to modify the /etc/awslogs/awslogs.conf file to add additional log locations if you choose to use this service in a HIPAA environment.

For example, you may want to add authentication logs (/var/log/auth.log) and set up alerting in CloudWatch to notify an administrator if repeated unauthorized access attempts are made against your server.

The following snippet will start the CloudWatch Logs agent and make sure it gets turned on during each startup.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" :
{ "Fn::Base64" :
{ "Fn::Join" :
[
"",
[
"service awslogs startn",
"chkconfig awslogs onn"
]
}}
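Building on the preceding snippet, the same UserData section could also append an entry for the authentication log to /etc/awslogs/awslogs.conf before restarting the agent. The following sketch is illustrative only: the log group name is an assumption, and the section keys follow the CloudWatch Logs agent configuration format.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
"cat <<'EOF' >> /etc/awslogs/awslogs.conf\n",
"[/var/log/auth.log]\n",
"file = /var/log/auth.log\n",
"log_group_name = healthcare-auth-log\n",
"log_stream_name = {instance_id}\n",
"datetime_format = %b %d %H:%M:%S\n",
"EOF\n",
"service awslogs restart\n"
] ] } }
}
}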

8. Install Amazon Inspector.

Amazon Inspector is an automated security assessment service (currently offered in preview mode) that can help improve the security and compliance of applications deployed on AWS. Amazon Inspector allows you to run assessments for common best practices, vulnerabilities, and exposures, and these findings can then be mapped to your own HIPAA control frameworks. Amazon Inspector makes it easier to validate that applications are adhering to your defined standards, and it helps you manage security issues proactively before a critical event such as a breach occurs.

Amazon Inspector requires an agent-based client to be installed on the EC2 instance. However, this installation can be performed by using a CloudFormation template. In the CloudFormation template used for this blog post, the Amazon Inspector installation is intentionally missing because Amazon Inspector in preview mode is available only in a different region than CodeCommit. However, if you would like to install it while in preview mode, you can use the following snippet.         

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" :
{ "Fn::Base64" :
{ "Fn::Join" :
[
"",
[
"curl -O https://s3-us-west-2.amazonaws.com/inspector.agent.us-west-2/latest/install\n",
"sudo bash install\n"
]
}}

9. Configure SSL for encryption in flight.

As detailed in the AWS BAA, you must encrypt all PHI in flight. For production healthcare applications, open only those ports in the EC2 security group that are used in secure protocols.

The following snippet provides an example of UserData pulling down self-signed certificates from a publicly available Amazon S3 site. Although there may be situations when you have deemed self-signed certificates to be acceptable, a more secure approach would be to store the certificates in a private S3 bucket and give permission to the EC2 role to download the certificates and configurations.

Important: The certificates in the following code snippet are provided for demonstration purposes only and should never be used for any type of security or compliance purpose.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" :
{ "Fn::Base64" :
{ "Fn::Join" :
[
"",
[
"wget https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/fakehipaa.crt -P /etc/pki/tls/certsn",

"wget https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/fakehipaa.key -P /etc/pki/tls/private/n",

"wget https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/fakehipaa.csr -P /etc/pki/tls/private/n",

"wget https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/ssl.conf -P /etc/httpd/conf.d/ssl.confn",

"service httpd startn",

"chkconfig httpd onn"
]
}}

10. Clone from AWS CodeCommit.

So far the snippets in this post have focused on getting infrastructure secured in accordance with your compliance standards. However, you also need a process for automated code deployments. A variety of tools and techniques are available for automating code deployments, but in the following snippet, I demonstrate an automated code deployment using an EC2 role and CodeCommit. This combination of an EC2 role and CodeCommit requires you to set the system-wide Git preferences by modifying the /etc/gitconfig file.

In the following snippet, after the authorized connection to CodeCommit is established, Git clones the repository provided by the developer into the default root folder of an Apache web server. However, this example could easily be extended to look for developer makefiles or to have an extra step that calls shell scripts that are written by the developer but maintained in CodeCommit.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" :
{ "Fn::Base64" :
{ "Fn::Join" :
[

"git config –system credential.https://git-codecommit.us-east-1.amazonaws.com.helper ‘!aws –profile default codecommit credential-helper [email protected]’n",

"git config –system credential.https://git-codecommit.us-east-1.amazonaws.com.UseHttpPath truen",
"aws configure set region us-east-1n",

"cd /var/www/htmln",

"git clone ", {"Ref": "CodeCommitRepo"}, " ", {"Ref": "WebDirectory"}, " n",

[
]
}}

Conclusion

I hope that these 10 code snippets give you a head start to develop your own CloudFormation compliance templates. I encourage you to build on the template I provided to learn more about how CloudFormation works as you take steps to achieve your own DevSecOps architecture.

– Chris