Tag Archives: CloudTrail

Announcing the AWS Config Rules Repository: A New Community-Based Source of Custom Rules for AWS Config

Post Syndicated from Chad Woolf original https://blogs.aws.amazon.com/security/post/TxES3UX2Z5BQRU/Announcing-the-AWS-Config-Rules-Repository-A-New-Community-Based-Source-of-Custo

Today, we’re happy to release the AWS Config Rules repository, a community-based source of custom AWS Config Rules. This new repository gives you a streamlined way to automate the assessment of your AWS resources against security best practices. AWS Config Rules is a service that provides automated, periodic security and compliance checking of AWS resources, and affords customers the ability to forgo manual inspection of security configurations.

The AWS Config Rules repository accelerates automated compliance checking by allowing customers to tap into the collective ingenuity and expertise of the AWS community. Additionally, the repository is free, public, and hosted on an independent platform, and it contains full source code for each rule, allowing you to learn and contribute. We look forward to working together to leverage the combined wisdom and lessons learned by our security experts and the security experts in the broader AWS user base.

As I mentioned in my previous post, we have partnered with the Center for Internet Security to establish industry best practices for securing AWS accounts. The repository has been seeded with rules that will help you maintain alignment with these best practices. Here’s a sample of the Custom Rules you now have access to:

  1. Ensure CloudTrail is enabled in all regions.
  2. Ensure all accounts have multi-factor authentication (MFA) enabled.
  3. Ensure no access keys exist for the root account.
  4. Ensure an AWS Identity and Access Management (IAM) password policy exists.
  5. Ensure access keys are rotated.

To get started using these rules in your AWS account, see the readme file on GitHub. I encourage you to use this repository to share with the AWS community the Custom Rules you have written.

– Chad

How to Use AWS Config to Help with Required HIPAA Audit Controls: Part 4 of the Automating HIPAA Compliance Series

Post Syndicated from Chris Crosbie original https://blogs.aws.amazon.com/security/post/Tx27GJDUUTHKRRJ/How-to-Use-AWS-Config-to-Help-with-Required-HIPAA-Audit-Controls-Part-4-of-the-A

In my previous posts in this series, I explained how to get started with the DevSecOps environment for HIPAA that is depicted in the following architecture diagram. In my second post in this series, I gave you guidance about how to set up AWS Service Catalog (#4 in the following diagram) to allow developers a way to launch healthcare web servers and release source code without the need for administrator intervention. In my third post in this series, I advised healthcare security administrators about defining AWS CloudFormation templates (#1 in the diagram) for infrastructure that must comply with the AWS Business Associate Agreement (BAA).

In today’s final post of this series, I am going to complete the explanation of the DevSecOps architecture depicted in the preceding diagram by highlighting ways you can use AWS Config (#9 in the diagram) to help meet audit controls required by HIPAA. Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications. This Config output, along with other audit trails, gives you the types of information you can use to meet your HIPAA auditing obligations. 

Auditing and monitoring are essential to HIPAA security. Auditing controls are a Technical Safeguard that must be addressed through the use of technical controls by anyone who wishes to store, process, or transmit electronic patient data. However, because there are no standard implementation specifications within the HIPAA law and regulations, AWS Config enables you to address audit controls in a way that uses the cloud to protect the cloud.

Because Config currently targets only AWS infrastructure configuration changes, it is unlikely that Config alone will be able to meet all of the audit control requirements laid out in Technical Safeguard 164.312, the section of the HIPAA regulations that discusses the technical safeguards such as audit controls. However, Config is a cloud-native auditing service that you should evaluate as an alternative to traditional on-premises compliance tools and procedures.

The audit controls standard, found in 164.312(b)(2) of the HIPAA regulations, says: “Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic health information.” Config helps you meet this standard because it monitors the activity of both running and deleted AWS resources across time. In a DevSecOps environment in which developers have the power to turn on and turn off infrastructure in a self-service manner, using a cloud-native monitoring tool such as Config will help ensure that you can meet your auditing requirements. Understanding what a configuration looked like and who had access to it at a point in the past is something that you will need to do in a typical HIPAA audit, and Config provides this functionality.

For more about the topic of auditing HIPAA infrastructure in the cloud, the AWS re:Invent 2015 session, Architecting for HIPAA Compliance on AWS, gives additional pointers. To supplement the monitoring provided by Config, review and evaluate the easily deployable monitoring software found in the AWS Marketplace.

Get started with AWS Config

From the AWS Management Console, under Management Tools:

  1. Click Config.
  2. If this is your first time using Config, click Get started.
  3. From the Set up AWS Config page, choose the types of resources that you want to track.

Config is designed to track the interaction among various AWS services. At the time of this post, you can choose to track your accounts in AWS Identity and Access Management (IAM), Amazon EC2–related services (such as Amazon Elastic Block Store, elastic network interfaces, and virtual private cloud [VPC]), and AWS CloudTrail.

All the information collected across these services is normalized into a standard format, so auditors or your compliance team may not need to understand the underlying details of how to audit each AWS service; they can simply review the Config console to ensure that your healthcare privacy standards are being met.

Because the infrastructure described in this post is designed for storing protected health information (PHI), I am going to select the check box next to All resources, as shown in the following screenshot. By choosing this option, I can ensure that not only will all the resources available for tracking be included, but also as new resource types get added to Config, they will automatically be added to my tracking as well.  

Also, be sure to select the Include global resources check box if you would like to use Config to record and govern your IAM resource configurations.

Specify where the configuration history file should be stored

Amazon S3 buckets have global naming, which makes it possible to aggregate the configuration history files across regions or send the files to a separate AWS account with limited privileges. The same consolidation can be configured for Amazon Simple Notification Service (SNS) topics, if you want to programmatically extend the information coming from Config or be immediately alerted of compliance risks.

For this example, I create a new bucket in my account and turn off the Amazon SNS topic notifications (as shown in the following screenshot), and click Continue.  

On the next page, create a new IAM role in your AWS account so that the Config service has the ability to read your infrastructure’s information. You can review the permissions that will be associated with this IAM role by clicking the arrow next to View Policy Document.

After you have verified the policy, click Allow. You should now be taken to the Resource inventory page. On the right side of the page, you should see that Recording is on and that inventory is being taken about your infrastructure. When the Taking inventory label (shown in the following image) is no longer visible, you can start reviewing your healthcare infrastructure.
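If you prefer to script this setup, the same configuration can be applied with the AWS CLI. The following is a minimal sketch, assuming the IAM role already exists with the permissions described above; the account ID, role name, and bucket name are placeholders.

aws configservice put-configuration-recorder \
    --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/config-role \
    --recording-group allSupported=true,includeGlobalResourceTypes=true

aws configservice put-delivery-channel \
    --delivery-channel name=default,s3BucketName=my-config-history-bucket

aws configservice start-configuration-recorder \
    --configuration-recorder-name default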

Review your healthcare server

For the rest of this post, I use Config to review the healthcare web server that I created with AWS Service Catalog in How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series.

From the Resource inventory page, you can search based on types of resources, such as IAM user, network access control list (ACL), VPC, and instance. A resource tag is a way to categorize AWS resources, and you can search by those tags in Config. Because I used CloudFormation to enforce tagging, I can quickly find the type of resources I am interested in by setting up search for these tags.

As an example of why this is useful, consider employee turnover. Most healthcare organizations need processes and procedures to deal with employee turnover in a regulated environment. Because our CloudFormation template forced developers to populate a tag with their email addresses, you can easily use Config to find all the resources an employee was using if they decide to leave the organization (or even while they are still with the company).
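Before turning to the console, note that the same tag-based lookup can be run directly from the CLI against EC2. The following is a minimal sketch; the email address is a placeholder, and the InstanceOwnerEmail tag key comes from the CloudFormation template covered in Part 2 of this series.

aws ec2 describe-instances \
    --filters "Name=tag:InstanceOwnerEmail,Values=former.employee@example.com" \
    --query "Reservations[].Instances[].[InstanceId,State.Name]" \
    --output table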

Search on the Resource inventory page for the employee’s email address along with the tag, InstanceOwnerEmail, and then click Look up, as shown in the following screenshot.

Click the link under Resource identifier to see the Config timeline that shows the most recent configuration recorded for the instance as well as previously recorded configurations. This timeline will show not only the configuration details of the instance itself, but also will provide the relationships to other AWS services and an easy-to-interpret Changes section. This section provides your auditing and compliance teams the ability to quickly review and interpret changes from a single interface without needing to understand the underlying AWS services in detail or jump between multiple AWS service pages.

Clicking View Details, as shown in the following image, will produce a JSON representation of the configuration, which you may consider including as evidence in the event of an audit.

The details contained in this JSON text will help you understand the structure of the configuration objects passed to AWS Lambda, which you interact with when writing your own Config rules. I discuss this in more detail later in this blog post.

Let’s walk through a quick example of one of the many ways an auditor or administrator might use Config. Let’s say that there was an emergency production issue that required an administrator to add SSH access to production web servers temporarily so that he or she could log in and manually install a software patch. The patches were installed, and SSH access was then revoked from all the security groups except for one instance’s security group, which was mistakenly forgotten. In Config, the compliance team is able to review the last change to any resource type by reviewing the Config Timeline (as shown in the following screenshot) and clicking Change to verify exactly what was changed.

It is clear from the following screenshot that the opening of SSH on port 22 was the last change captured, so we need to close the port on this security group to block remote access to this server.
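The same history is available programmatically, which is handy when you need to archive point-in-time evidence for an audit. The following is a minimal sketch; the security group ID is a placeholder.

aws configservice get-resource-config-history \
    --resource-type AWS::EC2::SecurityGroup \
    --resource-id sg-0123456789abcdef0 \
    --query "configurationItems[].[configurationItemCaptureTime,configurationStateId]" \
    --output table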

Extend healthcare-specific compliance with Config Rules

Though the SSH configuration I just walked through provided context about how Config works, in a healthcare environment we would ideally want to automate this process. This is what AWS Config Rules can do for us.

Config Rules is a powerful rule system that can target resources and have them evaluated when they are created or changed, or on a periodic basis (hourly, daily, and so forth).

Let’s look at how we could have used Config Rules to identify the same improperly opened SSH port discussed previously in this post.

At the time of this post, AWS Config Rules is available only in the US East (N. Virginia) Region, so to follow along, be sure you have the AWS Management Console set to that region. From the same Config service that we have been using, click Rules in the left pane and then click Add Rule.

You can choose from available managed rules. One of those rules is restricted-common-ports, which will fit our use case. I modify this rule to be limited to only those security groups I have tagged as PROD in the Trigger section, as shown in the following screenshot.

I then override this rule’s default ports and specify my own port, 22, under Rule parameters.

Click Save and you will be taken back to the Rules page to have the rule run on your infrastructure. While the rule is running, you will see an Evaluating status, as shown in the following image.

When I return to my Resource inventory by clicking Resources in the left pane, I again search for all of my PROD environment resources. However, with AWS Config rules, I can quickly find which resources are noncompliant with the rule I just created. The following screenshot shows the Resource type and Resource identifier of the resource that is noncompliant with this rule.
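If you manage rules as code, the same managed rule can be created from the CLI. The following is a minimal sketch; the rule name is hypothetical, the Environment/PROD tag scoping mirrors the tagging used in this walkthrough, and RESTRICTED_INCOMING_TRAFFIC is the source identifier behind the restricted-common-ports managed rule.

aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "prod-restricted-ssh-port",
  "Scope": {"TagKey": "Environment", "TagValue": "PROD"},
  "Source": {"Owner": "AWS", "SourceIdentifier": "RESTRICTED_INCOMING_TRAFFIC"},
  "InputParameters": "{\"blockedPort1\": \"22\"}"
}'

# After the evaluation completes, list any noncompliant resources
aws configservice describe-compliance-by-config-rule \
    --config-rule-names prod-restricted-ssh-port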

In addition to this SSH production check, for a regulated healthcare environment you should consider implementing all of the managed AWS Config rules to ensure your AWS infrastructure is meeting basic compliance requirements set by your organization. A few examples are:

Use the encrypted-volumes rule to ensure that volumes tagged as PHI=”Yes” are encrypted.

Ensure that you are always logging API activity by using the cloudtrail-enabled rule.

Ensure you do not have orphaned Elastic IP addresses with eip-attached.

Verify that all development machines can only be accessed with SSH from the development VPC by changing the defaults in restricted-ssh.

Use required-tags to ensure that you have the information you need for healthcare audits.

Ensure that only PROD resources that are hardened for exposure to the public Internet are in a VPC that has an Internet gateway attached by taking advantage of the managed rule ec2-instances-in-vpc.

Create your own healthcare rules with Lambda

The managed rules just discussed give you a jump-start on the minimum compliance requirements shared across many compliance frameworks, and they can be configured quickly to automate some of the basic checks.

However, for deep visibility into your healthcare-compliant architecture, you might want to consider developing your own custom rules to help meet your HIPAA obligations. As a trivial, yet important, example of something you might want to check to be sure you are staying compliant with the AWS Business Associate Agreement (BAA), you could create a custom AWS Config rule to check that all of your EC2 instances are set to dedicated tenancy. This can be done by creating a new rule as shown previously in this post, except this time click Add custom rule at the top of the Config Rules page.

You are then taken to the custom rule page where you name your rule and then click Create AWS Lambda function (as shown in the following screenshot) to be taken to Lambda.

On the landing page to which you are taken (see following screenshot), choose a predefined blueprint with the name config-rule-change-triggered, which provides a sample function that is triggered when AWS resource configurations change.

Within the code blueprint provided, customize the evaluateCompliance function by changing the line

if ('AWS::EC2::Instance' !== configurationItem.resourceType)

to

if ("dedicated" === configurationItem.configuration.placement.tenancy)

This will change the function to return COMPLIANT if the EC2 instance is dedicated tenancy instead of returning COMPLIANT if the resource type is simply an EC2 instance, as shown in the following screenshot.

After you have modified the Lambda function, create a role that has permission to interact with Config. By default, Lambda will suggest that you create a new role, AWS Config role. You can follow all the default advice suggested in the AWS console to create a role that contains the appropriate permissions.

After you have created the new role, click Next. On the next page, review the Lambda function you are about to create, and then click Create function. Now that you have created the function, copy the function’s Amazon Resource Name (ARN) from the Lambda page and return to your Config Rules setup page. Paste the ARN of the Lambda function you just created into the AWS Lambda function ARN* box.

From the Trigger options, choose Configuration changes under Trigger type, because this is the Lambda blueprint that you used. Set the Scope of changes to whichever resources you would like this rule to evaluate. In this sample, I will apply the rule to All changes.

After a few minutes, this rule will evaluate your infrastructure, and you can use the rule to easily audit your infrastructure to display the EC2 instances that are Compliant (in this case, that are using dedicated tenancy), as shown in the following screenshot.
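You can also wire up the custom rule from the CLI. The following is a rough sketch; the function name and ARN are hypothetical, and Config must be granted permission to invoke the function.

# Allow AWS Config to invoke the Lambda function
aws lambda add-permission \
    --function-name checkDedicatedTenancy \
    --statement-id AllowConfigInvoke \
    --action lambda:InvokeFunction \
    --principal config.amazonaws.com

# Register the custom rule, triggered by configuration changes
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "ec2-dedicated-tenancy",
  "Source": {
    "Owner": "CUSTOM_LAMBDA",
    "SourceIdentifier": "arn:aws:lambda:us-east-1:123456789012:function:checkDedicatedTenancy",
    "SourceDetails": [{"EventSource": "aws.config", "MessageType": "ConfigurationItemChangeNotification"}]
  }
}'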

For more details about working with Config Rules, see the AWS Config Developer Guide to learn how to develop your own rules.

In addition to digging deeper into the documentation, you may also want to explore the AWS Config Partners who have developed Config rules that you can simply take and use for your own AWS infrastructure. For companies that have HIPAA expertise and are interested in partnering with AWS to develop HIPAA-specific Config rules, feel free to email me or leave a comment in the “Comments” section below to discuss more.

Conclusion

In this blog post, I have completed my explanation of a DevSecOps architecture for the healthcare sector by looking at AWS Config Rules. I hope you have learned how compliance and auditing teams can use Config Rules to track the rapid, self-service changes developers make to cloud infrastructure, and how you can extend Config with customized compliance rules that give those teams deep visibility into a developer-centric AWS environment.

– Chris

How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series

Post Syndicated from Chris Crosbie original https://blogs.aws.amazon.com/security/post/TxAHB2MFO7QCIX/How-to-Use-AWS-Service-Catalog-for-Code-Deployments-Part-2-of-the-Automating-HIP

In my previous blog post, I discussed the idea of using the cloud to protect the cloud and improving healthcare IT by applying DevSecOps methods. In Part 2 today, I will show an architecture composed of AWS services that gives healthcare security administrators necessary controls, allows healthcare developers to interact with the system using familiar tools (such as Git), and leverages AWS managed services without the need for advanced coding or complex configuration.

Along the way, I hope to dispel the myth that healthcare security administrators lose control in a DevSecOps environment, and show that healthcare developers can still rely on their administrators without having their development cycles affected adversely.

Architecture Overview

The following architecture diagram shows what I will build in this post. The centerpiece of this system is AWS Service Catalog, which is an AWS service that provides a single location where healthcare organizations can centrally manage catalogs of IT services. With AWS Service Catalog, healthcare security administrators can control which IT services and versions are available, limit the configuration of the available services to what is covered under the AWS Business Associate Agreement (BAA), and delegate permissions access by developer or role.

Before diving into the details of how to set up the individual elements of this architecture, I will walk you through the previous diagram and provide a basic overview of how the AWS services fit together to help you manage the lifecycle of your healthcare environment:

Starting at the top left of the diagram, the healthcare security administrator defines an AWS Service Catalog product using AWS CloudFormation templates. In this blog post, I will use the example product of a “healthcare web server,” which is simply an Apache web server with a few options enabled that serve as starting points for meeting the customer requirements outlined in the AWS BAA.

After the healthcare security administrator defines the healthcare web server, they publish it for healthcare developers in the AWS Service Catalog.

As shown on the right side of the diagram, the healthcare developer continues to develop applications under Git source control and uses AWS CodeCommit to fully manage the private Git repository.

When the developer is ready to push code from Git to their healthcare infrastructure, they can use AWS Service Catalog to find and launch the product they need as a CloudFormation stack.

The stack automatically provisions the Git repository specified by the developer.

AWS CodeCommit contains the source code pushed by the healthcare developers. In the design of this architecture with AWS Service Catalog at the center, healthcare developers can release their source code without needing to have access to any of the underlying resources or going through security administrators.

Access to the stack and its underlying resources (such as the web server itself) is recorded and tracked through AWS CloudTrail, and the data is stored in Amazon S3.

The healthcare security administrator configures, monitors, and alerts on the health of the stack, its changes, and its access with Amazon CloudWatch.

As the stack is being changed, change events are recorded and tracked through AWS Config.

CloudWatch initiates alarms based on rules you design, which can be used to indicate that something may be out of compliance.

CloudTrail monitors all API calls made against the AWS environment.

The administrator is notified by CloudWatch when something changes that could cause the system to be noncompliant.

Now that I have reviewed the overall healthcare architecture diagram, I will jump into the details of how to implement this architecture. I will start with the tasks that would typically be performed by our healthcare security administrators.

Creating the architecture: Healthcare security administrators

Enable CloudTrail and AWS Config

From the Management Tool section of the AWS console, the administrator should choose AWS CloudTrail, click Get Started Now, name the S3 bucket in which to store the logs, and then click Turn On. When turned on, API-level logging is enabled for the entire region. It is a best practice to turn on logging in all regions, even if your organization does not plan to use all regions available. As a healthcare security administrator, you want the ability to review all activity and API calls, in the event a region is being used when it should not be. In fact, you may even want to trigger alarms if API calls show up in unexpected regions. 
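If you script this step instead of clicking through the console, a trail that logs all regions can be created with a couple of CLI calls. The following is a minimal sketch; the trail and bucket names are placeholders, and the S3 bucket must already exist with a bucket policy that permits CloudTrail to write to it.

aws cloudtrail create-trail \
    --name healthcare-audit-trail \
    --s3-bucket-name my-cloudtrail-logs-bucket \
    --is-multi-region-trail

aws cloudtrail start-logging --name healthcare-audit-trail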

Next, choose AWS Config from the AWS Management Console, and follow the on-screen instructions to set up the service. You can follow these steps if you have questions as you walk through the setup. You will know this task is complete when you see on the right side of the screen that Recording is on.

Set up the healthcare developer accounts

Create a healthcare developer IAM group with the policy AWSCodeCommitPowerUser (as shown in the following screenshot). This policy gives your developers the ability to perform most tasks with the source control service, but it does not give them the ability to delete projects. In addition, add the ServiceCatalogEndUserAccess policy. Specify the products the team has access to in AWS Service Catalog itself. This policy gives the members of the group the ability to use the service as end users.
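A sketch of the equivalent CLI setup follows. It uses the HealthcareDeveloper group name referenced later in this post and assumes the AWS managed policy ARNs shown here.

aws iam create-group --group-name HealthcareDeveloper

aws iam attach-group-policy \
    --group-name HealthcareDeveloper \
    --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitPowerUser

aws iam attach-group-policy \
    --group-name HealthcareDeveloper \
    --policy-arn arn:aws:iam::aws:policy/ServiceCatalogEndUserAccess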

Set up a healthcare web server in AWS Service Catalog

At the time of this post’s publication, AWS CodeCommit is available only in the US East (N. Virginia) region, so make sure you perform the following steps in the US East (N. Virginia) region so that your AWS Service Catalog product can communicate directly with AWS CodeCommit:

  1. Under Management Tools in the AWS Management Console, click Service Catalog.
  2. Click Get Started, and then pick a name for your portfolio of products and specify an owner's name.
  3. Click Upload new product in your portfolio, specify a product name such as Healthcare Web Server, and then complete the rest of the required fields.
  4. Click through until you see the Version details page. This page is where you can specify the CloudFormation template that defines the infrastructure you wish to make available to your HealthcareDeveloper group. For this example, you can Specify an S3 URL location for an AWS CloudFormation template by using: https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/Healthcare-DevSecOps-1.cform.

Note: I have provided this CloudFormation template as a basic starting point and example of how you can leverage tools to support healthcare needs in a DevSecOps environment. You should customize this template for your own environment.

Now that you have created both your portfolio and your product, confirm that the product name now appears under the Products heading on the portfolio page. From this same portfolio page, give two sets of permissions:

Under Add users, groups and roles, the first set of permissions grants access to the group that needs to be able to launch the product you just created. In this example, you can select the HealthcareDeveloper group you created previously, as shown in the following image.

The second set of permissions is defined under Constraint type. You can find this under the Constraints heading on the portfolio page. These are the permissions you give to AWS Service Catalog to build your product. You need to determine the proper permissions based on the template you build. As an example, you could choose the Launch constraint type (as shown in the following image), which will rely on an IAM role you create in the IAM console.

The IAM role will not only need to contain the permissions to perform all the tasks of the CloudFormation template used in the product, but it will also need to allow AWS Service Catalog to assume it.

If you do not currently have an AWS Service Catalog role set up, you can create this role by returning to the IAM console and following these steps:                                            

  1. Sign in to the AWS Management Console and open the IAM console.
  2. Click Roles.
  3. Click Create New Role.
  4. Type a role name and click Next Step.
  5. Under AWS Service Roles next to AWS Service Catalog, click Select.
  6. On the Attach Policy page, click Next Step.
  7. To create the role, click Create Role.

A launch role defines which actions your CloudFormation template can perform on your behalf. To attach a policy to the new launch role:

  1. Click the role that you created to view its details page.
  2. Click the Permissions tab, and expand the Inline Policies section. Click Click here.
  3. Click Custom Policy, and then click Select.
  4. Enter a name for the policy, and then paste the following code into the Policy Document editor.

{"Version":"2012-10-17",
"Statement":[        
{
"Effect":"Allow",
"Action":[
"catalog-user:*",
"cloudformation:CreateStack",
"cloudformation:DeleteStack",
"cloudformation:DescribeStackEvents",
"cloudformation:DescribeStacks",
"cloudformation:GetTemplateSummary",
"cloudformation:SetStackPolicy",
"cloudformation:ValidateTemplate",
"cloudformation:UpdateStack",
"s3:GetObject"
],
"Resource":"*"     
}] }

Add a line to the policy for each additional service that the product uses. For example, to add permissions for Amazon Relational Database Service (Amazon RDS), type a comma at the end of the last line in the "Action" list, and then add the following line.

"rds:*"

Click Apply Policy.

See Applying Launch Constraints for more information about how to apply the correct privileges to the CloudFormation templates that underlie your AWS Service Catalog products.
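As a rough sketch, the launch constraint can also be attached from the CLI; the portfolio ID, product ID, and role ARN below are placeholders you would replace with your own values.

aws servicecatalog create-constraint \
    --portfolio-id port-exampleportfolio \
    --product-id prod-exampleproduct \
    --type LAUNCH \
    --parameters '{"RoleArn": "arn:aws:iam::123456789012:role/SCLaunchRole"}'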

Congratulations! You have set up a DevSecOps infrastructure that utilizes AWS Service Catalog for automated deployments of healthcare web servers.

Using the architecture: Healthcare developers

Now that you have configured the DevSecOps infrastructure, the development team can take advantage of automated deployments while knowing that their underlying infrastructure has been set up in a controlled environment by their healthcare security administrator. The following steps demonstrate how a healthcare developer might interact with this architecture.

Step 1: Set up your first CodeCommit repository

Before you try to run any of the commands associated with CodeCommit, first verify that you have the correct version of the AWS CLI by running the following command.

aws codecommit help

If the output is documentation, you have a CLI version that contains CodeCommit. If your CLI version does not include CodeCommit, download the latest version of the CLI. Also be sure that you install Git.

When you are ready to start a new project, create a new CodeCommit repository by running the following command from the CLI.

aws codecommit create-repository --repository-name DevSecOpsExampleRepo --repository-description "My DevSecOps example repository"

The response will be similar to the following.

{
    "repositoryMetadata": {
        "repositoryName": "DevSecOpsExampleRepo",
        "cloneUrlSsh": "ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/DevSecOpsExampleRepo",
        "lastModifiedDate": 1449967097.005,
        "repositoryDescription": "My DevSecOps example repository",
        "cloneUrlHttp": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DevSecOpsExampleRepo",
        "creationDate": 1449967097.005,
        "repositoryId": "xxxx-ee53-465e-a7ac-xxxx",
        "Arn": "arn:aws:codecommit:us-east-1:xxxx:DevSecOpsExampleRepo",
        "accountId": "xxxxxx"
    }
}

Step 2: Connect to your CodeCommit repository

Developers can connect to a repository in a variety of ways, but for this example, I will use the cloneUrlHttp URL from the preceding response. Authentication is handled by your AWS CLI credentials.

To configure authentication against CodeCommit, first run the following Git configuration commands.

git config --global credential.helper '!aws --profile default codecommit credential-helper $@'

git config --global credential.UseHttpPath true

Note: If you are an OS X user and are experiencing issues with Git asking for passwords after running these commands, see Setup Steps for HTTPS Connections to AWS CodeCommit Repositories on Linux, OS X, or Unix.

Step 3: Push code to CodeCommit

If you currently use Git, the process of pushing code to CodeCommit should be familiar to you. If you are not familiar with Git, see this tutorial to get started. However, all the Git commands needed to follow along with the remainder of this post can simply be copied and pasted from the code snippets that follow. First, from a new project directory, download the sample index.html file.

curl https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/index.html > index.html

From within the directory, run the following commands, replacing the URL in the git remote add command with the cloneUrlHttp value that was returned when you created the repository in Step 1.

git init
git add .

git commit -m 'first CodeCommit'
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/DevSecOpsExampleRepo
git push -u origin master

Review this push by using the CodeCommit console. You will see a page similar to what is shown in the following image, which displays a single-file Git repository that contains an index.html file.

Step 4: Using AWS Service Catalog to release code

As a healthcare developer, sign in to the AWS Management Console, and choose Service Catalog. You will see only the end-user view of the console, which will contain the list of products that have been set up for you by the healthcare security administrator. The following image shows the products available from a developer’s point of view.

Select the Healthcare Web Server check box. Click Launch Product and name the stack something meaningful to you.

As shown in the following image, you are now prompted for the Healthcare Web Server parameters defined in the CloudFormation template that the healthcare security administrator set up.

Each of these parameters is explained in the following list:

FriendlyName – The name with which you tag your server.

CodeCommitRepo – The cloneUrlHttp field for the Git repository that you would like to release on the web server.

Environment – A choice between PROD and TEST. TEST will create a security group with several secure ports open, including SSH, from within an AWS CIDR block range. Choosing PROD will create a security group with HTTPS that is only accessible from the public Internet. (Exposing production web servers directly to the public Internet is not a best practice and is shown for example purposes only).

PHI – Choose whether you need to store protected health information (PHI) on the server. Choosing YES will create an encrypted EBS volume and attach it to the web server.

WebDirectory – This is the name of your website. For example, DNS-NAME/WebDirectory.

InstanceType – This is the Amazon EC2 instance type on which the code will be deployed. Because the AWS BAA requires PHI to be processed on dedicated instances, the choices here are limited to those EC2 instance types that are offered in dedicated tenancy mode.

InstanceOwnerEmail – The e-mail address of the developer setting up this server. This field will be used to tag the instance and will give the security administrator the ability to generate alerts and metrics based on this tag.

KeyName – The SSH key used to log in to the instance.

After you have populated the fields and launched the stack, within a few minutes you should be able to review the output, as illustrated in the following image.

Click the Value (link) for CompliantWebsiteURL, and you will be taken to the website that you just created.

Important note: Most browsers will display an error for the certificate that was used in this server’s configuration. The certificate used in this example is a self-signed dummy certificate I created to share publicly on this blog. It should never be used in an actual healthcare application. However, the techniques used to deploy that certificate can be used in your environment with your own certificates.

After you acknowledge this one-time exception by clicking Continue past the browser warning that your connection is not private, you will be taken to the website deployed from your Git repository.

You have now successfully deployed your code to a healthcare website without direct intervention from the security team, but with the assurance that a security administrator has properly set up your web server to meet the correct compliance requirements.

Reviewing architecture deployments: Healthcare security administrators

Even though the web server and code were deployed entirely by the healthcare developer, the healthcare security administrator still has complete oversight of the environment.

The developer does not have the ability to use the EC2 console, but the security administrator can review the server that was just created, who created it, and for what purposes it will be used. For example, the following screenshot shows that this server instance was created for the production environment and will not store PHI.

Along with the EC2 instance, other items were set up as well, such as ubiquitous and centralized logging using Amazon CloudWatch, an example of which appears in the following image.

Additionally, AWS Config is now recording the state of our healthcare web server, as shown in the following image.

Conclusion

This blog post has shown an introductory DevSecOps architecture that “uses the cloud to protect the cloud.” I hope I have dispelled the myth that security administrators lose control in a DevSecOps environment, and have shown healthcare developers that they can still rely on their security administrators without slowing down their development cycles.

In my next post, I will dig deeper into the CloudFormation template that was used in this blog post and provide best practices for healthcare customers who wish to automate their compliance controls in CloudFormation templates.

– Chris

How to Help Protect Sensitive Data with AWS KMS

Post Syndicated from Matt Bretan original https://blogs.aws.amazon.com/security/post/Tx79IILINW04DC/How-to-Help-Protect-Sensitive-Data-with-AWS-KMS

AWS Key Management Service (AWS KMS) celebrated its one-year launch anniversary in November 2015, and organizations of all sizes are using it to effectively manage their encryption keys. KMS also successfully completed the PCI DSS 3.1 Level 1 assessment as well as the latest SOC assessment in August 2015.

One question KMS customers frequently ask is how to encrypt Primary Account Number (PAN) data within AWS, because PCI DSS sections 3.5 and 3.6 require the encryption of credit card data at rest and have stringent requirements around the management of encryption keys. One KMS encryption option is to encrypt your PAN data using customer data keys (CDKs) that are exportable out of KMS. Alternatively, you can use KMS to directly encrypt PAN data by using a customer master key (CMK). In this blog post, I will show you how to help protect sensitive PAN data by using KMS CMKs.

The use of a CMK to directly encrypt data removes some of the burden of having developers manage encryption libraries. Additionally, a CMK cannot be exported from KMS, which alleviates the concern about someone saving the encryption key in an insecure location. You can also leverage AWS CloudTrail so that you have logs of the key’s use.

For the purpose of this post, I have three different AWS Identity and Access Management (IAM) roles to help ensure the security of the PAN data being encrypted:

KeyAdmin – This is the general key administrator role, which has the ability to create and manage the KMS keys. A key administrator does not have the ability to directly use the keys for encrypt and decrypt functions. Keep in mind that because the administrator does have the ability to change a key’s policy, they could escalate their own privilege by changing this policy to give themselves encrypt/decrypt permissions.

PANEncrypt – This role allows the user only to encrypt an object using the CMK.

PANDecrypt – This role allows the user only to decrypt an object using the CMK.

If you don’t already have a CMK that you wish to use to encrypt the sensitive PAN data, you can create one with the following command. (Throughout this post, remember to replace the placeholder values with your account-specific information.)

$ aws kms create-key --profile KeyAdmin --description "Key used to encrypt and decrypt sensitive PAN data" --policy file://Key_Policy

Notice the use of --profile KeyAdmin in the previous command. This forces the command to be run as a role specified within my configuration file that has permissions to create a KMS key. We will be using different roles, as defined in the following key policy (to which file://Key_Policy in the previous command refers), to manipulate and use the KMS key. For additional details about how to assume roles within the CLI, see Assuming a Role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessForKeyAdministrators",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/KeyAdmin"   
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowEncryptionWithTheKey",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/PANEncrypt"
      },
      "Action": [
        "kms:Encrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey",
        "kms:ListKeys"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowDecryptionWithTheKey",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/PANDecrypt"
      },                             
      "Action": [
        "kms:Decrypt",
      ],
      "Resource": "*"
    }
  ]
}

After the new CMK is created, I can then assign it an alias so that it will be easier to identify in the future. In this case, I will create the alias SensitivePANKey, as shown in the following command.

$ aws kms create-alias --profile KeyAdmin --alias-name alias/SensitivePANKey --target-key-id arn:aws:kms:us-east-1:123456789012:key/221c9ce1-9da8-44e9-801b-faf1EXAMPLE

Now that I have a CMK with least-privilege permissions to limit who can manage and use it, I can start to use it to encrypt PAN data. To keep things simple in this post, I will be using AWS CLI commands to accomplish this, but this can also be done through an AWS SDK and incorporated into an application.

Using the PANEncrypt role, the following CLI command takes in a string of data (in this case, "Sensitive PAN Data"), encrypts it using the key I created earlier in this post, and writes the decoded ciphertext to a new file called encrypted-123-1449012738. Notice that I also use EncryptionContext to further protect the encrypted data. The sensitive PAN data is sent to KMS over TLS (with ciphers that enforce perfect forward secrecy) and is then encrypted under AES-GCM using a 256-bit key.

$ aws kms encrypt --profile PANEncrypt --key-id alias/SensitivePANKey --plaintext "Sensitive PAN Data" --query CiphertextBlob --encryption-context UserName=user@example.com,Date=1449012738 --output text | base64 --decode > encrypted-123-1449012738

Because the EncryptionContext must be the same when I decrypt the file, I gave the file a unique name that can help us rebuild the EncryptionContext when it comes time to decrypt the object. The file name structure is: encrypted-GUID-Date. The GUID allows us to look up the user’s user name within our directory, and then I use the date as part of the context. As Greg Rubin discussed in another AWS Security Blog post, the EncryptionContext can help ensure the integrity of the encrypted data.

From here, I can use the following command to put this encrypted object in an Amazon S3 bucket.

$ aws s3 cp encrypted-123-1449012738 s3://Secure-S3-Bucket-For-PAN/ --region us-west-2 --sse aws:kms

For this example, I chose server-side encryption with the default KMS key for Amazon S3 (--sse aws:kms), but this could have been another KMS key as well. From here, I update a database with the location of this encrypted object within the S3 bucket.

When I need to retrieve the PAN data, I can make the following CLI call to get the encrypted object from my S3 bucket.

$ aws s3 cp s3://Secure-S3-Bucket-For-PAN/encrypted-123-1449012738 . --region us-west-2

Finally, to decrypt the object, I run the following command using the PANDecrypt role.

$ echo "Decrypted PAN Data: $(aws kms decrypt --profile PANDecrypt --ciphertext-blob fileb://encrypted-123-1449012738 --encryption-context UserName=user@example.com,Date=1449012738 --output text --query Plaintext | base64 --decode)"

Notice that I use the same EncryptionContext as I did when I encrypted the sensitive PAN data. To get this EncryptionContext, I again look up the UserName from the GUID and then include the Date. Then for the purpose of this example, I print this sensitive data to the screen, but in a real-world example, this can be passed to another application or service.

Now that I have shown that KMS can directly encrypt and decrypt sensitive PAN data, this can be rolled out as a service within an application environment. As a best practice, you should avoid using the same KMS CMK to encrypt more than 2 billion objects. After that point, the security of the resulting ciphertexts may be weakened. To mitigate this risk, you can choose to have KMS rotate your CMK annually or you can create multiple CMKs to handle your workloads safely. Additionally, this service should be composed of two different component services: one that provides encryption, and another that has enhanced controls around it and is used to decrypt the sensitive data. These component services would address storage of the ciphertext, metadata, error conditions, and so on. With the integration of CloudTrail into KMS and application logs, an organization can have detailed records of the calls into the service and the use of KMS keys across the organization.
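As a quick illustration of that CloudTrail visibility, the following sketch lists recent KMS API activity in the current region; other lookup attributes, such as Username, are available as well.

$ aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=kms.amazonaws.com \
    --max-results 20 \
    --query "Events[].[EventTime,EventName,Username]" \
    --output table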

If you have questions or comments about this post, either post them below or visit the KMS forum.

– Matt

Adhere to IAM Best Practices in 2016

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx2OB7YGHMB7WCM/Adhere-to-IAM-Best-Practices-in-2016

As another new year begins, we encourage you to review our recommended AWS Identity and Access Management (IAM) best practices. Following these best practices can help you maintain the security of your AWS resources. You can learn more by watching the IAM Best Practices to Live By presentation that Anders Samuelsson gave at AWS re:Invent 2015, or you can click the following links that will take you to IAM documentation, blog posts, and videos. 

Create and use IAM users instead of your root account

Do not use your AWS root account to access AWS. Instead, create individual IAM users for access to your AWS account. This allows you to give each IAM user a unique set of security credentials and grant different permissions to each user. Related: Documentation, blog posts, video.

Grant least privilege

Apply fine-grained permissions to ensure that IAM users have least privilege to perform only the tasks they need to perform. Start with a minimum set of permissions and grant additional permissions as necessary. Related: Documentation, blog posts.

Manage permissions with groups

Assign permissions to groups instead of to users to make it easier for you to assign and reassign permissions to multiple users at the same time. As people in your company change job roles, you can simply change which IAM group each IAM user belongs to. Related: Documentation, blog posts, video.

Restrict privileged access further with policy conditions

Use conditions to add more granularity when defining permissions. The more explicitly you can define when resources are available and to whom, the safer your resources will be. Using conditions also can prevent your AWS users from accidentally performing privileged actions. Related: Documentation.
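As a brief illustration, the following CLI sketch attaches an inline policy whose Condition blocks the action unless the caller authenticated with MFA; the group and policy names are hypothetical.

aws iam put-group-policy \
    --group-name Admins \
    --policy-name RequireMFAForStopInstances \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:StopInstances",
        "Resource": "*",
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}}
      }]
    }'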

Enable AWS CloudTrail to get logs of API calls

Enable logging of AWS API calls to gain greater visibility into users’ activity in your AWS resources. Logging lets you see which actions users have taken and which resources have been used, along with details such as the time and date of actions and the actions that have failed because of inadequate permissions. Related: Documentation, blog posts, video.

Configure a strong password policy

Configure password expiration, strength, and reuse to help ensure that your users and your data are protected by strong credentials. For enhanced security, use a strong password policy together with multi-factor authentication (MFA)—see the ninth IAM best practice below. Related: Documentation, blog posts.
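For example, the following CLI sketch sets one possible baseline; tune each value to your organization’s requirements.

aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --require-numbers \
    --require-symbols \
    --max-password-age 90 \
    --password-reuse-prevention 24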

Rotate security credentials regularly

Change your own passwords and access keys regularly, and make sure that all IAM users in your AWS account do as well. You can apply a password policy to your AWS account to require all your IAM users to rotate their passwords, and you can choose how often they must do so. If a password is compromised without your knowledge, regular credential rotation limits how long that password can be used to access your AWS account. Related: Documentation, blog posts.

Remove unused security credentials that are not needed

Generate and download a credential report that lists all IAM users in your AWS account and the status of their various credentials. Review the credential report to determine which credentials have not been used recently and can be removed. Removing unused credentials reduces your attack surface. Related: Documentation, blog posts.
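A minimal sketch of generating and downloading the report from the CLI (rerun the first command until it reports COMPLETE):

aws iam generate-credential-report

# The report content is returned base64-encoded; decode it into a CSV file
aws iam get-credential-report --query Content --output text | base64 --decode > credential-report.csv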

Enable multi-factor authentication (MFA) for privileged users

Supplement user names and passwords by requiring a one-time password during authentication. This allows you to enable extra security for privileged IAM users (users who are allowed access to sensitive resources). Related: Documentation, blog posts, video.

Use IAM roles to share access

Never share credentials! Instead, use IAM roles that allow you to specify whom you trust and what each role can do in your account. Also use IAM roles to delegate permissions across and within your accounts to both IAM and federated users. Related: Documentation, blog posts.

Use IAM roles for Amazon EC2 instances

Use IAM roles to manage credentials for your applications that run on EC2 instances. Because role credentials are temporary and rotated automatically, you don’t have to manage credentials. Also, any changes you make to a role used for multiple instances are propagated to all such instances, again simplifying credential management. Related: Documentation, blog posts.

Adhere to IAM best practices to manage AWS users, groups, permissions, and credentials in order to make your AWS account as secure as possible. If you have questions or feedback about IAM best practices, go to the AWS IAM forum.

– Craig

The Most Popular AWS Security Blog Posts in 2015

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx4QX7W51NDSLO/The-Most-Popular-AWS-Security-Blog-Posts-in-2015

The following 20 posts are the most popular posts that were published in 2015 on the AWS Security Blog. You can use this list as a guide to do some catchup reading or even read a post again that you found particularly valuable.  

Introducing s2n, a New Open Source TLS Implementation

Customer Update—AWS and EU Safe Harbor

How to Connect Your On-Premises Active Directory to AWS Using AD Connector

How to Implement Federated API and CLI Access Using SAML 2.0 and AD FS

Privacy and Data Security

Enable a New Feature in the AWS Management Console: Cross-Account Access

PCI Compliance in the AWS Cloud

How to Help Prepare for DDoS Attacks by Reducing Your Attack Surface

How to Address the PCI DSS Requirements for Data Encryption in Transit Using Amazon VPC

How to Receive Alerts When Your IAM Configuration Changes

How to Receive Notifications When Your AWS Account’s Root Access Keys Are Used

How to Receive Alerts When Specific APIs Are Called by Using AWS CloudTrail, Amazon SNS, and AWS Lambda

New in IAM: Quickly Identify When an Access Key Was Last Used

2015 AWS PCI Compliance Package Now Available

An Easier Way to Manage Your Policies

New Whitepaper—Single Sign-On: Integrating AWS, OpenLDAP, and Shibboleth

New SOC 1, 2, and 3 Reports Available — Including a New Region and Service In-Scope

How to Create a Limited IAM Administrator by Using Managed Policies

How to Delegate Management of Multi-Factor Authentication to AWS IAM Users

Now Available: Videos and Slide Decks from the re:Invent 2015 Security and Compliance Track

Also, the following 20 posts are the most popular AWS Security Blog posts since its inception in April 2013. Some of these posts have been readers’ favorites year after year.

Introducing s2n, a New Open Source TLS Implementation

Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket

Where’s My Secret Access Key?

Securely connect to Linux instances running in a private Amazon VPC

Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0

A New and Standardized Way to Manage Credentials in the AWS SDKs

IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)

Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket

Demystifying EC2 Resource-Level Permissions

Resource-Level Permissions for EC2–Controlling Management Access on Specific Instances

Controlling Network Access to EC2 Instances Using a Bastion Server

Customer Update—AWS and EU Safe Harbor

Granting Permission to Launch EC2 Instances with IAM Roles (PassRole Permission)

How Do I Protect Cross-Account Access Using MFA?

Building an App Using Amazon Cognito and an OpenID Connect Identity Provider

A safer way to distribute AWS credentials to EC2

How to Connect Your On-Premises Active Directory to AWS Using AD Connector

How to Implement Federated API and CLI Access Using SAML 2.0 and AD FS

Privacy and Data Security

How to Enable Cross-Account Access to the AWS Management Console

We thank you for visiting the AWS Security Blog in 2015 and hope you’ll return again regularly in 2016. Let us know in the comments section below if there is a specific security or compliance topic you would like us to cover in the new year. 

– Craig

Remove Unnecessary Permissions in Your IAM Policies by Using Service Last Accessed Data

Post Syndicated from Kai Zhao original https://blogs.aws.amazon.com/security/post/Tx280RX2WH6WUD7/Remove-Unnecessary-Permissions-in-Your-IAM-Policies-by-Using-Service-Last-Access

As a security best practice, AWS recommends writing AWS Identity and Access Management (IAM) policies that adhere to the principle of least privilege, which means granting only the permissions required to perform a specific task. However, verifying which permissions an application or user actually needs can be a challenge. To help you determine which permissions are needed, the IAM console now displays service last accessed data that shows the hour when an IAM entity (a user, group, or role) last accessed an AWS service. Knowing if and when an IAM entity last exercised a permission can help you remove unnecessary rights and tighten your IAM policies with less effort.

In this blog post, I will first cover the basics of service last accessed data. Next, I will walk through a sample use case to demonstrate how you can use this data to remove unnecessary permissions from an IAM policy.

The Access Advisor tab

When you view a user, group, role, or managed policy in the IAM console, there’s now a new tab called Access Advisor. This tab includes a table that shows service last accessed data. The table contains:

The list of service permissions granted by the policy, if you’re looking at a managed policy.

The list of service permissions granted to the IAM entity, if you’re looking at a user, group, or role.

The date and time when each service was last accessed.

The following screenshot shows an example of viewing service last accessed data on the Access Advisor tab for a customer managed policy.

The meaning of the Last Accessed column depends on whether you’re looking at a managed policy, user, group, or role.

If you’re looking at a managed policy, the Last Accessed column shows the last time that any user, group, or role to which the policy is attached authenticated against a given service.

If you’re looking at a user or role, the Last Accessed column shows the last time that particular user or role authenticated against a given service.

If you’re looking at a group, the Last Accessed column shows the last time that a user in that group authenticated against a given service.

Note that “Last Accessed” refers to the time of authentication, which is the last time the service was called with a request signed using the access keys of the IAM user or role. This is distinct from authorization (see a quick summary of the difference). For example, if an IAM user has Amazon S3 read-only permissions and the user’s most recent action was an attempt to delete an S3 bucket, the delete operation will be denied, but the time stamp of the delete attempt will still be displayed in the service last accessed table. For more details, see the service last accessed data documentation.

In addition, the list of services in the Access Advisor tab reflects the current state of your permissions, not their historical states. For example, if the current version of your policy allows only access to S3, but previously it allowed access to all AWS services, the service last accessed table will only show an entry for S3. If you’re trying to determine the history of access control changes in your account or you want to audit historical access, see your AWS CloudTrail logs.

Now that I’ve described service last accessed data, I will show how it can be used in a typical use case.

Sample use case: Scoping down application permissions

Alice is a DevOps administrator responsible for managing her team’s AWS infrastructure. Her team has created an app that runs on Amazon EC2 and calls other AWS services, so Alice needs to provision EC2 instances and manage their configuration. However, she is new to IAM, and when she goes to create the IAM role, she’s not sure what to put in the role’s access policy. Alice is aware of security concerns, but her immediate priority is just to make the application work, so she simply attaches the PowerUserAccess AWS managed policy, shown following this paragraph. This policy grants full read/write access to all AWS services and resources in the account except IAM (and is definitely not recommended as a long-term solution).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "NotAction": "iam:*",
      "Resource": "*"
    }
  ]
}

Using service last accessed data, Alice can scope down these application-specific permissions to remove access to services that are never used. After the application has run for a while, she goes to the IAM console and navigates to the Policies pane. She finds the PowerUserAccess policy associated with her IAM role (which was attached to her EC2 instance), selects it, and then clicks the Access Advisor tab, as shown in the following screenshot.

Alice reviews the service last accessed times and sees that the application is using only Amazon DynamoDB, Amazon S3, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and Amazon CloudWatch. Alice can revise the role’s access policy to revoke all permissions except for these by detaching the PowerUserAccess AWS managed policy and writing a custom policy with the policy editor or policy generator, as sketched below. Removing unnecessary permissions based on service last accessed data helps reduce the security surface area of her application’s permissions, in accordance with the principle of least privilege.
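The following is a minimal sketch of the kind of custom policy Alice might write. The service-level wildcard actions shown here are illustrative assumptions, not values from this post; in practice, she would scope the Action and Resource elements down further to the specific operations and resources her application uses.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:*",
        "s3:*",
        "sns:*",
        "sqs:*",
        "cloudwatch:*"
      ],
      "Resource": "*"
    }
  ]
}

After attaching a policy like this to the role in place of PowerUserAccess, Alice can return to the Access Advisor tab later to confirm that these services remain the only ones the application uses.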

This example demonstrates how you can use service last accessed data in a policy-centric way (in other words, viewing the Access Advisor tab from the perspective of a policy) to tackle a common access control problem. In my next blog post, I will show how you can use service last accessed data in a principal-centric way (viewing the Access Advisor tab from the perspective of a user, group, or role) to help improve your account’s security. I will also cover the differences between these two approaches in more detail.

The IAM team would love to hear your thoughts regarding this new feature. If you have comments about service last accessed data or questions about how to use it, leave a comment below or on the IAM forum.

– Kai 

What’s New in AWS Key Management Service: AWS CloudFormation Support and Integration with More AWS Services

Post Syndicated from Sreekumar Pisharody original https://blogs.aws.amazon.com/security/post/TxHY6YJA60MTUL/What-s-New-in-AWS-Key-Management-Service-AWS-CloudFormation-Support-and-Integrat

We’re happy to make two announcements about what’s new in AWS Key Management Service (KMS).

First, AWS CloudFormation has added support for creating KMS customer master keys (CMKs) and setting their properties. Starting today, you can use the AWS::KMS::Key resource to create a CMK in KMS. To get started, you can use AWS CloudFormation Designer to drag and drop a KMS key resource type into your template, as shown in the following image.

To learn more about using KMS with CloudFormation, see the “AWS::KMS::Key” section of the AWS CloudFormation User Guide.
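As a rough illustration, the following minimal template sketches how a CMK might be declared. The logical resource name, key policy statement, and output are illustrative assumptions rather than values from this post; you would tailor the key policy to your own key administrators and users.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative template that creates a KMS customer master key",
  "Resources": {
    "ExampleKey": {
      "Type": "AWS::KMS::Key",
      "Properties": {
        "Description": "Example CMK created by CloudFormation",
        "KeyPolicy": {
          "Version": "2012-10-17",
          "Id": "example-key-policy",
          "Statement": [
            {
              "Sid": "Allow the account root user to administer the key",
              "Effect": "Allow",
              "Principal": {
                "AWS": { "Fn::Join": ["", ["arn:aws:iam::", { "Ref": "AWS::AccountId" }, ":root"]] }
              },
              "Action": "kms:*",
              "Resource": "*"
            }
          ]
        }
      }
    }
  },
  "Outputs": {
    "KeyArn": {
      "Description": "ARN of the new CMK",
      "Value": { "Fn::GetAtt": ["ExampleKey", "Arn"] }
    }
  }
}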

Second, AWS Import/Export Snowball, AWS CloudTrail, Amazon SES, Amazon WorkSpaces, and Amazon Kinesis Firehose now support encryption of data within those services using keys in KMS. As with other KMS-integrated services, you can use CloudTrail to audit the use of your KMS key to encrypt or decrypt your data in SES, Amazon WorkSpaces, CloudTrail, Import/Export Snowball, and Amazon Kinesis Firehose. To see the complete list of AWS services integrated with KMS, see KMS Product Details. For more details about how these services encrypt your data with KMS, see the How AWS Services Use AWS KMS documentation pages.
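To give a sense of what that audit trail looks like, the following is an abridged, hypothetical CloudTrail event record for a kms:Decrypt call. All field values here are placeholders, and real records include additional fields such as request and response elements.

{
  "eventVersion": "1.03",
  "userIdentity": {
    "type": "IAMUser",
    "arn": "arn:aws:iam::111122223333:user/Alice",
    "accountId": "111122223333",
    "userName": "Alice"
  },
  "eventTime": "2015-12-15T21:04:03Z",
  "eventSource": "kms.amazonaws.com",
  "eventName": "Decrypt",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "192.0.2.0",
  "resources": [
    {
      "ARN": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
      "accountId": "111122223333"
    }
  ]
}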

If you have questions or comments, please add them in the “Comments” section below or on the KMS forum.

– Sree

AWS CloudFormation at AWS re:Invent 2015: Breakout Session Recap, Videos, and Slides

Post Syndicated from George Huang original http://blogs.aws.amazon.com/application-management/post/Tx1ZYD0M87D4NW0/AWS-CloudFormation-at-AWS-re-Invent-2015-Breakout-Session-Recap-Videos-and-Slide

The AWS CloudFormation team and others presented many updates and best practices during several 2015 AWS re:Invent sessions in October. We want to take this opportunity to show you where our presentation slides and videos are located, as well as to highlight a few of the product updates and best practices we shared at this year’s re:Invent.

DVO304 – AWS CloudFormation Best Practices: slides and video

ARC307 – Infrastructure as Code: slides and video

DVO303 – Scaling Infrastructure Operations with AWS: slides and video

ARC401 – Cloud First: New Architecture for New Infrastructure: slides and video

DVO310 – Benefit from DevOps When Moving to AWS for Windows: slides and video

DVO401 – Deep Dive into Blue/Green Deployments on AWS: slides and video

SEC312 – Reliable Design and Deployment of Security and Compliance: slides and video

AWS CloudFormation Designer

We introduced CloudFormation Designer in early October. During our re:Invent session DVO304 (AWS CloudFormation Best Practices), we gave a live demo and walkthrough of its key features and use cases.

AWS CloudFormation Designer is a new tool that lets you edit your CloudFormation templates visually, as a diagram. It provides a drag-and-drop interface for adding resources to templates, and it automatically modifies the underlying JSON when you add or remove resources. You can also use the integrated text editor to view or specify template details, such as resource property values and input parameters.

To learn more about this feature:

Watch the CloudFormation Designer portion of our re:Invent talk to see a demo

View slides 3-13 from our re:Invent talk to learn more about CloudFormation Designer

Updated resource support in CloudFormation

In the same session, we also talked about the five new resources that CloudFormation can provision, which we introduced in October. To stay up to date on CloudFormation resource support, see the list of all currently supported AWS resources.

Other topics covered in our “AWS CloudFormation Best Practices” breakout session

Using Cost Explorer to budget and estimate a stack’s cost

Collecting audit logs using the CloudTrail integration with CloudFormation

CloudFormation advanced language features

How to extend CloudFormation to resources it does not yet support natively

Security and user-access best practices

Best practices for writing CloudFormation templates when sharing templates with teams or users that have different environments or are using different AWS regions

Please reach us at the AWS CloudFormation forum if you have more feedback or questions. 
