Tag Archives: AWS Organizations

The 10 Most Viewed Security-Related AWS Knowledge Center Articles and Videos for November 2017

Post Syndicated from Maggie Burke original https://aws.amazon.com/blogs/security/the-10-most-viewed-security-related-aws-knowledge-center-articles-and-videos-for-november-2017/

AWS Knowledge Center image

The AWS Knowledge Center helps answer the questions most frequently asked by AWS Support customers. The following 10 Knowledge Center security articles and videos have been the most viewed this month. It’s likely you’ve wondered about a few of these topics yourself, so here’s a chance to learn the answers!

  1. How do I create an AWS Identity and Access Management (IAM) policy to restrict access for an IAM user, group, or role to a particular Amazon Virtual Private Cloud (VPC)?
    Learn how to apply a custom IAM policy to restrict IAM user, group, or role permissions for creating and managing Amazon EC2 instances in a specified VPC.
  2. How do I use an MFA token to authenticate access to my AWS resources through the AWS CLI?
    One IAM best practice is to protect your account and its resources by using a multi-factor authentication (MFA) device. If you plan to use the AWS Command Line Interface (AWS CLI) while using an MFA device, you must create a temporary session token (see the sketch after this list).
  3. Can I restrict an IAM user’s EC2 access to specific resources?
    This article demonstrates how to link multiple AWS accounts through AWS Organizations and isolate IAM user groups in their own accounts.
  4. I didn’t receive a validation email for the SSL certificate I requested through AWS Certificate Manager (ACM)—where is it?
    Can’t find your ACM validation emails? Be sure to check the email address to which you requested that ACM send validation emails.
  5. How do I create an IAM policy that has a source IP restriction but still allows users to switch roles in the AWS Management Console?
    Learn how to write an IAM policy that not only includes a source IP restriction but also lets your users switch roles in the console.
  6. How do I allow users from another account to access resources in my account through IAM?
    If you have the 12-digit account number and permissions to create and edit IAM roles and users for both accounts, you can permit specific IAM users to access resources in your account.
  7. What are the differences between a service control policy (SCP) and an IAM policy?
    Learn how to distinguish an SCP from an IAM policy.
  8. How do I share my customer master keys (CMKs) across multiple AWS accounts?
    To grant another account access to your CMKs, add the secondary account to the key policy of the CMK, and then create an IAM policy in that account that grants its users access to use your CMKs (a key policy sketch follows this list).
  9. How do I set up AWS Trusted Advisor notifications?
    Learn how to receive free weekly email notifications from Trusted Advisor.
  10. How do I use AWS Key Management Service (AWS KMS) encryption context to protect the integrity of encrypted data?
    Encryption context name-value pairs used with AWS KMS encryption and decryption operations provide a method for checking ciphertext authenticity. Learn how to use encryption context to help protect your encrypted data.
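
As a quick illustration of item 2, here is a minimal AWS CLI sketch of requesting temporary credentials with an MFA device; the account ID, MFA device name, token code, and duration are placeholder values:

aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/example-user --token-code 123456 --duration-seconds 3600

The call returns an AccessKeyId, SecretAccessKey, and SessionToken, which you can export as the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables for subsequent CLI commands.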
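
Similarly, for item 8, the key policy statement on the CMK in the owning account might look like the following sketch; the account ID 444455556666 stands in for the secondary account:

{
    "Sid": "Allow use of the key by the secondary account",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::444455556666:root" },
    "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ],
    "Resource": "*"
}

The secondary account’s administrator then delegates these permissions to specific IAM users or roles with an IAM policy.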

The AWS Security Blog will publish an updated version of this list regularly going forward. You also can subscribe to the AWS Knowledge Center Videos playlist on YouTube.

– Maggie

Use CloudFormation StackSets to Provision Resources Across Multiple AWS Accounts and Regions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/use-cloudformation-stacksets-to-provision-resources-across-multiple-aws-accounts-and-regions/

AWS CloudFormation helps AWS customers implement an Infrastructure as Code model. Instead of setting up their environments and applications by hand, they build a template and use it to create all of the necessary resources, collectively known as a CloudFormation stack. This model removes opportunities for manual error, increases efficiency, and ensures consistent configurations over time.

Today I would like to tell you about a new feature that makes CloudFormation even more useful. This feature is designed to help you to address the challenges that you face when you use Infrastructure as Code in situations that include multiple AWS accounts and/or AWS Regions. As a quick review:

Accounts – As I have told you in the past, many organizations use a multitude of AWS accounts, often using AWS Organizations to arrange the accounts into a hierarchy and to group them into Organizational Units, or OUs (read AWS Organizations – Policy-Based Management for Multiple AWS Accounts to learn more). Our customers use multiple accounts for business units, applications, and developers. They often create separate accounts for development, testing, staging, and production on a per-application basis.

Regions – Customers also make great use of the large (and ever-growing) set of AWS Regions. They build global applications that span two or more regions, implement sophisticated multi-region disaster recovery models, replicate S3, Aurora, PostgreSQL, and MySQL data in real time, and choose locations for storage and processing of sensitive data in accord with national and regional regulations.

This expansion into multiple accounts and regions comes with some new challenges with respect to governance and consistency. Our customers tell us that they want to make sure that each new account is set up in accord with their internal standards. Among other things, they want to set up IAM users and roles, VPCs and VPC subnets, security groups, Config Rules, logging, and AWS Lambda functions in a consistent and reliable way.

Introducing StackSets
In order to address these important customer needs, we are launching CloudFormation StackSets today. You can now define an AWS resource configuration in a CloudFormation template and then roll it out across multiple AWS accounts and/or Regions with a couple of clicks. You can use this to set up a baseline level of AWS functionality that addresses the cross-account and cross-region scenarios that I listed above. Once you have set this up, you can easily expand coverage to additional accounts and regions.

This feature always works on a cross-account basis. The master account owns one or more StackSets and controls deployment to one or more target accounts. The master account must include an assumable IAM role and the target accounts must delegate trust to this role. To learn how to do this, read Prerequisites in the StackSet Documentation.
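
As a rough sketch of that trust relationship (assuming a placeholder master account ID of 111111111111), the execution role in each target account would carry a trust policy along these lines; in practice you would typically scope the principal down to the master account’s administration role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
            "Action": "sts:AssumeRole"
        }
    ]
}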

Each StackSet references a CloudFormation template and contains lists of accounts and regions. All operations apply to the cross-product of the accounts and regions in the StackSet. If the StackSet references three accounts (A1, A2, and A3) and four regions (R1, R2, R3, and R4), there are twelve targets:

  • Region R1: Accounts A1, A2, and A3.
  • Region R2: Accounts A1, A2, and A3.
  • Region R3: Accounts A1, A2, and A3.
  • Region R4: Accounts A1, A2, and A3.

Deploying a template initiates creation of a CloudFormation stack in an account/region pair. Templates are deployed to regions sequentially (you control the order) and to multiple accounts within each region in parallel (you control the degree of parallelism). You can also set an error threshold that will terminate deployments if stack creation fails.
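
If you prefer the command line, here is a minimal sketch of the same workflow via the AWS CLI; the stack set name, template file, account IDs, and regions are placeholder values:

aws cloudformation create-stack-set --stack-set-name sample-stack-set --template-body file://template.yaml

aws cloudformation create-stack-instances --stack-set-name sample-stack-set --accounts 111111111111 222222222222 333333333333 --regions us-east-1 us-west-2 eu-west-1 eu-central-1 --operation-preferences MaxConcurrentCount=2,FailureToleranceCount=0

Here MaxConcurrentCount caps the number of accounts deployed to in parallel within a region, and FailureToleranceCount sets the error threshold described above.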

You can use your existing CloudFormation templates (taking care to make sure that they are ready to work across accounts and regions), create new ones, or use one of our sample templates. We are launching with support for the AWS partition (all public regions except those in China), and expect to expand it to the others before too long.

Using StackSets
You can create and deploy StackSets from the CloudFormation Console, via the CloudFormation APIs, or from the command line.

Using the Console, I start by clicking on Create StackSet. I can use my own template or one of the samples. I’ll use the last sample (Add config rule encrypted volumes):

I click on View template to learn more about the template and the rule:

I give my StackSet a name. The template that I selected accepts an optional parameter, and I can enter it at this time:

Next, I choose the accounts and regions. I can enter account numbers directly, reference an AWS organizational unit, or upload a list of account numbers:

I can set up the regions and control the deployment order:

I can also set the deployment options. Once I am done I click on Next to proceed:

I can add tags to my StackSet. They will be applied to the AWS resources created during the deployment:

The deployment begins, and I can track the status from the Console:

I can open up the Stacks section to see each stack. Initially, the status of each stack is OUTDATED, indicating that the template has yet to be deployed to the stack; this will change to CURRENT after a successful deployment. If a stack cannot be deleted, the status will change to INOPERABLE.

After my initial deployment, I can click on Manage StackSet to add additional accounts, regions, or both, to create additional stacks:

Now Available
This new feature is available now and you can start using it today at no extra charge (you pay only for the AWS resources created on your behalf).

Jeff;

PS – If you create some useful templates and would like to share them with other AWS users, please send a pull request to our AWS Labs GitHub repo.

New – Cross-Account Delivery of CloudWatch Events

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cross-account-delivery-of-cloudwatch-events/

CloudWatch Events allows you to track and respond to changes in your AWS resources. You get a near real-time stream of events that you can route to one or more targets (AWS Lambda functions, Amazon Kinesis streams, Amazon SNS topics, and more) using rules. The events that are generated depend on the particular AWS service. For example, here are the events generated for EC2 instances:

Or for S3 (CloudTrail must be enabled in order to create rules that use these events):

See the CloudWatch Event Types list to see which services and events are available.

New Cross-Account Event Delivery
Our customers have asked us to extend CloudWatch Events to handle some interesting & powerful use cases that span multiple AWS accounts, and we are happy to oblige. Today we are adding support for controlled, cross-account delivery of CloudWatch Events. As you will see, you can now arrange to route events from one AWS account to another. As is the case with the existing event delivery model, you can use CloudWatch Events rules to specify which events you would like to send to another account.

Here are some of the use cases that have been shared with us:

Separation of Concerns – Customers would like to handle and respond to events in a separate account in order to implement advanced security schemes.

Rollup – Customers are using AWS Organizations and would like to track certain types of events across the entire organization, across a multitude of AWS accounts.

Each AWS account uses a resource called an event bus to distribute events. This object dates back to the introduction of CloudWatch Events, but has never been formally called out as such. AWS services, the PutEvents function, and other accounts can publish events to it.

The event bus (currently one per account, with plans to allow more in the future) now has an associated access policy. This policy specifies the set of AWS accounts that are allowed to send events to the bus. You can add one or more accounts, or you can specify that any account is allowed to send events.

You can create event distribution topologies that work on a fan-in or a fan-out basis. A fan-in model allows you to handle events from multiple accounts in one place. A fan-out model allows you to route different types of events to distinct locations and accounts.

In order to avoid the possibility of creating a loop, events that are sent from one account to another will not be forwarded to a third one. You should take this into account when you are planning your cross-account implementation.

Using Cross-Account Event Delivery
In order to test this new feature, I used my work and my personal AWS accounts. I log in to my personal account and go to the CloudWatch Console. Then I select Event Buses, click on Add Permission, and enter the Account ID of my work account:

I can see all of my buses (just one is allowed right now) and permissions in one place:

Next, I log in to my work account and create a rule that will send events to the event bus in my personal account. In this case my personal account is interested in changes of state for EC2 instances running in my work account:

Back in my personal account, I create a rule that will fire on any EC2 event, targeting it at an SNS topic that is configured to send email:

After testing this rule with an EC2 instance launched in my personal account, I launch an instance in my work account and wait for the email message:

The account and resources fields in the message are from the source (work) account.
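
The same setup can be scripted. The following is a minimal sketch using the AWS CLI, with placeholder account IDs and names:

# In the receiving (personal) account, permit the sending (work) account to publish events:
aws events put-permission --action events:PutEvents --principal 111122223333 --statement-id AllowWorkAccount

# In the sending (work) account, forward EC2 state changes to the receiver's default event bus:
aws events put-rule --name ForwardEC2StateChanges --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"]}'

aws events put-targets --rule ForwardEC2StateChanges --targets 'Id=personal-account-bus,Arn=arn:aws:events:us-east-1:999988887777:event-bus/default'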

Things to Know
This functionality is available in all AWS Regions where CloudWatch Events is available and you can start using it today. It is also accessible from the CloudWatch Events APIs and the AWS Command Line Interface (CLI).

Events forwarded from one account to another are considered custom events. The sending account is charged $1 for every million events (see the CloudWatch Pricing page for more info).

Jeff;

PS – AWS CloudFormation support is in the works and coming soon!

AWS Online Tech Talks – April 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-april-2017/

I’m a life-long learner! I try to set aside some time every day to read about or to try something new. I grew up in the 1960s and 1970s, back before the Internet existed. As a teenager, access to technical information of any sort was difficult, expensive, and time-consuming. It often involved begging my parents to drive me to the library or riding my bicycle down a narrow road to get to the nearest magazine rack. Today, with so much information at our fingertips, the most interesting challenge is deciding what to study.

As we do every month, we have assembled a set of online tech talks that can fulfill this need for you. Our team has created talks that will provide you with information about the latest AWS services along with the best practices for putting them to use in your environment.

The talks are free, but they do fill up, so you should definitely register ahead of time if you would like to attend. Most of the talks are one hour in length; all times are in the Pacific (PT) time zone. Here’s what we have on tap for the rest of this month:

Monday, April 24
10:30 AM – Modernize Meetings with Amazon Chime.

1:00 PM – Essential Capabilities of an IoT Cloud Platform.

Tuesday, April 25
8:30 AM – Hands On Lab: Windows Workloads on AWS.

Noon – AWS Services Overview & Quarterly Update.

1:30 PM – Scaling to Millions of Users in the Blink of an Eye with Amazon CloudFront: Are you Ready?

Wednesday, April 26
9:00 AM – IDC and AWS Joint Webinar: Getting the Most Bang for your Buck with #EC2 #Winning.

10:30 AM – From Monolith to Microservices – Containerized Microservices on AWS.

Noon – Accelerating Mobile Application Development and Delivery with AWS Mobile Hub.

Thursday, April 27
8:30 AM – Hands On Lab: Introduction to Microsoft SQL Server in AWS.

10:30 AM – Applying AWS Organizations to Complex Account Structures.

Noon – Deep Dive on Amazon Cloud Directory.

I will be presenting Tuesday’s service overview – hope to “see” you there.

Jeff;

The New AWS Organizations User Interface Makes Managing Your AWS Accounts Easier

Post Syndicated from Anders Samuelsson original https://aws.amazon.com/blogs/security/the-new-aws-organizations-user-interface-makes-managing-your-aws-accounts-easier/

With AWS Organizations—launched on February 27, 2017—you can easily organize accounts centrally and set organizational policies across a set of accounts. Starting today, the Organizations console includes a tree view that allows you to manage accounts and organizational units (OUs) easily. The new view also makes it simple to attach service control policies (SCPs) to individual accounts or a group of accounts in an OU. In this post, I demonstrate some of the benefits of the new user interface.

The new tree view

The following screenshot shows an example of how an organization is displayed in the tree view on the Organize accounts tab. I have chosen the Frontend OU, and it shows that two OUs—Application 1 and Application 2—are child OUs of the Frontend OU. In the tree view, I can choose any OU and immediately view and take action on the contents of that OU. This new view makes it easier to quickly view OUs and navigate the relationships between OUs in your organization.

Screenshot of the new tree view

If you would prefer not to use the tree view, you can hide it by choosing the Tree view toggle in the upper left corner of the main pane. The following screenshot shows the console with the tree view turned off.

Screenshot of the console with the tree view hidden

You can toggle between the old view and the new tree view at any time. For the rest of this post, though, I will show the tree view.

Additional Organizations console improvements

In addition, we made a few other console improvements. First, we added more detail to the right pane when you choose an account or an OU. In the following screenshot, I have chosen the Application 1 OU in the main pane of the console and then the new Accounts heading in the right pane. As a result, I now can view the accounts that are in the OU without having to navigate into the OU. I can also remove an account from the OU by choosing Remove next to the account I want to remove.

Screenshot showing the accounts in the OU

Second, we have made it easier for you to attach SCPs to entities such as individual accounts and OUs. For example, to attach an SCP that blocks access to Amazon Redshift to the Application 1 OU, I choose Service Control Policies in the right pane. I now see a list of SCPs from which I can choose, as shown in the following screenshot.

Screenshot showing SCPs that are attached and available

The Blacklist Redshift policy is an SCP I created previously, and by choosing Attach, I attach it to the Application 1 OU.

The third console enhancement is in the Accounts tab. The right pane displays additional information when you choose an account. In the following screenshot, I choose the Accounts tab and then the DB backend account. In the right pane, I now see a new option: Organizational units.

Screenshot showing the new "Organizational units" choice in the right pane

When I choose Organizational units in the right pane, I see the OUs of which the chosen account is a member—in this case, Application 1. If the account should not be in that OU, I can remove it by choosing Remove next to the OU name, as shown in the following screenshot.

Screenshot showing the OUs of which the account is a member

We have also made it possible to attach SCPs to accounts in this view. When I choose Service Control Policies in the right pane, I see a list of all SCPs in my organization. The list is organized such that all the policies that are directly attached to the account are at the top of the list. You can detach any of these policies by choosing Detach next to the policy.

At the bottom of the list, I see the SCPs that I can attach to accounts. To do this, I choose Attach next to a policy. In the following screenshot, the Blacklist Redshift SCP can be attached directly to the account. However, when I look at the policies that are indirectly attached to the account via OUs, I see that the Blacklist Redshift SCP is already attached via the Application 1 OU. This means it is not necessary for me to attach this SCP directly to the DB backend account.

Screenshot showing that the Blacklist Redshift SCP is already attached via the Application 1 OU
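
If you script these operations instead of using the console, the same attach and detach actions map to the Organizations CLI; here is a minimal sketch with placeholder policy and account IDs:

aws organizations attach-policy --policy-id p-examplepolicy --target-id 123456789012

aws organizations detach-policy --policy-id p-examplepolicy --target-id 123456789012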

Summary

The new Organizations user interface makes it easier for you to manage your accounts and OUs as well as attach SCPs to accounts. To get started, sign in to the Organizations console.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Organizations forum.

– Anders

How to Use Service Control Policies in AWS Organizations to Enforce Healthcare Compliance in Your AWS Account

Post Syndicated from Aaron Lima original https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-in-aws-organizations-to-enforce-healthcare-compliance-in-your-aws-account/

AWS customers with healthcare compliance requirements such as the U.S. Health Insurance Portability and Accountability Act (HIPAA) and Good Laboratory, Clinical, and Manufacturing Practices (GxP) might want to control access to the AWS services their developers use to build and operate their GxP and HIPAA systems. For example, customers with GxP requirements might approve AWS as a supplier on the basis of AWS’s SOC certification and therefore want to ensure that only the services in scope for SOC are available to developers of GxP systems. Likewise, customers with HIPAA requirements might want to ensure that only AWS HIPAA Eligible Services are available to store and process protected health information (PHI). Now with AWS Organizations—policy-based management for multiple AWS accounts—you can programmatically control access to the services within your AWS accounts.

In this blog post, I show how to restrict an AWS account to HIPAA Eligible Services as well as explain why you should include additional supporting AWS services with service control policies (SCPs) in AWS Organizations. Although this example is HIPAA related, you can repurpose it for GxP, a database of Genotypes and Phenotypes (dbGaP) solutions, or other healthcare compliance requirements for which you want to control developers’ access to a specific scope of services.

Managing an account hierarchy with AWS Organizations

Let’s say I manage four AWS accounts: a Payer account, a Development account, a Corporate IT account, and a fourth account that contains PHI. In accordance with AWS’s Business Associate Agreement (BAA), I want to be sure that only AWS HIPAA Eligible Services are allowed in the fourth account along with supporting AWS services that help encrypt and control access to the account. The following diagram shows a logical view of the associated account structure.

Diagram showing the logical view of the account structure

As illustrated in the preceding diagram, Organizations allows me to create this account hierarchy between the four AWS accounts I manage. Before I proceed to show how to create and apply an SCP to the HIPAA account in this hierarchy, I’ll define some Organizations terminology that I use in this post:

  • Organization – A consolidated set of AWS accounts that you manage. For the preceding example, I have already created my organization and invited my accounts. For more information about creating an organization and inviting accounts, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.
  • Master account – The management hub for Organizations. This is where I invite existing accounts, create new accounts, and manage my SCPs. I run all commands demonstrated in this post from this master account. This is also my Payer account in the preceding account structure diagram.
  • Service control policy (SCP) – A set of controls that the organization’s master account can apply to the organization, selected OUs, and selected accounts. SCPs allow me to whitelist or blacklist services and actions that I can delegate to the users and roles in the account to which the SCPs are applied. The resultant security permissions for a user and role are the union of the permissions in an SCP and the permissions in an AWS Identity and Access Management (IAM) policy. I refer to SCPs as a policy type in some of this post’s command-line arguments.
  • Organizational unit (OU) – A container for a set of AWS accounts. OUs can be arranged into a hierarchy that can be as many as five levels deep. The top of the hierarchy of OUs is also known as the administrative root. In the walkthrough, I create a HIPAA OU and apply my policy to that OU. I then move the account into the OU to have the policy applied. To manage the organization depicted above, I might create OUs for my Corporate IT account and my Development account.

To restrict services in the fourth account to HIPAA Eligible Services and required supporting services, I will show how to create and apply an SCP to the account with the following steps:

  1. Create a JSON document that lists HIPAA Eligible Services and supporting AWS services.
  2. Create an SCP with a JSON document.
  3. Create an OU for the HIPAA account, and move the account into the OU.
  4. Attach the SCP to the HIPAA OU.
  5. Verify which SCPs are attached to the HIPAA OU.
  6. Detach the default FullAWSAccess SCP from the OU.
  7. Verify SCP enforcement.

How to create and apply an SCP to an account

Let’s walk through the steps to create an SCP and apply it to an account. I can manage my organization by using the Organizations console, AWS CLI, or AWS API from my master account. For the purposes of this post, I will demonstrate the creation and application of an SCP to my account by using the AWS CLI.

1.  Create a JSON document that lists HIPAA Eligible Services and supporting AWS services

Creating an SCP will be familiar if you have experience writing IAM policies because the grammar is similar. I will create a JSON document that lists only the services I want to allow in my account, and I will use this JSON document to create my SCP via the command line. The SCP I create from this document allows all actions for all resources of the listed services, effectively turning on only these services in my account. I name the document HIPAAExample.json and save it to the directory from which I will run the CLI commands.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:*", "rds:*", "ec2:*", "s3:*", "elasticmapreduce:*",
                "glacier:*", "elasticloadbalancing:*", "cloudwatch:*",
                "importexport:*", "cloudformation:*", "redshift:*",
                "iam:*", "health:*", "config:*", "snowball:*",
                "trustedadvisor:*", "kms:*", "apigateway:*",
                "autoscaling:*", "directconnect:*",
                "execute-api:*", "sts:*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

Note that the SCP includes more than just the HIPAA Eligible Services.

Why include additional supporting services in a HIPAA SCP?

You can use any service in your account, but you can use only HIPAA Eligible Services to store and process PHI. Some services, such as IAM and AWS Key Management Service (KMS), can be used because these services do not directly store or process PHI, but they might still be needed for administrative and security purposes.

To those ends, I include the following supporting services in the SCP to help me with account administration and security:

  • Access controls – I include IAM to ensure that I can manage access to resources in the account. Though Organizations can limit whether a service is available, I still need the granularity of access control that IAM provides.
  • Encryption – I need a way to encrypt the data. The integration of AWS KMS with Amazon Redshift, Amazon RDS, and Amazon Elastic Block Store (Amazon EBS) helps with this security requirement.
  • Auditing – I also need to be able to demonstrate controls in practice, track changes, and discover any malicious activity in my account. You will note that AWS CloudTrail is not included in the SCP, which prohibits any mutating actions against CloudTrail from users within the account. However, when setting up the account, CloudTrail was set up to send logs to a logging account as recommended in AWS Multiple Account Security Strategy. The logs do not reside in the account, and no one has privileges to change the trail including root or administrators, which helps ensure the protection of the API logging of the account. This highlights how SCPs can be used to secure services in an account.
  • Automation – Automation can help me with my security controls as shown in How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series; therefore, I include AWS CloudFormation as a way to ensure that applications deployed in the account adhere to my security and compliance policies. Auto Scaling is also an important service to include to help me scale to meet demand and control cost.
  • Monitoring and support – The remaining services in the SCP such as Amazon CloudWatch are needed to make sure that I can monitor the environment and have visibility into the health of the workloads and applications in my AWS account, helping me maintain operational control. AWS Trusted Advisor is a service that helps to make sure that my cloud environment is well architected.

Now that I have created my JSON document with the services that I will include and explained in detail why I include them, I can create my SCP.

2.  Create an SCP with a JSON document

I will now create the SCP via the CLI with the aws organizations create-policy command. I use the required name and type parameters to name the SCP and to specify that I am creating an SCP. I then provide a brief description of the SCP and specify the location of the JSON document I created in Step 1.

aws organizations create-policy --name hipaa-example-policy --type SERVICE_CONTROL_POLICY --description "All HIPAA eligible services plus supporting AWS Services." --content file://./HIPAAExample.json

Output

{
    "policy": {
        "policySummary": {
            "type": "SERVICE_CONTROL_POLICY",
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "name": "hipaa-example-policy",
            "awsManaged": false,
            "id": "p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services."
        }
    }
}

I take note of the policy-id because I need it to attach the SCP to my OU in Step 4. Note: Throughout this post, fictitious placeholder values are shown for the purposes of demonstrating this post’s solution.

3.  Create an OU for the HIPAA account, and move the account into the OU

Grouping accounts by function will make it easier to manage the organization and apply policies across multiple accounts. In this step, I create an OU for the HIPAA account and move the target account into the OU. To create an OU, I need to know the ID for the parent object under which I will be placing the OU. In this case, I will place it under the root and need the ID for the root. To get the root ID, I run the list-roots command.

aws organizations list-roots

Output

{
    "Roots": [
        {
            "PolicyTypes": [
                {
                    "Status": "ENABLED", 
                    "Type": "SERVICE_CONTROL_POLICY"
                }
            ], 
            "Id": "r-rth4", 
            "Arn": "arn:aws:organizations::012345678900:root/o-p9bx61i0h1/r-rth4", 
            "Name": "Root"
        }
    ]
}

With the root ID, I can proceed to create the OU under the root.

aws organizations create-organizational-unit --parent-id r-rth4 --name HIPAA-Accounts

Output

{
    "OrganizationalUnit": {
       "Id": "ou-rth4-ezo5wonz", 
        "Arn": "arn:aws:organizations::012345678900:ou/o-p9bx61i0h1/ou-rth4-ezo5wonz", 
        "Name": "HIPAA-Accounts"
    }
}

I take note of the OU ID in the output because I need it in the next command to move my target account. I will also need the root ID in the command because I am moving the target account from the root into the OU.

aws organizations move-account --account-id 098765432110 --source-parent-id r-rth4 --destination-parent-id ou-rth4-ezo5wonz

No Output

4.  Attach the SCP to the HIPAA OU

Even though you may have enabled All Features in your organization, you still need to enable SCPs at the root level of the organization to attach SCPs to objects. To do this in my case, I will run the enable-policy-type command and provide the root ID.

aws organizations enable-policy-type --root-id r-rth4 --policy-type SERVICE_CONTROL_POLICY

Output

{
    "Root": {
        "PolicyTypes": [], 
        "Id": "r-rth4", 
        "Arn": "arn:aws:organizations::012345678900:root/o-p9bx61i0h1/r-rth4", 
        "Name": "Root"
    }
}

Now, I will attach the SCP to the OU by using the aws organizations attach-policy command. I must include the target-id, which is the OU ID noted in the previous step, and the policy-id from the output of the command in Step 2.

aws organizations attach-policy --target-id ou-rth4-ezo5wonz --policy-id p-6ldl8bll

No Output

5.  Verify which SCPs are attached to the HIPAA OU

I will now verify which SCPs are attached to the OU by using the aws organizations list-policies-for-target command. I must provide the OU ID with the target-id parameter and then filter for the SERVICE_CONTROL_POLICY type.

aws organizations list-policies-for-target --target-id ou-rth4-ezo5wonz --filter SERVICE_CONTROL_POLICY

Output

{
    "policies": [
        {
            "awsManaged": false,
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "id": "p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services.",
            "name": "hipaa-example-policy",
            "type": "SERVICE_CONTROL_POLICY"
        },
        {
            "awsManaged": true,
            "arn": "arn:aws:organizations::aws:policy/SERVICE_CONTROL_POLICY/p-FullAWSAccess",
            "id": "p-FullAWSAccess",
            "description": "Allows access to every operation",
            "name": "FullAWSAccess",
            "type": "SERVICE_CONTROL_POLICY"
        }
    ]
}

As the output shows, two SCPs are attached to this OU. I want to detach the FullAWSAccess SCP so that the HIPAA SCP is properly in effect. The FullAWSAccess SCP is an Allow SCP that allows all AWS services. If I were to leave the default FullAWSAccess SCP in place, it would grant access to services I do not want to allow in my account. Detaching the FullAWSAccess SCP means that only the services I allow in the hipaa-example-policy are allowed in my account. Note that if I were to create a Deny SCP, it would take precedence over an Allow SCP.
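
For example, a hypothetical deny-based SCP (not part of this walkthrough) that blocks Amazon WorkSpaces regardless of any Allow SCPs might look like this sketch:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "workspaces:*",
            "Resource": "*"
        }
    ]
}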

6.  Detach the default FullAWSAccess SCP from the OU

Before detaching the default FullAWSAccess SCP, I run the aws workspaces describe-workspaces call from the Amazon WorkSpaces API. I am currently not running any WorkSpaces, so the output shows an empty list. However, I will test this again after I detach the FullAWSAccess SCP from the OU and am left with only the HIPAA SCP attached.

aws workspaces describe-workspaces

Output

{
    "Workspaces": []
}

In order to detach the FullAWSAccess SCP, I must run the aws organizations detach-policy command, providing it the policy-id and target-id of the OU.

aws organizations detach-policy --policy-id p-FullAWSAccess --target-id ou-rth4-ezo5wonz

No Output

If I run the list-policies-for-target command again, I see that only one SCP, the one that allows HIPAA Eligible Services, is attached to the OU, as shown in the following output.

aws organizations list-policies-for-target --target-id ou-rth4-ezo5wonz --filter SERVICE_CONTROL_POLICY

Output

{
    "policies": [
        {
            "name": "hipaa-example-policy",
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services.",
            "awsManaged": false,
            "id": "p-6ldl8bll",
            "type": "SERVICE_CONTROL_POLICY"
        }
    ]
}

Now I can test and verify the enforcement of this SCP.

7.  Verify SCP enforcement

Previously, the administrator of the account had full access to all AWS services, including Amazon WorkSpaces, and the administrator’s IAM policy allows all actions for Amazon WorkSpaces. However, after I apply the HIPAA SCP to the account, the effect of that IAM policy changes to deny all actions for Amazon WorkSpaces because it is not an allowed service.

The following screenshot of the IAM policy simulator shows which permissions are set for the administrator after I apply the HIPAA SCP. Note that the policy simulator shows the deny as coming from Organizations. Because the policy simulator is aware of the SCPs attached to an account, it is a good tool to use when troubleshooting or validating an SCP.

If I run the aws workspaces describe-workspaces call again as I did in Step 6, this time I receive an AccessDeniedException error, which validates that the HIPAA SCP is working because Amazon WorkSpaces is not an allowed service in the SCP.

aws workspaces describe-workspaces

Output

An error occurred (AccessDeniedException) when calling the DescribeWorkspaces operation: 
User: arn:aws:iam::098765432110:user/admin is not authorized to perform: workspaces:DescribeWorkspaces 
on resource: arn:aws:workspaces:us-east-1:098765432110:workspace/*

This completes the process of creating and applying an SCP to my account.

Summary

In this blog post, I have shown how to create an SCP and attach it to an OU to restrict an account to HIPAA Eligible Services and additional supporting services. I also showed how to create an OU, move an account into the OU, and then validate the SCP attached to the OU. For more information, see AWS Cloud Computing in Healthcare.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues with implementing this solution, please start a new thread on the IAM forum.

– Aaron

In Case You Missed These: AWS Security Blog Posts from January, February, and March

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-january-february-and-march/

Image of lock and key

In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.

March

March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53
Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption
The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.

March 21: Updated CJIS Workbook Now Available by Request
The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).

March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions
Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.

March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network
Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.


February

February 27: Now Generally Available – AWS Organizations: Policy-Based Management for Multiple AWS Accounts
Today, AWS Organizations moves from Preview to General Availability. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of organizational units (OUs). You can assign each account to an OU, define policies, and then apply those policies to an entire hierarchy, specific OUs, or specific accounts. You can invite existing AWS accounts to join your organization, and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API. To read the full AWS Blog post about today’s launch, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.

February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3
Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and multiple means of error checking were used to ensure a smooth transition, including client integration tests, catching potential interoperability conflicts, and identifying memory leaks through fuzz testing.

February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.

February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules
AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”

February 13: AWS Announces CISPE Membership and Compliance with First-Ever Code of Conduct for Data Protection in the Cloud
I have two exciting announcements today, both showing AWS’s continued commitment to ensuring that customers can comply with EU Data Protection requirements when using our services.

February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials
You can now enable multi-factor authentication (MFA) for users who access AWS services such as Amazon WorkSpaces and Amazon QuickSight with their on-premises credentials by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which has been provided by your virtual or hardware MFA solution. These factors together provide additional security by preventing access to AWS services unless users supply a valid MFA code.

February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory
Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.

February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.

February 8: How to Remediate Amazon Inspector Security Findings Automatically
The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings but also to automate addressing those findings. Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings and then implements the appropriate remediation based on the type of issue.

February 6: How to Simplify Security Assessment Setup Using Amazon EC2 Systems Manager and Amazon Inspector
In a July 2016 AWS Blog post, I discussed how to integrate Amazon Inspector with third-party ticketing systems by using Amazon Simple Notification Service (SNS) and AWS Lambda. This AWS Security Blog post continues in the same vein, describing how to use Amazon Inspector to automate various aspects of security management. In this post, I show you how to install the Amazon Inspector agent automatically through the Amazon EC2 Systems Manager when a new Amazon EC2 instance is launched. In a subsequent post, I will show you how to update EC2 instances automatically that run Linux when Amazon Inspector discovers a missing security patch.


January

January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption
Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE). Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.

January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events
Amazon S3 Access Control Lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call the PutObject with the optional Acl parameter set to public-read, therefore uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.

January 26: Now Available: Amazon Cloud Directory—A Cloud-Native Directory for Hierarchical Data
Today we are launching Amazon Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

January 24: New SOC 2 Report Available: Confidentiality
As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.

January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations
Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing you with the tools you need to scale your security and compliance capabilities on AWS. The following breakdown of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.

January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift
In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and degrade the experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.

January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016
The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.

January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services
Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which enables US government agencies and their service providers to use these services to process the government’s most sensitive unclassified data, including Personally Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.

January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

January 3: The Most Viewed AWS Security Blog Posts in 2016
The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups
You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.
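
As one sketch of the detection piece (the rule name and pattern file name below are hypothetical placeholders), a CloudWatch Events rule can match CloudTrail-recorded security group changes in near-real time:

{
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": [
            "AuthorizeSecurityGroupIngress",
            "AuthorizeSecurityGroupEgress",
            "RevokeSecurityGroupIngress",
            "RevokeSecurityGroupEgress"
        ]
    }
}

server:~$ aws events put-rule --name sg-change-watch --event-pattern file://pattern.json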

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.

– Craig

AWS Week in Review – March 6, 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-6-2017/

This edition includes all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for!

Monday, March 6

Tuesday, March 7

Wednesday, March 8

Thursday, March 9

Friday, March 10

Saturday, March 11

Sunday, March 12

Jeff;
AWS Week in Review – February 27, 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-february-27-2016/

This edition includes all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for. Going forward, I hope to bring back the other sections as soon as I get my tooling and automation into better shape.

Monday, February 27

Tuesday, February 28

Wednesday, March 1

Thursday, March 2

Friday, March 3

Saturday, March 4

Sunday, March 5

Jeff;
Now Generally Available – AWS Organizations: Policy-Based Management for Multiple AWS Accounts

Post Syndicated from Caitlyn Shim original https://aws.amazon.com/blogs/security/now-generally-available-aws-organizations-policy-based-management-for-multiple-aws-accounts/

Over the years, we have found that many of our customers are managing multiple AWS accounts. Instead of dealing with a multitude of per-team, per-division, or per-application accounts, our customers have asked for a way to define access control policies that can be easily applied to all, some, or individual accounts. In many cases, these customers are also interested in additional billing and cost management options, and would like to be able to take advantage of AWS pricing benefits such as volume discounts and Reserved Instances.

Today, AWS Organizations moves from Preview to General Availability. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of organizational units (OUs). You can assign each account to an OU, define policies, and then apply those policies to an entire hierarchy, specific OUs, or specific accounts. You can invite existing AWS accounts to join your organization and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API.

To read the full AWS Blog post about today’s launch, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.

– Caitlyn

AWS Organizations – Policy-Based Management for Multiple AWS Accounts

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-organizations-policy-based-management-for-multiple-aws-accounts/

Over the years I have found that many of our customers are managing multiple AWS accounts. This situation can arise for several reasons. Sometimes they adopt AWS incrementally and organically, with individual teams and divisions making the move to cloud computing on a decentralized basis. Other companies grow through mergers and acquisitions and take on responsibility for existing accounts. Still others routinely create multiple accounts in order to meet strict guidelines for compliance or to create a very strong isolation barrier between applications, sometimes going so far as to use distinct accounts for development, testing, and production.

As these accounts proliferate, our customers find that they would like to manage them in a scalable fashion. Instead of dealing with a multitude of per-team, per-division, or per-application accounts, they have asked for a way to define access control policies that can be easily applied to all, some, or individual accounts. In many cases, these customers are also interested in additional billing and cost management, and would like to be able to control how AWS pricing benefits such as volume discounts and Reserved Instances are applied to their accounts.

AWS Organizations Emerges from Preview
To support this increasingly important use case, we are moving AWS Organizations from Preview to General Availability today. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of Organizational Units (OUs), assign each account to an OU, define policies, and then apply them to the entire hierarchy, to select OUs, or to specific accounts. You can invite existing AWS accounts to join your organization and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API.

Here are some important terms and concepts that will help you to understand Organizations (this assumes that you are the all-powerful, overall administrator of your organization’s AWS accounts, and that you are responsible for the Master account):

An Organization is a consolidated set of AWS accounts that you manage. Newly-created Organizations offer the ability to implement sophisticated, account-level controls such as Service Control Policies. This allows Organization administrators to manage lists of allowed and blocked AWS API functions and resources that place guard rails on individual accounts. For example, you could give your advanced R&D team access to a wide range of AWS services, and then be a bit more cautious with your mainstream development and test accounts. Or, on the production side, you could allow access only to AWS services that are eligible for HIPAA compliance.
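
To make that concrete, here is a minimal SCP sketch (the service list is illustrative only, not a vetted compliance set) that allows just three services and implicitly blocks everything else wherever it is attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*", "rds:*"],
            "Resource": "*"
        }
    ]
}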

Some of our existing customers use a feature of AWS called Consolidated Billing. This allows them to select a Payer Account which rolls up account activity from multiple AWS Accounts into a single invoice and provides a centralized way of tracking costs. With this launch, current Consolidated Billing customers now have an Organization that provides all the capabilities of Consolidated Billing, but by default does not have the new features (like Service Control Policies) we’re making available today. These customers can easily enable the full features of AWS Organizations. This is accomplished by first enabling the use of all AWS Organization features from the Organization’s master account and then having each member account authorize this change to the Organization. Finally, we will continue to support creating new Organizations that support only the Consolidated Billing capabilities. Customers that wish to only use the centralized billing features can continue to do so, without allowing the master account administrators to enforce the advanced policy controls on member accounts in the Organization.

An AWS account is a container for AWS resources.

The Master account is the management hub for the Organization and is also the payer account for all of the AWS accounts in the Organization. The Master account can invite existing accounts to join the Organization, and can also create new accounts.

Member accounts are the non-Master accounts in the Organization.

An Organizational Unit (OU) is a container for a set of AWS accounts. OUs can be arranged into a hierarchy that can be up to five levels deep. The top of the hierarchy of OUs is also known as the Administrative Root.

A Service Control Policy (SCP) is a set of controls that the Organization’s Master account can apply to the Organization, selected OUs, or to selected accounts. When applied to an OU, the SCP applies to the OU and to any other OUs beneath it in the hierarchy. The SCP or SCPs in effect for a member account specify the permissions that are granted to the root user for the account. Within the account, IAM users and roles can be used as usual. However, regardless of how permissive the user or the role might be, the effective set of permissions will never extend beyond what is defined in the SCP. You can use this to exercise fine-grained control over access to AWS services and API functions at the account level.

An Invitation is used to ask an AWS account to join an Organization. It must be accepted within 15 days, and can be sent to an email address or an account ID. Up to 20 Invitations can be outstanding at any given time. The invitation-based model allows you to start from a Master account and then bring existing accounts into the fold. When an Invitation is accepted, the account joins the Organization and all applicable policies become effective. Once the account has joined the Organization, you can move it to the proper OU.

AWS Organizations is appropriate when you want to create strong isolation boundaries between the AWS accounts that you manage. However, keep in mind that AWS resources (EC2 instances, S3 buckets, and so forth) exist within a particular AWS account and cannot be moved from one account to another. You do have access to many different cross-account AWS features including VPC peering, AMI sharing, EBS snapshot sharing, RDS snapshot sharing, cross-account email sending, delegated access via IAM roles, cross-account S3 bucket permissions, and cross-account access in the AWS Management Console.

Like consolidated billing, AWS Organizations also provides several benefits when it comes to the use of EC2 and RDS Reserved Instances. For billing purposes, all of the accounts in the Organization are treated as if they are one account and can receive the hourly cost benefit of an RI purchased by any other account in the same Organization (in order for this benefit to be applied as expected, the Availability Zone and other attributes of the RI must match the attributes of the EC2 or RDS instance).

Creating an Organization
Let’s create an Organization from the Console, create some Organizational Units, and then create some accounts. I start by clicking on Create organization:

Then I choose ENABLE ALL FEATURES and click on Create organization:

My Organization is ready in seconds:

I can create a new account by clicking on Add account, and then selecting Create account:

Then I supply the details (the IAM role is created in the new account and grants enough permissions for the account to be customized after creation):

Here’s what the console looks like after I have created Dev, Test, and Prod accounts:

At this point all of the accounts are at the top of the hierarchy:

In order to add some structure, I click on Organize accounts, select Create organizational unit (OU), and enter a name:

I do the same for a second OU:

Then I select the Prod account, click on Move accounts, and choose the Operations OU:

Next, I move the Dev and Test accounts into the Development OU:

At this point I have four accounts (my original one plus the three that I just created) and two OUs. The next step is to create one or more Service Control Policies by clicking on Policies and selecting Create policy. I can use the Policy Generator or I can copy an existing SCP and then customize it. I’ll use the Policy Generator. I give my policy a name and make it an Allow policy:

Then I use the Policy Generator to construct a policy that allows full access to EC2 and S3, and the ability to run (invoke) Lambda functions:

Remember that this policy defines the full set of allowable actions within the account. In order to allow IAM users within the account to use these actions, I would still need to create suitable IAM policies and attach them to the users (all within the member account). I click on Create policy and my policy is ready:

Then I create a second policy for development and testing. This one also allows access to AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline:

Let’s recap. I have created my accounts and placed them into OUs. I have created a policy for the OUs. Now I need to enable the use of policies, and attach the policy to the OUs. To enable the use of policies, I click on Organize accounts and select Home (this is not the same as the root because Organizations was designed to support multiple, independent hierarchies), and then click on the checkbox in the Root OU. Then I look to the right, expand the Details section, and click on Enable:

Ok, now I can put all of the pieces together! I click on the Root OU to descend into the hierarchy, and then click on the checkbox in the Operations OU. Then I expand the Control Policies on the right and click on Attach policy:

Then I locate the OperationsPolicy and click on Attach:

Finally, I remove the FullAWSAccess policy:

I can also attach the DevTestPolicy to the Development OU.

All of the operations that I described above could have been initiated from the AWS Command Line Interface (CLI) or by making calls to functions such as CreateOrganization, CreateAccount, CreateOrganizationalUnit, MoveAccount, CreatePolicy, AttachPolicy, and InviteAccountToOrganization. To see the CLI in action, read Announcing AWS Organizations: Centrally Manage Multiple AWS Accounts.
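
As a sketch of the CLI equivalent of the walkthrough above (the root, OU, account, and policy IDs here are hypothetical placeholders), the same structure can be assembled with a handful of calls:

server:~$ aws organizations create-organizational-unit --parent-id r-exam --name Operations
server:~$ aws organizations move-account --account-id 000000000003 \
    --source-parent-id r-exam --destination-parent-id ou-exam-12345678
server:~$ aws organizations attach-policy --policy-id p-examplepol --target-id ou-exam-12345678
server:~$ aws organizations detach-policy --policy-id p-FullAWSAccess --target-id ou-exam-12345678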

Best Practices for Use of AWS Organizations
Before I wrap up, I would like to share some best practices for the use of AWS Organizations:

Master Account – We recommend that you keep the Master Account free of any operational AWS resources (with one exception). In addition to making it easier for you to make high-quality control decisions, this practice will make it easier for you to understand the charges on your AWS bill.

CloudTrail – Use AWS CloudTrail (this is the exception) in the Master Account to centrally track all AWS usage in the Member accounts.

Least Privilege – When setting up policies for your OUs, assign as few privileges as possible.

Organizational Units – Assign policies to OUs rather than to accounts. This will allow you to maintain a better mapping between your organizational structure and the level of AWS access needed.

Testing – Test new and modified policies on a single account before scaling up.

Automation – Use the APIs and an AWS CloudFormation template to ensure that every newly created account is configured to your liking. The template can create IAM users, roles, and policies. It can also set up logging, create and configure VPCs, and so forth. A minimal sketch of such a template follows this list.
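
For example, a baseline template along these lines (the role name, trusted account ID, and managed policy are hypothetical placeholders for whatever your standard account setup requires) could create a cross-account audit role in every new account:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Hypothetical baseline for newly created member accounts.",
    "Resources": {
        "AuditRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "RoleName": "organization-audit",
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": { "AWS": "arn:aws:iam::000000000001:root" },
                            "Action": "sts:AssumeRole"
                        }
                    ]
                },
                "ManagedPolicyArns": ["arn:aws:iam::aws:policy/SecurityAudit"]
            }
        }
    }
}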

Learning More
Here are some resources that will help you to get started with AWS Organizations:

Things to Know
AWS Organizations is available today in all AWS regions except China (Beijing) and AWS GovCloud (US) and is available to you at no charge (to be a bit more precise, the service endpoint is located in US East (Northern Virginia) and the SCPs apply across all relevant regions). All of the accounts must be from the same seller; you cannot mix AWS accounts and accounts from AISPL (the local legal Indian entity that acts as a reseller for AWS services in India) in the same Organization.

We have big plans for Organizations, and are currently thinking about adding support for multiple payers, control over allocation of Reserved Instance discounts, multiple hierarchies, and other control policies. As always, your feedback and suggestions are welcome.

Jeff;

Now Available: Amazon Cloud Directory—A Cloud-Native Directory for Hierarchical Data

Post Syndicated from Mahendra Chheda original https://aws.amazon.com/blogs/security/now-available-amazon-cloud-directory-fast-efficient-storage-of-hierarchical-data/

Today we are launching Amazon Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

Cloud Directory is a building block that already powers other AWS services including Amazon Cognito, AWS Organizations, and Amazon QuickSight Standard Edition. Because it plays such a crucial role within AWS, Cloud Directory was designed with scalability, high availability, and security in mind (data is encrypted at rest and while in transit).

Cloud Directory is a managed service: you don’t need to think about installing or patching software, managing servers, or scaling any storage or compute infrastructure. You simply define the schemas, create a directory, and then populate your directory by making calls to the Cloud Directory API. This API is designed for speed and scale, with efficient, batch-based read and write functions.

To learn more about Cloud Directory, see the full blog post on the AWS Blog.

– Mahendra

Amazon Cloud Directory – A Cloud-Native Directory for Hierarchical Data

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-cloud-directory-a-cloud-native-directory-for-hierarchical-data/

Our customers have traditionally used directories (typically Active Directory Lightweight Directory Service or LDAP-based) to manage hierarchically organized data. Device registries, course catalogs, network configurations, and user directories are often represented as hierarchies, sometimes with multiple types of relationships between objects in the same collection. For example, a user directory could have one hierarchy based on physical location (country, state, city, building, floor, and office), a second one based on projects and billing codes, and a third based on the management chain. However, traditional directory technologies do not support the use of multiple relationships in a single directory; you’d have to create and maintain additional directories if you needed to do this.

Scale is another important challenge. The fundamental operations on a hierarchy involve locating the parent or the child object of a given object. Given that hierarchies can be used to represent large, nested collections of information, these fundamental operations must be as efficient as possible, regardless of how many objects there are or how deeply they are nested. Traditional directories can be difficult to scale, and the pain only grows if you are using two or more in order to represent multiple hierarchies.

New Amazon Cloud Directory
Today we are launching Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data as described above. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

Cloud Directory is a building block that already powers other AWS services including Amazon Cognito and AWS Organizations. Because it plays such a crucial role within AWS, it was designed with scalability, high availability, and security in mind (data is encrypted at rest and while in transit).

Amazon Cloud Directory is a managed service; you don’t need to think about installing or patching software, managing servers, or scaling any storage or compute infrastructure. You simply define the schemas, create a directory, and then populate your directory by making calls to the Cloud Directory API. This API is designed for speed and for scale, with efficient, batch-based read and write functions.

The long-lasting nature of a directory, combined with the scale and the diversity of use cases that it must support over its lifetime, brings another challenge to light. Experience has shown that static schemas lack the flexibility to adapt to the changes that arise with scale and with new use cases. In order to address this challenge and to make the directory future-proof, Cloud Directory is built around a model that explicitly makes room for change. You simply extend your existing schemas by adding new facets. This is a safe operation that leaves existing data intact so that existing applications will continue to work as expected. Combining schemas and facets allows you to represent multiple hierarchies within the same directory. For example, your first hierarchy could mirror your org chart. Later, you could add an additional facet to track some additional properties for each employee, perhaps a second phone number or a social network handle. After that, you could create a geographically oriented hierarchy within the same data: countries, states, buildings, floors, offices, and employees.

As I mentioned, other parts of AWS already use Amazon Cloud Directory. Cognito User Pools use Cloud Directory to offer application-specific user directories with support for user sign-up, sign-in, and multi-factor authentication. With Cognito User Pools, you can easily and securely add sign-up and sign-in functionality to your mobile and web apps with a fully-managed service that scales to support hundreds of millions of users. Similarly, AWS Organizations uses Cloud Directory to support creation of groups of related AWS accounts and makes good use of multiple hierarchies to enforce a wide range of policies.

Before we dive in, let’s take a quick look at some important Amazon Cloud Directory concepts:

Directories are named, and must have at least one schema. Directories store objects, relationships between objects, schemas, and policies.

Facets model the data by defining required and allowable attributes. Each facet provides an independent scope for attribute names; this allows multiple applications that share a directory to safely and independently extend a given schema without fear of collision or confusion.

Schemas define the “shape” of data stored in a directory by making reference to one or more facets. Each directory can have one or more schemas. Schemas exist in one of three phases (Development, Published, or Applied). Development schemas can be modified; Published schemas are immutable. Amazon Cloud Directory includes a collection of predefined schemas for people, organizations, and devices. The combination of schemas and facets leaves the door open to significant additions to the initial data model and subject area over time, while ensuring that existing applications will still work as expected.

Attributes are the actual stored data. Each attribute is named and typed; data types include Boolean, binary (blob), date/time, number, and string. Attributes can be mandatory or optional, and immutable or editable. The definition of an attribute can specify a rule that is used to validate the length and/or content of an attribute before it is stored or updated. Binary and string objects can be length-checked against minimum and maximum lengths. A rule can indicate that a string must have a value chosen from a list, or that a number is within a given range.

Objects are stored in directories, have attributes, and are defined by a schema. Each object can have multiple children and multiple parents, as specified by the schema. You can use the multiple-parent feature to create multiple, independent hierarchies within a single directory (sometimes known as a forest of trees).

Policies can be specified at any level of the hierarchy, and are inherited by child objects. Cloud Directory does not interpret or assign any meaning to policies, leaving this up to the application. Policies can be used to specify and control access permissions, user rights, device characteristics, and so forth.

Creating a Directory
Let’s create a directory! I start by opening up the AWS Directory Service Console and clicking on the first Create directory button:

I enter a name for my directory (users), choose the person schema (which happens to have two facets; more about this in a moment), and click on Next:

The predefined (AWS) schema will be copied to my directory. I give it a name and a version, and click on Publish:

Then I review the configuration and click on Launch:

The directory is created, and I can then write code to add objects to it.
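
As a sketch of that code in CLI form (the directory ARN, schema ARN, facet name, and link name below are hypothetical placeholders whose real values depend on the schema you applied), creating an object under the directory root might look like this:

server:~$ aws clouddirectory create-object \
    --directory-arn arn:aws:clouddirectory:us-east-1:000000000001:directory/d-exampledir \
    --schema-facets SchemaArn=arn:aws:clouddirectory:us-east-1:000000000001:directory/d-exampledir/schema/person/1,FacetName=Person \
    --parent-reference Selector=/ \
    --link-name jane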

Pricing and Availability
Cloud Directory is available now in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions and you can start using it today.

Pricing is based on three factors: the amount of data that you store, the number of reads, and the number of writes (these prices are for US East (Northern Virginia)):

  • Storage – $0.25 / GB / month
  • Reads – $0.0040 for every 10,000 reads
  • Writes – $0.0043 for every 1,000 writes

In the Works
We have some big plans for Cloud Directory!

While the priorities can change due to customer feedback, we are working on cross-region replication, AWS Lambda integration, and the ability to create new directories via AWS CloudFormation.

Jeff;

The Most Viewed AWS Security Blog Posts in 2016

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/the-most-viewed-aws-security-blog-posts-in-2016/

The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

  1. How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53
  2. How to Control Access to Your Amazon Elasticsearch Service Domain
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. Announcing AWS Organizations: Centrally Manage Multiple AWS Accounts
  5. How to Configure Rate-Based Blacklisting with AWS WAF and AWS Lambda
  6. How to Use AWS WAF to Block IP Addresses That Generate Bad Requests
  7. How to Record SSH Sessions Established Through a Bastion Host
  8. How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker
  9. Announcing Industry Best Practices for Securing AWS Resources
  10. How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Microsoft Active Directory

The following 10 posts published since the blog’s inception in April 2013 were the most viewed AWS Security Blog posts in 2016.

  1. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  2. Securely Connect to Linux Instances Running in a Private Amazon VPC
  3. A New and Standardized Way to Manage Credentials in the AWS SDKs
  4. Where’s My Secret Access Key?
  5. Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0
  6. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  7. How to Connect Your On-Premises Active Directory to AWS Using AD Connector
  8. Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket
  9. How to Help Prepare for DDoS Attacks by Reducing Your Attack Surface
  10. How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53

Let us know in the comments section below if there is a specific security or compliance topic you would like us to cover on the Security Blog in 2017.

– Craig

Announcing AWS Organizations: Centrally Manage Multiple AWS Accounts

Post Syndicated from Caitlyn Shim original https://aws.amazon.com/blogs/security/announcing-aws-organizations-centrally-manage-multiple-aws-accounts/

Today, AWS launched AWS Organizations: a new way for you to centrally manage all the AWS accounts your organization owns. Now you can arrange your AWS accounts into groups called organizational units (OUs) and apply policies to OUs or directly to accounts. For example, you can organize your accounts by application, environment, team, or any other grouping that makes sense for your business.

Organizations removes the need to manage security policies account by account. Before Organizations, if you had a set of AWS accounts, you had to ensure that users in those AWS accounts had the right level of access to AWS services. You had to either configure security settings on each account individually or write a custom script to iterate through each account. However, any user with administrative permissions in those AWS accounts could have bypassed the defined permissions. Organizations includes the launch of service control policies (SCPs), which give you the ability to configure one policy and have it apply to your entire organization, an OU, or an individual account. In this blog post, I walk through an example of how to use Organizations.

Create an organization

Let’s say you own software regulated by the healthcare or financial industry, and you want to deploy and run your software on AWS. Compliance frameworks such as ISO and SOC limit software that processes, stores, or transmits protected health information to using only designated AWS services. With Organizations, you can limit your software to access only the AWS services allowed by such regulations. SCPs can limit access to AWS services across all accounts in your organization such that the enforced policy cannot be overridden by any user in the AWS accounts on which the SCP is enforced.

When creating a new organization, you want to start in the account that will become the master account. The master account owns the organization and performs the administrative actions that the organization applies to its accounts. The master account can set up the SCPs, account groupings, and payment method of the other AWS accounts in the organization, called member accounts. As with Consolidated Billing, the master account is also the account that pays the bills for the AWS activity of the member accounts, so choose it wisely.

For all my examples in this post, I use the AWS CLI. Organizations is also available through the AWS Management Console and AWS SDK.

In the following case, I use the create-organization API to create a new organization in Full_Control mode:

server:~$ aws organizations create-organization --mode FULL_CONTROL

returns:

{
    "organization": {
        "availablePolicyTypes": [
            {
                "status": "ENABLED",
                "type": "service_control_policy"
            }
        ],
        "masterAccountId": "000000000001",
        "masterAccountArn": "arn:aws:organizations::000000000001:account/o-1234567890/000000000001",
        "mode": "FULL_CONTROL",
        "masterAccountEmail": "[email protected]",
        "id": "o-1234567890",
        "arn": "arn:aws:organizations::000000000001:organization/o-1234567890"
    }
}

In this example, I want to set security policies on the organization, so I create it in Full_Control mode. You can create organizations in two different modes: Full_Control or Billing. Full_Control has all of the benefits of Billing mode and adds the ability to apply service control policies.

Note: If you currently use Consolidated Billing, your Consolidated Billing family is migrated automatically to AWS Organizations in Billing mode, which is equivalent to Consolidated Billing. All account activity charges accrued by member accounts are billed to the master account.

Add members to the organization

To populate the organization you’ve just created, you can either create new AWS accounts in the organization or invite existing accounts.

Create new AWS accounts

Organizations gives you APIs to programmatically create new AWS accounts in your organization. All you have to specify is the email address and account name. The rest of the account information (address, etc.) is inherited from the master account.

To create a new account by using the AWS CLI, you call create-account with the email address to assign to the new account and the account name:

server:~$ aws organizations create-account --account-name DataIngestion --email [email protected]

Creating an AWS account is an asynchronous process, so create-account returns a createAccountStatus response:

{
    "createAccountStatus": {
        "requestedTimestamp": 1479783743.401,
        "state": "IN_PROGRESS",
        "id": "car-12345555555555555555555555556789",
        "accountName": "DataIngestion"
    }
}

You can use describe-create-account-status with the createAccountStatus id to see when the AWS account is ready to use. Accounts are generally ready within minutes.
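
For example, polling with the id returned above:

server:~$ aws organizations describe-create-account-status --create-account-request-id car-12345555555555555555555555556789

The state field moves from IN_PROGRESS to SUCCEEDED (or FAILED) as the request completes.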

Invite existing AWS accounts

You can also invite existing AWS accounts to your organization. To invite an AWS account to your organization, you need to know either the AWS account ID or the email address associated with the account.

The command through the AWS CLI is invite-account-to-organization:

server:~$ aws organizations invite-account-to-organization --target [email protected],type=EMAIL

That command returns a handshake object, which contains all information about the request sent. In this instance, this was a request sent to the account at [email protected] to join the organization with the ID o-1234567890:

{
    "handshake": {
        "id": "h-77888888888888888888888888888777",
        "state": "OPEN",
        "resources": [
            {
                "type": "ORGANIZATION",
                "resources": [
                    {
                        "type": "MASTER_EMAIL",
                        "value": "[email protected]"
                    },
                    {
                        "type": "MASTER_NAME",
                        "value": "Example Corp"
                    },
                    {
                        "type": "ORGANIZATION_MODE",
                        "value": "FULL"
                    }
                ],
                "value": "o-1234567890"
            },
            {
                "type": "EMAIL",
                "value": "[email protected]"
            }
        ],
        "parties": [
            {
                "type": "EMAIL",
                "id": "[email protected]"
            },
            {
                "type": "ORGANIZATION",
                "id": "1234567890"
            }
        ],
        "action": "invite",
        "requestedTimestamp": 1479784006.008,
        "expirationTimestamp": 1481080006.008,
        "arn": "arn:aws:organizations::000000000001:handshake/o-1234567890/invite/h-77888888888888888888888888888777"
    }
}

The preceding command sends an email to the account owner to notify them that you invited their account to join your organization. If the owner of the invited AWS account agrees to the terms of joining the organization, they can accept the request through a link in the email or by calling accept-handshake.
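
From the invited account, the CLI form of that acceptance (using the handshake id returned above) is:

server:~$ aws organizations accept-handshake --handshake-id h-77888888888888888888888888888777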

Viewing your organization

At this point, the organization is populated with two member accounts: DataIngestion and DataProcessing. The master account is [email protected]. By default, accounts are placed directly under the root of your organization. The root is the base of your organizational structure—all OUs or accounts branch from it, as shown in the following illustration.

You can view all the accounts in the organization with the CLI command, list-accounts:

server:~$ aws organizations list-accounts

The list-accounts command returns a list of accounts and the details of how they joined the organization. Note that the master account is listed along with the member accounts:

{
    "accounts": [
        {
           "status": "ACTIVE",
           "name": "DataIngestion",
           "joinedMethod": "CREATED",
           "joinedTimestamp": 1479784102.074,
           "id": "200000000001",
           "arn": "arn:aws:organizations::000000000001:account/o-1234567890/200000000001"
        },
        {
           "status": "ACTIVE",
           "name": " DataProcessing ",
           "joinedMethod": "INVITED",
           "joinedTimestamp": 1479784102.074,
           "id": "200000000002",
           "arn": "arn:aws:organizations::000000000001:account/o-1234567890/200000000002"
        },
        {
           "status": "ACTIVE",
           "name": "Example Corp",
           "joinedMethod": "INVITED",
           "joinedTimestamp": 1479783034.579,
           "id": "000000000001",
           "arn": "arn:aws:organizations::000000000001:account/o-1234567890/000000000001"
        }
    ]
}

Service control policies

Service control policies (SCPs) let you apply controls on which AWS services and APIs are accessible to an AWS account. The SCPs apply to all entities within an account: the root user, AWS Identity and Access Management (IAM) users, and IAM roles. For this example, we’re going to set up an SCP to only allow access to seven AWS services.

Note that SCPs work in conjunction with IAM policies. Our SCP allows access to a set of services, but it is possible to set up an IAM policy to further restrict access for a user. AWS will enforce the most restrictive combination of the policies.

SCPs affect the master account differently than other accounts. To ensure you do not lock yourself out of your organization, SCPs applied to your organization do not apply to the master account. Only SCPs applied directly to the master account can modify the master account’s access.

Creating a policy

In this example, I want to make sure that only the given set of services is accessible in our AWS account, so I will create an Allow policy to grant access only to listed services. The following policy allows access to Amazon DynamoDB, Amazon RDS, Amazon EC2, Amazon S3, Amazon EMR, Amazon Glacier, and Elastic Load Balancing.

Save the following policy in a file named Regulation-Policy.json:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                 "dynamodb:*","rds:*","ec2:*","s3:*","elasticmapreduce:*",
                 "glacier:*","elasticloadbalancing:*"
             ],
             "Effect": "Allow",
             "Resource": "*"
        }
    ]
}

SCPs follow the same structure as IAM policies. This example is an Allow policy, but SCPs also offer the ability to write Deny policies, restricting access to specific services and APIs.
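
For example, a hypothetical Deny policy that blocks Amazon Glacier everywhere it is attached, while leaving all other access decisions to other policies, would look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "glacier:*",
            "Resource": "*"
        }
    ]
}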

To create the SCP through the CLI, store the Allow policy shown earlier in the file Regulation-Policy.json and call create-policy:

server:~$ aws organizations create-policy --name example-prod-policy --type service_control_policy --description "All services allowed for regulation policy." --content file://./Regulation-Policy.json

The create-policy command returns the policy just created. Save the policy id for later, when you want to attach the policy to an element in the organization:

{
    "policy": {
        "content": "{\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Action\": [\n\"dynamodb:*\",\"rds:*\",\"ec2:*\",\"s3:*\",\"elasticmapreduce:*\",\"glacier:*\",\"elasticloadbalancing:*\"\n],\n\"Effect\": \"Allow\",\n\"Resource\": \"*\"\n}\n]\n}\n",
        "policySummary": {
            "description": "All services allowed for regulation policy.",
            "type": "service_control_policy",
            "id": "p-yyy12346",
            "arn": "arn:aws:organizations::000000000001:policy/o-1234567890/service_control_policy/p-yyy12346",
            "name": "example-prod-policy"
        }
    }
}

Attaching a policy

After you create your policy, you can attach it to entities in your organization: the root, an OU, or an individual account. Accounts inherit all policies above them in the organizational tree. To keep this example simple, attach the Example Corp. Prod SCP (as defined in the example-prod-policy) to your entire organization by attaching the policy to the root. Because the root is the top element of an organization’s tree, policies attached to the root apply to every member account in the organization.

Using the CLI, you first need to call list-roots to get the root id. Once you have the root id, you can call attach-policy, passing in the example-prod-policy id (collected from the create-policy response) and the root id.

server:~$ aws organizations list-roots
{
    "roots": [
        {
            "policyTypes": [
                {
                    "status": "ENABLED",
                    "type": "service_control_policy"
                }
            ],
            "id": "r-4444",
            "arn": "arn:aws:organizations::000000000001:root/o-1234567890/r-4444",
             "name": "Root"
        }
    ]
}
server:~$ aws organizations attach-policy --target-id r-4444 --policy-id p-yyy12346

Let’s now verify the policies on the root. Calling list-policies-for-target on the root will display all policies attached to the root of the organization.

server:~$ aws organizations list-policies-for-target --target-id r-4444 --filter service_control_policy
 
{
    "policies": [
        {
            "awsManaged": false,
            "description": "All services allowed by regulation policy.",
            "type": "service_control_policy",
            "id": " p-yyy12346",
            "arn": "arn:aws:organizations::000000000001:policy/o-1234567890/service_control_policy/p-yyy12346",
            "name": "example-prod-policy"
        },
        {
            "awsManaged": true,
            "description": "Allows access to every operation",
            "type": "service_control_policy",
            "id": "p-FullAWSAccess",
            "arn": "arn:aws:organizations::aws:policy/service_control_policy/p-FullAWSAccess",
            "name": "FullAWSAccess"
        }
    ]
}

Two policies are returned from list-policies-for-target on the root: the example-prod-policy, and the FullAWSAccess policy. FullAWSAccess is an AWS managed policy that is applied to every element in the organization by default. When you create your organization, every root, OU, and account has the FullAWSAccess policy, which allows your organization to have access to everything in AWS (*:*).

If you want to limit your organization’s access to only the services listed in the Example Corp. Prod SCP, you must remove the FullAWSAccess policy. Otherwise, because multiple SCPs attached to the same element are combined as a union, Organizations grants access to everything allowed by either the FullAWSAccess policy or the Example Corp. Prod SCP, which is all AWS services. To remove the FullAWSAccess policy with the CLI:

aws organizations detach-policy --policy-id p-FullAWSAccess --target-id r-4444

And one final list to verify only the example-prod-policy is attached to the root:

aws organizations list-policies-for-target --target-id r-4444 --filter service_control_policy
 
{
    "policies": [
        {
            "awsManaged": false,
            "description": "All services allowed by regulation policy.",
            "type": "service_control_policy",
            "id": "p-yyy12346",
            "arn": "arn:aws:organizations:: 000000000001:policy/o-1234567890/service_control_policy/p-yyy12346",
            "name": "example-prod-policy"
        }
    ]
}

Now, you have an extra layer of protection, limiting all accounts in your organization to only those AWS services that are compliant with Example Corp’s regulations.

More advanced structure

If you want to break down your organization further, you can add OUs and apply policies. For example, some of your AWS accounts might not be used for protected health information and therefore have no need for such a restrictive SCP.

For example, you may have a development team that needs to use and experiment with different AWS services to do their job. In this case, you want to set the controls on these accounts to be lighter than the controls on the accounts that have access to regulated customer information. With Organizations, you can, for example, create an OU to represent the medical software you run in production, and a different OU to hold the accounts developers use. You can then apply different security restrictions to each OU.

The following illustration shows three developer accounts that are part of a Developer Research OU. The original two accounts that require regulation protections are in a Production OU. We also detached our Example Corp. Prod Policy from the root and instead attached it to the Production OU. Finally, any restrictions not related to regulations are kept at the root level by placing them in a new Company policy.

In the preceding illustration, the accounts in the Developer Research OU have the Company policy applied to them because their OU is still under the root with that policy. The accounts in the Production OU have both SCPs (Company policy and Example Corp. Prod Policy) applied to them. As is true with IAM policies, SCPs apply the more restrictive intersection of the two policies to the account. If the Company Policy SCP allowed Glacier, EMR, S3, EC2, RDS, DynamoDB, Amazon Kinesis, AWS Lambda, and Amazon Redshift, the policy application for accounts in the Production OU would be the intersection, as described next.

The accounts in the Production OU have both the Company policy SCP and the Example Corp. Prod Policy SCP applied to them. The intersection of the two policies applies, so these accounts have access to DynamoDB, Glacier, EMR, RDS, EC2, and S3. Because Redshift, Lambda, and Kinesis are not in the Example Corp. Prod Policy SCP, the accounts in the Production OU are not allowed to access them.

To summarize policy application: all elements (the root, OUs, and accounts) have FullAWSAccess (*:*) by default. If you have multiple SCPs on the same element, Organizations takes the union. When policies are inherited across multiple elements in the organizational tree, Organizations takes the intersection of the SCPs. And finally, the master account has unique protections on it; only SCPs attached directly to it will change its access.

Conclusion

AWS Organizations makes it easy to manage multiple AWS accounts from a single master account. You can use Organizations to group accounts into organizational units and manage your accounts by application, environment, team, or any other grouping that makes sense for your business. SCPs give you the ability to apply policies to multiple accounts at once.

To learn more, see the AWS Organizations home page and sign up for the AWS Organizations Preview today. If you are at AWS re:Invent 2016, you can also attend session SAC323 – NEW SERVICE: Centrally Manage Multiple AWS Accounts with AWS Organizations on Wednesday, November 30, at 11:00 A.M.

– Caitlyn