Tag Archives: AWS Organizations

How to control access to AWS resources based on AWS account, OU, or organization

Post Syndicated from Rishi Mehrotra original https://aws.amazon.com/blogs/security/how-to-control-access-to-aws-resources-based-on-aws-account-ou-or-organization/

AWS Identity and Access Management (IAM) recently launched new condition keys to make it simpler to control access to your resources along your Amazon Web Services (AWS) organizational boundaries. AWS recommends that you set up multiple accounts as your workloads grow, and you can use multiple AWS accounts to isolate workloads or applications that have specific security requirements. By using the new conditions, aws:ResourceOrgID, aws:ResourceOrgPaths, and aws:ResourceAccount, you can define access controls based on an AWS resource’s organization, organizational unit (OU), or account. These conditions make it simpler to require that your principals (users and roles) can only access resources inside a specific boundary within your organization. You can combine the new conditions with other IAM capabilities to restrict access to and from AWS accounts that are not part of your organization.

This post will help you get started using the new condition keys. We’ll show the details of the new condition keys and walk through a detailed example based on the following scenario. We’ll also provide references and links to help you learn more about how to establish access control perimeters around your AWS accounts.

Consider a common scenario where you would like to prevent principals in your AWS organization from adding objects to Amazon Simple Storage Service (Amazon S3) buckets that don’t belong to your organization. To accomplish this, you can configure an IAM policy to deny access to S3 actions unless aws:ResourceOrgID matches your unique AWS organization ID. Because the policy references your entire organization, rather than individual S3 resources, you have a convenient way to maintain this security posture across any number of resources you control. The new conditions give you the tools to create a security baseline for your IAM principals and help you prevent unintended access to resources in accounts that you don’t control. You can attach this policy to an IAM principal to apply this rule to a single user or role, or use service control policies (SCPs) in AWS Organizations to apply the rule broadly across your AWS accounts. IAM principals that are subject to this policy will only be able to perform S3 actions on buckets and objects within your organization, regardless of their other permissions granted through IAM policies or S3 bucket policies.

New condition key details

You can use the aws:ResourceOrgID, aws:ResourceOrgPaths, and aws:ResourceAccount condition keys in IAM policies to place controls on the resources that your principals can access. The following table explains the new condition keys and what values these keys can take.

Condition key | Description | Operator | Single/multi value | Value
aws:ResourceOrgID | AWS organization ID of the resource being accessed | All string operators | Single-value key | Any AWS organization ID
aws:ResourceOrgPaths | Organization path of the resource being accessed | All string operators | Multi-value key | Organization paths of AWS organization IDs and organizational unit IDs
aws:ResourceAccount | AWS account ID of the resource being accessed | All string operators | Single-value key | Any AWS account ID

Note: Of the three keys, only aws:ResourceOrgPaths is a multi-value condition key, while aws:ResourceAccount and aws:ResourceOrgID are single-value keys. For information on how to use multi-value keys, see Creating a condition with multiple keys or values in the IAM documentation.

Resource owner keys compared to principal owner keys

The new IAM condition keys complement the existing principal condition keys aws:PrincipalAccount, aws:PrincipalOrgPaths, and aws:PrincipalOrgID. The principal condition keys help you define which AWS accounts, organizational units (OUs), and organizations are allowed to access your resources. For more information on the principal conditions, see Use IAM to share your AWS resources with groups of AWS accounts in AWS Organizations on the AWS Security Blog.

Using the principal and resource keys together helps you establish permission guardrails around your AWS principals and resources, and makes it simpler to keep your data inside the organization boundaries you define as you continue to scale. For example, you can define identity-based policies that prevent your IAM principals from accessing resources outside your organization (by using the aws:ResourceOrgID condition). Next, you can define resource-based policies that prevent IAM principals outside your organization from accessing resources that are inside your organization boundary (by using the aws:PrincipalOrgID condition). The combination of both policies prevents any access to and from AWS accounts that are not part of your organization. In the next sections, we’ll walk through an example of how to configure the identity-based policy in your organization. For the resource-based policy, you can follow along with the example in An easier way to control access to AWS resources by using the AWS organization of IAM principals on the AWS Security blog.
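For reference, a minimal sketch of such a resource-based policy (an S3 bucket policy in this case) might look like the following; the bucket name and organization ID are placeholders, and the linked post above walks through the full pattern:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessFromOutsideMyOrganization",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket",
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalOrgID": "o-exampleorgid"
        }
      }
    }
  ]
}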

Setup for the examples

In the following sections, we’ll show an example IAM policy for each of the new conditions. To follow along with Example 1, which uses aws:ResourceAccount, you’ll just need an AWS account.

To follow along with Examples 2 and 3 that use aws:ResourceOrgPaths and aws:ResourceOrgID respectively, you’ll need to have an organization in AWS Organizations and at least one OU created. This blog post assumes that you have some familiarity with the basic concepts in IAM and AWS Organizations. If you need help creating an organization or want to learn more about AWS Organizations, visit Getting Started with AWS Organizations in the AWS documentation.

Which IAM policy type should I use?

You can implement the following examples as identity-based policies, or in SCPs that are managed in AWS Organizations. If you want to establish a boundary for some of your IAM principals, we recommend that you use identity-based policies. If you want to establish a boundary for an entire AWS account or for your organization, we recommend that you use SCPs. Because SCPs apply to an entire AWS account, you should take care when you apply the following policies to your organization, and account for any exceptions to these rules that might be necessary for some AWS services to function properly.
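If you choose SCPs, a minimal sketch of creating and attaching one with the AWS CLI might look like the following; the policy file name, policy ID, and root ID are placeholders, and this assumes SCPs are already enabled for your organization:

aws organizations create-policy \
    --name DenyS3AccessOutsideMyBoundary \
    --type SERVICE_CONTROL_POLICY \
    --description "Deny S3 actions on resources outside our boundary" \
    --content file://deny-s3-outside-boundary.json

aws organizations attach-policy \
    --policy-id p-examplepolicyid \
    --target-id r-examplerootid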

Example 1: Restrict access to AWS resources within a specific AWS account

Let’s look at an example IAM policy that restricts access along the boundary of a single AWS account. For this example, say that you have an IAM principal in account 222222222222, and you want to prevent the principal from accessing S3 objects outside of this account. To create this effect, you could attach the following IAM policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": " DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": [
            "222222222222"
          ]
        }
      }
    }
  ]
}

Note: This policy is not meant to replace your existing IAM access controls, because it does not grant any access. Instead, this policy can act as an additional guardrail for your other IAM permissions. You can use a policy like this to prevent your principals from accessing resources in any AWS accounts that you don’t know or control, regardless of the permissions granted through other IAM policies.

This policy uses a Deny effect to block access to S3 actions unless the S3 resource being accessed is in account 222222222222. It prevents S3 access outside the boundary of a single AWS account. You can use a policy like this one to limit your IAM principals to accessing only the resources inside your trusted AWS accounts. To implement this example yourself, replace account ID 222222222222 in the policy with your own AWS account ID. For a policy that you can apply to multiple accounts while maintaining this restriction, replace the account ID with the aws:PrincipalAccount condition key, which requires that the principal and the resource are in the same account (see Example 3 in this post for more details on how to accomplish this).
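For illustration, a minimal sketch of that reusable variant might look like this; it assumes aws:PrincipalAccount can be referenced as a policy variable, following the same pattern that Example 3 uses with aws:PrincipalOrgID:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsideOwnAccount",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": "${aws:PrincipalAccount}"
        }
      }
    }
  ]
}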

Organization setup: Welcome to AnyCompany

For the next two examples, we’ll use an example organization called AnyCompany that we created in AWS Organizations. You can create a similar organization to follow along directly with these examples, or adapt the sample policies to fit your own organization. Figure 1 shows the organization structure for AnyCompany.

Figure 1: Organization structure for AnyCompany

Like all organizations, AnyCompany has an organization root. Under the root are three OUs: Media, Sports, and Governance. Under the Sports OU, there are three more OUs: Baseball, Basketball, and Football. AWS accounts in this organization are spread across all the OUs based on their business purpose. In total, there are six OUs in this organization.

Example 2: Restrict access to AWS resources within my organizational unit

Now that you’ve seen what the AnyCompany organization looks like, let’s walk through another example IAM policy that you can use to restrict access to a specific part of your organization. For this example, let’s say you want to restrict S3 object access within the following OUs in the AnyCompany organization:

  • Media
  • Sports
  • Baseball
  • Basketball
  • Football

To define a boundary around these OUs, you don’t need to list all of them in your IAM policy. Instead, you can use the organization structure to your advantage. The Baseball, Basketball, and Football OUs share a parent, the Sports OU. You can use the new aws:ResourceOrgPaths key to prevent access outside of the Media OU, the Sports OU, and any OUs under it. Here’s the IAM policy that achieves this effect.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": " DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringNotLike": {
          "aws:ResourceOrgPaths": [
            "o-acorg/r-acroot/ou-acroot-mediaou/",
            "o-acorg/r-acroot/ou-acroot-sportsou/*"
          ] 
        }
      }
    }
  ]
}

Note: Like the earlier example, this policy does not grant any access. Instead, this policy provides a backstop for your other IAM permissions, preventing your principals from accessing S3 objects outside an OU-defined boundary. If you want to require that your IAM principals consistently follow this rule, we recommend that you apply this policy as an SCP. In this example, we attached this policy to the root of our organization, applying it to all principals across all accounts in the AnyCompany organization.

The policy denies access to S3 actions unless the S3 resource being accessed is in a specific set of OUs in the AnyCompany organization. This policy is identical to Example 1, except for the condition block: The condition requires that aws:ResourceOrgPaths contains any of the listed OU paths. Because aws:ResourceOrgPaths is a multi-value condition, the policy uses the ForAllValues:StringNotLike operator to compare the values of aws:ResourceOrgPaths to the list of OUs in the policy.

The first OU path in the list is for the Media OU. The second OU path is the Sports OU, but it also adds the wildcard character * to the end of the path. The wildcard * matches any combination of characters, and so this condition matches both the Sports OU and any other OU further down its path. Using wildcards in the OU path allows you to implicitly reference other OUs inside the Sports OU, without having to list them explicitly in the policy. For more information about wildcards, refer to Using wildcards in resource ARNs in the IAM documentation.

Example 3: Restrict access to AWS resources within my organization

Finally, we’ll look at a very simple example of a boundary defined at the level of an entire organization. This is the same use case as the preceding two examples (restricting S3 object access), but scoped to an organization instead of an account or a collection of OUs.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::*/*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceOrgID": "${aws:PrincipalOrgID}"
        }
      }
    }
  ]
}

Note: Like the earlier examples, this policy does not grant any access. Instead, this policy provides a backstop for your other IAM permissions, preventing your principals from accessing S3 objects outside your organization regardless of their other access permissions. If you want to require that your IAM principals consistently follow this rule, we recommend that you apply this policy as an SCP. As in the previous example, we attached this policy to the root of our organization, applying it to all accounts in the AnyCompany organization.

The policy denies access to S3 actions unless the S3 resource being accessed is in the same organization as the IAM principal that is accessing it. This policy is identical to Example 1, except for the condition block: The condition requires that aws:ResourceOrgID and aws:PrincipalOrgID must be equal to each other. With this requirement, the principal making the request and the resource being accessed must be in the same organization. This policy also applies to S3 resources that are created after the policy is put into effect, so it is simple to maintain the same security posture across all your resources.

For more information about aws:PrincipalOrgID, refer to AWS global condition context keys in the IAM documentation.

Learn more

In this post, we explored the new conditions and walked through a few examples showing how to restrict access to S3 objects across the boundary of an account, OU, or organization. These tools work for more than just S3, though: you can use the new conditions to help protect a wide variety of AWS services and actions.

If you have any questions, comments, or concerns, contact AWS Support or start a new thread on the AWS Identity and Access Management forum. Thanks for reading about this new feature. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Rishi Mehrotra

Rishi is a Product Manager in AWS IAM. He enjoys working with customers and influencing product decisions. Prior to Amazon, Rishi worked for enterprise IT customers after receiving an engineering degree in computer science. He recently pursued an MBA at The University of Chicago Booth School of Business. Outside of work, Rishi enjoys biking, reading, and playing with his kids.

Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.

Minimizing Dependencies in a Disaster Recovery Plan

Post Syndicated from Randy DeFauw original https://aws.amazon.com/blogs/architecture/minimizing-dependencies-in-a-disaster-recovery-plan/

The Availability and Beyond whitepaper discusses the concept of static stability for improving resilience. What does static stability mean with regard to a multi-Region disaster recovery (DR) plan? What if the very tools that we rely on for failover are themselves impacted by a DR event?

In this post, you’ll learn how to reduce dependencies in your DR plan and manually control failover even if critical AWS services are disrupted. As a bonus, you’ll see how to use service control policies (SCPs) to help simulate a Regional outage, so that you can test failover scenarios more realistically.

Failover plan dependencies and considerations

Let’s dig into the DR scenario in more detail. Using Amazon Route 53 for Regional failover routing is a common pattern for DR events. In the simplest case, we’ve deployed an application in a primary Region and a backup Region. We have a Route 53 DNS record set with records for both Regions, and all traffic goes to the primary Region. In an event that triggers our DR plan, we manually or automatically switch the DNS records to direct all traffic to the backup Region.
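To make that dependency concrete, a manual DNS switch is a Route 53 control plane call along the following lines; this is only a sketch, the hosted zone ID, record name, and target value are placeholders, and your records may use alias or failover routing instead of a plain CNAME:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Comment": "Point app.example.com at the backup Region",
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "TTL": 60,
          "ResourceRecords": [{"Value": "my-app-backup-123456789.us-west-1.elb.amazonaws.com"}]
        }
      }]
    }'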

Relying on an automated health check to control Regional failover can be tricky. A health check might not be perfectly reliable if a Region is experiencing some type of degradation. Often, we prefer to initiate our DR plan manually, and then let automation carry out the failover steps.

What are the dependencies that we’ve baked into this failover plan? First, Route 53, our DNS service, has to be available. It must continue to serve DNS queries, and we have to be able to change DNS records manually. Second, if we do not have a full set of resources already deployed in the backup Region, we must be able to deploy resources into it.

Both dependencies might violate static stability, because we are relying on resources in our DR plan that might be affected by the outage we’re seeing. Ideally, we don’t want to depend on other services running so we can failover and continue to serve our own traffic. How do we reduce additional dependencies?

Static stability

Let’s look at our first dependency on Route 53 – control planes and data planes. Briefly, a control plane is used to configure resources, and the data plane delivers services (see Understanding Availability Needs for a more complete definition.)

The Route 53 data plane, which responds to DNS queries, is highly resilient across Regions. We can safely rely on it during the failure of any single Region. But let’s assume that for some reason we are not able to call on the Route 53 control plane.

Amazon Route 53 Application Recovery Controller (Route 53 ARC) was built to handle this scenario. It provisions a Route 53 health check that we can manually control with a Route 53 ARC routing control, and updating that routing control is a data plane operation. The Route 53 ARC data plane is highly resilient, using a cluster of five Regional endpoints. You can update the routing control state as long as three of the five endpoints are available.

Figure 1. Simple Regional failover scenario using Route 53 Application Recovery Controller
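For illustration, flipping a routing control is a data plane call against one of the cluster endpoints, along these lines; the routing control ARN, Region, and endpoint URL are placeholders, and you can list your cluster's real endpoints with aws route53-recovery-control-config describe-cluster:

aws route53-recovery-cluster update-routing-control-state \
    --routing-control-arn arn:aws:route53-recovery-control::111122223333:controlpanel/0123456bbbbbbb/routingcontrol/abcdefg1234567 \
    --routing-control-state On \
    --region us-west-2 \
    --endpoint-url https://host-aaaaaa.us-west-2.example.com/v1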

The second dependency, being able to deploy resources into the second Region, is not a concern if we run a fully scaled-out set of resources. We must make sure that our deployment mechanism doesn’t rely only on the primary Region. Most AWS services have Regional control planes, so this isn’t an issue.

The AWS Identity and Access Management (IAM) data plane is highly available in each Region, so you can authorize the creation of new resources as long as you’ve already defined the roles. Note: If you use federated authentication through an identity provider, you should test that the IdP does not itself have a dependency on another Region.

Testing your disaster recovery plan

Once we’ve identified our dependencies, we need to decide how to simulate a disaster scenario. Two mechanisms you can use for this are network access control lists (NACLs) and SCPs. NACLs let us restrict network traffic to our service endpoints, while SCPs let us define policies that specify the maximum permissions for the target accounts. SCPs also let us simulate a Route 53 or IAM control plane outage by restricting access to those services.
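For illustration, a minimal sketch of such a simulation SCP might look like the following; attach it only to a dedicated test OU, as recommended later in this post:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SimulateRoute53AndIamControlPlaneOutage",
      "Effect": "Deny",
      "Action": [
        "route53:*",
        "iam:*"
      ],
      "Resource": "*"
    }
  ]
}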

For the end-to-end DR simulation, we’ve published an AWS samples repository on GitHub that you can deploy. It evaluates Route 53 ARC capabilities when both the Route 53 and IAM control planes aren’t accessible.

By deploying test applications across us-east-1 and us-west-1 AWS Regions, we can simulate a real-world scenario that determines the business continuity impact, failover timing, and procedures required for successful failover with unavailable control planes.

Figure 2. Simulating Regional failover using service control policies

Before you conduct the test outlined in our scenario, we strongly recommend that you create a dedicated AWS testing environment with an AWS Organizations setup. Make sure that you don’t attach SCPs to your organization’s root but instead create a dedicated organizational unit (OU). You can use this pattern to test SCPs and ensure that you don’t inadvertently lock out users from key services.

Chaos engineering

Chaos engineering is the discipline of experimenting on a system to build confidence in its capability to withstand turbulent production conditions. Chaos engineering and its principles are important tools when you plan for disaster recovery. Even a simple distributed system may be too complex to operate reliably. It can be hard or impossible to plan for every failure scenario in non-trivial distributed systems, because of the number of failure permutations. Chaos experiments test these unknowns by injecting failures (for example, shutting down EC2 instances) or transient anomalies (for example, unusually high network latency.)

In the context of multi-Region DR, these techniques can help challenge assumptions and expose vulnerabilities. For example, what happens if a health check passes but the system itself is unhealthy, or vice versa? What will you do if your entire monitoring system is offline in your primary Region, or too slow to be useful? Are there control plane operations that you rely on that themselves depend on a single AWS Region’s health, such as Amazon Route 53? How does your workload respond when 25% of network packets are lost? Does your application set reasonable timeouts or does it hang indefinitely when it experiences large network latencies?

Questions like these can feel overwhelming, so start with a few, then test and iterate. You might learn that your system can run acceptably in a degraded mode. Alternatively, you might find out that you need to be able to failover quickly. Regardless of the results, the exercise of performing chaos experiments and challenging assumptions is critical when developing a robust multi-Region DR plan.

Conclusion

In this blog, you learned about reducing dependencies in your DR plan. We showed how you can use Amazon Route 53 Application Recovery Controller to reduce a dependency on the Route 53 control plane, and how to simulate a Regional failure using SCPs. As you evaluate your own DR plan, be sure to take advantage of chaos engineering practices. Formulate questions and test your static stability assumptions. And of course, you can incorporate these questions into a custom lens when you run a Well-Architected review using the AWS Well-Architected Tool.

How to automate AWS account creation with SSO user assignment

Post Syndicated from Rafael Koike original https://aws.amazon.com/blogs/security/how-to-automate-aws-account-creation-with-sso-user-assignment/

Background

AWS Control Tower offers a straightforward way to set up and govern an Amazon Web Services (AWS) multi-account environment, following prescriptive best practices. AWS Control Tower orchestrates the capabilities of several other AWS services, including AWS Organizations, AWS Service Catalog, and AWS Single Sign-On (AWS SSO), to build a landing zone very quickly. AWS SSO is a cloud-based service that simplifies how you manage SSO access to AWS accounts and business applications using Security Assertion Markup Language (SAML) 2.0. You can use AWS Control Tower to create and provision new AWS accounts and use AWS SSO to assign user access to those newly-created accounts.

Some customers need to provision tens, if not hundreds, of new AWS accounts at one time and assign access to many users. If you are using AWS Control Tower, doing this requires that you provision an AWS account in AWS Control Tower, and then assign the user access to the AWS account in AWS SSO before moving to the next AWS account. This process adds complexity and time for administrators who manage the AWS environment while delaying users’ access to their AWS accounts.

In this blog post, we’ll show you how to automate creating multiple AWS accounts in AWS Control Tower, and how to automate assigning user access to the AWS accounts in AWS SSO, with the ability to repeat the process easily for subsequent batches of accounts. This solution simplifies the provisioning and assignment processes, while enabling automation for your AWS environment, and allows your builders to start using and experimenting on AWS more quickly.

Services used

This solution uses the following AWS services:

  • AWS Control Tower
  • AWS Organizations
  • AWS Single Sign-On (AWS SSO)
  • AWS Service Catalog
  • AWS CloudFormation
  • AWS Lambda
  • Amazon S3
  • Amazon DynamoDB
  • Amazon CloudWatch Events
  • Amazon SNS

High level solution overview

Figure 1 shows the architecture and workflow of the batch AWS account creation and SSO assignment processes.

Figure 1: Batch AWS account creation and SSO assignment automation architecture and workflow

Before starting

This solution is configured to be deployed in the N. Virginia Region (us-east-1), but you can change the CloudFormation template to run in any Region that supports all the services required by the solution.

AWS Control Tower Account Factory can take up to 25 minutes to create and provision a new account. During this time, you will be unable to use AWS Control Tower to perform actions such as creating an organizational unit (OU) or enabling a guardrail on an OU. We recommend running this solution during a period when you don't anticipate needing AWS Control Tower's other features.

Collect needed information

Note: You must have already configured AWS Control Tower, AWS Organizations, and AWS SSO to use this solution.

Before deploying the solution, you need to first collect some information for AWS CloudFormation.

The required information you’ll need to gather in these steps is:

  • AWS SSO instance ARN
  • AWS SSO Identity Store ID
  • Admin email address
  • Amazon S3 bucket
  • AWS SSO user group ARN
  • AWS SSO permission set ARN

Prerequisite information: AWS SSO instance ARN

From the web console

You can find this information under Settings in the AWS SSO web console as shown in Figure 2.

Figure 2: AWS SSO instance ARN

From the CLI

You can also get this information by running the following AWS Command Line Interface (AWS CLI) command:

aws sso-admin list-instances

The output is similar to the following:

{
    "Instances": [
        {
        "InstanceArn": "arn:aws:sso:::instance/ssoins-abc1234567",
        "IdentityStoreId": "d-123456abcd"
        }
    ]
}

Make a note of the InstanceArn value from the output; you will use it as the AWS SSO instance ARN parameter.

Prerequisite information: AWS SSO Identity Store ID

This is available from either the web console or the CLI.

From the web console

You can find this information in the same screen as the AWS SSO Instance ARN, as shown in Figure 3.

Figure 3: AWS SSO identity store ID

From the CLI

To find this from the AWS CLI command aws sso-admin list-instances, use the IdentityStoreId from the second key-value pair returned.

Prerequisite information: Admin email address

The email address of the administrator, which is used to receive notifications when a new AWS account is created.

Prerequisite information: S3 bucket

The name of the Amazon S3 bucket where the AWS account list CSV files will be uploaded to automate AWS account creation.

This globally unique bucket name will be used to create a new Amazon S3 Bucket, and the automation script will receive events from new objects uploaded to this bucket.

Prerequisite information: AWS SSO user group ARN

Go to AWS SSO > Groups and select the user group that you would like to assign (with a permission set) to the new AWS accounts. Copy the Group ID of the selected user group. This can be a local AWS SSO user group or a user group synced from a third-party identity provider.

Note: For the AWS SSO user group, there is no AWS CLI equivalent; you need to use the AWS web console to collect this information.

Figure 4: AWS SSO user group ARN

Prerequisite information: AWS SSO permission set

The ARN of the AWS SSO permission set to be assigned to the user group.

From the web console

To view existing permission sets using the AWS SSO web console, go to AWS accounts > Permission sets. From there, you can see a list of permission sets and their respective ARNs.

Figure 5: AWS SSO permission sets list

You can also select the permission set name and from the detailed permission set window, copy the ARN of the chosen permission set. Alternatively, create your own unique permission set to be assigned to the intended user group.

Figure 6: AWS SSO permission set ARN

From the CLI

To get permission set information from the CLI, run the following AWS CLI command:

aws sso-admin list-permission-sets --instance-arn <SSO Instance ARN>

This command will return an output similar to this:

{
    "PermissionSets": [
    "arn:aws:sso:::permissionSet/ssoins-abc1234567/ps-1234567890abcdef",
    "arn:aws:sso:::permissionSet/ssoins-abc1234567/ps-abcdef1234567890"
    ]
}

If you can’t determine the details for your permission set from the output of the CLI shown above, you can get the details of each permission set by running the following AWS CLI command:

aws sso-admin describe-permission-set --instance-arn <SSO Instance ARN> --permission-set-arn <PermissionSet ARN>

The output will be similar to this:

{
    "PermissionSet": {
    "Name": "AWSPowerUserAccess",
    "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-abc1234567/ps-abc123def4567890",
    "Description": "Provides full access to AWS services and resources, but does not allow management of Users and groups",
    "CreatedDate": "2020-08-28T11:20:34.242000-04:00",
    "SessionDuration": "PT1H"
    }
}

The output above lists the name and description of each permission set, which can help you identify which permission set ARN you will use.

Solution initiation

The solution steps are in two parts: the initiation, and the batch account creation and SSO assignment processes.

To initiate the solution

  1. Log in to the management account as the AWS Control Tower administrator, and deploy the provided AWS CloudFormation stack with the required parameters filled out.

    Note: To fill out the required parameters of the solution, refer to steps 1 to 6 of the To launch the AWS CloudFormation stack procedure below.

  2. When the stack is successfully deployed, it performs the following actions to set up the batch process. It creates:
    • The S3 bucket where you will upload the AWS account list CSV file.
    • A DynamoDB table. This table tracks the AWS account creation status.
    • A Lambda function, NewAccountHandler.
    • A Lambda function, CreateManagedAccount. This function is triggered by the entries in the Amazon DynamoDB table and initiates the batch account creation process.
    • An Amazon CloudWatch Events rule to detect the AWS Control Tower CreateManagedAccount lifecycle event.
    • Another Lambda function, CreateAccountAssignment. This function is triggered by AWS Control Tower Lifecycle Events via Amazon CloudWatch Events to assign the AWS SSO Permission Set to the specified User Group and AWS account

To create the AWS Account list CSV file

After you deploy the solution stack, you need to create a CSV file based on this sample.csv and upload it to the Amazon S3 bucket created in this solution. This CSV file will be used to automate the new account creation process.

CSV file format

The CSV file must follow the following format:

AccountName,SSOUserEmail,AccountEmail,SSOUserFirstName,SSOUserLastName,OrgUnit,Status,AccountId,ErrorMsg
Test-account-1,[email protected],[email protected],Fname-1,Lname-1,Test-OU-1,,,
Test-account-2,[email protected],[email protected],Fname-2,Lname-2,Test-OU-2,,,
Test-account-3,[email protected],[email protected],Fname-3,Lname-3,Test-OU-1,,,

The first line contains the column names, and each subsequent line describes a new AWS account that you want to create; the specified SSO user group and permission set will automatically be assigned to each new account.

CSV fields

AccountName: String between 1 and 50 characters [a-zA-Z0-9_-]
SSOUserEmail: String of more than seven characters; must be a valid email address for the primary AWS administrator of the new AWS account
AccountEmail: String of more than seven characters; must be a valid email address not already used by another AWS account
SSOUserFirstName: String with the first name of the primary AWS Administrator of the new AWS account
SSOUserLastName: String with the last name of the primary AWS Administrator of the new AWS account
OrgUnit: String and must be an existing AWS Organizations OrgUnit
Status: String, for future use
AccountId: String, for future use
ErrorMsg: String, for future use

Figure 7 shows the details that are included in our example for the two new AWS accounts that will be created.

Figure 7: Sample AWS account list CSV

  1. The NewAccountHandler function is triggered by an object upload into the Amazon S3 bucket, validates the input file entries, and uploads the validated entries to the Amazon DynamoDB table (a minimal sketch of this validation step follows this list).
  2. The CreateManagedAccount function queries the DynamoDB table to get the details of the next account to be created. If there is another account to be created, the batch account creation process moves on to the next step; otherwise, it completes.
  3. The CreateManagedAccount function launches the AWS Control Tower Account Factory product in AWS Service Catalog to create and provision a new account.
  4. After Account Factory has completed the account creation workflow, it generates the CreateManagedAccount lifecycle event, and the event log states if the workflow SUCCEEDED or FAILED.
  5. The CloudWatch Events rule detects the CreateManagedAccount AWS Control Tower Lifecycle Event, and triggers the CreateManagedAccount and CreateAccountAssignment functions, and sends email notification to the administrator via AWS SNS.
  6. The CreateManagedAccount function updates the Amazon DynamoDB table with the results of the AWS account creation workflow. If the account was successfully created, it updates the input file entry in the Amazon DynamoDB table with the account ID; otherwise, it updates the entry in the table with the appropriate failure or error reason.
  7. The CreateAccountAssignment function assigns the AWS SSO Permission Set with the appropriate AWS IAM policies to the User Group specified in the Parameters when launching the AWS CloudFormation stack.
  8. When the Amazon DynamoDB table is updated, the Amazon DynamoDB stream triggers the CreateManagedAccount function for subsequent AWS accounts. When new AWS account list CSV files are uploaded, the preceding steps are repeated.
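Purely for illustration, here is a minimal sketch of the validation step performed by a function like NewAccountHandler. This is not the solution’s actual code; the environment variable name, the assumption that AccountName is the table’s partition key, and the column handling are all illustrative:

import csv
import os

import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

REQUIRED_COLUMNS = {"AccountName", "SSOUserEmail", "AccountEmail",
                    "SSOUserFirstName", "SSOUserLastName", "OrgUnit"}

def handler(event, context):
    """Validate each row of the uploaded CSV and queue it in DynamoDB."""
    table = dynamodb.Table(os.environ["ACCOUNT_TABLE"])  # hypothetical environment variable
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        for row in csv.DictReader(body.splitlines()):
            # Skip rows that are missing any required value
            if not all(row.get(column) for column in REQUIRED_COLUMNS):
                print(f"Skipping invalid row: {row}")
                continue
            # Each queued item later drives one Account Factory provisioning run;
            # AccountName is assumed to be the table's partition key
            table.put_item(Item={
                "AccountName": row["AccountName"],
                "SSOUserEmail": row["SSOUserEmail"],
                "AccountEmail": row["AccountEmail"],
                "SSOUserFirstName": row["SSOUserFirstName"],
                "SSOUserLastName": row["SSOUserLastName"],
                "OrgUnit": row["OrgUnit"],
                "Status": "PENDING",
            })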

Upload the CSV file

Once the AWS account list CSV file has been created, upload it into the Amazon S3 bucket created by the stack.

Deploying the solution

To launch the AWS CloudFormation stack

Now that all the requirements and the specifications to run the solution are ready, you can launch the AWS CloudFormation stack:

  1. Open the AWS CloudFormation launch wizard in the console.
  2. In the Create stack page, choose Next.

    Figure 8: Create stack in CloudFormation

  3. On the Specify stack details page, update the default parameters to use the information you captured in the prerequisites as shown in Figure 9, and choose Next.

    Figure 9: Input parameters into AWS CloudFormation

  4. On the Configure stack option page, choose Next.
  5. On the Review page, check the box “I acknowledge that AWS CloudFormation might create IAM resources.” and choose Create Stack.
  6. Once the AWS CloudFormation stack has completed, go to the Amazon S3 web console and select the Amazon S3 bucket that you defined in the AWS CloudFormation stack.
  7. Upload the AWS account list CSV file with the information to create new AWS accounts. See To create the AWS Account list CSV file above for details on creating the CSV file.

Workflow and solution details

When a new file is uploaded to the Amazon S3 bucket, the following actions occur:

  1. When you upload the AWS account list CSV file to the Amazon S3 bucket, the Amazon S3 service triggers an event for newly uploaded objects that invokes the Lambda function NewAccountHandler.
  2. This Lambda function executes the following steps:
    • Checks whether the Lambda function was invoked by an Amazon S3 event, or the CloudFormation CREATE event.
    • If the event is a new object uploaded from Amazon S3, read the object.
    • Validate the content of the CSV file for the required columns and values.
    • If the data has a valid format, insert a new item with the data into the Amazon DynamoDB table, as shown in Figure 10 below.

      Figure 10: DynamoDB table items with AWS accounts details

    • Amazon DynamoDB is configured to invoke the Lambda function CreateManagedAccount when items are inserted, updated, or deleted.
    • The Lambda function CreateManagedAccount checks the event type. When an item is updated in the table, the function examines it, and if the AWS account has not yet been created, it invokes AWS Control Tower Account Factory in AWS Service Catalog to create a new AWS account with the details stored in the Amazon DynamoDB item.
    • AWS Control Tower Account Factory starts the AWS account creation process. When the account creation process completes, the status of Account Factory will show as Available in Provisioned products, as shown in Figure 11.

      Figure 11: AWS Service Catalog provisioned products for AWS account creation

    • Based on the Control Tower lifecycle events, the CreateAccountAssignment Lambda function will be invoked when the CreateManagedAccount event is sent to CloudWatch Events. An AWS SNS topic is also triggered to send an email notification to the administrator email address as shown in Figure 12 below.

      Figure 12: AWS email notification when account creation completes

    • When invoked, the Lambda function CreateAccountAssignment assigns the AWS SSO user group to the new AWS account with the permission set defined in the AWS CloudFormation stack.

      Figure 13: New AWS account showing user groups with permission sets assigned

Figure 13 above shows the new AWS account with the user groups and the assigned permission sets. This completes the automation process. The AWS SSO users that are part of the user group will automatically be allowed to access the new AWS account with the defined permission set.
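Under the hood, this assignment corresponds to the AWS SSO CreateAccountAssignment API. Purely for illustration, a minimal manual equivalent with the AWS CLI might look like the following sketch; every ARN and ID shown is a placeholder:

aws sso-admin create-account-assignment \
    --instance-arn arn:aws:sso:::instance/ssoins-abc1234567 \
    --target-id 111122223333 \
    --target-type AWS_ACCOUNT \
    --permission-set-arn arn:aws:sso:::permissionSet/ssoins-abc1234567/ps-abc123def4567890 \
    --principal-type GROUP \
    --principal-id a1b2c3d4-5678-90ab-cdef-example11111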

Handling common sources of error

This solution connects multiple components to facilitate the new AWS account creation and AWS SSO permission set assignment. The correctness of the parameters in the AWS CloudFormation stack is important to make sure that when AWS Control Tower creates a new AWS account, it is accessible.

To verify that this solution works, make sure that the email address is a valid email address, you have access to that email, and it is not being used for any existing AWS account. After a new account is created, it is not possible to change its root account email address, so if you input an invalid or inaccessible email, you will need to create a new AWS account and remove the invalid account.

You can view common errors by going to the AWS Service Catalog web console. Under Provisioned products, you can see all of your AWS Control Tower Account Factory-launched AWS accounts.

Figure 14: AWS Service Catalog provisioned product with error

Selecting Error under the Status column shows you the source of the error. Figure 15 below is an example of the source of the error:

Figure 15: AWS account creation error explanation

Conclusion

In this post, we’ve shown you how to automate batch creation of AWS accounts in AWS Control Tower and batch assignment of user access to AWS accounts in AWS SSO. When the batch AWS accounts creation and AWS SSO user access assignment processes are complete, the administrator will be notified by emails from AWS SNS. We’ve also explained how to handle some common sources of errors and how to avoid them.

As you automate the batch AWS account creation and user access assignment, you can reduce the time you spend on the undifferentiated heavy lifting work, and onboard your users in your organization much more quickly, so they can start using and experimenting on AWS right away.

To learn more about the best practices of setting up an AWS multi-account environment, check out this documentation for more information.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Rafael Koike

Rafael is a Principal Solutions Architect supporting Enterprise customers in the Southeast and is part of the Storage TFC. Rafael has a passion for building, and his expertise in security, storage, networking, and application development has been instrumental in helping customers move to the cloud securely and quickly. When he is not building, he likes to do CrossFit and target shooting.

Eugene Toh

Eugene Toh is a Solutions Architect supporting Enterprise customers in the Georgia and Alabama areas. He is passionate about helping customers transform their businesses and take them to the next level. His areas of expertise are cloud migrations and disaster recovery, and he enjoys giving public talks on the latest cloud technologies. Outside of work, he loves trying great food and traveling all over the world.

How to enrich AWS Security Hub findings with account metadata

Post Syndicated from Siva Rajamani original https://aws.amazon.com/blogs/security/how-to-enrich-aws-security-hub-findings-with-account-metadata/

In this blog post, we’ll walk you through how to deploy a solution to enrich AWS Security Hub findings with additional account-related metadata, such as the account name, the organizational unit (OU) associated with the account, security contact information, and account tags. Account metadata can help you search findings, create insights, and better respond to and remediate findings.

AWS Security Hub ingests findings from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Systems Manager Patch Manager. Findings from each service are normalized into the AWS Security Finding Format (ASFF), so you can review findings in a standardized format and take action quickly. You can use AWS Security Hub to provide a single view of all security-related findings, and to set up alerts, automate remediation, and export specific findings to third‑party incident management systems.

The Security or DevOps teams responsible for investigating, responding to, and remediating Security Hub findings may need additional account metadata beyond the account ID to determine what to do about the finding or where to route it. For example, knowing whether a finding originated from a development or a production account can be key to determining its priority and the type of remediation action needed. Having this metadata in the finding allows customers to create custom insights in Security Hub to track which OUs or applications (based on account tags) have the most open security issues. This blog post demonstrates a solution to enrich your findings with account metadata to help your Security and DevOps teams better understand and improve their security posture.

Solution Overview

In this solution, you will use a combination of AWS Security Hub, Amazon EventBridge, and AWS Lambda to ingest the findings and automatically enrich them with account-related metadata by querying the AWS Organizations and Account Management service APIs. The solution architecture is shown in Figure 1 below:

Figure 1: Solution Architecture and workflow for metadata enrichment

The solution workflow includes the following steps:

  1. New findings and updates to existing Security Hub findings from all the member accounts flow into the Security Hub administrator account. Security Hub generates Amazon EventBridge events for the findings.
  2. An EventBridge rule created as part of the solution in the Security Hub administrator account will trigger a Lambda function configured as a target every time an EventBridge notification for a new or updated finding imported into Security Hub matches the EventBridge rule shown below:
    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "RecordState": ["ACTIVE"],
          "UserDefinedFields": {
            "findingEnriched": [{
              "exists": false
            }]
          }
        }
      }
    }

  3. The Lambda function uses the account ID from the event payload to retrieve both the account information and the alternate contact information from the AWS Organizations and Account Management service APIs. The following code within helper.py constructs the account_details object that represents the account information used to enrich the finding:
    def get_account_details(account_id, role_name):
        account_details = {}
        # Look up the account name from AWS Organizations
        organizations_client = AwsHelper().get_client('organizations')
        response = organizations_client.describe_account(AccountId=account_id)
        account_details["Name"] = response["Account"]["Name"]
        # Determine the account's parent: an OU name, or ROOT if the account sits directly under the root
        response = organizations_client.list_parents(ChildId=account_id)
        ou_id = response["Parents"][0]["Id"]
        if ou_id and response["Parents"][0]["Type"] == "ORGANIZATIONAL_UNIT":
            response = organizations_client.describe_organizational_unit(OrganizationalUnitId=ou_id)
            account_details["OUName"] = response["OrganizationalUnit"]["Name"]
        elif ou_id:
            account_details["OUName"] = "ROOT"
        # Use the cross-account role for the Account Management API if one was provided
        if role_name:
            account_client = AwsHelper().get_session_for_role(role_name).client("account")
        else:
            account_client = AwsHelper().get_client('account')
        try:
            # Retrieve the account's SECURITY alternate contact, if one is configured
            response = account_client.get_alternate_contact(
                AccountId=account_id,
                AlternateContactType='SECURITY'
            )
            if response['AlternateContact']:
                print("contact :{}".format(str(response["AlternateContact"])))
                account_details["AlternateContact"] = response["AlternateContact"]
        except account_client.exceptions.AccessDeniedException as error:
            # Potentially due to calling alternate contact on Org Management account
            print(error.response['Error']['Message'])

        # Collect all account tags, paginating until no NextToken is returned
        response = organizations_client.list_tags_for_resource(ResourceId=account_id)
        results = response["Tags"]
        while "NextToken" in response:
            response = organizations_client.list_tags_for_resource(ResourceId=account_id, NextToken=response["NextToken"])
            results.extend(response["Tags"])

        account_details["tags"] = results
        AccountHelper.logger.info("account_details: %s", str(account_details))
        return account_details

  4. The Lambda function updates the finding using the Security Hub BatchUpdateFindings API to add the account-related data to the Note and UserDefinedFields attributes of the Security Hub finding:
    #lookup and build the finding note and user defined fields  based on account Id
    enrichment_text, tags_dict = enrich_finding(account_id, assume_role_name)
    logger.debug("Text to post: %s" , enrichment_text)
    logger.debug("User defined Fields %s" , json.dumps(tags_dict))
    #add the Note to the finding and add a userDefinedField to use in the event bridge rule and prevent repeat lookups
    response = secHubClient.batch_update_findings(
        FindingIdentifiers=[
            {
                'Id': enrichment_finding_id,
                'ProductArn': enrichment_finding_arn
            }
        ],
        Note={
            'Text': enrichment_text,
            'UpdatedBy': enrichment_author
        },
        UserDefinedFields=tags_dict
    )

    Note: All state change events published by AWS services through Amazon EventBridge are free of charge. The AWS Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month at the time of publication of this post. If you process 2M requests per month, the estimated cost for this solution would be approximately $7.20 USD per month.

Prerequisites

    1. Your AWS organization must have all features enabled.
    2. This solution requires that you have AWS Security Hub enabled in an AWS multi-account environment which is integrated with AWS Organizations. The AWS Organizations management account must designate a Security Hub administrator account, which can view data from and manage configuration for its member accounts. Follow these steps to designate a Security Hub administrator account for your AWS organization.
    3. All the member accounts are tagged per your organization’s tagging strategy, and their security alternate contact is filled in. If the tags or alternate contacts are not available, the enrichment will be limited to the account name and the organizational unit name.
    4. Trusted access must be enabled with AWS Organizations for AWS Account Management service. This will enable the AWS Organizations management account to call the AWS Account Management API operations (such as GetAlternateContact) for other member accounts in the organization. Trusted access can be enabled either by using AWS Management Console or by using AWS CLI and SDKs.

      The following AWS CLI example enables trusted access for AWS Account Management in the calling account’s organization.

      aws organizations enable-aws-service-access --service-principal account.amazonaws.com

    5. An IAM role with read-only access to look up the GetAlternateContact details must be created in the Organizations management account, with a trust policy that allows the Security Hub administrator account to assume the role.

    Solution Deployment

    This solution consists of two parts:

    1. Create an IAM role in your Organizations management account, giving it the necessary permissions as described in the Create the IAM role procedure below.
    2. Deploy the Lambda function and the other associated resources to your Security Hub administrator account

    Create the IAM role

    Using console, AWS CLI or AWS API

    Follow the Creating a role to delegate permissions to an IAM user instructions to create an IAM role named account-contact-readonly in the AWS Organizations management account, using the console, AWS CLI, or AWS API, based on the trust and permission policy templates provided below. You will need the account ID of your Security Hub administrator account.

    The IAM trust policy allows the Security Hub administrator account to assume the role in your Organization management account.

    IAM Role trust policy

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<SH administrator Account ID>:root"
          },
          "Action": "sts:AssumeRole",
          "Condition": {}
        }
      ]
    }

    Note: Replace <SH administrator Account ID> with the account ID of your Security Hub administrator account. Once the solution is deployed, you should update the principal in the trust policy shown above to use the new IAM role created for the solution.

    IAM Permission Policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "account:GetAlternateContact"
                ],
                "Resource": "arn:aws:account::<Org. Management Account id>:account/o-*/*"
            }
        ]
    }

    The IAM permission policy allows the Security Hub administrator account to look up the alternate contact information for the member accounts.

    Make a note of the role ARN of the IAM role, which will be similar to this format:

    arn:aws:iam::<Org. Management Account id>:role/account-contact-readonly

    You will need this when deploying the solution in the next procedure.

    Using AWS CloudFormation

    Alternatively, you can use the provided CloudFormation template to create the role in the management account. The IAM role ARN is available in the Outputs section of the created CloudFormation stack.

    Deploy the Solution to your Security Hub administrator account

    You can deploy the solution using either the AWS Management Console, or from the GitHub repository using the AWS SAM CLI.

    Note: If you have designated an aggregation Region within the Security Hub administrator account, you can deploy this solution only in the aggregation Region; otherwise, you need to deploy this solution separately in each Region of the Security Hub administrator account where Security Hub is enabled.

    To deploy the solution using the AWS Management Console

    1. In your Security Hub administrator account, launch the template by choosing the Launch Stack button below, which creates the stack in the us-east-1 Region.

      Note: If your Security Hub aggregation Region is different from us-east-1, or you want to deploy the solution in a different AWS Region, you can deploy the solution from the GitHub repository described in the next section.

      Select this image to open a link that starts building the CloudFormation stack

    2. On the Quick create stack page, for Stack name, enter a unique stack name for this account; for example, aws-security-hub-findings-enrichment-stack, as shown in Figure 2 below.
      Figure 2: Quick Create CloudFormation stack for the Solution

    3. For ManagementAccount, enter the AWS Organizations management account ID.
    4. For OrgManagementAccountContactRole, enter the role ARN of the role you created previously in the Create IAM role procedure.
    5. Choose Create stack.
    6. Once the stack is created, go to the Resources tab and take note of the name of the IAM Role which was created.
    7. Update the principal element of the IAM role trust policy which you previously created in the Organization management account in the Create the IAM role procedure above, replacing it with the role name you noted down, as shown below.
      Figure 3: Update Management Account Role’s Trust

    To deploy the solution from the GitHub Repository and AWS SAM CLI

    1. Install the AWS SAM CLI
    2. Download or clone the GitHub repository using the following commands:
      $ git clone https://github.com/aws-samples/aws-security-hub-findings-account-data-enrichment.git
      $ cd aws-security-hub-findings-account-data-enrichment

    3. Update the content of the profile.txt file with the profile name you want to use for the deployment
    4. To create a new bucket for deployment artifacts, run create-bucket.sh, specifying the Region as an argument, as shown below.
      $ ./create-bucket.sh us-east-1

    5. Deploy the solution to the account by running the deploy.sh script, specifying the Region as an argument:
      $ ./deploy.sh us-east-1

    6. Once the stack is created, go to the Resources tab and take note of the name of the IAM Role which was created.
    7. Update the principal element of the IAM role trust policy which you previously created in the Organization management account in the Create the IAM role procedure above, replacing it with the role name you noted down, as shown below.
      "AWS": "arn:aws:iam::<SH Delegated Account ID>: role/<Role Name>"

    Using the enriched attributes

    To test that the solution is working as expected, you can create a standalone security group with an ingress rule that allows traffic from the internet. This will trigger a finding in Security Hub, which will be populated with the enriched attributes. You can then use these enriched attributes to filter and create custom insights, or take specific response or remediation actions.

    To generate a sample Security Hub finding using AWS CLI

    1. Create a Security Group using following AWS CLI command:
      aws ec2 create-security-group --group-name TestSecHubEnrichmentSG --description "Test Security Hub enrichment function"

    2. Make a note of the security group ID from the output, and use it in Step 3 below.
    3. Add an ingress rule to the security group which allows unrestricted traffic on port 100:
      aws ec2 authorize-security-group-ingress --group-id <Replace Security group ID> --protocol tcp --port 100 --cidr 0.0.0.0/0

    Within a few minutes, a new finding will be generated in Security Hub, warning about the unrestricted ingress rule in the TestSecHubEnrichmentSG security group. For any new or updated findings that do not have the UserDefinedFields attribute findingEnriched set to true, the solution will enrich the finding with account-related fields in both the Note and UserDefinedFields sections of the Security Hub finding.

    To see and filter the enriched finding

    1. Go to Security Hub and click on Findings on the left-hand navigation.
    2. Click in the filter field at the top to add additional filters. Choose a filter field of AWS Account ID, a filter match type of is, and a value of the AWS Account ID where you created the TestSecHubEnrichmentSG security group.
    3. Add one more filter. Choose a filter field of Resource type, a filter match type of is, and the value of AwsEc2SecurityGroup.
    4. Identify the finding for security group TestSecHubEnrichmentSG with updates to Note and UserDefinedFields, as shown in Figures 4 and 5 below:
      Figure 4: Account metadata enrichment in Security Hub finding’s Note field

      Figure 5: Account metadata enrichment in Security Hub finding’s UserDefinedFields field

      Note: The actual attributes you will see as part of the UserDefinedFields may be different from the above screenshot. Attributes shown will depend on your tagging configuration and the alternate contact configuration. At a minimum, you will see the AccountName and OU fields.

    5. Once you confirm that the solution is working as expected, delete the stand-alone security group TestSecHubEnrichmentSG, which was created for testing purposes.

    Create custom insights using the enriched attributes

    You can use the attributes available in the UserDefinedFields in the Security Hub finding to filter the findings. This lets you generate custom Security Hub insights and reports tailored to suit your organization’s needs. The example shown in Figure 6 below creates a custom Security Hub insight for findings grouped by severity for a specific owner, using the Owner attribute within the UserDefinedFields object of the Security Hub finding.

    Figure 6: Custom Insight with Account metadata filters
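
    If you prefer the AWS CLI, a similar insight can be created with the CreateInsight API. The following is a minimal sketch; the insight name and the Owner value (team-payments) are hypothetical examples you would replace with your own:

      aws securityhub create-insight \
          --name "Findings by severity for owner team-payments" \
          --group-by-attribute "SeverityLabel" \
          --filters '{"UserDefinedFields":[{"Key":"Owner","Value":"team-payments","Comparison":"EQUALS"}],"RecordState":[{"Value":"ACTIVE","Comparison":"EQUALS"}]}'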

    EventBridge rule for response or remediation action using enriched attributes

    You can also use the attributes in the UserDefinedFields object of the Security Hub finding within an EventBridge rule to take specific response or remediation actions based on the values of those attributes. In the example below, you can see how the Environment attribute can be used within the EventBridge rule configuration to trigger specific actions only when the value matches PROD.

    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "RecordState": ["ACTIVE"],
          "UserDefinedFields": {
            "Environment": "PROD"
          }
        }
      }
    }
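
    As a rough sketch, you could register this pattern with EventBridge from the CLI and point the rule at a remediation target of your choice. The rule name, pattern file, and Lambda function ARN below are placeholders; the target function also needs a resource-based permission that allows EventBridge to invoke it.

      aws events put-rule \
          --name prod-securityhub-findings \
          --event-pattern file://prod-findings-pattern.json

      aws events put-targets \
          --rule prod-securityhub-findings \
          --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:111122223333:function:prod-remediation"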

    Conclusion

    This blog post walks you through a solution to enrich AWS Security Hub findings with AWS account-related metadata using Amazon EventBridge notifications and AWS Lambda. By enriching the Security Hub findings with account-related information, your security teams gain better visibility, additional insights, and an improved ability to create targeted reports for specific accounts or business teams, helping them prioritize and improve the overall security response. To learn more, see:

 
If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum.

Want more AWS Security news? Follow us on Twitter.

Siva Rajamani

Siva Rajamani is a Boston-based Enterprise Solutions Architect at AWS. Siva enjoys working closely with customers to accelerate their AWS cloud adoption and improve their overall security posture.

Prashob Krishnan

Prashob Krishnan is a Denver-based Technical Account Manager at AWS. Prashob is passionate about security. He enjoys working with customers to solve their technical challenges and help build a secure scalable architecture on the AWS Cloud.

Find Public IPs of Resources – Use AWS Config for Vulnerability Assessment

Post Syndicated from Gurkamal Deep Singh Rakhra original https://aws.amazon.com/blogs/architecture/find-public-ips-of-resources-use-aws-config-for-vulnerability-assessment/

Systems vulnerability management is a key component of your enterprise security program. Its goal is to remediate OS, software, and application vulnerabilities. Scanning tools can help identify and classify these vulnerabilities to keep the environment secure and compliant.

Typically, vulnerability scanning tools operate from internal or external networks to discover and report vulnerabilities. For internal scanning, the tools use the private IPs of the target systems in scope. For external scans, the target systems’ public IP addresses are used. It is important that security teams always maintain an accurate inventory of all deployed resources’ IP addresses. This ensures a comprehensive, consistent, and effective vulnerability assessment.

This blog discusses a scalable, serverless, and automated approach to discover public IP addresses assigned to resources in a single or multi-account environment in AWS, using AWS Config.

A single-account environment is when you have all your resources in a single AWS account. A multi-account environment refers to many accounts under the same AWS Organization.

Understanding scope of solution

You may have good visibility into the private IPs assigned to your resources: Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Kubernetes Service (EKS) clusters, Elastic Load Balancing (ELB), and Amazon Elastic Container Service (Amazon ECS). But it may require some effort to establish a complete view of the existing public IPs. And these IPs can change over time, as new systems join and exit the environment.

An elastic network interface is a logical networking component in a Virtual Private Cloud (VPC) that represents a virtual network card. The elastic network interface routes traffic to other destinations/resources. Usually, you have to make Describe* API calls for the specific resource with an elastic network interface to get information about its configuration and IP address. These resource-specific API calls may be throttled and can result in higher costs. Additionally, if there are tens or hundreds of accounts, it becomes exponentially more difficult to gather the information into a single inventory.

AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. The advanced query feature provides a single query endpoint and language to get current resource state metadata for a single account and Region, or multiple accounts and Regions. You can use configuration aggregators to run the same queries from a central account across multiple accounts and AWS Regions.

AWS Config supports a subset of structured query language (SQL) SELECT syntax, which enables you to perform property-based queries and aggregations on the current configuration item (CI) data. Advanced query is available at no additional cost to AWS Config customers in all AWS Regions (except China Regions) and AWS GovCloud (US).

AWS Organizations helps you centrally govern your environment. Its integration with other AWS services lets you define central configurations, security mechanisms, audit requirements, and resource sharing across accounts in your organization.

Choosing scope of advanced queries in AWS Config

When running advanced queries in AWS Config, you must choose the scope of the query. The scope defines the accounts you want to run the query against and is configured when you create an aggregator.

Following are the three possible scopes when running advanced queries:

  1. Single account and single Region
  2. Multiple accounts and multiple Regions
  3. AWS Organization accounts

Single account and single Region

Figure 1. AWS Config workflow for single account and single Region

The use case shown in Figure 1 addresses the needs of customers operating within a single account and single Region. With AWS Config enabled for the individual account, you will use the AWS Config advanced query feature to run SQL queries. These will give you resource metadata about associated public IPs. You do not require an aggregator for a single-account, single-Region scope.

In Figure 1.1, the advanced query returned results from a single account and all Availability Zones within the Region in which the query was run.

Figure 1.1 Advanced query returning results for a single account and single Region

Query for reference

SELECT
  resourceId,
  resourceName,
  resourceType,
  configuration.association.publicIp,
  availabilityZone,
  awsRegion
WHERE
  resourceType='AWS::EC2::NetworkInterface'
  AND configuration.association.publicIp>'0.0.0.0'

This query fetches the properties of all elastic network interfaces. The WHERE condition is used to list the elastic network interfaces using the resourceType property and find all public IPs greater than 0.0.0.0. This is because an elastic network interface can exist with only a private IP, in which case there will be no public IP assigned to it. For a list of supported resource types, refer to supported resource types for AWS Config.
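
If you want to try the query without the console, the same expression can be run from the AWS CLI with the SelectResourceConfig API. This is a sketch for a single account and Region; line breaks in the expression are removed:

aws configservice select-resource-config \
    --expression "SELECT resourceId, resourceName, resourceType, configuration.association.publicIp, availabilityZone, awsRegion WHERE resourceType='AWS::EC2::NetworkInterface' AND configuration.association.publicIp>'0.0.0.0'"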

Multiple accounts and multiple Regions

Figure 2. AWS Config monitoring workflow for multiple accounts and multiple Regions. The figure shows EC2, EKS, and Amazon ECS, but it can be any AWS resource having a public elastic network interface.

AWS Config enables you to monitor configuration changes across multiple accounts and multiple Regions via an aggregator (see Figure 2). An aggregator is an AWS Config resource type that collects AWS Config data from multiple accounts and Regions. You can choose the aggregator scope when running advanced queries in AWS Config. Remember to authorize the aggregator accounts to collect AWS Config configuration and compliance data.

Figure 2.1 Advanced query returning results from multiple Regions (awsRegion column) as highlighted in the diagram

This use case applies when you have AWS resources in multiple accounts (or spanning multiple organizations) and multiple Regions. Figure 2.1 shows the query results being returned from multiple AWS Regions.
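
Assuming you have already created and authorized an aggregator (the aggregator name my-org-aggregator below is a placeholder), an equivalent aggregator-scoped query can be run from the CLI with the SelectAggregateResourceConfig API:

aws configservice select-aggregate-resource-config \
    --configuration-aggregator-name my-org-aggregator \
    --expression "SELECT resourceId, accountId, awsRegion, configuration.association.publicIp WHERE resourceType='AWS::EC2::NetworkInterface' AND configuration.association.publicIp>'0.0.0.0'"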

Accounts in AWS Organization

Figure 3. The workflow of accounts in an AWS Organization being monitored by AWS Config. This figure shows EC2, EKS, and Amazon ECS but it can be any AWS resource having a public elastic network interface.

An aggregator also enables you to monitor all the accounts in your AWS Organization (see Figure 3). When this option is chosen, AWS Config enables you to run advanced queries against the configuration history in all the accounts in your AWS Organization. Remember that an aggregator will only aggregate data from the accounts and Regions that are specified when the aggregator is created.

Figure 3.1 Advanced query returning results from all accounts (accountId column) under an AWS Organization

In Figure 3.1, the query is run against all accounts in an AWS Organization. This organization-wide scope is accomplished by the aggregator, which automatically accumulates data from all accounts under a specific AWS Organization.

Common architecture workflow for discovering public IPs

Figure 4. High-level architecture pattern for discovering public IPs

The workflow shown in Figure 4 starts with Amazon EventBridge triggering an AWS Lambda function. You can configure an Amazon EventBridge schedule via rate or cron expressions, which define the frequency. This AWS Lambda function will host the code to make an API call to AWS Config that will run an advanced query. The advanced query will check for all elastic network interfaces in your account(s). This is because any public resource launched in your account will be assigned an elastic network interface.

When the results are returned, they can be stored on Amazon S3. These result files can be timestamped (via naming or S3 versioning) in order to keep a history of public IPs used in your account. The result set can then be fed into or accessed by the vulnerability scanning tool of your choice.
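
For example, assuming the aggregator query results from the earlier CLI sketch were saved to a local file named public-ips.json, a timestamped copy could be written to a bucket of your choice (the bucket name is a placeholder):

aws s3 cp public-ips.json "s3://my-public-ip-inventory/$(date +%Y-%m-%d)-public-ips.json"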

Note: AWS Config advanced queries can also be used to query IPv6 addresses. You can use the "configuration.ipv6Addresses" AWS Config property to get IPv6 addresses. When querying IPv6 addresses, remove the "configuration.association.publicIp > '0.0.0.0'" condition from the preceding sample queries. For more information on available AWS Config properties and data types, refer to GitHub.

Conclusion

In this blog, we demonstrated how to extract public IP information from resources deployed in your account(s) using AWS Config and AWS Config advanced query. We discussed how you can support your vulnerability scanning process by identifying public IPs in your account(s) that can be fed into your scanning tool. This solution is serverless, automated, and scalable, which removes the undifferentiated heavy lifting required to manage your resources.

Learn more about AWS Config best practices:

Simplify setup of Amazon Detective with AWS Organizations

Post Syndicated from Karthik Ram original https://aws.amazon.com/blogs/security/simplify-setup-of-amazon-detective-with-aws-organizations/

Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities by collecting log data from your AWS resources. Amazon Detective simplifies the process of a deep dive into a security finding from other AWS security services, such as Amazon GuardDuty and AWS Security Hub. Detective uses machine learning, statistical analysis, and graph theory to build a linked set of data that enables customers to easily conduct faster and more efficient security investigations.

In this post you will learn about the new AWS Organizations integration with Amazon Detective, where new and existing Detective customers can designate any account in their organization as the delegated Detective administrator account, and can centrally manage the Detective behavior graph database for an organization with up to 1,200 accounts.

Customers tell us that they want to manage security findings and investigations across multiple AWS accounts. Depending on the customer, this can be hundreds or thousands of accounts. AWS Organizations integration with security services, including GuardDuty, Security Hub, and AWS IAM Access Analyzer, comes in handy by helping customers centralize management and governance of their environments as they scale and grow their AWS accounts and resources. Adding to the list, Detective is now integrated with AWS Organizations to simplify security posture management across all existing and future AWS accounts across an organization. Organizations integration is available in all AWS Regions that Detective supports.

Detective is aware of your existing delegated administrator accounts for other AWS security services such as GuardDuty or Security Hub. Using this awareness, Detective recommends that you choose the same account as the administrator account for Detective, as shown in Figure 1. For a more complete walkthrough of how to enable your accounts, visit the Amazon Detective documentation.

Figure 1. Setting delegated administrator

You can then use the same account to manage all of your security services. AWS recommends you align your Detective administrator account with your GuardDuty and Security Hub administrator accounts, to enable seamless integration between Detective and those services.

  • In GuardDuty or Security Hub, when viewing details for a GuardDuty finding, you can pivot from the finding details to the Detective finding profile.
  • In Detective, when investigating a GuardDuty finding, you can choose the option to archive that finding.

Once designated, the chosen account becomes the administrator account for the organization behavior graph. It can enable any organization account as a member account in the organization behavior graph, and can configure Detective to automatically enable organization accounts when they join the organization.
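
Although this post focuses on the console, the same setup can be scripted. The following is a hedged sketch using the Detective CLI: run the first command from the organization management account to designate the delegated administrator (the account ID is a placeholder), then run the remaining commands from that administrator account, substituting the graph ARN returned by list-graphs.

aws detective enable-organization-admin-account --account-id 111122223333

aws detective list-graphs

aws detective update-organization-configuration \
    --graph-arn arn:aws:detective:us-east-1:111122223333:graph:<graph ID> \
    --auto-enable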

Figure 2. Auto-enabling Organization accounts

The Detective administrator account can also manually invite other accounts to join the organization behavior graph.

Figure 3. Inviting accounts to join the Organization behavior graph

From Detective, the administrator account can centrally conduct security investigations across the organization.

Considerations for AWS Organizations support

Some considerations and recommendations around Organizations support for Detective:

  1. Detective allows up to 1,200 member accounts in each behavior graph.
  2. The Detective administrator account becomes the administrator account for the organization’s behavior graph.
  3. An account can be a member account of multiple behavior graphs in the same Region. An account can accept multiple invitations. An organization account can be enabled as a member account in the organization behavior graph, and can then also accept invitations to other behavior graphs.
  4. An account can only be the administrator account of one behavior graph per Region, but can be an administrator account in different Regions.
  5. Changes to an organization are not immediately reflected in Detective. For most changes, such as new and removed organization accounts, it can take up to an hour for Detective to be notified.

Other recent updates from Amazon Detective

Additional support for all GuardDuty findings

With the recent expansion of security investigation support for Amazon Simple Storage Service (Amazon S3) and DNS-related findings on Amazon GuardDuty, Amazon Detective now provides full coverage of all detections from GuardDuty. Security analysts can now easily investigate and analyze the root cause of all GuardDuty findings in Detective by using the Investigate in Detective option in GuardDuty and Security Hub.

New resource focused view

In addition to these integrations with AWS Organizations and GuardDuty, Detective now makes it even easier for a security analyst to investigate entities and behaviors using a revamped user experience, as seen in Figure 4. Amazon Detective presents a unified view of user and resource interactions over time, with all the context and details in one place, to help you quickly analyze the root cause of a security finding.

Figure 4. New resource focused view

New finding overview

The new finding overview provides an expanded set of details for each finding, and provides links to the profiles for each involved entity as seen in the right panel in Figure 4. With this unified view, you can visualize all of the details and context in one place, while identifying the underlying reasons for the findings. This resource-focused view helps you understand the connections between resources affected by a security finding, and further helps you drill down into relevant historical activity to quickly determine the root cause.

Integration with Splunk

Amazon Detective, in coordination with the Splunk Trumpet project, has released the ability to pivot from an Amazon GuardDuty finding in Splunk directly to an Amazon Detective entity profile. Customers can now quickly identify the root cause of potential security issues or suspicious activities. This setting can be enabled on the Splunk Trumpet project installation page by selecting Detective GuardDuty URLs from the AWS CloudWatch Events dropdown.

Amazon Detective’s interactive visualizations make it easy to investigate and analyze issues more thoroughly and effectively, with minimal effort. Using these visualizations, customers can easily filter large sets of event data into specific timelines, with all the details, context, and guidance needed to help you investigate quickly. For example, Amazon Detective enables you to view login attempts on a geolocation map, drill down into relevant historical activity, quickly determine a root cause, and if necessary, take action to resolve the issue.

Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues. To get started, enable a 30-day free trial of Amazon Detective with just a few clicks in the AWS Management Console. See the AWS Regions page for a list of all Regions where Detective is available. To learn more, visit the Amazon Detective product page.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Karthik Ram

Karthik is a Senior Solutions Architect with Amazon Web Services based in Columbus, Ohio. He has a background in Networking and Infrastructure architecture. At AWS, Karthik helps customers build secure and innovative cloud solutions, solving their business problems using data driven approaches. Karthik’s Area of Depth is Cloud Security with a focus on Threat Detection and Incident Response (TDIR).

Ross Warren

Ross Warren is a Senior Solution Architect at AWS based in Northern Virginia. Prior to his work at AWS, Ross’ areas of focus included cyber threat hunting and security operations. Ross has worked at a handful of startups and has enjoyed the transition to AWS because he can continue to build solutions for customers on today’s most innovative platform.

Get started with AWS DevOps Guru Multi-Account Insight Aggregation with AWS Organizations

Post Syndicated from Ifeanyi Okafor original https://aws.amazon.com/blogs/devops/get-started-with-aws-devops-guru-multi-account-insight-aggregation-with-aws-organizations/

Amazon DevOps Guru is a fully managed service that uses machine learning (ML) to continuously analyze and consolidate operational data streams from multiple sources, such as Amazon CloudWatch metrics, AWS Config, AWS CloudFormation, and AWS X-Ray, and provides you with a single console dashboard. This dashboard helps customers improve operational performance and avoid expensive downtime by generating actionable insights that flag operational anomalies, identify the likely root cause, and recommend corrective actions.

As customers scale their AWS resources across multiple accounts and deploy DevOps Guru across applications and use cases on these accounts, they get a siloed view of the operational health of their applications. Now you can enable multi-account support with AWS Organizations and designate a member account to manage operational insights across your entire organization. This delegated administrator can get a holistic view of the operational health of their applications across the organization—without the need for any additional customization.

In this post, we will walk through the process of setting up a delegated administrator. We will also explore how to monitor insights across your entire organization from this account.

Overview of the multi-account environment

The multi-account environment operates based on the organizational hierarchy that your organization has defined in AWS Organizations. This AWS service helps you centrally manage and govern your cloud environment. For this reason, you must have an organization set up in AWS Organizations before you can implement multi-account insights visibility. AWS Organizations is available to all AWS customers at no additional charge, and the service user guide has instructions for creating and configuring an organization in your AWS environment.

Understanding the management account, a delegated administrator, and other member accounts is fundamental to the multi-account visibility functionality in DevOps Guru. Before proceeding further, let’s recap these terms.

  • A management account is the AWS account you use to create your organization. You can create other accounts in your organization, invite and manage invitations for other existing accounts to join your organization, and remove accounts from your organization. The management account has wide permissions and access to accounts within the organization. It should only be used for absolutely essential administrative tasks, such as managing accounts, OUs, and organization-level policies. You can refer to the AWS Organizations FAQ for more information on management accounts.
  • When the management account gives a member account service-level administrative permissions, it becomes a delegated administrator. Because the permissions are assigned at the service level, delegated administrator’s privileges are confined to the AWS service in question (DevOps Guru, in this case). The delegated administrator manages the service on behalf of the management account, leaving the management account to focus on administrative tasks, such as account and policy management. Currently, DevOps Guru supports a single delegated administrator, which operates at the root level (i.e., at the organization level). When elevating a member account to a delegated administrator, it must be in the organization. For more information about adding a member account to an organization, see inviting an AWS account to join your organization.
  • Member accounts are accounts without any administrative privilege. An account can be a member of only one organization at a time.

Prerequisites

You must have the following to enable multi-account visibility:

  • An organization already set up in AWS Organizations. AWS Organizations is available to all AWS customers at no additional charge, and the service user guide has instructions for creating and configuring an organization in your AWS environment.
  • A member account that is in your organization and already onboarded in DevOps Guru. This account will be registered as a delegated administrator. For more information about adding a member account to an organization, see inviting an AWS account to join your organization.

Setting up multi-account insights visibility in your organization

In line with AWS Organizations’ best practices, we recommend first assigning a delegated administrator. Although a management account can view DevOps Guru insights across the organization, management accounts should be reserved for organization-level management, while the delegated administrator manages at the service level.

Registering a Delegated Administrator

A delegated administrator must be registered by the management account. The steps below assume that you have a member account to register as a delegated administrator. If your preferred account is not yet in your organization, then invite the account to join your organization.

To register a delegated administrator

  1. Log in to the DevOps Guru Console with the management account.
  2. On the welcome page, under Set up type, select Monitor applications across your organizations. If you select Monitor applications in the current AWS account, then your dashboard will display insights for the management account.
  3. Under Delegated administrator, select Register a delegated administrator (Recommended).
  4. Select Register delegated administrator to complete the process.

Register a delegated administrator. Steps 1-4 highlighted in console screenshot.
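
The same registration can also be scripted. The following is a hedged sketch using the AWS Organizations CLI, assuming the devops-guru.amazonaws.com service principal and run from the management account (the account ID is a placeholder):

aws organizations register-delegated-administrator \
    --account-id 111122223333 \
    --service-principal devops-guru.amazonaws.com

aws organizations list-delegated-administrators \
    --service-principal devops-guru.amazonaws.com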

To de-register a delegated administrator

  1. Log in to the DevOps Guru console and navigate to the Management account page.
  2. On the Management account page, select De-register administrator.

De-register a delegated Administrator. Steps 1 and 2 highlighted on console.

Viewing insights as a Delegated Administrator

As the delegated administrator, you can choose to view insights from:

  • specific accounts
  • specific OUs
  • the entire organization

To view insights from specific accounts

  1. Log in to the DevOps Guru console, and select Accounts from the dropdown menu on the dashboard.
  2. Select the search bar to display a list of member accounts.
  3. Select up to five accounts, and select anywhere outside the dropdown menu to apply your selection. To view insights from the delegated administrator account itself, simply select it from the dropdown menu.

View Insights from specific account. Steps 1-3 highlighted on dashboard screenshot.

The system health summary now will display information for the selected accounts.

To view insights from specific organizational units

  1. Log in to the DevOps Guru console, and select Organizational Units from the dropdown menu on the dashboard.
  2. Select the search bar to display the list of OUs.
  3. Select up to five OUs, and select anywhere outside of the dropdown menu to apply your selection.

View insights from specific organizational units. Steps 1-3 highlighted in dashboard screenshot.

Now the system health summary will display information for the selected OUs. Nested OUs are currently not supported, so only the accounts directly under the OU are included when an OU is selected. To include accounts in a sub-OU, select the sub-OU in addition to the parent OU.

To view insights across the entire organization

  1. Log in to the DevOps Guru console and navigate to the Insights page.
  2. On the Reactive tab, you can see a list of all the reactive insights in the organization. On the Proactive tab, you can see a list of all the proactive insights in the organization. You will notice that the table displaying the insights now has an Account ID column (highlighted in the snapshot below). This table aggregates insights from different accounts in the organization.

View insights across the entire organization.

  3. Use one or more of the following filters to find the insights that you are looking for:
    1. Choose the Reactive or Proactive tab.
    2. In the search bar, you can add an account ID, status, severity, stack, or resource name to specify a filter.

View insights across the entire organization and filter. Steps 1-3 highlighted in Insights summary page.

    3. To search by account ID, select the search bar, select Account, then select an account ID in the submenu.
    4. Choose or specify a time range to filter by insight creation time. For example, 12h shows insights created in the past 12 hours, and 1d shows the insights of the previous day. 1w will show the past week’s insights, and 1m will show the last month’s insights. Custom lets you specify another time range. The maximum time range you can use to filter insights is 180 days.

View insights across the entire organization and filter by time.

Viewing insights from the Management Account

Viewing insights from the management account is similar to viewing insights from the delegated administrator, so the process listed for the delegated administrator also applies to the management account. Although the management account can view insights across the organization, it should be reserved for running administrative tasks across various AWS services at an organization level.

Important notes

Multi-account insight visibility works at a Region level, meaning that you can only view insights across the organization within a single AWS Region. To view insights from a different AWS Region, you must change the Region from the Region dropdown menu at the top-right corner of the console.

For data security reasons, the delegated administrator can only access insights generated across the organization after the selected member account became the delegated administrator. Insights generated across the organization before the delegated administrator registration will remain inaccessible to the delegated administrator.

Conclusion

The steps detailed above show how you can quickly enable multi-account visibility to monitor application health across your entire organization.

AWS customers are now using AWS DevOps Guru to monitor and improve application performance, and you too can start monitoring your applications by following the instructions in the product documentation. Head over to the AWS DevOps Guru console to get started today.

About the authors

Ifeanyi Okafor

Ifeanyi Okafor is a Product Manager with AWS. He enjoys building products that solve customer problems at scale.

Haider Naqvi

Haider Naqvi is a Solutions Architect at AWS. He has extensive Software Development and Enterprise Architecture experience. He focuses on enabling customers to re:Invent and achieve business outcomes with AWS. He is based out of New York.

Nick Ardecky

Nick Ardecky is a Software Engineer working with the AWS DevOps Guru team. Nick enjoys building tools and visualizations that improve products and solve customer problems.

Field Notes: Building Multi-Region and Multi-Account Tools with AWS Organizations

Post Syndicated from Cody Penta original https://aws.amazon.com/blogs/architecture/field-notes-building-multi-region-and-multi-account-tools-with-aws-organizations/

It’s common to start with a single AWS account when you are beginning your cloud journey with AWS. Running operations such as creating, reading, updating, and deleting resources in a single AWS account can be straightforward with AWS application programming interfaces (APIs). As an organization grows, so does its account strategy, often splitting workloads across multiple accounts. Fortunately, AWS customers can use AWS Organizations to group these accounts into logical units, also known as organizational units (OUs), to apply common policies and deploy standard infrastructure. However, this makes it more difficult to run an API against all accounts, and moreover against every Region each account could use. How does an organization answer these questions:

  • What is every Amazon FSx backup I own?
  • How can I do an on-demand batch job that will apply to my entire organization?
  • What is every internet access point across my organization?

This blog post shows us how we can use Organizations, AWS Single Sign-On (AWS SSO), AWS CloudFormation StackSets, and various AWS APIs to effectively build multi-account and multi-region tools that can address use cases like the ones above.

Running an AWS API sequentially across hundreds of accounts—potentially, many Regions—could take hours, depending on the API you call. An important aspect we will cover throughout this solution is the importance of concurrency for these types of tools.

Overview of solution

For this solution, we have created a fictional organization called Tavern that is set up with multiple organizational units (OUs), accounts, and Regions, to reflect a real-world scenario.

Figure 1. Organization configuration example

We will set up a user with multi-factor authentication (MFA) enabled so we can sign in and access an admin user in the root account. Using this admin user, we will deploy a stack set across the organization that enables this user to assume limited permissions into each child account.

Next, we will use the Go programming language because of its native concurrency capabilities. More specifically, we will implement the pipeline concurrency pattern to build a multi-account and multi-region tool that will run APIs across our entire AWS footprint.

Additionally, we will add two common edge cases:

  • We block mass API actions to an account in a suspended OU (not pictured) and the root account.
  • We block API actions in disabled regions.

This will show us how to implement robust error handling in equally powerful tooling.

 Walkthrough

Let us separate the solution into distinct steps:

  • Create an automation user through AWS SSO.
    • This user can optionally be an IAM user or role assumed into by a third-party identity provider (such as, Azure Active Directory). Note the ARN of this identity because that is the key piece of information we will use for crafting a policy document.
  • Deploy a CloudFormation stack set across the organization that enables this user to assume limited access into each account.
    • For this blog post, we will deploy an organization-wide role with `ec2:DescribeRouteTables` permissions. Feel free to expand or change the permission set based on the type of tool you build.
  • Using Go, AWS Command Line Interface (CLI) v2, and AWS SDK for Go v2:
    1. Authenticate using AWS SSO.
    2. List every account in the organization.
    3. Assume permissions into that account.
    4.  Run an API across every Region in that account.
    5. Aggregate results for every Region.
    6. Aggregate results for every account.
    7. Report back the result.

For additional context, review this GitHub repository that contains all code and assets for this blog post.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • Multiple AWS accounts
  • AWS Organizations
  • AWS SSO (optional)
  • AWS SDK for Go v2
  • AWS CLI v2
  • Go programming knowledge (preferred), especially Go’s concurrency model
  • General programming knowledge

Create an automation user in AWS SSO

The first thing we need to do is create an identity to sign into. This can either be an AWS Identity and Access Management (IAM) user, an IAM role integrated with a third-party identity provider, or—in this case—an AWS SSO user.

  1. Log into the AWS SSO user console.
  2. Press Add user button.
  3. Fill in the appropriate information.
Figure 2. AWS SSO create user

  4. Assign the user to the appropriate group. In this case, we will assign this user to AWSControlTowerAdmins.
Figure 3. Assigning SSO user to a group

  5. Verify the user was created. (Optionally: enable MFA).
Figure 4. Verifying User Creation and MFA

Deploy a stack set across your organization

To effectively run any API across the organization, we need to deploy a common role that our AWS SSO user can assume across every account. We can use AWS CloudFormation StackSets to deploy this role at scale.

  1. Write the IAM role and associated policy document. The following is an example AWS Cloud Development Kit (AWS CDK) code for such a role. Note that orgAccount, roleName, and ssoUser in the below code will have to be replaced with your own values.
    const role = new iam.Role(this, 'TavernAutomationRole', {
      roleName: 'TavernAutomationRole',
      assumedBy: new iam.ArnPrincipal(`arn:aws:sts::${orgAccount}:assumed-role/${roleName}/${ssoUser}`),
    })
    role.addToPolicy(new PolicyStatement({
      actions: ['ec2:DescribeRouteTables'],
      resources: ['*']
    }))
  2. Log into the CloudFormation StackSets console.
  3. Press the Create StackSet button.
  4. Upload the CloudFormation template containing the common role to be deployed to the organization by your preferred method.
  5. Specify a name and (optional) description.
  6. Add any standard organization tags, and choose the Service-managed permissions option.
  7. Choose Deploy to organization, and decide whether to disable or enable automatic deployment and the appropriate account removal behavior. For this blog post, we choose to enable automatic deployment, and accounts should remove the stack when removed from the target OU.
  8. For Specify regions, choose US East (N. Virginia). Note, because this stack contains only an IAM role, and IAM is a global service, the Region choice has no effect.
  9. For Maximum concurrent accounts, choose Percent, and enter 100 (this stack is not dependent on order).
  10. For Failure tolerance, choose Number, and enter 5; this is the number of account deployment failures allowed before a total rollback happens.
  11. For Region Concurrency, choose Sequential.
  12. Review your choices, note the deployment target (should be r-*), and acknowledge that CloudFormation might create IAM resources with custom names.
  13. Press the Submit button to deploy the stack.

Configure AWS SSO for the AWS CLI

To use our organization tools, we must first configure AWS SSO locally. With the AWS CLI v2, we can run:

aws configure sso

To configure credentials:

  1. Run the preceding command in your terminal.
  2. Follow the prompted steps.
    1. Specify your AWS SSO Start URL:
    2. AWS SSO Region:
  3. Authenticate through the pop-up browser window.
  4. Navigate back to the CLI, and choose the root account (this is where our principal for IAM originates).
  5. Specify the default client region.
  6. Specify the default output format.

Note the CLI profile name. Regardless of whether you choose the autogenerated name or a custom one, we need this profile name for our upcoming code.
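
After the wizard completes, the shared config file (~/.aws/config) contains a profile block similar to the following sketch. The values shown are illustrative; the profile name tavern-automation is what the upcoming code expects:

[profile tavern-automation]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_account_id = 111122223333
sso_role_name = AWSAdministratorAccess
region = us-east-1
output = json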

Start coding to utilize the AWS SSO shared profile

After AWS SSO is configured, we can start coding the beginning part of our multi-account tool. Our first step is to list every account belonging to our organization.

// Package and imports shown for this excerpt; the full program in the GitHub
// repository also imports additional packages such as stscreds, sync, and the
// local models package.
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/ec2"
    "github.com/aws/aws-sdk-go-v2/service/organizations"
    "github.com/aws/aws-sdk-go-v2/service/sts"
)

var (
    stsc    *sts.Client
    orgc    *organizations.Client
    ec2c    *ec2.Client
    regions []string
)

// init initializes common AWS SDK clients and pulls in all enabled regions
func init() {
    cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithSharedConfigProfile("tavern-automation"))
    if err != nil {
        log.Fatal("ERROR: Unable to resolve credentials for tavern-automation: ", err)
    }

    stsc = sts.NewFromConfig(cfg)
    orgc = organizations.NewFromConfig(cfg)
    ec2c = ec2.NewFromConfig(cfg)

    // NOTE: By default, only describes regions that are enabled in the root org account, not all Regions
    resp, err := ec2c.DescribeRegions(context.TODO(), &ec2.DescribeRegionsInput{})
    if err != nil {
        log.Fatal("ERROR: Unable to describe regions", err)
    }

    for _, region := range resp.Regions {
        regions = append(regions, *region.RegionName)
    }
    fmt.Println("INFO: Listing all enabled regions:")
    fmt.Println(regions)
}

// main constructs a concurrent pipeline that pushes every account ID down
// the pipeline, where an action is concurrently run on each account and
// results are aggregated into a single json file
func main() {
    var accounts []string

    paginator := organizations.NewListAccountsPaginator(orgc, &organizations.ListAccountsInput{})
    for paginator.HasMorePages() {
        resp, err := paginator.NextPage(context.TODO())
        if err != nil {
            log.Fatal("ERROR: Unable to list accounts in this organization: ", err)
        }

        for _, account := range resp.Accounts {
            accounts = append(accounts, *account.Id)
        }
    }
    fmt.Println(accounts)

Implement concurrency into our code

With a slice of every AWS account, it’s time to concurrently run an API across all accounts. We will use some familiar Go concurrency patterns, as well as fan-out and fan-in.

// ... continued in main

    // Begin pipeline by calling gen with a list of every account
    in := gen(accounts...)

    // Fan out and create individual goroutines handling the requested action (getRoute)
    var out []<-chan models.InternetRoute
    for range accounts {
        c := getRoute(in)
        out = append(out, c)
    }

    // Fan in and collect the routing information from all goroutines
    var allRoutes []models.InternetRoute
    for n := range merge(out...) {
        allRoutes = append(allRoutes, n)
    }

In the preceding code, we called a gen() function that started construction of our pipeline. Let’s take a deeper look into this function.

// gen primes the pipeline, creating a single separate goroutine
// that will sequentially put a single account id down the channel
// gen returns the channel so that we can plug it in into the next
// stage
func gen(accounts ...string) <-chan string {
    out := make(chan string)
    go func() {
        for _, account := range accounts {
            out <- account
        }
        close(out)
    }()
    return out
}

We see that gen just initializes the pipeline, and then starts pushing account IDs down the pipeline one by one.

The next two functions are where all the heavy lifting is done. First, let’s investigate `getRoute()`.

// getRoute queries every route table in an account, including every enabled region, for a
// 0.0.0.0/0 (i.e. default route) to an internet gateway
func getRoute(in <-chan string) <-chan models.InternetRoute {
    out := make(chan models.InternetRoute)
    go func() {
        for account := range in {
            role := fmt.Sprintf("arn:aws:iam::%s:role/TavernAutomationRole", account)
            creds := stscreds.NewAssumeRoleProvider(stsc, role)

            for _, region := range regions {
                localCfg := aws.Config{
                    Region:      region,
                    Credentials: aws.NewCredentialsCache(creds),
                }

                localEc2Client := ec2.NewFromConfig(localCfg)

                paginator := ec2.NewDescribeRouteTablesPaginator(localEc2Client, &ec2.DescribeRouteTablesInput{})
                for paginator.HasMorePages() {
                    resp, err := paginator.NextPage(context.TODO())
                    if err != nil {
                        fmt.Println("WARNING: Unable to retrieve route tables from account: ", account, err)
                        out <- models.InternetRoute{Account: account}
                        close(out)
                        return
                    }

                    for _, routeTable := range resp.RouteTables {
                        for _, r := range routeTable.Routes {
                            if r.GatewayId != nil && strings.Contains(*r.GatewayId, "igw-") {
                                fmt.Println(
                                    "Account: ", account,
                                    " Region: ", region,
                                    " DestinationCIDR: ", *r.DestinationCidrBlock,
                                    " GatewayId: ", *r.GatewayId,
                                )
    
                                out <- models.InternetRoute{
                                    Account:         account,
                                    Region:          region,
                                    Vpc:             routeTable.VpcId,
                                    RouteTable:      routeTable.RouteTableId,
                                    DestinationCidr: r.DestinationCidrBlock,
                                    InternetGateway: r.GatewayId,
                                }
                            }
                        }
                    }
                }
            }

        }
        close(out)
    }()
    return out
}

A couple of key points to highlight are as follows:

for account := range in

When iterating over a channel, the current goroutine blocks, meaning we wait here until we get an account ID passed to us before continuing. We’ll keep doing this until our upstream closes the channel. In our case, our upstream closes the channel once it pushes every account ID down the channel.

role := fmt.Sprintf("arn:aws:iam::%s:role/TavernAutomationRole", account)
creds := stscreds.NewAssumeRoleProvider(stsc, role)

Here, we can reference our existing role that we deployed to every account and assume into that role with AWS Security Token Service (STS).

for _, region := range regions {

Lastly, when we have credentials into that account, we need to iterate over every region in that account to ensure we are capturing the entire global presence.

These three key areas are how we build organization-level tools. The remaining code is calling the desired API and delivering the result down to the next stage in our pipeline, where we merge all of the results.

// merge takes every go routine and "plugs" it into a common out channel
// then blocks until every input channel closes, signally that all goroutines
// are done in the previous stage
func merge(cs ...<-chan models.InternetRoute) <-chan models.InternetRoute {
    var wg sync.WaitGroup
    out := make(chan models.InternetRoute)

    output := func(c <-chan models.InternetRoute) {
        for n := range c {
            out <- n
        }
        wg.Done()
    }

    wg.Add(len(cs))
    for _, c := range cs {
        go output(c)
    }

    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}

At the end of the main function, we take our in-memory data structures representing our internet entry points and marshal it into a JSON file.

    // ... continued in main

    savedRoutes, err := json.MarshalIndent(allRoutes, "", "\t")
    if err != nil {
        fmt.Println("ERROR: Unable to marshal internet routes to JSON: ", err)
    }
    ioutil.WriteFile("routes.json", savedRoutes, 0644)

With the code in place, we can run the code with `go run main.go` inside of your preferred terminal. The command will generate results like the following:

    // ... routes.json
    {
        "Account": "REDACTED",
        "Region": "eu-north-1",
        "Vpc": "vpc-1efd6c77",
        "RouteTable": "rtb-1038a979",
        "DestinationCidr": "0.0.0.0/0",
        "InternetGateway": "igw-c1b125a8"
    },
    {
        "Account": " REDACTED ",
        "Region": "eu-north-1",
        "Vpc": "vpc-de109db7",
        "RouteTable": "rtb-e042ce89",
        "DestinationCidr": "0.0.0.0/0",
        "InternetGateway": "igw-cbd457a2"
    },
    // ...

Cleaning up

To avoid incurring future charges, delete the following resources:

  • Stack set through the CloudFormation console
  • AWS SSO user (if you created one)

Conclusion

Creating organization tools that answer difficult questions such as, “show me every internet entry point in our organization,” is possible using Organizations APIs and CloudFormation StackSets. We also learned how to use Go’s native concurrency features to build tools that scale across hundreds of accounts.

Further steps you might explore include:

  • Visiting the GitHub repo to capture the full picture.
  • Taking our sequential solution for iterating over Regions and making it concurrent.
  • Exploring the possibility of accepting functions and interfaces in stages to generalize specific pipeline features.

Thanks for taking the time to read, and feel free to leave comments.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Enable Security Hub PCI DSS standard across your organization and disable specific controls

Post Syndicated from Pablo Pagani original https://aws.amazon.com/blogs/security/enable-security-hub-pci-dss-standard-across-your-organization-and-disable-specific-controls/

At this time, enabling the PCI DSS standard from within AWS Security Hub enables this compliance framework only within the Amazon Web Services (AWS) account you are presently administering.

This blog post showcases a solution that can be used to customize the configuration and deployment of the PCI DSS compliance standard using AWS Security Hub across multiple AWS accounts and AWS Regions managed by AWS Organizations. It also demonstrates how to disable specific standards or controls that aren’t required by your organization to meet its compliance requirements. This solution can be used as a baseline for implementation when creating new AWS accounts through the use of AWS CloudFormation StackSets.

Solution overview

Figure 1 that follows shows a sample account setup using the automated solution in this blog post to enable PCI DSS monitoring and reporting across multiple AWS accounts using AWS Organizations. The hierarchy depicted is of one management account used to monitor two member accounts with infrastructure spanning across multiple Regions. Member accounts are configured to send their Security Hub findings to the designated Security Hub management account for centralized compliance management.

Figure 1: Security Hub deployment using AWS Organizations

Prerequisites

The following prerequisites must be in place in order to enable the PCI DSS standard:

  1. A designated administrator account for Security Hub.
  2. Security Hub enabled in all the desired accounts and Regions.
  3. Access to the management account for the organization. The account must have the required permissions for stack set operations.
  4. Choose the deployment targets (accounts and Regions) in which you want to enable the PCI DSS standard. Typically, you set this on the accounts where Security Hub is already enabled, or on the accounts where PCI workloads reside.
  5. (Optional) If you find standards or controls that aren’t applicable to your organization, get the Amazon Resource Names (ARNs) of the desired standards or controls to disable.

Solution Resources

The CloudFormation template that you use in the following steps contains:

Solution deployment

To set up this solution for automated deployment, stage the following CloudFormation StackSet template for rollout via the AWS CloudFormation service. The stack set runs across the organization at the root or organizational units (OUs) level of your choice. You can choose which Regions to run this solution against and also to run it each time a new AWS account is created.

To deploy the solution

  1. Open the AWS Management Console.
  2. Download the sh-pci-enabler.yaml template and save it to an Amazon Simple Storage Service (Amazon S3) bucket on the management account. Make a note of the path to use later.
  3. Navigate to CloudFormation service on the management account. Select StackSets from the menu on the left, and then choose Create StackSet.
     
    Figure 2: CloudFormation – Create StackSet

  4. On the Choose a template page, go to Specify template and select Amazon S3 URL and enter the path to the sh-pci-enabler.yaml template you saved in step 2 above. Choose Next.
     
    Figure 3: CloudFormation – Choose a template

  5. Enter a name and (optional) description for the StackSet. Choose Next.
     
    Figure 4: CloudFormation – enter StackSet details

  6. (Optional) On the Configure StackSet options page, go to Tags and add tags to identify and organize your stack set.
     
    Figure 5: CloudFormation – Configure StackSet options

  7. Choose Next.
  8. On the Set deployment options page, select the desired Regions, and then choose Next.

    Figure 6: CloudFormation – Set deployment options

  9. Review the definition and select I acknowledge that AWS CloudFormation might create IAM resources. Choose Submit.
     
    Figure 7: CloudFormation – Review, acknowledge, and submit

  10. After you choose Submit, you can monitor the creation of the StackSet from the Operations tab to ensure that deployment is successful.
     
    Figure 8: CloudFormation – Monitor creation of the StackSet

Disable standards that don’t apply to your organization

To disable a standard that isn’t required by your organization, you can use the same template and steps described above, with the few changes explained below.

To disable standards

  1. Start by opening the SH-PCI-enabler.yaml template and saving a copy under a new name.
  2. In the template, look for sh.batch_enable_standards. Change it to sh.batch_disable_standards.
  3. Locate standardArn=f"arn:aws:securityhub:{region}::standards/pci-dss/v/3.2.1" and change it to the desired ARN. To find the correct standard ARN, you can use the AWS Command Line Interface (AWS CLI) or AWS CloudShell to run the command aws securityhub describe-standards.
Figure 9: Describe Security Hub standards using CLI

Note: Be sure to keep the f before the quotation marks and replace any Region you might get from the command with the {region} variable. If the CIS standard doesn’t have the Region defined, remove the variable.
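
If you want to check the available standard ARNs programmatically, or experiment with the disable call outside of the template, a boto3 sketch such as the following covers the relevant API calls. Note that disabling works against the subscription ARN of an enabled standard; the ARN shown in the commented call is only a placeholder.

import boto3

securityhub = boto3.client("securityhub")

# List every standard that Security Hub supports, together with its ARN.
for standard in securityhub.describe_standards()["Standards"]:
    print(standard["StandardsArn"], "-", standard["Name"])

# Disabling a standard is done against the *subscription* ARN of a standard
# that is already enabled in the account and Region.
for subscription in securityhub.get_enabled_standards()["StandardsSubscriptions"]:
    print(subscription["StandardsSubscriptionArn"])

# Call shape used after switching the template to sh.batch_disable_standards.
# The ARN below is a placeholder; use a value printed by the loop above.
# securityhub.batch_disable_standards(
#     StandardsSubscriptionArns=[
#         "arn:aws:securityhub:us-east-1:111122223333:subscription/pci-dss/v/3.2.1"
#     ]
# )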

Disable controls that don’t apply to your organization

When you enable a standard, all of the controls for that standard are enabled by default. If necessary, you can disable specific controls within an enabled standard.

When you disable a control, the check for the control is no longer performed, no additional findings are generated for that control, and the related AWS Config rules that Security Hub created are removed.

Security Hub is a regional service. When you disable or enable a control, the change is applied in the Region that you specify in the API request. Also, when you disable an entire standard, Security Hub doesn’t track which controls were disabled. If you enable the standard again later, all of the controls in that standard will be enabled.

To disable a list of controls

  1. Open the Security Hub console and select Security standards from the left menu. For each check you want to disable, select Finding JSON and make a note of each StandardsControlArn to add to your list.

    Note: Another option is to use the DescribeStandardsControls API to create a list of StandardsControlArn to be disabled.

     

    Figure 10: Security Hub console – finding JSON download option

  2. Download the StackSet SH-disable-controls.yaml template to your computer.
  3. Use a text editor to open the template file.
  4. Locate the list of controls to disable, and edit the template to replace the provided list of StandardsControlArn with your own list of controls to disable, as shown in the following example. Use a comma as the delimiter for each ARN.
    controls=f"arn:aws:securityhub:{region}:{account_id}:control/aws-foundational-security-best-practices/v/1.0.0/ACM.1, arn:aws:securityhub:{region}:{account_id}:control/aws-foundational-security-best-practices/v/1.0.0/APIGateway.1, arn:aws:securityhub:{region}:{account_id}:control/aws-foundational-security-best-practices/v/1.0.0/APIGateway.2"
    

  5. Save your changes to the template.
  6. Follow the same steps you used to deploy the PCI DSS standard, but use your edited template.

Note: The region and account_id are set as variables, so you decide in which accounts and Regions to disable the controls from the StackSet deployment options (step 8 in Deploy the solution).
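
As an alternative to collecting each StandardsControlArn by hand from the console, a boto3 sketch along the following lines lists the controls of an enabled standard and disables a specific one directly through the Security Hub API. The subscription ARN and the ACM.1 suffix used for matching are placeholders for illustration.

import boto3

securityhub = boto3.client("securityhub")

# Placeholder: the subscription ARN of an enabled standard in this account
# and Region. Look up the real value with get_enabled_standards().
subscription_arn = (
    "arn:aws:securityhub:us-east-1:111122223333:"
    "subscription/aws-foundational-security-best-practices/v/1.0.0"
)

# Collect every StandardsControlArn for the enabled standard.
control_arns = []
next_token = None
while True:
    kwargs = {"StandardsSubscriptionArn": subscription_arn}
    if next_token:
        kwargs["NextToken"] = next_token
    page = securityhub.describe_standards_controls(**kwargs)
    control_arns.extend(c["StandardsControlArn"] for c in page["Controls"])
    next_token = page.get("NextToken")
    if not next_token:
        break

# Disable one specific control; update_standards_control acts on a single
# control at a time. The ACM.1 suffix is only an example.
for arn in control_arns:
    if arn.endswith("/ACM.1"):
        securityhub.update_standards_control(
            StandardsControlArn=arn,
            ControlStatus="DISABLED",
            DisabledReason="Not applicable for our environment",
        )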

Troubleshooting

The following are issues you might encounter when you deploy this solution:

  1. StackSets deployment errors: Review the troubleshooting guide for CloudFormation StackSets.
  2. Dependency issues: To modify the status of any standard or control, Security Hub must be enabled first. If it’s not enabled, the operation will fail. Make sure you meet the prerequisites listed earlier in this blog post. Use Amazon CloudWatch Logs to analyze possible errors from the Lambda function to help identify the cause.
  3. StackSets race condition error: When creating new accounts, the Organizations service enables Security Hub in the account, and invokes the stack sets during account creation. If the stack set runs before the Security Hub service is enabled, the stack set can’t enable the PCI standard. If this happens, you can fix it by adding the Amazon EventBridge rule as shown in SH-EventRule-PCI-enabler.yaml. The EventBridge rule invokes the SHLambdaFunctionEB Lambda function after Security Hub is enabled.

Conclusion

The AWS Security Hub PCI DSS standard is fundamental for any company involved with storing, processing, or transmitting cardholder data. In this post, you learned how to enable or disable a standard or specific controls in all your accounts throughout the organization to proactively monitor your AWS resources. Frequently reviewing failed security checks, prioritizing their remediation, and aiming for a Security Hub score of 100 percent can help improve your security posture.

If you have feedback about this post, submit comments in the Comments section below. If you have questions, please start a new thread on the Security Hub forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Pablo Pagani

Pablo is the Latam Security Manager for AWS Professional Services based in Buenos Aires, Argentina. He developed his passion for computers while writing his first lines of code in BASIC using a Talent MSX.

Author

Rogerio Kasa

Rogerio is a Senior SRC Consultant based in Sao Paulo, Brazil. He has more than 20 years experience in information security, including 11 years in financial services as a local information security officer. As a security consultant, he helps customers improve their security posture by understanding business goals and creating controls aligned with their risk strategy.

How to restrict IAM roles to access AWS resources from specific geolocations using AWS Client VPN

Post Syndicated from Artem Lovan original https://aws.amazon.com/blogs/security/how-to-restrict-iam-roles-to-access-aws-resources-from-specific-geolocations-using-aws-client-vpn/

You can improve your organization’s security posture by enforcing access to Amazon Web Services (AWS) resources based on IP address and geolocation. For example, users in your organization might bring their own devices, which might require additional security authorization checks and posture assessment in order to comply with corporate security requirements. Enforcing access to AWS resources based on geolocation can help you to automate compliance with corporate security requirements by auditing the connection establishment requests. In this blog post, we walk you through the steps to allow AWS Identity and Access Management (IAM) roles to access AWS resources only from specific geographic locations.

Solution overview

AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and your on-premises network resources. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client. A client VPN session terminates at the Client VPN endpoint, which is provisioned in your Amazon Virtual Private Cloud (Amazon VPC) and therefore enables a secure connection to resources running inside your VPC network.

This solution uses Client VPN to implement geolocation authentication rules. When a client VPN connection is established, authentication is implemented at the first point of entry into the AWS Cloud. It’s used to determine if clients are allowed to connect to the Client VPN endpoint. You configure an AWS Lambda function as the client connect handler for your Client VPN endpoint. You can use the handler to run custom logic that authorizes a new connection. When a user initiates a new client VPN connection, the custom logic is the point at which you can determine the geolocation of this user. In order to enforce geolocation authorization rules, you need:

  • AWS WAF to determine the user’s geolocation based on their IP address.
  • A network address translation (NAT) gateway whose Elastic IP address serves as the public origin IP address for all requests to your AWS resources.
  • An IAM policy that is attached to the IAM role and that AWS evaluates against the request origin IP address, so that access is allowed only when the request originates from the NAT gateway IP address.

One of the key features of AWS WAF is the ability to allow or block web requests based on country of origin. When the client connection handler Lambda function is invoked by your Client VPN endpoint, the Client VPN service invokes the Lambda function on your behalf. The Lambda function receives the device, user, and connection attributes. The user’s public IP address is one of the device attributes that are used to identify the user’s geolocation by using the AWS WAF geolocation feature. Only connections that are authorized by the Lambda function are allowed to connect to the Client VPN endpoint.

Note: The accuracy of the IP address to country lookup database varies by region. Based on recent tests, the overall accuracy for the IP address to country mapping is 99.8 percent. We recommend that you work with regulatory compliance experts to decide if your solution meets your compliance needs.

A NAT gateway allows resources in a private subnet to connect to the internet or other AWS services, but prevents a host on the internet from connecting to those resources. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. Since an Elastic IP address is static, any request originating from a private subnet will be seen with a public IP address that you can trust because it will be the elastic IP address of your NAT gateway.

AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. In an IAM policy, you can define the global condition key aws:SourceIp to restrict API calls to your AWS resources from specific IP addresses.

Note: Throughout this post, the user is authenticating with a SAML identity provider (IdP) and assumes an IAM role.

Figure 1 illustrates the authentication process when a user tries to establish a new Client VPN connection session.

Figure 1: Enforce connection to Client VPN from specific geolocations

Let’s look at how the process illustrated in Figure 1 works.

  1. The user device initiates a new client VPN connection session.
  2. The Client VPN service redirects the user to authenticate against an IdP.
  3. After user authentication succeeds, the client connects to the Client VPN endpoint.
  4. The Client VPN endpoint invokes the Lambda function synchronously. The function is invoked after device and user authentication, and before the authorization rules are evaluated.
  5. The Lambda function extracts the public-ip device attribute from the input and makes an HTTPS request to the Amazon API Gateway endpoint, passing the user’s public IP address in the X-Forwarded-For header. Because you’re using AWS WAF to protect API Gateway, and have geographic match conditions configured, a response with the status code 200 is returned only if the user’s public IP address originates from an allowed country of origin. Additionally, AWS WAF has another rule configured that blocks all requests to API Gateway if the request doesn’t originate from one of the NAT gateway IP addresses. Because Lambda is deployed in a VPC, it has a NAT gateway IP address, and therefore the request isn’t blocked by AWS WAF. To learn more about running a Lambda function in a VPC, see Configuring a Lambda function to access resources in a VPC. The following code example showcases Lambda code that performs the described step.

    Note: Optionally, you can implement additional controls by creating specific authorization rules. Authorization rules act as firewall rules that grant access to networks. You should have an authorization rule for each network for which you want to grant access. To learn more, see Authorization rules.

  6. The Lambda function returns the authorization request response to Client VPN.
  7. When the Lambda function—shown following—returns an allow response, Client VPN establishes the VPN session.
import os
import http.client


# DNS name of the API endpoint and the resource path, injected as
# environment variables when the function is deployed.
cloud_front_url = os.getenv("ENDPOINT_DNS")
endpoint = os.getenv("ENDPOINT")
success_status_codes = [200]


def build_response(allow, status):
    # Response schema expected by the Client VPN client connect handler.
    return {
        "allow": allow,
        "error-msg-on-failed-posture-compliance": "Error establishing connection. Please contact your administrator.",
        "posture-compliance-statuses": [status],
        "schema-version": "v1"
    }


def handler(event, context):
    # Public IP address of the connecting device, provided by Client VPN.
    ip = event['public-ip']

    # Forward the user's IP address to the protected API endpoint; AWS WAF
    # evaluates its geographic match rule against the X-Forwarded-For header.
    conn = http.client.HTTPSConnection(cloud_front_url)
    conn.request("GET", f'/{endpoint}', headers={'X-Forwarded-For': ip})
    r1 = conn.getresponse()
    conn.close()

    status_code = r1.status

    # A 200 response means the IP address originates from an allowed country.
    if status_code in success_status_codes:
        print("User's IP is based from an allowed country. Allowing the connection to VPN.")
        return build_response(True, 'compliant')

    print("User's IP is NOT based from an allowed country. Blocking the connection to VPN.")
    return build_response(False, 'quarantined')
After the client VPN session is established successfully, the request from the user device flows through the NAT gateway. The originating source IP address is recognized, because it is the Elastic IP address associated with the NAT gateway. An IAM policy is defined that denies any request to your AWS resources that doesn’t originate from the NAT gateway Elastic IP address. By attaching this IAM policy to users, you can control which AWS resources they can access.

Figure 2 illustrates the process of a user trying to access an Amazon Simple Storage Service (Amazon S3) bucket.

Figure 2: Enforce access to AWS resources from specific IPs

Let’s look at how the process illustrated in Figure 2 works.

  1. A user signs in to the AWS Management Console by authenticating against the IdP and assumes an IAM role.
  2. Using the IAM role, the user makes a request to list Amazon S3 buckets. The IAM policy of the user is evaluated to form an allow or deny decision.
  3. If the request is allowed, an API request is made to Amazon S3.

The aws:SourceIp condition key is used in a policy to deny requests from principals if the origin IP address isn’t the NAT gateway IP address. However, this policy also denies access if an AWS service makes calls on a principal’s behalf. For example, when you use AWS CloudFormation to provision a stack, it provisions resources by using its own IP address, not the IP address of the originating request. In this case, you use aws:SourceIp with the aws:ViaAWSService key to ensure that the source IP address restriction applies only to requests made directly by a principal.

IAM deny policy

This IAM policy doesn’t allow any actions on its own. What the policy does is deny any action on any resource if the source IP address doesn’t match any of the IP addresses in the condition. Use this policy in combination with other policies that allow specific actions.
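
The statement below is a sketch of what such a deny policy can look like, expressed as a Python dictionary so that it can be created with boto3. The Elastic IP addresses and the policy name are placeholders, and the exact policy produced by the CloudFormation template may differ.

import json
import boto3

# Placeholder NAT gateway Elastic IPs; replace with the outputs of your stack.
nat_gateway_ips = ["198.51.100.10/32", "198.51.100.11/32"]

deny_if_not_nat_ip = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIfNotFromNatGatewayIp",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # Deny only requests made directly by a principal...
                "BoolIfExists": {"aws:ViaAWSService": "false"},
                # ...whose source IP is not one of the NAT gateway addresses.
                "NotIpAddress": {"aws:SourceIp": nat_gateway_ips},
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ClientVpnDenyIfNotNatIP",
    PolicyDocument=json.dumps(deny_if_not_nat_ip),
)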

Prerequisites

Make sure that you have the following in place before you deploy the solution:

Implementation and deployment details

In this section, you create a CloudFormation stack that creates AWS resources for this solution. To start the deployment process, select the following Launch Stack button.

Select the Launch Stack button to launch the template

You also can download the CloudFormation template if you want to modify the code before the deployment.

The template in Figure 3 takes several parameters. Let’s go over the key parameters.

Figure 3: CloudFormation stack parameters

The key parameters are:

  • AuthenticationOption: Information about the authentication method to be used to authenticate clients. You can choose either AWS Managed Microsoft AD or IAM SAML identity provider for authentication.
  • AuthenticationOptionResourceIdentifier: The ID of the AWS Managed Microsoft AD directory to use for Active Directory authentication, or the Amazon Resource Number (ARN) of the SAML provider for federated authentication.
  • ServerCertificateArn: The ARN of the server certificate. The server certificate must be provisioned in AWS Certificate Manager (ACM).
  • CountryCodes: A string of comma-separated country codes. For example: US,GB,DE. The country codes must be two-letter (alpha-2) country codes from the ISO 3166 international standard.
  • LambdaProvisionedConcurrency: Provisioned concurrency for the client connection handler. We recommend that you configure provisioned concurrency for the Lambda function to enable it to scale without fluctuations in latency.

All other input fields have default values that you can either accept or override. Once you provide the parameter input values and reach the final screen, choose Create stack to deploy the CloudFormation stack.
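
If you prefer to deploy the stack from automation rather than through the console, a boto3 call along these lines can pass the same parameters. The template URL and all parameter values are placeholders; use the allowed values defined by the template for your environment.

import boto3

cfn = boto3.client("cloudformation")

# Placeholder template location and parameter values; substitute your own
# SAML provider ARN, ACM certificate ARN, country codes, and the
# AuthenticationOption value that the template expects.
cfn.create_stack(
    StackName="client-vpn-geolocation",
    TemplateURL="https://s3.amazonaws.com/DOC-EXAMPLE-BUCKET/client-vpn-geolocation.yaml",
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
    Parameters=[
        {"ParameterKey": "AuthenticationOption",
         "ParameterValue": "federated-authentication"},
        {"ParameterKey": "AuthenticationOptionResourceIdentifier",
         "ParameterValue": "arn:aws:iam::111122223333:saml-provider/ExampleIdP"},
        {"ParameterKey": "ServerCertificateArn",
         "ParameterValue": "arn:aws:acm:us-east-1:111122223333:certificate/a1b2c3d4-example"},
        {"ParameterKey": "CountryCodes", "ParameterValue": "US,GB,DE"},
        {"ParameterKey": "LambdaProvisionedConcurrency", "ParameterValue": "5"},
    ],
)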

This template creates several resources in your AWS account, as follows:

  • A VPC and associated resources, such as InternetGateway, Subnets, ElasticIP, NatGateway, RouteTables, and SecurityGroup.
  • A Client VPN endpoint, which provides connectivity to your VPC.
  • A Lambda function, which is invoked by the Client VPN endpoint to determine the country origin of the user’s IP address.
  • An API Gateway for the Lambda function to make an HTTPS request.
  • AWS WAF in front of API Gateway, which only allows requests to go through to API Gateway if the user’s IP address is based in one of the allowed countries.
  • A deny policy with a NAT gateway IP address condition. Attaching this policy to a role or user ensures that the user can’t access your AWS resources unless they are connected to your Client VPN.

Note: CloudFormation stack deployment can take up to 20 minutes to provision all AWS resources.

After creating the stack, there are two outputs in the Outputs section, as shown in Figure 4.

Figure 4: CloudFormation stack outputs

  • ClientVPNConsoleURL: The URL where you can download the client VPN configuration file.
  • IAMRoleClientVpnDenyIfNotNatIP: The IAM policy to be attached to an IAM role or IAM user to enforce access control.

Attach the IAMRoleClientVpnDenyIfNotNatIP policy to a role

This policy is used to enforce access to your AWS resources based on geolocation. Attach this policy to the role that you are using for testing the solution. You can use the steps in Adding IAM identity permissions to do so.

Configure the AWS client VPN desktop application

When you open the URL that you see in ClientVPNConsoleURL, you see the newly provisioned Client VPN endpoint. Select Download Client Configuration to download the configuration file.

Figure 5: Client VPN endpoint

Confirm the download request by selecting Download.

Figure 6: Client VPN Endpoint – Download Client Configuration

To connect to the Client VPN endpoint, follow the steps in Connect to the VPN. After a successful connection is established, you should see the status Connected in your AWS Client VPN desktop application.

Figure 7: AWS Client VPN desktop application – established VPN connection

Troubleshooting

If you can’t establish a Client VPN connection, here are some things to try:

  • Confirm that the Client VPN connection has been successfully established. It should be in the Connected state. To troubleshoot connection issues, you can follow this guide.
  • If the connection isn’t establishing, make sure that your machine has TCP port 35001 available. This is the port used for receiving the SAML assertion.
  • Validate that the user you’re using for testing is a member of the correct SAML group on your IdP.
  • Confirm that the IdP is sending the right details in the SAML assertion. You can use browser plugins, such as SAML-tracer, to inspect the information received in the SAML assertion.

Test the solution

Now that you’re connected to Client VPN, open the console, sign in to your AWS account, and navigate to the Amazon S3 page. Because you’re connected to the VPN, your origin IP address is one of the NAT gateway IPs, and the request is allowed. You can see your S3 buckets, if any exist.

Figure 8: Amazon S3 service console view – user connected to AWS Client VPN

Now that you’ve verified that you can access your AWS resources, go back to the Client VPN desktop application and disconnect your VPN connection. Once the VPN connection is disconnected, go back to the Amazon S3 page and reload it. This time you should see an error message that you don’t have permission to list buckets, as shown in Figure 9.

Figure 9: Amazon S3 service console view – user is disconnected from AWS Client VPN

Access has been denied because your origin public IP address is no longer one of the NAT gateway IP addresses. As mentioned earlier, since the policy denies any action on any resource without an established VPN connection to the Client VPN endpoint, access to all your AWS resources is denied.

Scale the solution in AWS Organizations

With AWS Organizations, you can centrally manage and govern your environment as you grow and scale your AWS resources. You can use Organizations to apply policies that give your teams the freedom to build with the resources they need, while staying within the boundaries you set. By organizing accounts into organizational units (OUs), which are groups of accounts that serve an application or service, you can apply service control policies (SCPs) to create targeted governance boundaries for your OUs. To learn more about Organizations, see AWS Organizations terminology and concepts.

SCPs help you ensure that the accounts across your OUs stay within your organization’s access control guidelines. In particular, these are the key benefits of using SCPs in your AWS Organizations:

  • You don’t have to create an IAM policy with each new account, but instead create one SCP and apply it to one or more OUs as needed.
  • You don’t have to apply the IAM policy to every IAM user or role, existing or new.
  • This solution can be deployed in a separate account, such as a shared infrastructure account. This helps to decouple infrastructure tooling from business application accounts.

The following figure, Figure 10, illustrates the solution in an Organizations environment.

Figure 10: Use SCPs to enforce policy across many AWS accounts

The Client VPN account is the account that the solution is deployed into. This account can also be used for other networking-related services. The SCP is created in the AWS Organizations management account and attached to one or more OUs. This allows you to centrally control access to your AWS resources.

Let’s review the new condition that’s added to the IAM policy:

"ArnNotLikeIfExists": {
    "aws:PrincipalARN": [
    "arn:aws:iam::*:role/service-role/*"
    ]
}

The ArnNotLikeIfExists condition on the aws:PrincipalARN key exempts service roles, so AWS services can still communicate with other AWS services even though those calls won’t have a NAT gateway IP address as the source IP address. For instance, this applies when a Lambda function needs to read a file from your S3 bucket.

Note: Appending policies to existing resources might cause an unintended disruption to your application. Consider testing your policies in a test environment or on non-critical resources before applying them to production resources. You can do that by attaching the SCP to a specific OU or to an individual AWS account.

Cleanup

After you’ve tested the solution, you can clean up all the created AWS resources by deleting the CloudFormation stack.

Conclusion

In this post, we showed you how you can restrict IAM users and roles so that they can access AWS resources only from specific geographic locations. You used Client VPN to allow users to establish a client VPN connection from a desktop. You used a client connect handler (a Lambda function), and API Gateway with AWS WAF, to identify the user’s geolocation. NAT gateway IPs served as trusted source IPs, and an IAM policy protects access to your AWS resources. Lastly, you learned how to scale this solution to many AWS accounts with Organizations.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Artem Lovan

Artem is a Senior Solutions Architect based in New York. He helps customers architect and optimize applications on AWS. He has been involved in IT at many levels, including infrastructure, networking, security, DevOps, and software development.

Author

Faiyaz Desai

Faiyaz leads a solutions architecture team supporting cloud-native customers in New York. His team guides customers in their modernization journeys through business and technology strategies, architectural best practices, and customer innovation. Faiyaz’s focus areas include unified communication, customer experience, network design, and mobile endpoint security.

How ERGO implemented an event-driven security remediation architecture on AWS

Post Syndicated from Adam Sikora original https://aws.amazon.com/blogs/architecture/how-ergo-implemented-an-event-driven-security-remediation-architecture-on-aws/

ERGO is one of the major insurance groups in Germany and Europe. Within the ERGO Group, ERGO Technology & Services S.A. (ET&S), part of the ET&SM holding, has competencies in digital transformation and know-how in creating and implementing complex IT systems, with a focus on the quality of solutions and a portfolio aligned with the entire value chain of the insurance market.

Business Challenge and Solution

ERGO has a multi-account AWS environment where each project team subscribes to a set of AWS accounts that conforms to workload requirements and security best practices. As ERGO began its cloud journey, CIS Foundations Benchmark Standard was used as the key indicator for measuring compliance. The report showed significant room for security posture improvements. ERGO was looking for a solution that could enable the management of security events at scale. At the same time, they needed to centralize the event response and remediation in near-real time. The goal was to improve the CIS compliance metric and overall security posture.

Architecture

ERGO uses AWS Organizations to centrally govern the multi-account AWS environment. Integration of AWS Security Hub with AWS Organizations enables ERGO to designate ERGO’s Security Account as the Security Hub administrator/primary account. Other organization accounts are automatically registered as Security Hub member accounts to send events to the Security Account.

An important aspect of the workflow is to maintain segregation of duties and separation of environments. ERGO uses two separate AWS accounts to implement automatic finding remediation:

  • Security Account – this is the primary account with Security Hub where security alerts (findings) from all the AWS accounts of the project are gathered.
  • Service Account – this is the account that can take action on target project (member) AWS accounts. ERGO uses AWS Lambda functions to run remediation actions through AWS Identity and Access Management (IAM) permissions, VPC resources actions, and more.

Within the Security Account, AWS Security Hub serves as the event aggregation solution that gathers multi-account findings from AWS services such as Amazon GuardDuty. ERGO was able to centralize the security findings. But they still needed to develop a solution that routed the filtered, actionable events to the Service Account. The solution had to automate the response to these events based on ERGO’s security policy. ERGO built this solution with the help of Amazon CloudWatch, AWS Step Functions, and AWS Lambda.

ERGO used the integration of AWS Security Hub with Amazon CloudWatch to send all the security events to CloudWatch. The filtering logic of events was managed at two levels. At the first level, ERGO used CloudWatch Events rules that match event patterns to refine the types of events ERGO wanted to focus on.
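
A first-level filter of this kind can be expressed as an event pattern on a CloudWatch Events (now Amazon EventBridge) rule. The sketch below illustrates the idea by matching imported Security Hub findings with high or critical severity; the rule name, ARNs, and pattern details are placeholders rather than ERGO's actual configuration.

import json
import boto3

events = boto3.client("events")

# Sketch of a first-level filter: match Security Hub findings imported into
# the Security Account whose severity label is HIGH or CRITICAL.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["HIGH", "CRITICAL"]}
        }
    },
}

events.put_rule(
    Name="security-hub-findings-filter",
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

# The rule's target would be the Step Functions state machine (ARNs are placeholders).
events.put_targets(
    Rule="security-hub-findings-filter",
    Targets=[{
        "Id": "step-function-target",
        "Arn": "arn:aws:states:eu-central-1:111122223333:stateMachine:SecurityEventWorkflow",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeInvokeStepFunctionsRole",
    }],
)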

The second level of filtering logic was more nuanced and related to the remediation action ERGO wanted to take on a detected event. ERGO chose AWS Step Functions to build a workflow that enabled them to further filter the events, in addition to matching them to the suitable remediation action.

Choosing AWS Step Functions enabled ERGO to orchestrate multiple steps. They could also respond to errors in the overall workflow. For example, one of the issues that ERGO encountered was the sporadic failure of the Archival Lambda function, caused by Security Hub API rate throttling.

ERGO evaluated several workarounds to deal with this situation. They considered using the automatic retries capability of the AWS SDK to make the API call in the Archival function. However, the built-in mechanism was not sufficient in this case. Another option for dealing with the rate limit was to throttle the Archival Lambda functions by applying a low reserved concurrency. Another possibility was to batch the events to be SUPPRESSED and process them one batch at a time. The benefit was making a single API call for several findings at a time.

After much consideration, ERGO decided to use the “retry on error” mechanism of the Step Function to circumvent this problem. This allowed ERGO to manage the error handling directly in the workflow logic. It wasn’t necessary to change the remediation and archival logic of the Lambda functions. This was a huge advantage. Writing and maintaining error handling logic in each one of the Lambda functions would have been time-intensive and complicated.
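
In Amazon States Language, that retry-on-error behavior is declared on the task state itself. The fragment below, written as a Python dictionary that would be serialized into the state machine definition, is only a sketch; the state resource, error names, and timings are illustrative rather than ERGO's actual values.

# Sketch of an ASL task state (as a Python dict) that retries the Archival
# Lambda invocation when the underlying Security Hub API call is throttled.
archive_finding_state = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:eu-central-1:111122223333:function:ArchiveFinding",
    "Retry": [
        {
            # Error names are illustrative; match whatever the function raises
            # (or the Lambda service errors) when the API is throttled.
            "ErrorEquals": ["TooManyRequestsException", "ThrottlingException"],
            "IntervalSeconds": 5,
            "MaxAttempts": 5,
            "BackoffRate": 2.0,
        }
    ],
    "End": True,
}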

Additionally, the remediation actions had to be configured and run from the Service Account. That means the Step Function in the Security Account had to trigger a cross-account resource. ERGO had to find a way to integrate the Remediation Lambda in the Service Account with the state machine of the Security Account. ERGO achieved this integration using a Proxy Lambda in the Security Account.

The Proxy Lambda resides in the Security Account and is initiated by the Step Function. It takes the function name and function version as arguments to start the Remediation function in the Service Account.

The Remediation functions in the Service Account have permission to take action on Project accounts. As the next step, the Remediation function is invoked on the impacted accounts. The Step Function filters the events and passes the account ID to the Proxy Lambda, which in turn passes this argument to the Remediation Lambda. The Remediation function runs the actions on the Project accounts and returns the output to the Proxy Lambda, which then passes it back to the Step Function.

The role that the Remediation Lambda assumes using the AssumeRole mechanism is an organization-level role. It is deployed in every account and has the permissions required to perform the remediation.
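
A sketch of that hand-off is shown below: a Proxy Lambda in the Security Account invokes the Remediation function in the Service Account, and the Remediation function assumes the organization-level role in the impacted Project account before acting on it. The function names, role name, event fields, and the remediation action itself are placeholders, not ERGO's actual implementation.

import json
import boto3


def proxy_handler(event, context):
    """Proxy Lambda (Security Account): forward the request to the
    Remediation function in the Service Account."""
    lambda_client = boto3.client("lambda")
    response = lambda_client.invoke(
        # The Step Function passes the remediation function name (a cross-account
        # ARN in the Service Account) and its version; both are placeholders and
        # cross-account invocation requires a resource-based policy on the target.
        FunctionName=event["function_name"],
        Qualifier=event["function_version"],
        Payload=json.dumps({"account_id": event["account_id"], "finding": event.get("finding")}),
    )
    return json.loads(response["Payload"].read())


def remediation_handler(event, context):
    """Remediation Lambda (Service Account): assume the organization-level
    role in the impacted Project account and run the remediation."""
    sts = boto3.client("sts")
    credentials = sts.assume_role(
        RoleArn=f"arn:aws:iam::{event['account_id']}:role/OrgRemediationRole",
        RoleSessionName="security-hub-remediation",
    )["Credentials"]

    # Example remediation in the Project account (illustrative only):
    # turn on account-level S3 public access blocking.
    s3control = boto3.client(
        "s3control",
        aws_access_key_id=credentials["AccessKeyId"],
        aws_secret_access_key=credentials["SecretAccessKey"],
        aws_session_token=credentials["SessionToken"],
    )
    s3control.put_public_access_block(
        AccountId=event["account_id"],
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    return {"account_id": event["account_id"], "status": "REMEDIATED"}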

ERGO Architecture

Figure 1. Technical Solution implementation

  1. Security Hub service in ERGO Project accounts sends security findings to Administrative Account.
  2. Findings are aggregated and sent to CloudWatch Events for filtering.
  3. CloudWatch rules invoke Step Functions as the target. Step Functions process security events based on the event type and treatment required as per CIS Standards.
  4. For events that need to be suppressed without any dependency on the Project Accounts, the Step Function invokes a Lambda function to archive the findings.
  5. For events that need to be executed on the Project accounts, a Step Function invokes a Proxy Lambda with required parameters.
  6. Proxy Lambda in turn, invokes a cross-account Remediation function in Service Account. This has the permissions to run actions in Project accounts.
  7. Based on the event type, corresponding remediation action is run on the impacted Project Account.
  8. Remediation function passes the execution result back to Proxy Lambda to complete the Security event workflow.

Failed remediations are manually resolved in exceptional conditions.

Summary

By implementing this event-driven solution, ERGO was able to increase and maintain automated compliance with the CIS AWS Foundations Benchmark standard at about 95%. The remaining findings were evaluated on a case-by-case basis, per specific project requirements. This measurable improvement in ERGO's compliance posture was achieved with an end-to-end serverless workflow, which offloaded ongoing platform maintenance efforts from the ERGO cloud security team. Working closely with its AWS account and service teams, ERGO will continue to evaluate and make improvements to the architecture.

Field Notes: Enroll Existing AWS Accounts into AWS Control Tower

Post Syndicated from Kishore Vinjam original https://aws.amazon.com/blogs/architecture/field-notes-enroll-existing-aws-accounts-into-aws-control-tower/

Originally published 21 April 2020 to the Field Notes blog, and updated in August 2020 with new prechecks to the account enrollment script. 

Since the launch of AWS Control Tower, customers have been asking for the ability to deploy AWS Control Tower in their existing AWS Organizations and to extend governance to those accounts in their organization.

We are happy that you can now deploy AWS Control Tower in your existing AWS Organizations. The accounts that you launched before deploying AWS Control Tower, what we refer to as unenrolled accounts, remain outside AWS Control Tower's governance by default. These accounts must be enrolled in AWS Control Tower explicitly.

When you enroll an account into AWS Control Tower, it deploys baselines and additional guardrails to enable continuous governance on your existing AWS accounts. However, you must perform proper due diligence before enrolling an account. Refer to the Things to Consider section below for additional information.

In this blog, I show you how to programmatically enroll your existing AWS accounts, and the accounts within unregistered OUs in your AWS organization, into AWS Control Tower.

Background

Here’s a quick review of some terms used in this post:

  • The enrollment script is the Python script provided in this post. It interacts with multiple AWS services to identify, validate, and enroll the existing unmanaged accounts into AWS Control Tower.
  • An unregistered organizational unit (OU) is created through AWS Organizations. AWS Control Tower does not manage this OU.
  • An unenrolled account is an existing AWS account that was created outside of AWS Control Tower. It is not managed by AWS Control Tower.
  • A registered organizational unit (OU) is an OU that was created in the AWS Control Tower service. It is managed by AWS Control Tower.
  • When an OU is registered with AWS Control Tower, it means that specific baselines and guardrails are applied to that OU and all of its accounts.
  • An AWS Account Factory account is an AWS account provisioned using account factory in AWS Control Tower.
  • Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud.
  • AWS Service Catalog allows you to centrally manage commonly deployed IT services. In the context of this blog, account factory uses AWS Service Catalog to provision new AWS accounts.
  • AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS.
  • AWS Single Sign-On (SSO) makes it easy to centrally manage access to multiple AWS accounts. It also provides users with single sign-on access to all their assigned accounts from one place.

Things to Consider

Enrolling an existing AWS account into AWS Control Tower involves moving an unenrolled account into a registered OU. The Python script provided in this blog allows you to enroll your existing AWS accounts into AWS Control Tower. However, it doesn’t have much context around what resources are running on these accounts. It assumes that you validated the account services before running this script to enroll the account.

Some guidelines to check before you decide to enroll the accounts into AWS Control Tower.

  1. An AWSControlTowerExecution role must be created in each account. If you are using the script provided in this solution, it creates the role automatically for you.
  2. If you have a default VPC in the account, the enrollment process tries to delete it. If any resources are present in the VPC, the account enrollment fails.
  3. If AWS Config was ever enabled on the account you enroll, a default config recorder and delivery channel were created. Delete the configuration-recorder and delivery channel for the account enrollment to work.
  4. Start with enrolling the dev/staging accounts to get a better understanding of any dependencies or impact of enrolling the accounts in your environment.
  5. Create a new organizational unit in AWS Control Tower and do not enable any additional guardrails until you enroll the accounts. You can then enable guardrails one by one to check the impact of the guardrails in your environment.
  6. As an additional option, you can apply AWS Control Tower’s detective guardrails to an existing AWS account before moving them under Control Tower governance. Instructions to apply the guardrails are discussed in detail in AWS Control Tower Detective Guardrails as an AWS Config Conformance Pack blog.

Prerequisites

Before you enroll your existing AWS account in to AWS Control Tower, check the prerequisites from AWS Control Tower documentation.

The Python script provided as part of this blog supports enrolling all accounts within an unregistered OU into AWS Control Tower. The script also supports enrolling a single account by using either the email address or the account ID of an unenrolled account. Following are a few additional points to be aware of about this solution.

  • Enable trusted access with AWS Organizations for AWS CloudFormation StackSets.
  • The email address associated with the AWS account is used as the AWS SSO user name, with the default first name Admin and last name User.
  • Accounts that are in the root of the AWS organization can be enrolled only one at a time.
  • While enrolling an entire OU using this script, the AWSControlTowerExecution role is automatically created in all the accounts in this OU.
  • You can enroll a single account within an unregistered OU using the script. It checks for the AWSControlTowerExecution role on the account. If the role doesn't exist, the role is created on all accounts within the OU.
  • By default, you are not allowed to enroll an account that is in the root of the organization. You must pass an additional flag to launch a role creation stack set across the organization.
  • While enrolling a single account that is in the root of the organization, the script prompts for an additional flag to launch the role creation stack set across the organization.
  • The script uses CloudFormation StackSets service-managed permissions to create the AWSControlTowerExecution role in the unenrolled accounts.

How it works

The following diagram shows the overview of the solution.

Account enrollment

  1. In your AWS Control Tower environment, access an Amazon EC2 instance running in the master account of the AWS Control Tower home Region.
  2. Get temporary credentials for AWSAdministratorAccess from the AWS SSO login screen.
  3. Download and execute the enroll_account.py script.
  4. The script creates the AWSControlTowerExecution role on the target account by using the automatic deployment feature for stack sets.
  5. On successful validation of the role and the organizational units that are given as input, the script launches a new product in Account Factory.
  6. The enrollment process creates an AWS SSO user using the same email address as the AWS account.

Setting up the environment

It takes up to 30 minutes to enroll each AWS account into AWS Control Tower. The accounts can be enrolled only one at a time. Depending on the number of accounts that you are migrating, you must keep the session open long enough. In this section, you see one way of keeping these long-running jobs uninterrupted by using the screen tool on an Amazon EC2 instance.

Optionally, you may use your own compute environment where the session timeouts can be handled. If you go with your own environment, make sure you have python3, screen, and the latest version of boto3 installed.

1. Prepare your compute environment:

  • Log in to your AWS Control Tower with AWSAdministratorAccess role.
  • Switch to the Region where you deployed your AWS Control Tower if needed.
  • If necessary, launch a VPC using the stack here and wait for the stack to COMPLETE.
  • If necessary, launch an Amazon EC2 instance using the stack here. Wait for the stack to COMPLETE.
  • While you are on the master account, increase the session duration for AWS SSO as needed. The default is 1 hour and the maximum is 12 hours.

2. Connect to the compute environment (one-way):

  • Go to the EC2 Dashboard, and choose Running Instances.
  • Select the EC2 instance that you just created and choose Connect.
  • In the Connect to your instance screen, under Connection method, choose EC2 Instance Connect (browser-based SSH connection), and then choose Connect to open a session.
  • Go to the AWS Single Sign-On page in your browser. Click on your master account.
  • Choose command line or programmatic access next to AWSAdministratorAccess.
  • From Option 1, copy the environment variables and paste them into your EC2 terminal screen in step 5 below.

3. Install the required packages and set up variables. You may skip this step if you used the stack provided in step 1 to launch a new EC2 instance:

  • Install python3 and boto3 on your EC2 instance. You may have to update boto3, if you use your own environment.
$ sudo yum install python3 -y 
$ sudo pip3 install boto3
$ pip3 show boto3
Name: boto3
Version: 1.12.39
  • Change to home directory and download the enroll_account.py script.
$ cd ~
$ wget https://raw.githubusercontent.com/aws-samples/aws-control-tower-reference-architectures/master/customizations/AccountFactory/EnrollAccount/enroll_account.py
  • Set up your home Region on your EC2 terminal.
export AWS_DEFAULT_REGION=<AWSControlTower-Home-Region>

4. Start a screen session in daemon mode. If your session times out, you can open a new session and reattach to the screen.

$ screen -dmS SAM
$ screen -ls
There is a screen on:
        585.SAM (Detached)
1 Socket in /var/run/screen/S-ssm-user.
$ screen -dr 585.SAM 

5. On the screen terminal, paste the environment variables that you noted down in step 2.

6. Identify the accounts or the unregistered OUs to migrate, and run the Python script provided, with the options described below.

  • Python script usage:
usage: enroll_account.py -o -u|-e|-i -c 
Enroll existing accounts to AWS Control Tower.

optional arguments:
  -h, --help            show this help message and exit
  -o OU, --ou OU        Target Registered OU
  -u UNOU, --unou UNOU  Origin UnRegistered OU
  -e EMAIL, --email EMAIL
                        AWS account email address to enroll in to AWS Control Tower
  -i AID, --aid AID     AWS account ID to enroll in to AWS Control Tower
  -c, --create_role     Create Roles on Root Level
  • Enroll all the accounts from an unregistered OU to a registered OU
$ python3 enroll_account.py -o MigrateToRegisteredOU -u FromUnregisteredOU 
Creating cross-account role on 222233334444, wait 30 sec: RUNNING 
Executing on AWS Account: 570395911111, [email protected] 
Launching Enroll-Account-vinjak-unmgd3 
Status: UNDER_CHANGE. Waiting for 6.0 min to check back the Status 
Status: UNDER_CHANGE. Waiting for 5.0 min to check back the Status 
. . 
Status: UNDER_CHANGE. Waiting for 1.0 min to check back the Status 
SUCCESS: 111122223333 updated Launching Enroll-Account-vinjakSCchild 
Status: UNDER_CHANGE. Waiting for 6.0 min to check back the Status 
ERROR: 444455556666 
Launching Enroll-Account-Vinjak-Unmgd2 
Status: UNDER_CHANGE. Waiting for 6.0 min to check back the Status 
. . 
Status: UNDER_CHANGE. Waiting for 1.0 min to check back the Status 
SUCCESS: 777788889999 updated
  • Use AWS account ID to enroll a single account that is part of an unregistered OU.
$ python3 enroll_account.py -o MigrateToRegisteredOU -i 111122223333
  • Use AWS account email address to enroll a single account from an unregistered OU.
$ python3 enroll_account.py -o MigrateToRegisteredOU -e [email protected]

You are not allowed by default to enroll an AWS account that is in the root of the organization. The script checks for the AWSControlTowerExecution role in the account. If the role doesn't exist, you are prompted to use -c | --create_role. Using the -c flag adds the stack instance to the parent organization root, which means an AWSControlTowerExecution role is created in all the accounts within the organization.

Note: Ensure that installing the AWSControlTowerExecution role in all the accounts in your organization is acceptable before using the -c flag.

If you are unsure about this, follow the instructions in the documentation and create the AWSControlTowerExecution role manually in each account you want to migrate. Rerun the script.

  • Use AWS account ID to enroll a single account that is in root OU (need -c flag).
$ python3 enroll_account.py -o MigrateToRegisteredOU -i 111122223333 -c
  • Use AWS account email address to enroll a single account that is in root OU (need -c flag).
$ python3 enroll_account.py -o MigrateToRegisteredOU -e [email protected] -c

Cleanup steps

On successful completion of enrolling all the accounts into your AWS Control Tower environment, you can clean up the resources below that were used for this solution.

If you used the templates provided in this blog to launch the VPC and EC2 instance, delete the EC2 CloudFormation stack first, and then delete the VPC stack.

Conclusion

Now you can deploy AWS Control Tower in an existing AWS Organization. In this post, I showed you how to enroll your existing AWS accounts in your AWS Organization into an AWS Control Tower environment. By using the procedure in this post, you can programmatically enroll a single account or all the accounts within an organizational unit into an AWS Control Tower environment.

Now that governance has been extended to these accounts, you can also provision new AWS accounts in just a few clicks and have your accounts conform to your company-wide policies.

Additionally, you can use Customizations for AWS Control Tower to apply custom templates and policies to your accounts. With custom templates, you can deploy new resources or apply additional custom policies to the existing and new accounts. This solution integrates with AWS Control Tower lifecycle events to ensure that resource deployments stay in sync with your landing zone. For example, when a new account is created using the AWS Control Tower account factory, the solution ensures that all resources attached to the account’s OUs are automatically deployed.

Mergers and Acquisitions readiness with the Well-Architected Framework

Post Syndicated from Sushanth Mangalore original https://aws.amazon.com/blogs/architecture/mergers-and-acquisitions-readiness-with-the-well-architected-framework/

Introduction

Companies looking for an acquisition or a successful exit through a merger undergo a technical assessment as part of the due diligence process. While being a profitable business by itself can attract interest, running a disciplined IT department within your organization can make the acquisition more valuable. As an entity operating cloud workloads on AWS, you can use the AWS Well-Architected Framework. This will demonstrate that your workloads are architected with industry best practices in mind. The Well-Architected Framework explains the pros and cons of decisions you make while building systems on AWS. It consistently measures architectures against best practices observed in customer workloads across several industries. These workloads have achieved continued success on AWS through architectures that are secure, high-performing, resilient, scalable, and efficient. The Well-Architected Framework evaluates your cloud workloads based on five pillars:

  • Operational Excellence: The ability to support development and run workloads effectively, gain insights into your operations, and continuously improve supporting processes and procedures to deliver business value.
  • Security: The ability to protect data, systems, and assets to take advantage of cloud technologies to improve your security.
  • Reliability: The ability of the workload to perform its intended function correctly and consistently. This includes the ability to operate and test the workload through its complete lifecycle.
  • Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization: The ability to run cost-aware workloads that achieve business outcomes while minimizing costs.

The Well-Architected Framework Value Proposition in Mergers & Acquisitions (M&A)

Continuously assess the pre-M&A state of cloud workloads – The Well-Architected state for a workload must be treated as a moving target. New cloud patterns and best practices emerge every day. The Well-Architected Framework constantly evolves to incorporate them. Your workloads can continuously grow, shrink, become more complex, or simpler. Mergers and acquisitions can be a long, drawn-out process, which can take months to complete. Well-Architected reviews are recommended for workloads every 6 months to 1 year, or with every major development milestone. This helps guard against IT inertia and allows emerging best practices to be accounted for in your continuously evolving workload architecture. The technical currency can be maintained throughout the M&A process by continuously assessing your workloads against the Well-Architected Framework. The accompanying AWS Well-Architected Tool (AWS WA Tool) helps you track milestones as you make improvements and measure your progress.

Well-Architected Continuous Improvement Cycle

Standardize through a common framework – One of the biggest challenges in M&As is the standardization of the Enterprise IT post-merger. The IT departments of organizations can operate differently, and have vastly different IT assets, skill sets, and processes. According to a McKinsey article on the Strategic Value of IT in M&A, more than half the synergies available in a merger are related to IT. If the acquirer is also an AWS customer, this can enable the significant synergies in M&As. The Well-Architected Framework can be a foundation on which the two IT departments can find common ground. Even if the acquirer does not have a cloud-based environment like AWS, inheriting a Well-Architected AWS setup can help the post-merger IT landscape evolve.

Integrate seamlessly through Well-Architected landing zones – The AWS Control Tower service or the AWS Landing Zone solution are options that can provision Well-Architected multi-account AWS environments. Together with AWS Organizations, this makes the IT integration a lot smoother for the AWS environments across enterprises. AWS accounts can detach from one AWS Organization and attach to another seamlessly. The moved accounts can then enroll with an existing Control Tower setup to benefit from its security and governance guardrails. In a Well-Architected landing zone, your management account will not have any workloads. As shown in the following diagram, you may move member accounts from your AWS Organization to your acquirer’s AWS Organization under the right Organizational Unit (OU). You can later decommission your AWS Organization and close your management AWS account.

Sample pre-merger AWS environments

Sample post-merger AWS environment

Benefit from faster migration to AWS – Using the Well-Architected Framework, you can achieve faster migration to AWS. Workload risks can be mitigated beforehand by using best practices from AWS before the migration. Post migration, the workloads benefit from AWS offerings that already have many of the Well-Architected best practices built into them. The improvement plans from the Well-Architected tool include the recommended AWS services that can address identified risks. Physical IT assets are heavily depreciated during an acquisition and do not fetch valuations close to their original purchase price. AWS workloads that are Well-Architected should be evaluated by the actual business value they provide. By consolidating your IT needs on AWS, you are also decreasing the overhead of vendor consolidation for the acquirer. This can be challenging when multiple active contracts must change hands.

Overcome the innovation barrier – At the onset of an M&A, companies may be focusing too much on keeping the lights on through the process. Businesses that do not move forward may fall behind on continuous innovation. Not only can innovation open more business opportunities, but it can also influence the acquisition valuation. Well-Architected reviews can optimize costs. This can result in diverse benefits such as better agility and an increased use of advanced technologies. This can facilitate rapid innovation. Improvements gained in the security posture, reliability, and performance of the workload make it more valuable to the acquirer.

Demonstrate depth in your area of expertise – Well-Architected lenses help evaluate workloads for specific technology or business domains. Lenses dive deeper into the domain-specific best practices for the workload. If your business specializes in a domain for which a Well-Architected lens is available, doing a review with the specific lens will provide more value for your workload. Today, AWS has lenses for serverless, SaaS, High Performance Computing (HPC), the financial services industry, machine learning, IoT workloads and more. We recently announced a new Management and Governance Lens.

Build workloads using AWS vetted constructs – AWS Solutions Library provides you a repository of Well-Architected solutions across a range of technologies and industry verticals. The library includes reference architectures, implementations of reusable patterns, and fully baked end-to-end solution implementations. Use these building blocks to assemble your workloads. Include the AWS recommended best practices into them, and create an attractive proposition to an acquirer.

Conclusion

You can start taking advantage of the Well-Architected Framework today to improve your technical readiness for an acquisition. The Well-Architected Tool in the AWS Management Console allows you to review your workload at no cost. Engage with your AWS account team early, and we can provide the right guidance for your specific M&A, and plan your Well-Architected technical readiness. Using the Well-Architected Framework as the cornerstone, the AWS Solutions Architects and APN partners have guided thousands of customers through this journey. We are looking forward to helping you succeed.

Use new account assignment APIs for AWS SSO to automate multi-account access

Post Syndicated from Akhil Aendapally original https://aws.amazon.com/blogs/security/use-new-account-assignment-apis-for-aws-sso-to-automate-multi-account-access/

In this blog post, we’ll show how you can programmatically assign and audit access to multiple AWS accounts for your AWS Single Sign-On (SSO) users and groups, using the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

With AWS SSO, you can centrally manage access and user permissions to all of your accounts in AWS Organizations. You can assign user permissions based on common job functions, customize them to meet your specific security requirements, and assign the permissions to users or groups in the specific accounts where they need access. You can create, read, update, and delete permission sets in one place to have consistent role policies across your entire organization. You can then provide access by assigning permission sets to multiple users and groups in multiple accounts all in a single operation.

AWS SSO recently added new account assignment APIs and AWS CloudFormation support to automate access assignment across AWS Organizations accounts. This release addressed feedback from our customers with multi-account environments who wanted to adopt AWS SSO, but faced challenges related to managing AWS account permissions. To automate the previously manual process and save your administration time, you can now use the new AWS SSO account assignment APIs, or AWS CloudFormation templates, to programmatically manage AWS account permission sets in multi-account environments.

With the AWS SSO account assignment APIs, you can build automation that assigns access for your users and groups to AWS accounts. You can also gain insights into who has access to which permission sets in which accounts across your entire AWS Organizations structure. With the account assignment APIs, your automation can programmatically retrieve permission sets for audit and governance purposes, as shown in Figure 1.

Figure 1: Automating multi-account access with the AWS SSO API and AWS CloudFormation

Overview

In this walkthrough, we’ll illustrate how to create permission sets, assign permission sets to users and groups in AWS SSO, and grant access for users and groups to multiple AWS accounts by using the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

To grant user permissions to AWS resources with AWS SSO, you use permission sets. A permission set is a collection of AWS Identity and Access Management (IAM) policies. Permission sets can contain up to 10 AWS managed policies and a single custom policy stored in AWS SSO.

A policy is an object that defines a user’s permissions. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. AWS evaluates these policies when an IAM principal (a user or role) makes a request.

When you provision a permission set in the AWS account, AWS SSO creates a corresponding IAM role on that account, with a trust policy that allows users to assume the role through AWS SSO. With AWS SSO, you can assign more than one permission set to a user in the specific AWS account. Users who have multiple permission sets must choose one when they sign in through the user portal or the AWS CLI. Users will see these as IAM roles.

To learn more about IAM policies, see Policies and permissions in IAM. To learn more about permission sets, see Permission Sets.

Assume you have a company, Example.com, which has three AWS accounts: an organization management account (ExampleOrgMaster), a development account (ExampleOrgDev), and a test account (ExampleOrgTest). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has the IT security lead, Frank Infosec, who needs PowerUserAccess to the test account (ExampleOrgTest) and SecurityAudit access to the development account (ExampleOrgDev). Alice Developer, the developer, needs full access to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) through the development account (ExampleOrgDev). We’ll show you how to assign and audit the access for Alice and Frank centrally with AWS SSO, using the AWS CLI.

The flow includes the following steps:

  1. Create three permission sets:
    • PowerUserAccess, with the PowerUserAccess policy attached.
    • AuditAccess, with the SecurityAudit policy attached.
    • EC2-S3-FullAccess, with the AmazonEC2FullAccess and AmazonS3FullAccess policies attached.
  2. Assign permission sets to the AWS account and AWS SSO users:
    • Assign the PowerUserAccess and AuditAccess permission sets to Frank Infosec, to provide the required access to the ExampleOrgDev and ExampleOrgTest accounts.
    • Assign the EC2-S3-FullAccess permission set to Alice Developer, to provide the required permissions to the ExampleOrgDev account.
  3. Retrieve the assigned permissions by using Account Entitlement APIs for audit and governance purposes.

    Note: AWS SSO permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. In this blog we attach AWS managed policies to the permission sets for simplicity. To help secure your AWS resources, follow the standard security advice of granting least privilege: use a custom policy in the AWS SSO permission set that grants only the access that is required.

Figure 2: AWS Organizations accounts access for Alice and Frank

To help simplify administration of access permissions, we recommend that you assign access directly to groups rather than to individual users. With groups, you can grant or deny permissions to groups of users, rather than having to apply those permissions to each individual. For simplicity, in this blog you’ll assign permissions directly to the users.

Prerequisites

Before you start this walkthrough, complete these steps:

Use the AWS SSO API from the AWS CLI

In order to call the AWS SSO account assignment API by using the AWS CLI, you need to install and configure AWS CLI v2. For more information about AWS CLI installation and configuration, see Installing the AWS CLI and Configuring the AWS CLI.
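
Before proceeding, you can confirm that the right CLI version and permissions are in place. A minimal sketch of that sanity check follows; it assumes your default profile has AWS SSO administrative permissions in the management account.

# Confirm the CLI is v2 and a profile is configured
aws --version
aws configure list
# Confirm the sso-admin commands are available in this CLI version
aws sso-admin help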

Step 1: Create permission sets

In this step, you learn how to create the EC2-S3-FullAccess, AuditAccess, and PowerUserAccess permission sets in AWS SSO from the AWS CLI.

Before you create the permission sets, run the following command to get the Amazon Resource Name (ARN) of the AWS SSO instance and the Identity Store ID, which you will need later in the process when you create and assign permission sets to AWS accounts and users or groups.

aws sso-admin list-instances

Figure 3 shows the results of running the command.

Figure 3: AWS SSO list instances
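
If you plan to run the remaining commands from a shell script, you can capture both values into variables instead of copying them from the output. This is a minimal sketch that assumes a single AWS SSO instance; the --query paths follow the ListInstances response shown in Figure 3.

# Capture the instance ARN and Identity Store ID for use in later commands
INSTANCE_ARN=$(aws sso-admin list-instances --query 'Instances[0].InstanceArn' --output text)
IDENTITY_STORE_ID=$(aws sso-admin list-instances --query 'Instances[0].IdentityStoreId' --output text)
echo "$INSTANCE_ARN" "$IDENTITY_STORE_ID"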

Next, create the permission set for the security team (Frank) and dev team (Alice), as follows.

Permission set for Alice Developer (EC2-S3-FullAccess)

Run the following command to create the EC2-S3-FullAccess permission set for Alice, as shown in Figure 4.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'EC2-S3-FullAccess' --description 'EC2 and S3 access for developers'
Figure 4: Creating the permission set EC2-S3-FullAccess

Permission set for Frank Infosec (AuditAccess)

Run the following command to create the AuditAccess permission set for Frank, as shown in Figure 5.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'AuditAccess' --description 'Audit Access for security team on ExampleOrgDev account'
Figure 5: Creating the permission set AuditAccess

Permission set for Frank Infosec (PowerUserAccess)

Run the following command to create the PowerUserAccess permission set for Frank, as shown in Figure 6.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'PowerUserAccess' --description 'Power User Access for security team on ExampleOrgDev account'
Figure 6: Creating the permission set PowerUserAccess

Copy the permission set ARN from these responses, which you will need when you attach the managed policies.
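
As an alternative to copying the ARN from the response, a scripted run could capture it when the permission set is created. This is a minimal sketch using the EC2-S3-FullAccess example and the INSTANCE_ARN variable from earlier; the --query path follows the CreatePermissionSet response shape.

PS_EC2_S3_ARN=$(aws sso-admin create-permission-set --instance-arn "$INSTANCE_ARN" --name 'EC2-S3-FullAccess' --description 'EC2 and S3 access for developers' --query 'PermissionSet.PermissionSetArn' --output text)
echo "$PS_EC2_S3_ARN"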

Step 2: Assign policies to permission sets

In this step, you learn how to assign managed policies to the permission sets that you created in step 1.

Attach policies to the EC2-S3-FullAccess permission set

Run the following command to attach the AmazonEC2FullAccess AWS managed policy to the EC2-S3-FullAccess permission set, as shown in Figure 7.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
Figure 7: Attaching the AWS managed policy AmazonEC2FullAccess to the EC2-S3-FullAccess permission set

Run the following command to attach the AmazonS3FullAccess AWS managed policy to the EC2-S3-FullAccess permission set, as shown in Figure 8.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
Figure 8: Attaching the AWS managed policy AmazonS3FullAccess to the EC2-S3-FullAccess permission set

Attach a policy to the AuditAccess permission set

Run the following command to attach the SecurityAudit managed policy to the AuditAccess permission set that you created earlier, as shown in Figure 9.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/SecurityAudit'
Figure 9: Attaching the AWS managed policy SecurityAudit to the AuditAccess permission set

Attach a policy to the PowerUserAccess permission set

The following command is similar to the previous command; it attaches the PowerUserAccess managed policy to the PowerUserAccess permission set, as shown in Figure 10.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/PowerUserAccess'
Figure 10: Attaching AWS managed policy PowerUserAccess to the PowerUserAccess permission set

In the next step, you assign users (Frank Infosec and Alice Developer) to their respective permission sets and assign permission sets to accounts.

Step 3: Assign permission sets to users and groups and grant access to AWS accounts

In this step, you assign the AWS SSO permission sets you created to users and groups and AWS accounts, to grant the required access for these users and groups on respective AWS accounts.

To assign access to an AWS account for a user or group, using a permission set you already created, you need the following:

  • The principal ID (the ID for the user or group)
  • The AWS account ID to which you need to assign this permission set

To obtain a user’s or group’s principal ID (UserID or GroupID), you need to use the AWS SSO Identity Store API. The AWS SSO Identity Store service enables you to retrieve all of your identities (users and groups) from AWS SSO. See AWS SSO Identity Store API for more details.

Use the first two commands shown here to get the principal ID for the two users, Alice (Alice’s user name is [email protected]) and Frank (Frank’s user name is [email protected]).

Alice’s user ID

Run the following command to get Alice’s user ID, as shown in Figure 11.

aws identitystore list-users --identity-store-id '<Identity Store ID>' --filter AttributePath='UserName',AttributeValue='[email protected]'
Figure 11: Retrieving Alice’s user ID

Frank’s user ID

Run the following command to get Frank’s user ID, as shown in Figure 12.

aws identitystore list-users --identity-store-id '<Identity Store ID>' --filter AttributePath='UserName',AttributeValue='[email protected]'
Figure 12: Retrieving Frank’s user ID

Note: To get the principal ID for a group, use the following command.

aws identitystore list-groups --identity-store-id '<Identity Store ID>' --filter AttributePath='DisplayName',AttributeValue='<Group Name>'
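
If you are scripting the assignment steps, the same calls can return just the principal ID. This is a minimal sketch that assumes the IDENTITY_STORE_ID variable from earlier; replace <UserName> with the user's actual sign-in name, and note that the --query path returns the first match only.

ALICE_USER_ID=$(aws identitystore list-users --identity-store-id "$IDENTITY_STORE_ID" --filter AttributePath='UserName',AttributeValue='<UserName>' --query 'Users[0].UserId' --output text)
echo "$ALICE_USER_ID"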

Assign the EC2-S3-FullAccess permission set to Alice in the ExampleOrgDev account

Run the following command to assign Alice access to the ExampleOrgDev account using the EC2-S3-FullAccess permission set. This will give Alice full access to Amazon EC2 and S3 services in the ExampleOrgDev account.

Note: When you call the CreateAccountAssignment API, AWS SSO automatically provisions the specified permission set on the account in the form of an IAM policy attached to an AWS SSO–created IAM role. This role is immutable: it’s fully managed by AWS SSO and cannot be deleted or changed by the user, even if the user has full administrative rights on the account. If the permission set is subsequently updated, the corresponding IAM policies attached to roles in your accounts won’t be updated automatically. In this case, you will need to call ProvisionPermissionSet to propagate these updates (a CLI sketch of this call follows Figure 13).

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 13: Assigning the EC2-S3-FullAccess permission set to Alice on the ExampleOrgDev account
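
The note above mentions ProvisionPermissionSet. A minimal sketch of that call from the CLI follows; the placeholder values are the same ones used throughout this post, and you can pass --target-type ALL_PROVISIONED_ACCOUNTS (omitting --target-id) to push an updated permission set everywhere it is assigned.

aws sso-admin provision-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --target-type AWS_ACCOUNT --target-id '<AWS Account ID>'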

Assign the AuditAccess permission set to Frank Infosec in the ExampleOrgDev account

Run the following command to assign Frank access to the ExampleOrgDev account using the AuditAccess permission set.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 14: Assigning the AuditAccess permission set to Frank on the ExampleOrgDev account

Assign the PowerUserAccess permission set to Frank Infosec in the ExampleOrgTest account

Run the following command to assign Frank access to the ExampleOrgTest account using the PowerUserAccess permission set.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 15: Assigning the PowerUserAccess permission set to Frank on the ExampleOrgTest account

To view the permission sets provisioned on the AWS account, run the following command, as shown in Figure 16.

aws sso-admin list-permission-sets-provisioned-to-account --instance-arn '<Instance ARN>' --account-id '<AWS Account ID>'
Figure 16: View the permission sets (AuditAccess and EC2-S3-FullAccess) assigned to the ExampleOrgDev account

To review the created resources in the AWS Management Console, navigate to the AWS SSO console. In the list of permission sets on the AWS accounts tab, choose the EC2-S3-FullAccess permission set. Under AWS managed policies, the policies attached to the permission set are listed, as shown in Figure 17.

Figure 17: Review the permission set in the AWS SSO console

To see the AWS accounts where the EC2-S3-FullAccess permission set is currently provisioned, navigate to the AWS accounts tab, as shown in Figure 18.

Figure 18: Review permission set account assignment in the AWS SSO console

Step 4: Audit access

In this step, you learn how to audit the access assigned to your users and groups by using the AWS SSO account assignment APIs. In this example, you’ll start from a permission set, review the permissions (AWS managed policies or a custom policy) attached to it, get the users and groups associated with the permission set, and see which AWS accounts the permission set is provisioned to.

List the IAM managed policies for the permission set

Run the following command to list the IAM managed policies that are attached to a specified permission set, as shown in Figure 19.

aws sso-admin list-managed-policies-in-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>'
Figure 19: View the managed policies attached to the permission set

List the assignee of the AWS account with the permission set

Run the following command to list the assignee (the user or group with the respective principal ID) of the specified AWS account with the specified permission set, as shown in Figure 20.

aws sso-admin list-account-assignments --instance-arn '<Instance ARN>' --account-id '<Account ID>' --permission-set-arn '<Permission Set ARN>'
Figure 20: View the permission set and the user or group attached to the AWS account

List the accounts to which the permission set is provisioned

Run the following command to list the accounts that are associated with a specific permission set, as shown in Figure 21.

aws sso-admin list-accounts-for-provisioned-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>'
Figure 21: View AWS accounts to which the permission set is provisioned
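
You can combine these calls into a small audit script that walks every permission set in the instance and prints where it is provisioned. This is a minimal sketch, assuming the INSTANCE_ARN variable from earlier and a modest number of permission sets (no pagination handling).

for ps_arn in $(aws sso-admin list-permission-sets --instance-arn "$INSTANCE_ARN" --query 'PermissionSets[]' --output text); do
  name=$(aws sso-admin describe-permission-set --instance-arn "$INSTANCE_ARN" --permission-set-arn "$ps_arn" --query 'PermissionSet.Name' --output text)
  echo "Permission set: $name ($ps_arn)"
  echo "  Provisioned to accounts:"
  aws sso-admin list-accounts-for-provisioned-permission-set --instance-arn "$INSTANCE_ARN" --permission-set-arn "$ps_arn" --query 'AccountIds' --output text
done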

In this section of the post, we’ve illustrated how to create a permission set, assign a managed policy to the permission set, and grant access for AWS SSO users or groups to AWS accounts by using this permission set. In the next section, we’ll show you how to do the same using AWS CloudFormation.

Use the AWS SSO API through AWS CloudFormation

In this section, you learn how to use CloudFormation templates to automate the creation of permission sets, attach managed policies, and use permission sets to assign access for a particular user or group to AWS accounts.

Sign in to your AWS Management Console and create a CloudFormation stack by using the following CloudFormation template. For more information on how to create a CloudFormation stack, see Creating a stack on the AWS CloudFormation console.

//start of Template//
{
    "AWSTemplateFormatVersion": "2010-09-09",
  
    "Description": "AWS CloudFormation template to automate multi-account access with AWS Single Sign-On (Entitlement APIs): Create permission sets, assign access for AWS SSO users and groups to AWS accounts using permission sets. Before you use this template, we assume you have enabled AWS SSO for your AWS Organization, added the AWS accounts to which you want to grant AWS SSO access to your organization, signed in to the AWS Management Console with your AWS Organizations management account credentials, and have the required permissions to use the AWS SSO console.",
  
    "Parameters": {
      "InstanceARN" : {
        "Type" : "String",
        "AllowedPattern": "arn:aws:sso:::instance/(sso)?ins-[a-zA-Z0-9-.]{16}",
        "Description" : "Enter AWS SSO InstanceARN. Ex: arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx",
        "ConstraintDescription": "must be the name of an existing AWS SSO InstanceARN associated with the management account."
      },
      "ExampleOrgDevAccountId" : {
        "Type" : "String",
        "AllowedPattern": "\\d{12}",
        "Description" : "Enter 12-digit Developer AWS Account ID. Ex: 123456789012"
        },
      "ExampleOrgTestAccountId" : {
        "Type" : "String",
        "AllowedPattern": "\\d{12}",
        "Description" : "Enter 12-digit AWS Account ID. Ex: 123456789012"
        },
      "AliceDeveloperUserId" : {
        "Type" : "String",
        "AllowedPattern": "^([0-9a-f]{10}-|)[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$",
        "Description" : "Enter Developer UserId. Ex: 926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
        },
        "FrankInfosecUserId" : {
            "Type" : "String",
            "AllowedPattern": "^([0-9a-f]{10}-|)[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$",
            "Description" : "Enter Test UserId. Ex: 926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
            }
    },
    "Resources": {
        "EC2S3Access": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "EC2 and S3 access for developers",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : ["arn:aws:iam::aws:policy/amazonec2fullaccess","arn:aws:iam::aws:policy/amazons3fullaccess"],
                "Name" : "EC2-S3-FullAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "EC2S3Access"
                 } ]
              }
        },  
        "SecurityAuditAccess": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "Audit Access for Infosec team",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : [ "arn:aws:iam::aws:policy/SecurityAudit" ],
                "Name" : "AuditAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "SecurityAuditAccess"
                 } ]
              }
        },    
        "PowerUserAccess": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "Power User Access for Infosec team",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : [ "arn:aws:iam::aws:policy/PowerUserAccess"],
                "Name" : "PowerUserAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "PowerUserAccess"
                 } ]
              }      
        },
        "EC2S3userAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "EC2S3Access",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "AliceDeveloperUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgDevAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          },
          "SecurityAudituserAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "SecurityAuditAccess",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "FrankInfosecUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgDevAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          },
          "PowerUserAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "PowerUserAccess",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "FrankInfosecUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgTestAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          }
    }
}
//End of Template//

When you create the stack, provide the following information for setting the example permission sets for Frank Infosec and Alice Developer, as shown in Figure 22:

  • The Alice Developer and Frank Infosec user IDs
  • The ExampleOrgDev and ExampleOrgTest account IDs
  • The AWS SSO instance ARN

Then launch the CloudFormation stack.

Figure 22: User inputs to launch the CloudFormation template

AWS CloudFormation creates the resources that are shown in Figure 23.

Figure 23: Resources created from the CloudFormation stack
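
If you prefer to launch the same template from the AWS CLI rather than the console, a minimal sketch follows. The stack name, file name, and parameter values are placeholders; the parameter keys match the template above, and no IAM capabilities are required because the template only creates AWS SSO resources.

aws cloudformation create-stack --stack-name sso-account-assignments --template-body file://sso-assignments.json --parameters ParameterKey=InstanceARN,ParameterValue='arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx' ParameterKey=ExampleOrgDevAccountId,ParameterValue=111122223333 ParameterKey=ExampleOrgTestAccountId,ParameterValue=444455556666 ParameterKey=AliceDeveloperUserId,ParameterValue='<Alice user ID>' ParameterKey=FrankInfosecUserId,ParameterValue='<Frank user ID>'
aws cloudformation wait stack-create-complete --stack-name sso-account-assignments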

Cleanup

To delete the resources you created by using the AWS CLI, use these commands.

Run the following command to delete the account assignment.

aws sso-admin delete-account-assignment --instance-arn '<Instance ARN>' --target-id '<AWS Account ID>' --target-type 'AWS_ACCOUNT' --permission-set-arn '<PermissionSet ARN>' --principal-type '<USER/GROUP>' --principal-id '<user/group ID>'

After the account assignment is deleted, run the following command to delete the permission set.

aws sso-admin delete-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<PermissionSet ARN>'

To delete the resources that you created by using the CloudFormation template, go to the AWS CloudFormation console, select the stack you created, and then choose Delete. Deleting the CloudFormation stack cleans up the resources that were created.

Summary

In this blog post, we showed how to use the AWS SSO account assignment API to automate the deployment of permission sets, how to add managed policies to permission sets, and how to assign access for AWS users and groups to AWS accounts by using specified permission sets.

To learn more about the AWS SSO APIs available for you, see the AWS Single Sign-On API Reference Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Akhil Aendapally

Akhil is a Solutions Architect at AWS focused on helping customers with their AWS adoption. He holds a master’s degree in Network and Computer Security. Akhil has 8+ years of experience working with different cloud platforms, infrastructure automation, and security.

Author

Yuri Duchovny

Yuri is a New York-based Solutions Architect specializing in cloud security, identity, and compliance. He supports cloud transformations at large enterprises, helping them make optimal technology and organizational decisions. Prior to his AWS role, Yuri’s areas of focus included application and networking security, DoS, and fraud protection. Outside of work, he enjoys skiing, sailing, and traveling the world.

Author

Ballu Singh

Ballu is a principal solutions architect at AWS. He lives in the San Francisco Bay area and helps customers architect and optimize applications on AWS. In his spare time, he enjoys reading and spending time with his family.

Author

Nir Ozeri

Nir is a Solutions Architect Manager with Amazon Web Services, based out of New York City. Nir specializes in application modernization, application delivery, and mobile architecture.

Architecture Patterns for Red Hat OpenShift on AWS

Post Syndicated from Ryan Niksch original https://aws.amazon.com/blogs/architecture/architecture-patterns-for-red-hat-openshift-on-aws/

Editor’s note: Although this blog post and its accompanying code make use of the word “Master,” Red Hat is making open source code more inclusive by eradicating “problematic language.” Read more about this.

Introduction

Red Hat OpenShift is an application platform that provides customers with a turnkey experience that is much more than simple Kubernetes orchestration.

OpenShift customers choose AWS as their cloud of choice because of the efficiency, security, reliability, scalability, and elasticity it provides. Customers seeking to modernize their business, processes, and application stacks are drawn to the rich AWS service and feature sets.

As such, we see some customers migrate from on-premises to AWS, or operate in a hybrid context with application workloads running in various locations. For OpenShift customers, this poses a few questions and considerations:

  • What are the recommendations for the best way to deploy OpenShift on AWS?
  • How is this different from what customers were used to on-premises?
  • How does this ensure resilience and availability?
  • Do customers need a multi-region, multi-account approach?

For hybrid customers, there are assumptions and misconceptions:

  • Where does the control plane exist?
  • Is there replication, and if so, what are the considerations and ramifications?

In this post I will run through some of the more common questions and patterns for OpenShift on AWS, while looking at some of the terminology and conceptual differences of AWS. I’ll explore migration and hybrid use cases and address some misconceptions.

OpenShift building blocks

On AWS, OpenShift 4.x is the norm. To that effect, I will focus on OpenShift 4, but many of the considerations apply to both OpenShift 3 and OpenShift 4.

Let’s unpack some of the OpenShift building blocks. An OpenShift cluster consists of Master, infrastructure, and worker nodes. The Masters form the control plane, and infrastructure nodes cater to a routing layer and additional functions such as logging and monitoring. Worker nodes are the nodes that customer application container workloads run on.

When deployed on-premises, OpenShift nodes are placed in separate network subnets. Depending on distance, latency, and similar factors, a single OpenShift cluster may span two data centers, with some nodes in a subnet in one data center and other subnets in a different data center. This applies to customers with data centers within a few miles of each other with high-speed connectivity. An alternative would be an OpenShift cluster in each data center.

AWS concepts and terminology

At AWS, the concept of a “region” is a geographic location, such as EMEA (Europe, Middle East, and Africa) or APAC (Asia Pacific), rather than a data center or specific building. An Availability Zone (AZ) is the closest AWS construct that maps to a physical data center. Within each region you will find multiple (typically three or more) AZs. Note that a single AZ can contain multiple physical data centers, but we treat it as a single point of failure; for example, an event that impacts an AZ would be expected to impact all the data centers within that AZ. To this effect, customers should deploy workloads spanning multiple AZs to protect against any event that would impact a single AZ.

Read more about Regions, Availability Zones, and Edge Locations.

Deploying OpenShift

When deploying an OpenShift cluster on AWS, we recommend starting with three Master nodes spread across three AWS AZs and three worker nodes spread across three AZs. This combines the resilience and availability constructs provided by AWS with those of Red Hat OpenShift. The OpenShift installer provides two ways of deploying the underlying AWS infrastructure: installer-provisioned infrastructure (IPI) and user-provisioned infrastructure (UPI). Both Red Hat and AWS collect customer feedback and use it to drive recommended patterns that are then included in the OpenShift installer. As such, the OpenShift installer’s IPI mode becomes a living reference architecture for deploying OpenShift on AWS.


The installer requires inputs for the environment on which it’s being deployed. In this case, since I am deploying on AWS, I need to provide the AWS Region, the AZs or the subnets that relate to the AZs, and the EC2 instance type. The installer takes these inputs in an install-config.yaml file like the following, and then generates the ignition files used during the deployment of OpenShift:

apiVersion: v1
baseDomain: example.com 
controlPlane: 
  hyperthreading: Enabled   
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
      rootVolume:
        iops: 4000
        size: 500
        type: io1
      type: m5.xlarge 
  replicas: 3
compute: 
- hyperthreading: Enabled 
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 
      type: m5.xlarge
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 
    userTags:
      adminContact: jdoe
      costCenter: 7536
pullSecret: '{"auths": ...}' 
fips: false 
sshKey: ssh-ed25519 AAAA... 
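
Once an install-config.yaml like the one above is in place, the installer is typically run against the directory that holds it. A minimal sketch, assuming the configuration is saved in a directory named ocp-install (the directory name is illustrative) and that AWS credentials with the required permissions are available in the environment:

mkdir -p ocp-install
cp install-config.yaml ocp-install/
openshift-install create cluster --dir ocp-install --log-level=info

The installer consumes the install-config.yaml in that directory, generates the ignition files, and provisions the AWS infrastructure and the cluster.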

What does this look like at scale?

For larger implementations, we would see additional worker nodes spread across three or more AZs. As more worker nodes are added, use of the control plane increases. Initially, scaling the Amazon Elastic Compute Cloud (Amazon EC2) instance type of the Masters to a larger instance type is an effective way of addressing this. It’s possible to add more Master nodes, and we recommend maintaining an odd number of nodes. It is more common to see the infrastructure nodes scaled out before there is a need to scale Masters. For large-scale implementations, infrastructure functions such as the router, monitoring, and logging can be moved to EC2 instances separate from the Master nodes, as well as from each other. It is important to spread the routing layer across multiple AZs, which is critical to maintaining availability and resilience.

The process of resource separation is now controlled by infrastructure machine sets within OpenShift. An infrastructure machine set needs to be defined, and the infrastructure role then edited to move it from the default to this new infrastructure machine set. Read about this in greater detail.
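
As a rough illustration of working with machine sets, the following oc commands list the machine sets in a cluster and scale one of them. The machine set name is a placeholder, and creating the infrastructure machine set itself requires a full manifest that is out of scope here.

oc get machinesets -n openshift-machine-api
oc scale machineset <infra-machineset-name> --replicas=1 -n openshift-machine-api
oc get nodes -l node-role.kubernetes.io/infra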

OpenShift in a multi-account context

Using AWS accounts as a means of separation is a common well-architected pattern. AWS Organizations and AWS Control Tower are services that are commonly adopted as part of a multi-account strategy. This is very much the case when looking to enable teams to use their own accounts and when an account vending process is needed to cater for self-service account provisioning.


OpenShift clusters are deployed into multiple accounts. An OpenShift dev cluster is deployed into an AWS dev account, which would typically have AWS Developer Support associated with it. A separate production OpenShift cluster would be provisioned into an AWS production account with AWS Enterprise Support. Enterprise Support provides faster support case response times, and you get the benefit of dedicated resources such as a technical account manager and solutions architect.

CI/CD pipelines and processes are then used to control the application life cycle from code to dev to production. The pipelines push the code to different OpenShift cluster endpoints at different stages of the life cycle.

Hybrid use case implementation

A common misconception of hybrid implementations is that there is a single cluster or control plane that has worker nodes in various locations. For example, there could be a cluster where the Master and infrastructure nodes are deployed in one location, with worker nodes registered to this cluster that exist on-premises as well as in the cloud.

Having a single customer control plane for a hybrid implementation, even if technically possible, introduces undesired risks.

There is the potential to take multiple environments with very different resilience characteristics and make them interdependent. This can result in performance and reliability issues, and these may increase not only the possibility of the risk manifesting, but also the impact or blast radius.

Instead, hybrid implementations will see separate OpenShift clusters deployed into various locations. A customer may deploy clusters on-premises to cater for a workload that can’t be migrated to the cloud in the short term. Separate OpenShift clusters can then be deployed into accounts in AWS for workloads running in the cloud. Customers can also deploy separate OpenShift clusters in different AWS Regions to cater for proximity to the consuming customer.

Though adding multiple clusters doesn’t add significant administrative overhead, there is a desire to gain visibility and telemetry into all the deployed clusters from a central location. This may see the OpenShift clusters registered with Red Hat Advanced Cluster Management for Kubernetes.

Summary

Take advantage of the IPI model, not only as a guide but also to save time. Make AWS Organizations, AWS Control Tower, and AWS Service Catalog part of your cloud and hybrid strategies. These will not only speed up migrations but also form building blocks for a modernized business with a focus on enabling prescriptive self-service. Consider Red Hat Advanced Cluster Management for Kubernetes for multi-cluster management.

Field Notes: Building a Shared Account Structure Using AWS Organizations

Post Syndicated from Abhijit Vaidya original https://aws.amazon.com/blogs/architecture/field-notes-building-a-shared-account-structure-using-aws-organizations/

For customers considering the AWS Solution Provider Program, there are challenges to mitigate when building a shared account model with SI partners. AWS Organizations makes it possible to build the right account structure to support a resale arrangement. In this engagement model, the end customer receives an AWS invoice from an AWS authorized partner instead of directly from AWS.

Partners and customers who want to engage in this service resale arrangement need to build a new account structure. This process includes linking or transferring existing customer accounts to the partner master account, so that all the billing data from customer accounts is consolidated into the partner master account.

While linking or transferring existing customer accounts to the new master account, the partner must verify that the new master account structure does not compromise any customer security controls and continues to provide full control of linked accounts to the customer. The new account structure must fulfill the following requirements for both the AWS customer and the partner:

  • The customer maintains full access to the AWS organization and is able to perform all critical security-related tasks, except access to billing data.
  • The partner is able to control only billing information and is not able to perform any other task in the root account (master payer account) without approval from the customer.
  • In case of contract breach or termination, the customer is able to regain full control of all accounts, including the master.

In this post, you will learn how partners can create a master account with shared ownership. We also show how to link or transfer customer organization accounts to the new organization master account and set up policies that provide appropriate access to both partner and customer.

Account Structure

The following architecture represents the account structure setup that can fulfill customer and partner requirements as a part of a service resale arrangement.

As illustrated in the preceding diagram, the following sections describe the key architectural components:

Architectural Components

As a part of the resale arrangement, the customer’s existing AWS organization and related accounts are linked to the partner’s master payer account. The customer can continue to maintain their existing master root account, while all child accounts are linked to the new master payer account.

Customers may have valid concerns about linking or transferring accounts to the partner-owned master payer account and may come up with many what-if scenarios, for example, “What if the partner shuts down our environment or servers?” or “What if the partner blocks access to child accounts?”

This account structure provides the right controls to both the customer and the partner and addresses customer concerns about the security of their accounts. It includes the following benefits:

  • Starting with access to the root account, neither customer nor partner can access the root account without the other party’s involvement.
  • The partner controls the ID and password for the root account, while the customer maintains the MFA token for the account. The customer also controls the phone number and security questions associated with the root account. That way, the partner cannot replace the MFA token on their own.
  • The partner only has billing access and does not control any other parts of the account, including child accounts. Any time root account access is needed, the customer and partner teams must collaborate to access the root account.
  • Neither the customer nor the partner can assign new IAM roles to themselves, thereby protecting the initial account setup.

Security considerations in the shared account setup

The following table highlights both customer and partner responsibilities and access controls provided by the architecture in the previous section.

The following points highlight security recommendations to provide adequate access rights to both partner and customers.

  • The new master payer/root account has joint ownership between the Partner and the Customer.
  • The AWS account root user credentials (user ID/password) are held by the Partner, and the MFA (multi-factor authentication) device is held by the Customer.
  • An IAM (AWS Identity and Access Management) role is created under the master payer account with the policies “FullOrganizationAccess”, “Amazon S3” (Amazon Simple Storage Service), “CloudTrail” (AWS CloudTrail), and “CloudWatch” (Amazon CloudWatch) for the Customer.
  • The Customer security team uses this role to log in and manage the security of the account. Additional permissions can be added to this role as needed in the future. This role does not have ANY billing permissions.
  • Only the Partner has access to AWS billing and usage data.
  • An IAM role/user is created under the master payer account with only billing permissions, so that the Partner team can log in and download invoices. This role has no permissions other than billing and usage reports.
  • Any root login attempt on the master payer account triggers a notification to the Customer’s SOC team and the Partner’s customer account management team (a sample alerting rule is sketched after this list).
  • The Partner’s email address is used to create the account so that invoices can be emailed to the Partner. The Customer cannot see these invoices.
  • The Customer’s phone number is used to create the master account, and the Customer maintains the security questions and answers. This prevents replacement of the MFA token by the Partner team without informing the Customer. The Customer doesn’t need the Partner’s help or permission to log in and manage security.
  • No member of the Partner team can log in to the master payer/root account without the Customer providing the MFA token.
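
As a rough sketch of the root login alerting called out above, an Amazon EventBridge (CloudWatch Events) rule can match root console sign-ins recorded by CloudTrail and publish to an SNS topic that both teams subscribe to. The rule name, Region, account ID, and topic are placeholders, and depending on your setup the rule may need to live in the US East (N. Virginia) Region, where console sign-in events are delivered.

aws events put-rule --name notify-root-signin --event-pattern '{"detail-type":["AWS Console Sign In via CloudTrail"],"detail":{"userIdentity":{"type":["Root"]}}}'
aws events put-targets --rule notify-root-signin --targets Id=1,Arn=arn:aws:sns:us-east-1:111122223333:root-signin-alerts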

Setting up the shared master AWS account structure

Create a playbook for the account transfer activity based on the following tasks. For each task, identify the owner. Make sure that owners have the right permissions to perform the tasks.

Part I – Setting up new partner master account

  1. Create the new Partner Master Payer Account.
  2. Update the payment details section with the required details for payment in the Partner Master Payer Account.
  3. Enable MFA in the Partner Master Payer Account.
  4. Update the contacts for security and operations in the Partner Master Payer Account.
  5. Update demographics (address and contact details) in the Partner Master Payer Account.
  6. Create an IAM role for the Customer team in the Partner Master Payer Account. The IAM role is created under the master payer with “FullOrganizationAccess”, “Amazon S3”, “CloudTrail”, “CloudWatch”, and “CloudFormationFullAccess” so that the Customer SOC team can log in and manage the security of the account. Additional permissions can be added to this role in the future if needed.

Select the roles:


7. Create an IAM role or user with billing permissions for the Partner team in the Partner Master Payer Account.
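
A hedged sketch of steps 6 and 7 from the CLI follows. The role names, the customer account ID in the trust policy, and the attached policies are illustrative; “FullOrganizationAccess” is not an AWS managed policy name, so AWSOrganizationsFullAccess stands in for it here, and the remaining policies from step 6 would be attached the same way.

# Role the Customer SOC team assumes from their own account (111122223333), with MFA required
aws iam create-role --role-name CustomerSecurityAccess --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::111122223333:root"},"Action":"sts:AssumeRole","Condition":{"Bool":{"aws:MultiFactorAuthPresent":"true"}}}]}'
aws iam attach-role-policy --role-name CustomerSecurityAccess --policy-arn arn:aws:iam::aws:policy/AWSOrganizationsFullAccess
aws iam attach-role-policy --role-name CustomerSecurityAccess --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Billing-only role for the Partner team, using the Billing job-function managed policy
aws iam create-role --role-name PartnerBillingAccess --assume-role-policy-document file://partner-trust-policy.json
aws iam attach-role-policy --role-name PartnerBillingAccess --policy-arn arn:aws:iam::aws:policy/job-function/Billing

Note that for any IAM role or user to see billing data, IAM access to billing information must also be activated in the account settings of the master payer account.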

Part II – Setting up customer master account

1. Create an IAM user in the customer’s master account. This user assumes a role into the new master payer/root account.


 

2. Confirm that when the IAM user from the customer account assumes a role in the new master account, the user does not have billing access.

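A minimal sketch of step 1 in practice: the customer user assumes the security role in the new master payer account, supplying their MFA token. The account IDs, role name, and MFA device ARN are placeholders; the temporary credentials that are returned carry only the permissions attached to the role, which is how the check in step 2 is enforced.

aws sts assume-role --role-arn arn:aws:iam::999999999999:role/CustomerSecurityAccess --role-session-name soc-review --serial-number arn:aws:iam::111122223333:mfa/soc-admin --token-code 123456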

Part III – Creating an organization structure in partner account

  1. Create an organization in the Partner Master Payer Account.
  2. Create multiple organizational units (OUs) in the Partner Master Payer Account.

 

3. Enable service control policies from the AWS Organizations Policies menu.


5. Create or copy multiple service control policies into the Partner Master Payer Account from the customer root account. Any service control policies from the customer root account should be manually copied to the new partner account.

6. If the customer root account has any special software installed (for example, security software), install the same software in the Partner Master Payer Account.

7. Set alerts in the Partner Master Payer root account. Any login to the root account sends alerts to the customer and partner teams.

8. It is recommended to keep a copy of all billing history invoices for all accounts to be transferred to the partner organization. This can be achieved by downloading CSV files or printing all invoices and storing the files in Amazon S3 for long-term archival. Billing history and invoices are found by choosing Orders and Invoices on the Billing & Cost Management dashboard. After accounts are transferred to the new organization, historic billing data will not be available for those accounts.

9. Remove all the member accounts from the current customer root account/organization. This step is performed by the customer account admin and is required before an account can be transferred to the partner account organization.

10. Send an invite from the Partner Master Payer Account to the delinked member account.


11. Member accounts accept the invite from the Partner Master Payer Account.


12. Move the customer member account to the appropriate OU in the Partner Master Payer Account.
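
For reference, steps 9 through 12 map to a handful of AWS Organizations CLI calls. This is a hedged sketch; the account ID and OU name are placeholders, and each command must be run in the account noted in the comments.

# Run in the customer's existing management account: detach a member account
aws organizations remove-account-from-organization --account-id 222222222222

# Run in the new partner master payer account: look up the root, create an OU, and invite the standalone account
ROOT_ID=$(aws organizations list-roots --query 'Roots[0].Id' --output text)
OU_ID=$(aws organizations create-organizational-unit --parent-id "$ROOT_ID" --name Workloads --query 'OrganizationalUnit.Id' --output text)
aws organizations invite-account-to-organization --target Id=222222222222,Type=ACCOUNT

# Run in the member account: accept the most recent invitation (handshake)
HANDSHAKE_ID=$(aws organizations list-handshakes-for-account --query 'Handshakes[0].Id' --output text)
aws organizations accept-handshake --handshake-id "$HANDSHAKE_ID"

# Back in the partner master payer account: move the account into the target OU
aws organizations move-account --account-id 222222222222 --source-parent-id "$ROOT_ID" --destination-parent-id "$OU_ID"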

Setting up the shared security model between partner and customer contacts

While setting up the master account, three contacts need to be updated for notification.

  • Billing – this is owned by the partner.
  • Operations – this is owned by the customer.
  • Security – this is owned by the customer.

This will trigger a notification of any activity on the root account. The contact details contain name, title, email address, and phone number. It is recommended to use the customer’s SOC team distribution email for security and operations, and a phone number that belongs to the organization, not an individual.


Additionally, before any root account activity takes place, AWS Support will verify using the security challenge questionnaire. These questions and answers are owned by the Customer’s SOC team.


If a customer is not able to access the AWS account, alternate support options are available at Contact us by expanding the “I’m an AWS customer and I’m looking for billing or account support” menu. While contacting AWS Support, all the details that are listed on the account are needed, including the full name, phone number, address, email address, and the last four digits of the credit card.

Clean Up

After recovering the account, the Customer should close any accounts that are not in use. It’s a good idea not to have open accounts in your name that could result in charges. For more information, review Closing an Account in the Billing and Cost Management User Guide.

The shared master root account should only be used for the selected activities referred to in the following document.

Conclusion

In this post, you learned how AWS Organizations features can be used to create a shared master account structure. This supports a service resale business engagement between customer and partner. Using AWS Organizations and cross-account access, this solution allows customers to control all key aspects of managing the AWS organization (security, logging, and monitoring) and allows partners to control billing-related data.

Additional Resources

Cross Account Access

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.