Tag Archives: AWS Organizations

How to restrict IAM roles to access AWS resources from specific geolocations using AWS Client VPN

Post Syndicated from Artem Lovan original https://aws.amazon.com/blogs/security/how-to-restrict-iam-roles-to-access-aws-resources-from-specific-geolocations-using-aws-client-vpn/

You can improve your organization’s security posture by enforcing access to Amazon Web Services (AWS) resources based on IP address and geolocation. For example, users in your organization might bring their own devices, which might require additional security authorization checks and posture assessment in order to comply with corporate security requirements. Enforcing access to AWS resources based on geolocation can help you to automate compliance with corporate security requirements by auditing the connection establishment requests. In this blog post, we walk you through the steps to allow AWS Identity and Access Management (IAM) roles to access AWS resources only from specific geographic locations.

Solution overview

AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and your on-premises network resources. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client. A client VPN session terminates at the Client VPN endpoint, which is provisioned in your Amazon Virtual Private Cloud (Amazon VPC) and therefore enables a secure connection to resources running inside your VPC network.

This solution uses Client VPN to implement geolocation authentication rules. When a client VPN connection is established, authentication is implemented at the first point of entry into the AWS Cloud. It’s used to determine if clients are allowed to connect to the Client VPN endpoint. You configure an AWS Lambda function as the client connect handler for your Client VPN endpoint. You can use the handler to run custom logic that authorizes a new connection. When a user initiates a new client VPN connection, the custom logic is the point at which you can determine the geolocation of this user. In order to enforce geolocation authorization rules, you need:

  • AWS WAF to determine the user’s geolocation based on their IP address.
  • A network address translation (NAT) gateway whose Elastic IP address serves as the public origin IP address for all requests to your AWS resources.
  • An IAM policy that is attached to the IAM role and that AWS evaluates so that requests are allowed only when the request origin IP address matches the Elastic IP address of the NAT gateway.

One of the key features of AWS WAF is the ability to allow or block web requests based on country of origin. When a user initiates a new connection to your Client VPN endpoint, the Client VPN service invokes the client connect handler Lambda function on your behalf. The Lambda function receives the device, user, and connection attributes. The user’s public IP address is one of the device attributes, and it is used to identify the user’s geolocation by using the AWS WAF geolocation feature. Only connections that are authorized by the Lambda function are allowed to connect to the Client VPN endpoint.

Note: The accuracy of the IP address to country lookup database varies by region. Based on recent tests, the overall accuracy for the IP address to country mapping is 99.8 percent. We recommend that you work with regulatory compliance experts to decide if your solution meets your compliance needs.

A NAT gateway allows resources in a private subnet to connect to the internet or other AWS services, but prevents a host on the internet from connecting to those resources. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. Because an Elastic IP address is static, any request originating from a private subnet is seen with a public IP address that you can trust: the Elastic IP address of your NAT gateway.

AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. In an IAM policy, you can define the global condition key aws:SourceIp to restrict API calls to your AWS resources from specific IP addresses.

Note: Throughout this post, the user is authenticating with a SAML identity provider (IdP) and assumes an IAM role.

Figure 1 illustrates the authentication process when a user tries to establish a new Client VPN connection session.

Figure 1: Enforce connection to Client VPN from specific geolocations

Let’s look at how the process illustrated in Figure 1 works.

  1. The user device initiates a new client VPN connection session.
  2. The Client VPN service redirects the user to authenticate against an IdP.
  3. After user authentication succeeds, the client connects to the Client VPN endpoint.
  4. The Client VPN endpoint invokes the Lambda function synchronously. The function is invoked after device and user authentication, and before the authorization rules are evaluated.
  5. The Lambda function extracts the public-ip device attribute from the input and makes an HTTPS request to the Amazon API Gateway endpoint, passing the user’s public IP address in the X-Forwarded-For header. Because you’re using AWS WAF to protect API Gateway, and have geographic match conditions configured, a response with the status code 200 is returned only if the user’s public IP address originates from an allowed country of origin. Additionally, AWS WAF has another rule configured that blocks all requests to API Gateway if the request doesn’t originate from one of the NAT gateway IP addresses. Because the Lambda function is deployed in the VPC, its requests originate from the NAT gateway IP address, and therefore the request isn’t blocked by AWS WAF. To learn more about running a Lambda function in a VPC, see Configuring a Lambda function to access resources in a VPC. The following code example shows Lambda code that performs this step.

    Note: Optionally, you can implement additional controls by creating specific authorization rules. Authorization rules act as firewall rules that grant access to networks. You should have an authorization rule for each network for which you want to grant access. To learn more, see Authorization rules.

  6. The Lambda function returns the authorization request response to Client VPN.
  7. When the Lambda function—shown following—returns an allow response, Client VPN establishes the VPN session.
import os
import http.client


cloud_front_url = os.getenv("ENDPOINT_DNS")
endpoint = os.getenv("ENDPOINT")
success_status_codes = [200]


def build_response(allow, status):
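    # Response schema that the client connect handler must return to Client VPN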
    return {
        "allow": allow,
        "error-msg-on-failed-posture-compliance": "Error establishing connection. Please contact your administrator.",
        "posture-compliance-statuses": [status],
        "schema-version": "v1"
    }


def handler(event, context):
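    # Client VPN passes device, user, and connection attributes; 'public-ip' is the client's source IP address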
    ip = event['public-ip']

    conn = http.client.HTTPSConnection(cloud_front_url)
    conn.request("GET", f'/{endpoint}', headers={'X-Forwarded-For': ip})
    r1 = conn.getresponse()
    conn.close()

    status_code = r1.status

    if status_code in success_status_codes:
        print("User's IP is based from an allowed country. Allowing the connection to VPN.")
        return build_response(True, 'compliant')

    print("User's IP is NOT based from an allowed country. Blocking the connection to VPN.")
    return build_response(False, 'quarantined')

After the client VPN session is established successfully, the request from the user device flows through the NAT gateway. The originating source IP address is recognized, because it is the Elastic IP address associated with the NAT gateway. An IAM policy is defined that denies any request to your AWS resources that doesn’t originate from the NAT gateway Elastic IP address. By attaching this IAM policy to users, you can control which AWS resources they can access.

Figure 2 illustrates the process of a user trying to access an Amazon Simple Storage Service (Amazon S3) bucket.

Figure 2: Enforce access to AWS resources from specific IPs

Let’s look at how the process illustrated in Figure 2 works.

  1. A user signs in to the AWS Management Console by authenticating against the IdP and assumes an IAM role.
  2. Using the IAM role, the user makes a request to list Amazon S3 buckets. The IAM policy of the user is evaluated to form an allow or deny decision.
  3. If the request is allowed, an API request is made to Amazon S3.

The aws:SourceIp condition key is used in a policy to deny requests from principals if the origin IP address isn’t the NAT gateway IP address. However, this policy also denies access if an AWS service makes calls on a principal’s behalf. For example, when you use AWS CloudFormation to provision a stack, it provisions resources by using its own IP address, not the IP address of the originating request. In this case, you use aws:SourceIp with the aws:ViaAWSService key to ensure that the source IP address restriction applies only to requests made directly by a principal.

IAM deny policy

The IAM policy doesn’t allow any actions. Instead, it denies any action on any resource if the source IP address doesn’t match any of the IP addresses in the condition. Use this policy in combination with other policies that allow specific actions.
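The CloudFormation template provided later in this post creates this policy (IAMRoleClientVpnDenyIfNotNatIP) for you. As a rough sketch of the shape such a deny statement can take, the following Python snippet builds a policy document with the aws:SourceIp and aws:ViaAWSService conditions described above; the statement ID and the NAT gateway Elastic IP addresses are placeholders, not the values the template generates.

import json

# Placeholder NAT gateway Elastic IP addresses; substitute the addresses created by the stack.
nat_gateway_ips = ["198.51.100.10/32", "198.51.100.11/32"]

deny_if_not_nat_ip_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIfNotFromNatGatewayIp",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # Deny only when the request does not come from the NAT gateway IP addresses ...
                "NotIpAddress": {"aws:SourceIp": nat_gateway_ips},
                # ... and only when a principal calls directly, not when an AWS service
                # (for example, CloudFormation) makes the call on the principal's behalf.
                "BoolIfExists": {"aws:ViaAWSService": "false"}
            }
        }
    ]
}

print(json.dumps(deny_if_not_nat_ip_policy, indent=2))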

Prerequisites

Make sure that you have the following in place before you deploy the solution:

Implementation and deployment details

In this section, you create a CloudFormation stack that creates AWS resources for this solution. To start the deployment process, select the following Launch Stack button.

Select the Launch Stack button to launch the template

You also can download the CloudFormation template if you want to modify the code before the deployment.

The template takes several parameters, shown in Figure 3. Let’s go over the key ones.

Figure 3: CloudFormation stack parameters

The key parameters are:

  • AuthenticationOption: Information about the authentication method to be used to authenticate clients. You can choose either AWS Managed Microsoft AD or IAM SAML identity provider for authentication.
  • AuthenticationOptionResourceIdentifier: The ID of the AWS Managed Microsoft AD directory to use for Active Directory authentication, or the Amazon Resource Name (ARN) of the SAML provider for federated authentication.
  • ServerCertificateArn: The ARN of the server certificate. The server certificate must be provisioned in ACM.
  • CountryCodes: A string of comma-separated country codes. For example: US,GB,DE. The country codes must be two-letter (alpha-2) codes from the ISO 3166 international standard.
  • LambdaProvisionedConcurrency: Provisioned concurrency for the client connection handler. We recommend that you configure provisioned concurrency for the Lambda function to enable it to scale without fluctuations in latency.

All other input fields have default values that you can either accept or override. Once you provide the parameter input values and reach the final screen, choose Create stack to deploy the CloudFormation stack.

This template creates several resources in your AWS account, as follows:

  • A VPC and associated resources, such as InternetGateway, Subnets, ElasticIP, NatGateway, RouteTables, and SecurityGroup.
  • A Client VPN endpoint, which provides connectivity to your VPC.
  • A Lambda function, which is invoked by the Client VPN endpoint to determine the country origin of the user’s IP address.
  • An API Gateway for the Lambda function to make an HTTPS request.
  • AWS WAF in front of API Gateway, which only allows requests to go through to API Gateway if the user’s IP address is based in one of the allowed countries.
  • A deny policy with a NAT gateway IP address condition. Attaching this policy to a role or user enforces that they can’t access your AWS resources unless they are connected to your client VPN.

Note: CloudFormation stack deployment can take up to 20 minutes to provision all AWS resources.

After the stack is created, there are two outputs in the Outputs section, as shown in Figure 4.

Figure 4: CloudFormation stack outputs

  • ClientVPNConsoleURL: The URL where you can download the client VPN configuration file.
  • IAMRoleClientVpnDenyIfNotNatIP: The IAM policy to be attached to an IAM role or IAM user to enforce access control.

Attach the IAMRoleClientVpnDenyIfNotNatIP policy to a role

This policy is used to enforce access to your AWS resources based on geolocation. Attach this policy to the role that you are using for testing the solution. You can use the steps in Adding IAM identity permissions to do so.

Configure the AWS Client VPN desktop application

When you open the URL that you see in ClientVPNConsoleURL, you see the newly provisioned Client VPN endpoint. Select Download Client Configuration to download the configuration file.

Figure 5: Client VPN endpoint

Confirm the download request by selecting Download.

Figure 6: Client VPN Endpoint – Download Client Configuration

To connect to the Client VPN endpoint, follow the steps in Connect to the VPN. After a successful connection is established, you should see the message Connected. in your AWS Client VPN desktop application.

Figure 7: AWS Client VPN desktop application – established VPN connection

Troubleshooting

If you can’t establish a Client VPN connection, here are some things to try:

  • Confirm that the Client VPN connection has been successfully established. It should be in the Connected state. To troubleshoot connection issues, you can follow this guide.
  • If the connection isn’t establishing, make sure that your machine has TCP port 35001 available. This is the port used for receiving the SAML assertion.
  • Validate that the user you’re using for testing is a member of the correct SAML group on your IdP.
  • Confirm that the IdP is sending the right details in the SAML assertion. You can use browser plugins, such as SAML-tracer, to inspect the information received in the SAML assertion.

Test the solution

Now that you’re connected to Client VPN, open the console, sign in to your AWS account, and navigate to the Amazon S3 page. Because you’re connected to the VPN, your origin IP address is one of the NAT gateway IPs, and the request is allowed. You can see your S3 buckets, if any exist.

Figure 8: Amazon S3 service console view – user connected to AWS Client VPN

Now that you’ve verified that you can access your AWS resources, go back to the Client VPN desktop application and disconnect your VPN connection. Once the VPN connection is disconnected, go back to the Amazon S3 page and reload it. This time you should see an error message that you don’t have permission to list buckets, as shown in Figure 9.

Figure 9: Amazon S3 service console view – user is disconnected from AWS Client VPN

Access has been denied because your origin public IP address is no longer one of the NAT gateway IP addresses. As mentioned earlier, since the policy denies any action on any resource without an established VPN connection to the Client VPN endpoint, access to all your AWS resources is denied.

Scale the solution in AWS Organizations

With AWS Organizations, you can centrally manage and govern your environment as you grow and scale your AWS resources. You can use Organizations to apply policies that give your teams the freedom to build with the resources they need, while staying within the boundaries you set. By organizing accounts into organizational units (OUs), which are groups of accounts that serve an application or service, you can apply service control policies (SCPs) to create targeted governance boundaries for your OUs. To learn more about Organizations, see AWS Organizations terminology and concepts.

SCPs help you to ensure that your accounts stay within your organization’s access control guidelines across all the accounts within your OUs. In particular, these are the key benefits of using SCPs in AWS Organizations:

  • You don’t have to create an IAM policy with each new account, but instead create one SCP and apply it to one or more OUs as needed.
  • You don’t have to apply the IAM policy to every IAM user or role, existing or new.
  • This solution can be deployed in a separate account, such as a shared infrastructure account. This helps to decouple infrastructure tooling from business application accounts.

Figure 10 illustrates the solution in an Organizations environment.

Figure 10: Use SCPs to enforce policy across many AWS accounts

The Client VPN account is the account that the solution is deployed into. This account can also be used for other networking-related services. The SCP is created in the organization’s management account and attached to one or more OUs. This allows you to centrally control access to your AWS resources.

Let’s review the new condition that’s added to the IAM policy:

"ArnNotLikeIfExists": {
    "aws:PrincipalARN": [
    "arn:aws:iam::*:role/service-role/*"
    ]
}

The aws:PrincipalARN condition key exempts service roles from the deny statement, so that your AWS services can call other AWS services even though those calls won’t have a NAT gateway IP address as the source IP address. For instance, a Lambda function might need to read a file from your S3 bucket.

Note: Appending policies to existing resources might cause an unintended disruption to your application. Consider testing your policies in a test environment or on non-critical resources before applying them to production resources. You can do that by attaching the SCP to a specific OU or to an individual AWS account.

Cleanup

After you’ve tested the solution, you can clean up all the created AWS resources by deleting the CloudFormation stack.

Conclusion

In this post, we showed you how you can restrict IAM roles to access AWS resources only from specific geographic locations. You used Client VPN to allow users to establish a client VPN connection from a desktop. You used a client connect handler Lambda function, together with API Gateway and AWS WAF, to identify the user’s geolocation. NAT gateway IPs served as trusted source IPs, and an IAM policy protected access to your AWS resources. Lastly, you learned how to scale this solution to many AWS accounts with Organizations.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Artem Lovan

Artem is a Senior Solutions Architect based in New York. He helps customers architect and optimize applications on AWS. He has been involved in IT at many levels, including infrastructure, networking, security, DevOps, and software development.

Author

Faiyaz Desai

Faiyaz leads a solutions architecture team supporting cloud-native customers in New York. His team guides customers in their modernization journeys through business and technology strategies, architectural best practices, and customer innovation. Faiyaz’s focus areas include unified communication, customer experience, network design, and mobile endpoint security.

How ERGO implemented an event-driven security remediation architecture on AWS

Post Syndicated from Adam Sikora original https://aws.amazon.com/blogs/architecture/how-ergo-implemented-an-event-driven-security-remediation-architecture-on-aws/

ERGO is one of the major insurance groups in Germany and Europe. Within the ERGO Group, ERGO Technology & Services S.A. (ET&S), a part of the ET&SM holding, has competencies in digital transformation and know-how in creating and implementing complex IT systems, with a focus on the quality of solutions and a portfolio aligned with the entire value chain of the insurance market.

Business Challenge and Solution

ERGO has a multi-account AWS environment where each project team subscribes to a set of AWS accounts that conforms to workload requirements and security best practices. As ERGO began its cloud journey, the CIS Foundations Benchmark standard was used as the key indicator for measuring compliance. The report showed significant room for security posture improvements. ERGO was looking for a solution that could enable the management of security events at scale. At the same time, they needed to centralize the event response and remediation in near-real time. The goal was to improve the CIS compliance metric and overall security posture.

Architecture

ERGO uses AWS Organizations to centrally govern the multi-account AWS environment. Integration of AWS Security Hub with AWS Organizations enables ERGO to designate ERGO’s Security Account as the Security Hub administrator/primary account. Other organization accounts are automatically registered as Security Hub member accounts to send events to the Security Account.

An important aspect of the workflow is to maintain segregation of duties and separation of environments. ERGO uses two separate AWS accounts to implement automatic finding remediation:

  • Security Account – this is the primary account with Security Hub where security alerts (findings) from all the AWS accounts of the project are gathered.
  • Service Account – this is the account that can take action on target project (member) AWS accounts. ERGO uses AWS Lambda functions to run remediation actions through AWS Identity and Access Management (IAM) permissions, VPC resources actions, and more.

Within the Security Account, AWS Security Hub serves as the event aggregation solution that gathers multi-account findings from AWS services such as Amazon GuardDuty. ERGO was able to centralize the security findings. But they still needed to develop a solution that routed the filtered, actionable events to the Service Account. The solution had to automate the response to these events based on ERGO’s security policy. ERGO built this solution with the help of Amazon CloudWatch, AWS Step Functions, and AWS Lambda.

ERGO used the integration of AWS Security Hub with Amazon CloudWatch to send all the security events to CloudWatch. The filtering logic of events was managed at two levels. At the first level, ERGO used CloudWatch Events rules that match event patterns to refine the types of events ERGO wanted to focus on.
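The exact rules ERGO uses aren’t published, but a minimal sketch of this first level of filtering, expressed with boto3 and an illustrative event pattern that matches Security Hub findings, might look like the following; the rule name is hypothetical.

import json
import boto3

events = boto3.client("events")

# Illustrative pattern: match findings that Security Hub imports into the administrator account.
security_hub_findings_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
}

events.put_rule(
    Name="securityhub-findings-to-stepfunctions",  # hypothetical rule name
    EventPattern=json.dumps(security_hub_findings_pattern),
    State="ENABLED",
)

A target such as the Step Functions state machine would then be registered on the rule so that matching findings start the workflow.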

The second level of filtering logic was more nuanced and related to the remediation action ERGO wanted to take on a detected event. ERGO chose AWS Step Functions to build a workflow that enabled them to further filter the events, in addition to matching them to the suitable remediation action.

Choosing AWS Step Functions enabled ERGO to orchestrate multiple steps. They could also respond to errors in the overall workflow. For example, one of the issues that ERGO encountered was the sporadic failure of the Archival Lambda function. This was due to Security Hub API rate throttling.

ERGO evaluated several workarounds to deal with this situation. They considered using the automatic retries capability of the AWS SDK to make the API call in the Archival function. However, the built-in mechanism was not sufficient in this case. Another option for dealing with the rate limit was to throttle the Archival Lambda functions by applying a low reserved concurrency. Another possibility was to batch the events to be SUPPRESSED and process them one batch at a time. The benefit was making a single API call that covers several findings at a time.

After much consideration, ERGO decided to use the “retry on error” mechanism of the Step Function to circumvent this problem. This allowed ERGO to manage the error handling directly in the workflow logic. It wasn’t necessary to change the remediation and archival logic of the Lambda functions. This was a huge advantage. Writing and maintaining error handling logic in each one of the Lambda functions would have been time-intensive and complicated.
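Step Functions expresses this retry behavior declaratively in the state machine definition. The fragment below, written as a Python dictionary for readability, is only an illustration of what a Retry block on the Archival task state could look like; the state name, function ARN, error names, and backoff values are assumptions, not ERGO’s actual configuration.

# Illustrative Amazon States Language fragment for the Archival task state.
archive_findings_state = {
    "ArchiveFindings": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:eu-central-1:111122223333:function:ArchivalFunction",
        "Retry": [
            {
                # Back off and retry when Security Hub throttles the API call.
                "ErrorEquals": ["TooManyRequestsException", "States.TaskFailed"],
                "IntervalSeconds": 5,
                "MaxAttempts": 5,
                "BackoffRate": 2.0,
            }
        ],
        "End": True,
    }
}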

Additionally, the remediation actions had to be configured and run from the Service Account. That means the Step Function in the Security Account had to trigger a cross-account resource. ERGO had to find a way to integrate the Remediation Lambda in the Service Account with the state machine of the Security Account. ERGO achieved this integration using a Proxy Lambda in the Security Account.

The Proxy Lambda resides in the Security Account and is initiated by the Step Function. It takes the function name and function version as arguments, and uses them to start the Remediation function in the Service Account.

The Remediation functions in the Service Account have permission to take action on Project accounts. As the next step, the Remediation function is invoked for the impacted accounts. The Step Function filters these and passes the account ID to the Proxy Lambda, which in turn passes this argument to the Remediation Lambda. The Remediation function runs the actions on the Project accounts and returns the output to the Proxy Lambda. This is then passed back to the Step Function.

The role that the Remediation Lambda assumes, using the AssumeRole mechanism, is an organization-level role. It is deployed in every account and has the permissions required to perform the remediation.
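To make the flow concrete, here is a minimal sketch of what a Proxy Lambda along these lines could look like. It assumes that the Step Function passes the remediation function name, version, and impacted account ID in the event, and that the Remediation function’s resource-based policy allows invocation from the Security Account; the field names are illustrative, not ERGO’s actual contract.

import json
import boto3

lambda_client = boto3.client("lambda")


def handler(event, context):
    # Input from the Step Function (illustrative field names).
    function_name = event["remediation_function_name"]  # ARN of the Remediation function in the Service Account
    function_version = event.get("remediation_function_version", "$LATEST")
    impacted_account_id = event["account_id"]

    # Invoke the cross-account Remediation function synchronously.
    response = lambda_client.invoke(
        FunctionName=function_name,
        Qualifier=function_version,
        InvocationType="RequestResponse",
        Payload=json.dumps({"account_id": impacted_account_id, "finding": event.get("finding")}),
    )

    # Hand the remediation result back to the Step Function.
    return json.loads(response["Payload"].read())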

ERGO Architecture

Figure 1. Technical Solution implementation

  1. The Security Hub service in ERGO Project accounts sends security findings to the administrator account (the Security Account).
  2. Findings are aggregated and sent to CloudWatch Events for filtering.
  3. CloudWatch rules invoke Step Functions as the target. Step Functions process security events based on the event type and treatment required as per CIS Standards.
  4. For events that need to be suppressed without any dependency on the Project Accounts, the Step Function invokes a Lambda function to archive the findings.
  5. For events that require action on the Project accounts, the Step Function invokes a Proxy Lambda with the required parameters.
  6. The Proxy Lambda, in turn, invokes the cross-account Remediation function in the Service Account, which has the permissions to run actions in Project accounts.
  7. Based on the event type, the corresponding remediation action is run on the impacted Project account.
  8. The Remediation function passes the execution result back to the Proxy Lambda to complete the security event workflow.

Failed remediations are manually resolved in exceptional conditions.

Summary

By implementing this event-driven solution, ERGO was able to increase and maintain automated compliance with the CIS AWS Foundations Benchmark standard at about 95%. The remaining findings were evaluated on a case-by-case basis, per specific project requirements. This measurable improvement in ERGO’s compliance posture was achieved with an end-to-end serverless workflow, which offloaded ongoing platform maintenance efforts from the ERGO cloud security team. Working closely with its AWS account and service teams, ERGO will continue to evaluate and improve this architecture.

Field Notes: Enroll Existing AWS Accounts into AWS Control Tower

Post Syndicated from Kishore Vinjam original https://aws.amazon.com/blogs/architecture/field-notes-enroll-existing-aws-accounts-into-aws-control-tower/

Originally published 21 April 2020 to the Field Notes blog, and updated in August 2020 with new prechecks to the account enrollment script. 

Since the launch of AWS Control Tower, customers have been asking for the ability to deploy AWS Control Tower in their existing AWS Organizations and to extend governance to those accounts in their organization.

We are happy that you can now deploy AWS Control Tower in your existing AWS Organizations. The accounts that you launched before deploying AWS Control Tower, which we refer to as unenrolled accounts, remain outside AWS Control Tower’s governance by default. These accounts must be enrolled in AWS Control Tower explicitly.

When you enroll an account into AWS Control Tower, it deploys baselines and additional guardrails to enable continuous governance on your existing AWS accounts. However, you must perform proper due diligence before enrolling an account. Refer to the Things to Consider section below for additional information.

In this blog, I show you how to programmatically enroll your existing AWS accounts, and the accounts within unregistered OUs in your AWS organization, into AWS Control Tower.

Background

Here’s a quick review of some terms used in this post:

  • The enrollment script (enroll_account.py) is the Python script provided in this post. It interacts with multiple AWS services to identify, validate, and enroll the existing unmanaged accounts into AWS Control Tower.
  • An unregistered organizational unit (OU) is created through AWS Organizations. AWS Control Tower does not manage this OU.
  • An unenrolled account is an existing AWS account that was created outside of AWS Control Tower. It is not managed by AWS Control Tower.
  • A registered organizational unit (OU) is an OU that was created in the AWS Control Tower service. It is managed by AWS Control Tower.
  • When an OU is registered with AWS Control Tower, it means that specific baselines and guardrails are applied to that OU and all of its accounts.
  • An AWS Account Factory account is an AWS account provisioned using account factory in AWS Control Tower.
  • Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud.
  • AWS Service Catalog allows you to centrally manage commonly deployed IT services. In the context of this blog, account factory uses AWS Service Catalog to provision new AWS accounts.
  • AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS.
  • AWS Single Sign-On (SSO) makes it easy to centrally manage access to multiple AWS accounts. It also provides users with single sign-on access to all their assigned accounts from one place.

Things to Consider

Enrolling an existing AWS account into AWS Control Tower involves moving an unenrolled account into a registered OU. The Python script provided in this blog allows you to enroll your existing AWS accounts into AWS Control Tower. However, it doesn’t have much context around what resources are running on these accounts. It assumes that you validated the account services before running this script to enroll the account.

Here are some guidelines to check before you decide to enroll the accounts into AWS Control Tower:

  1. An AWSControlTowerExecution role must be created in each account. If you are using the script provided in this solution, it creates the role automatically for you.
  2. If you have a default VPC in the account, the enrollment process tries to delete it. If any resources are present in the VPC, the account enrollment fails.
  3. If AWS Config was ever enabled on the account you enroll, a default configuration recorder and delivery channel were created. Delete the configuration recorder and delivery channel for the account enrollment to work; a sketch of one way to do this follows this list.
  4. Start with enrolling the dev/staging accounts to get a better understanding of any dependencies or impact of enrolling the accounts in your environment.
  5. Create a new organizational unit in AWS Control Tower and do not enable any additional guardrails until you enroll the accounts. You can then enable guardrails one by one to check the impact of the guardrails in your environment.
  6. As an additional option, you can apply AWS Control Tower’s detective guardrails to existing AWS accounts before moving them under Control Tower governance. Instructions to apply the guardrails are discussed in detail in the AWS Control Tower Detective Guardrails as an AWS Config Conformance Pack blog post.
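For guideline 3 above, one way to remove a default configuration recorder and delivery channel is sketched below with boto3. Run it with credentials for the account being enrolled, in every Region where AWS Config was enabled, and only after confirming that the account no longer needs its existing AWS Config setup.

import boto3

config = boto3.client("config")  # credentials and Region of the account being enrolled

# Stop any running configuration recorder first (a delivery channel can't be
# deleted while recording is on), then remove the delivery channel and recorder.
recorders = config.describe_configuration_recorders().get("ConfigurationRecorders", [])
for recorder in recorders:
    config.stop_configuration_recorder(ConfigurationRecorderName=recorder["name"])

for channel in config.describe_delivery_channels().get("DeliveryChannels", []):
    config.delete_delivery_channel(DeliveryChannelName=channel["name"])

for recorder in recorders:
    config.delete_configuration_recorder(ConfigurationRecorderName=recorder["name"])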

Prerequisites

Before you enroll your existing AWS account in to AWS Control Tower, check the prerequisites from AWS Control Tower documentation.

The Python script provided as part of this blog supports enrolling all accounts within an unregistered OU into AWS Control Tower. The script also supports enrolling a single account by using either the email address or the account ID of an unenrolled account. Following are a few additional points to be aware of about this solution.

  • Enable trust access with AWS Organizations for AWS CloudFormation StackSets.
  • The email address associated with the AWS account is used as AWS SSO user name with default First Name Admin and Last Name User.
  • Accounts that are in the root of the AWS organization can be enrolled only one at a time.
  • While enrolling an entire OU using this script, the AWSControlTowerExecution role is automatically created on all the accounts in this OU.
  • You can enroll a single account within an unregistered OU using the script. It checks for the AWSControlTowerExecution role on the account. If the role doesn’t exist, the role is created on all accounts within the OU.
  • By default, you are not allowed to enroll an account that is in the root of the organization. You must pass an additional flag to launch a role creation stack set across the organization.
  • While enrolling a single account that is in the root of the organization, the script prompts for an additional flag to launch the role creation stack set across the organization.
  • The script uses CloudFormation Stack Set Service-Managed Permissions to create the AWSControlTowerExecution role in the unenrolled accounts.

How it works

The following diagram shows the overview of the solution.

Account enrollment

  1. In your AWS Control Tower environment, access an Amazon EC2 instance running in the master account of the AWS Control Tower home Region.
  2. Get temporary credentials for AWSAdministratorAccess from the AWS SSO login screen.
  3. Download and run the enroll_account.py script.
  4. The script creates the AWSControlTowerExecution role on the target account by using the Automatic Deployments feature for stack sets.
  5. On successful validation of the role and the organizational units that are given as input, the script launches a new product in Account Factory.
  6. The enrollment process creates an AWS SSO user using the same email address as the AWS account.

Setting up the environment

It takes up to 30 minutes to enroll each AWS account into AWS Control Tower, and the accounts can be enrolled only one at a time. Depending on the number of accounts that you are migrating, you must keep the session open long enough. In this section, you see one way of keeping these long-running jobs uninterrupted, by using the screen tool on an Amazon EC2 instance.

Optionally, you may use your own compute environment where the session timeouts can be handled. If you go with your own environment, make sure that you have python3, screen, and a recent version of boto3 installed.

1. Prepare your compute environment:

  • Log in to your AWS Control Tower with AWSAdministratorAccess role.
  • Switch to the Region where you deployed your AWS Control Tower if needed.
  • If necessary, launch a VPC using the stack here and wait for the stack to COMPLETE.
  • If necessary, launch an Amazon EC2 instance using the stack here. Wait for the stack to COMPLETE.
  • While you are in the master account, increase the session duration for AWS SSO as needed. The default is 1 hour and the maximum is 12 hours.

2. Connect to the compute environment (one-way):

  • Go to the EC2 Dashboard, and choose Running Instances.
  • Select the EC2 instance that you just created and choose Connect.
  • In Connect to your instance screen, under Connection method, choose EC2InstanceConnect (browser-based SSH connection) and Connect to open a session.
  • Go to AWS Single Sign-On page in your browser. Click on your master account.
  • Choose command line or programmatic access next to AWSAdministratorAccess.
  • From Option 1, copy the environment variables and paste them into your EC2 terminal screen in step 5 below.

3. Install required packages and set variables. You may skip this step if you used the stack provided in step 1 to launch a new EC2 instance:

  • Install python3 and boto3 on your EC2 instance. You may have to update boto3, if you use your own environment.
$ sudo yum install python3 -y 
$ sudo pip3 install boto3
$ pip3 show boto3
Name: boto3
Version: 1.12.39
  • Change to home directory and download the enroll_account.py script.
$ cd ~
$ wget https://raw.githubusercontent.com/aws-samples/aws-control-tower-reference-architectures/master/customizations/AccountFactory/EnrollAccount/enroll_account.py
  • Set up your home Region on your EC2 terminal.
export AWS_DEFAULT_REGION=<AWSControlTower-Home-Region>

4. Start a screen session in daemon mode. If your session gets timed out, you can open a new session and attach back to the screen.

$ screen -dmS SAM
$ screen -ls
There is a screen on:
        585.SAM (Detached)
1 Socket in /var/run/screen/S-ssm-user.
$ screen -dr 585.SAM 

5. On the screen terminal, paste the environmental variable that you noted down in step 2.

6. Identify the accounts or the unregistered OUs to migrate, and run the Python script provided with the options described below.

  • Python script usage:
usage: enroll_account.py -o -u|-e|-i -c 
Enroll existing accounts to AWS Control Tower.

optional arguments:
  -h, --help            show this help message and exit
  -o OU, --ou OU        Target Registered OU
  -u UNOU, --unou UNOU  Origin UnRegistered OU
  -e EMAIL, --email EMAIL
                        AWS account email address to enroll in to AWS Control Tower
  -i AID, --aid AID     AWS account ID to enroll in to AWS Control Tower
  -c, --create_role     Create Roles on Root Level
  • Enroll all the accounts from an unregistered OU to a registered OU
$ python3 enroll_account.py -o MigrateToRegisteredOU -u FromUnregisteredOU 
Creating cross-account role on 222233334444, wait 30 sec: RUNNING 
Executing on AWS Account: 570395911111, [email protected] 
Launching Enroll-Account-vinjak-unmgd3 
Status: UNDER_CHANGE. Waiting for 6.0 min to check back the Status 
Status: UNDER_CHANGE. Waiting for 5.0 min to check back the Status 
. . 
Status: UNDER_CHANGE. Waiting for 1.0 min to check back the Status 
SUCCESS: 111122223333 updated Launching Enroll-Account-vinjakSCchild 
Status: UNDER_CHANGE. Waiting for 6.0 min to check back the Status 
ERROR: 444455556666 
Launching Enroll-Account-Vinjak-Unmgd2 
Status: UNDER_CHANGE. Waiting for 6.0 min to check back the Status 
. . 
Status: UNDER_CHANGE. Waiting for 1.0 min to check back the Status 
SUCCESS: 777788889999 updated
  • Use AWS account ID to enroll a single account that is part of an unregistered OU.
$ python3 enroll_account.py -o MigrateToRegisteredOU -i 111122223333
  • Use AWS account email address to enroll a single account from an unregistered OU.
$ python3 enroll_account.py -o MigrateToRegisteredOU -e [email protected]

You are not allowed by default to enroll an AWS account that is in the root of the organization. The script checks for the AWSControlTowerExecution role in the account. If the role doesn’t exist, you are prompted to use -c | --create_role. Using the -c flag adds the stack instance to the parent organization root, which means an AWSControlTowerExecution role is created in all the accounts within the organization.

Note: Before using the -c flag, ensure that installing the AWSControlTowerExecution role in all the accounts in your organization is acceptable.

If you are unsure about this, follow the instructions in the documentation and create the AWSControlTowerExecution role manually in each account you want to migrate. Rerun the script.

  • Use AWS account ID to enroll a single account that is in root OU (need -c flag).
$ python3 enroll_account.py -o MigrateToRegisteredOU -i 111122223333 -c
  • Use AWS account email address to enroll a single account that is in root OU (need -c flag).
$ python3 enroll_account.py -o MigrateToRegisteredOU -e [email protected] -c

Cleanup steps

After you have successfully enrolled all the accounts into your AWS Control Tower environment, you can clean up the resources used for this solution, as described below.

If you have used the templates provided in this blog to launch the VPC and EC2 instance, delete the EC2 CloudFormation stack first and then the VPC stack.

Conclusion

Now you can deploy AWS Control Tower in an existing AWS organization. In this post, I showed you how to enroll the existing AWS accounts in your AWS organization into an AWS Control Tower environment. By using the procedure in this post, you can programmatically enroll a single account or all the accounts within an organizational unit into an AWS Control Tower environment.

Now that governance has been extended to these accounts, you can also provision new AWS accounts in just a few clicks and have your accounts conform to your company-wide policies.

Additionally, you can use Customizations for AWS Control Tower to apply custom templates and policies to your accounts. With custom templates, you can deploy new resources or apply additional custom policies to the existing and new accounts. This solution integrates with AWS Control Tower lifecycle events to ensure that resource deployments stay in sync with your landing zone. For example, when a new account is created using the AWS Control Tower account factory, the solution ensures that all resources attached to the account’s OUs are automatically deployed.

Mergers and Acquisitions readiness with the Well-Architected Framework

Post Syndicated from Sushanth Mangalore original https://aws.amazon.com/blogs/architecture/mergers-and-acquisitions-readiness-with-the-well-architected-framework/

Introduction

Companies looking for an acquisition or a successful exit through a merger, undergo a technical assessment as part of the due diligence process. While being a profitable business by itself can attract interest, running a disciplined IT department within your organization can make the acquisition more valuable. As an entity operating cloud workloads on AWS, you can use the AWS Well-Architected Framework. This will demonstrate that your workloads are architected with industry best practices in mind. The Well-Architected Framework explains the pros and cons of decisions you make while building systems on AWS. It consistently measures architectures against best practices observed in customer workloads across several industries. These workloads have achieved continued success on AWS through architectures that are secure, high-performing, resilient, scalable, and efficient. The Well-Architected Framework evaluates your cloud workloads based on five pillars:

  • Operational Excellence: The ability to support development and run workloads effectively, gain insights into your operations, and continuously improve supporting processes and procedures to deliver business value.
  • Security: The ability to protect data, systems, and assets to take advantage of cloud technologies to improve your security.
  • Reliability: The ability of the workload to perform its intended function correctly and consistently. This includes the ability to operate and test the workload through its complete lifecycle.
  • Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization: The ability to run cost-aware workloads that achieve business outcomes while minimizing costs.

The Well-Architected Framework Value Proposition in Mergers & Acquisitions (M&A)

Continuously assess the pre-M&A state of cloud workloads – The Well-Architected state for a workload must be treated as a moving target. New cloud patterns and best practices emerge every day. The Well-Architected Framework constantly evolves to incorporate them. Your workloads can continuously grow, shrink, become more complex, or simpler. Mergers and acquisitions can be a long, drawn-out process, which can take months to complete. Well-Architected reviews are recommended for workloads every 6 months to 1 year, or with every major development milestone. This helps guard against IT inertia and allows emerging best practices to be accounted for in your continuously evolving workload architecture. Technical currency can be maintained throughout the M&A process by continuously assessing your workloads against the Well-Architected Framework. The accompanying AWS Well-Architected Tool (AWS WA Tool) helps you track milestones as you make improvements and measure your progress.

Well-Architected Continuous Improvement Cycle

Standardize through a common framework – One of the biggest challenges in M&As is the standardization of enterprise IT post-merger. The IT departments of organizations can operate differently, and have vastly different IT assets, skill sets, and processes. According to a McKinsey article on the Strategic Value of IT in M&A, more than half the synergies available in a merger are related to IT. If the acquirer is also an AWS customer, this can enable significant synergies in M&As. The Well-Architected Framework can be a foundation on which the two IT departments can find common ground. Even if the acquirer does not have a cloud-based environment like AWS, inheriting a Well-Architected AWS setup can help the post-merger IT landscape evolve.

Integrate seamlessly through Well-Architected landing zones – The AWS Control Tower service and the AWS Landing Zone solution are options that can provision Well-Architected multi-account AWS environments. Together with AWS Organizations, this makes the IT integration a lot smoother for the AWS environments across enterprises. AWS accounts can detach from one AWS organization and attach to another seamlessly. The attached accounts can enroll with an existing Control Tower setup to benefit from the security and governance guardrails. In a Well-Architected landing zone, your management account will not have any workloads. As shown in the following diagram, you may move member accounts from your AWS organization to your acquirer’s AWS organization under the right organizational unit (OU). You can later decommission your AWS organization and close your management AWS account.

Sample pre-merger AWS environments

Sample post-merger AWS environment

Benefit from faster migration to AWS – Using the Well-Architected Framework, you can achieve faster migration to AWS. Workload risks can be mitigated beforehand by using best practices from AWS before the migration. Post migration, the workloads benefit from AWS offerings that already have many of the Well-Architected best practices built into them. The improvement plans from the Well-Architected Tool include the recommended AWS services that can address identified risks. Physical IT assets are heavily depreciated during an acquisition and do not fetch valuations close to their original purchase price. AWS workloads that are Well-Architected should be evaluated by the actual business value they provide. By consolidating your IT needs on AWS, you are also decreasing the overhead of vendor consolidation for the acquirer. This can be challenging when multiple active contracts must change hands.

Overcome the innovation barrier – At the onset of an M&A, companies may be focusing too much on keeping the lights on through the process. Businesses that do not move forward may fall behind on continuous innovation. Not only can innovation open more business opportunities, but it can also influence the acquisition valuation. Well-Architected reviews can optimize costs. This can result in diverse benefits such as better agility and an increased use of advanced technologies. This can facilitate rapid innovation. Improvements gained in the security posture, reliability, and performance of the workload make it more valuable to the acquirer.

Demonstrate depth in your area of expertise – Well-Architected lenses help evaluate workloads for specific technology or business domains. Lenses dive deeper into the domain-specific best practices for the workload. If your business specializes in a domain for which a Well-Architected lens is available, doing a review with the specific lens will provide more value for your workload. Today, AWS has lenses for serverless, SaaS, High Performance Computing (HPC), the financial services industry, machine learning, IoT workloads and more. We recently announced a new Management and Governance Lens.

Build workloads using AWS vetted constructs – The AWS Solutions Library provides a repository of Well-Architected solutions across a range of technologies and industry verticals. The library includes reference architectures, implementations of reusable patterns, and fully baked end-to-end solution implementations. Use these building blocks to assemble your workloads, include the AWS recommended best practices in them, and create an attractive proposition for an acquirer.

Conclusion

You can start taking advantage of the Well-Architected Framework today to improve your technical readiness for an acquisition. The Well-Architected Tool in the AWS Management Console allows you to review your workload at no cost. Engage with your AWS account team early, and we can provide the right guidance for your specific M&A, and plan your Well-Architected technical readiness. Using the Well-Architected Framework as the cornerstone, the AWS Solutions Architects and APN partners have guided thousands of customers through this journey. We are looking forward to helping you succeed.

Use new account assignment APIs for AWS SSO to automate multi-account access

Post Syndicated from Akhil Aendapally original https://aws.amazon.com/blogs/security/use-new-account-assignment-apis-for-aws-sso-to-automate-multi-account-access/

In this blog post, we’ll show how you can programmatically assign and audit access to multiple AWS accounts for your AWS Single Sign-On (SSO) users and groups, using the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

With AWS SSO, you can centrally manage access and user permissions to all of your accounts in AWS Organizations. You can assign user permissions based on common job functions, customize them to meet your specific security requirements, and assign the permissions to users or groups in the specific accounts where they need access. You can create, read, update, and delete permission sets in one place to have consistent role policies across your entire organization. You can then provide access by assigning permission sets to multiple users and groups in multiple accounts all in a single operation.

AWS SSO recently added new account assignment APIs and AWS CloudFormation support to automate access assignment across AWS Organizations accounts. This release addressed feedback from our customers with multi-account environments who wanted to adopt AWS SSO, but faced challenges related to managing AWS account permissions. To automate the previously manual process and save your administration time, you can now use the new AWS SSO account assignment APIs, or AWS CloudFormation templates, to programmatically manage AWS account permission sets in multi-account environments.

With AWS SSO account assignment APIs, you can now build your automation that will assign access for your users and groups to AWS accounts. You can also gain insights into who has access to which permission sets in which accounts across your entire AWS Organizations structure. With the account assignment APIs, your automation system can programmatically retrieve permission sets for audit and governance purposes, as shown in Figure 1.

Figure 1: Automating multi-account access with the AWS SSO API and AWS CloudFormation

Overview

In this walkthrough, we’ll illustrate how to create permission sets, assign permission sets to users and groups in AWS SSO, and grant access for users and groups to multiple AWS accounts by using the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

To grant user permissions to AWS resources with AWS SSO, you use permission sets. A permission set is a collection of AWS Identity and Access Management (IAM) policies. Permission sets can contain up to 10 AWS managed policies and a single custom policy stored in AWS SSO.

A policy is an object that defines a user’s permissions. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. AWS evaluates these policies when an IAM principal (a user or role) makes a request.

When you provision a permission set in the AWS account, AWS SSO creates a corresponding IAM role on that account, with a trust policy that allows users to assume the role through AWS SSO. With AWS SSO, you can assign more than one permission set to a user in the specific AWS account. Users who have multiple permission sets must choose one when they sign in through the user portal or the AWS CLI. Users will see these as IAM roles.

To learn more about IAM policies, see Policies and permissions in IAM. To learn more about permission sets, see Permission Sets.

Assume you have a company, Example.com, which has three AWS accounts: an organization management account (ExampleOrgMaster), a development account (ExampleOrgDev), and a test account (ExampleOrgTest). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has the IT security lead, Frank Infosec, who needs PowerUserAccess to the test account (ExampleOrgTest) and SecurityAudit access to the development account (ExampleOrgDev). Alice Developer, the developer, needs full access to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) through the development account (ExampleOrgDev). We’ll show you how to assign and audit the access for Alice and Frank centrally with AWS SSO, using the AWS CLI.

The flow includes the following steps:

  1. Create three permission sets:
    • PowerUserAccess, with the PowerUserAccess policy attached.
    • AuditAccess, with the SecurityAudit policy attached.
    • EC2-S3-FullAccess, with the AmazonEC2FullAccess and AmazonS3FullAccess policies attached.
  2. Assign permission sets to the AWS account and AWS SSO users:
    • Assign the PowerUserAccess permission set to Frank Infosec for the ExampleOrgTest account and the AuditAccess permission set for the ExampleOrgDev account, to provide the required access.
    • Assign the EC2-S3-FullAccess permission set to Alice Developer, to provide the required permissions to the ExampleOrgDev account.
  3. Retrieve the assigned permissions by using Account Entitlement APIs for audit and governance purposes.

    Note: AWS SSO permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. In this blog, we attach AWS managed policies to the AWS SSO permission sets for simplicity. To help secure your AWS resources, follow the standard security advice of granting least-privilege access by using an AWS SSO custom policy when you create an AWS SSO permission set.

Figure 2: AWS Organizations accounts access for Alice and Frank

To help simplify administration of access permissions, we recommend that you assign access directly to groups rather than to individual users. With groups, you can grant or deny permissions to groups of users, rather than having to apply those permissions to each individual. For simplicity, in this blog you’ll assign permissions directly to the users.
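Step 3 of this flow, retrieving who has been assigned which permission set in which account, can also be scripted with the same account assignment APIs. The following is a minimal boto3 sketch; the instance ARN, account ID, and permission set ARN are placeholders that you would replace with your own values.

import boto3

sso_admin = boto3.client("sso-admin")

# Placeholder identifiers; use the values returned by "aws sso-admin list-instances"
# and your own account ID and permission set ARN.
instance_arn = "arn:aws:sso:::instance/ssoins-EXAMPLE"
account_id = "111122223333"
permission_set_arn = "arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE"

response = sso_admin.list_account_assignments(
    InstanceArn=instance_arn,
    AccountId=account_id,
    PermissionSetArn=permission_set_arn,
)

for assignment in response["AccountAssignments"]:
    # PrincipalType is USER or GROUP; PrincipalId is the identity store ID of that principal.
    print(assignment["PrincipalType"], assignment["PrincipalId"])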

Prerequisites

Before you start this walkthrough, complete these steps:

Use the AWS SSO API from the AWS CLI

In order to call the AWS SSO account assignment API by using the AWS CLI, you need to install and configure AWS CLI v2. For more information about AWS CLI installation and configuration, see Installing the AWS CLI and Configuring the AWS CLI.

Step 1: Create permission sets

In this step, you learn how to create the EC2-S3-FullAccess, AuditAccess, and PowerUserAccess permission sets in AWS SSO from the AWS CLI.

Before you create the permission sets, run the following command to get the Amazon Resource Name (ARN) of the AWS SSO instance and the Identity Store ID, which you will need later in the process when you create and assign permission sets to AWS accounts and users or groups.

aws sso-admin list-instances

Figure 3 shows the results of running the command.

Figure 3: AWS SSO list instances
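For reference, the command returns a short JSON document similar to the following sketch (the ARN and Identity Store ID here are placeholder values); note the InstanceArn and IdentityStoreId fields, which you'll need later.

{
    "Instances": [
        {
            "InstanceArn": "arn:aws:sso:::instance/ssoins-1234567890abcdef",
            "IdentityStoreId": "d-1234567890"
        }
    ]
}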

Next, create the permission set for the security team (Frank) and dev team (Alice), as follows.

Permission set for Alice Developer (EC2-S3-FullAccess)

Run the following command to create the EC2-S3-FullAccess permission set for Alice, as shown in Figure 4.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'EC2-S3-FullAccess' --description 'EC2 and S3 access for developers'
Figure 4: Creating the permission set EC2-S3-FullAccess

Permission set for Frank Infosec (AuditAccess)

Run the following command to create the AuditAccess permission set for Frank, as shown in Figure 5.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'AuditAccess' --description 'Audit Access for security team on ExampleOrgDev account'
Figure 5: Creating the permission set AuditAccess

Permission set for Frank Infosec (PowerUserAccess)

Run the following command to create the PowerUserAccess permission set for Frank, as shown in Figure 6.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'PowerUserAccess' --description 'Power User Access for security team on ExampleOrgDev account'
Figure 6: Creating the permission set PowerUserAccess

Copy the permission set ARN from these responses, which you will need when you attach the managed policies.
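For reference, each create-permission-set call returns a response similar to the following sketch (the values shown are placeholders); copy the PermissionSetArn field from each response.

{
    "PermissionSet": {
        "Name": "EC2-S3-FullAccess",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-1234567890abcdef/ps-1234567890abcdef",
        "Description": "EC2 and S3 access for developers",
        "CreatedDate": "2021-01-01T00:00:00.000000+00:00",
        "SessionDuration": "PT1H"
    }
}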

Step 2: Assign policies to permission sets

In this step, you learn how to assign managed policies to the permission sets that you created in step 1.

Attach policies to the EC2-S3-FullAccess permission set

Run the following command to attach the AmazonEC2FullAccess AWS managed policy to the EC2-S3-FullAccess permission set, as shown in Figure 7.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
Figure 7: Attaching the AWS managed policy AmazonEC2FullAccess to the EC2-S3-FullAccess permission set

Run the following command to attach the AmazonS3FullAccess AWS managed policy to the EC2-S3-FullAccess permission set, as shown in Figure 8.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
Figure 8: Attaching the AWS managed policy AmazonS3FullAccess to the EC2-S3-FullAccess permission set

Attach a policy to the AuditAccess permission set

Run the following command to attach the SecurityAudit managed policy to the AuditAccess permission set that you created earlier, as shown in Figure 9.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/SecurityAudit'
Figure 9: Attaching the AWS managed policy SecurityAudit to the AuditAccess permission set

Attach a policy to the PowerUserAccess permission set

The following command is similar to the previous command; it attaches the PowerUserAccess managed policy to the PowerUserAccess permission set, as shown in Figure 10.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/PowerUserAccess'
Figure 10: Attaching AWS managed policy PowerUserAccess to the PowerUserAccess permission set

In the next step, you assign users (Frank Infosec and Alice Developer) to their respective permission sets and assign permission sets to accounts.

Step 3: Assign permission sets to users and groups and grant access to AWS accounts

In this step, you assign the AWS SSO permission sets you created to users, groups, and AWS accounts, to grant these users and groups the required access in the respective AWS accounts.

To assign access to an AWS account for a user or group, using a permission set you already created, you need the following:

  • The principal ID (the ID for the user or group)
  • The AWS account ID to which you need to assign this permission set

To obtain a user’s or group’s principal ID (UserID or GroupID), you need to use the AWS SSO Identity Store API. The AWS SSO Identity Store service enables you to retrieve all of your identities (users and groups) from AWS SSO. See AWS SSO Identity Store API for more details.

Use the first two commands shown here to get the principal ID for the two users, Alice (Alice’s user name is [email protected]) and Frank (Frank’s user name is [email protected]).

Alice’s user ID

Run the following command to get Alice’s user ID, as shown in Figure 11.

aws identitystore list-users --identity-store-id '<Identity Store ID>' --filter AttributePath='UserName',AttributeValue='[email protected]'
Figure 11: Retrieving Alice’s user ID

Frank’s user ID

Run the following command to get Frank’s user ID, as shown in Figure 12.

aws identitystore list-users --identity-store-id '<Identity Store ID>' --filter AttributePath='UserName',AttributeValue='[email protected]'
Figure 12: Retrieving Frank’s user ID

Note: To get the principal ID for a group, use the following command.

aws identitystore list-groups --identity-store-id '<Identity Store ID>' --filter AttributePath='DisplayName',AttributeValue='<Group Name>'
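For either command, the response contains the principal ID you need. For example, the list-users output looks roughly like the following sketch (the user name and ID are placeholder values); copy the UserId value.

{
    "Users": [
        {
            "UserName": "<user email>",
            "UserId": "926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
        }
    ]
}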

Assign the EC2-S3-FullAccess permission set to Alice in the ExampleOrgDev account

Run the following command to assign Alice access to the ExampleOrgDev account using the EC2-S3-FullAccess permission set. This will give Alice full access to Amazon EC2 and S3 services in the ExampleOrgDev account.

Note: When you call the CreateAccountAssignment API, AWS SSO automatically provisions the specified permission set on the account in the form of an IAM policy attached to the AWS SSO–created IAM role. This role is immutable: it's fully managed by AWS SSO and cannot be deleted or changed by the user, even if the user has full administrative rights on the account. If the permission set is subsequently updated, the corresponding IAM policies attached to roles in your accounts won't be updated automatically. In this case, you will need to call ProvisionPermissionSet to propagate these updates.
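For reference, a ProvisionPermissionSet call from the AWS CLI looks roughly like the following sketch, using the same placeholders as the other commands in this post. The account assignment command itself follows.

aws sso-admin provision-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT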

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 13: Assigning the EC2-S3-FullAccess permission set to Alice on the ExampleOrgDev account

Assign the AuditAccess permission set to Frank Infosec in the ExampleOrgDev account

Run the following command to assign Frank access to the ExampleOrgDev account using the AuditAccess permission set.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 14: Assigning the AuditAccess permission set to Frank on the ExampleOrgDev account

Assign the PowerUserAccess permission set to Frank Infosec in the ExampleOrgTest account

Run the following command to assign Frank access to the ExampleOrgTest account using the PowerUserAccess permission set.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 15: Assigning the PowerUserAccess permission set to Frank on the ExampleOrgTest account

To view the permission sets provisioned on the AWS account, run the following command, as shown in Figure 16.

aws sso-admin list-permission-sets-provisioned-to-account --instance-arn '<Instance ARN>' --account-id '<AWS Account ID>'
Figure 16: View the permission sets (AuditAccess and EC2-S3-FullAccess) assigned to the ExampleOrgDev account
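For reference, the response is a list of permission set ARNs, roughly like the following sketch (the ARNs are placeholder values).

{
    "PermissionSets": [
        "arn:aws:sso:::permissionSet/ssoins-1234567890abcdef/ps-1111111111111111",
        "arn:aws:sso:::permissionSet/ssoins-1234567890abcdef/ps-2222222222222222"
    ]
}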

To review the created resources in the AWS Management Console, navigate to the AWS SSO console. In the list of permission sets on the AWS accounts tab, choose the EC2-S3-FullAccess permission set. Under AWS managed policies, the policies attached to the permission set are listed, as shown in Figure 17.

Figure 17: Review the permission set in the AWS SSO console

To see the AWS accounts where the EC2-S3-FullAccess permission set is currently provisioned, navigate to the AWS accounts tab, as shown in Figure 18.

Figure 18: Review permission set account assignment in the AWS SSO console

Step 4: Audit access

In this step, you learn how to audit the access assigned to your users and groups by using the AWS SSO account assignment API. In this example, you'll start from a permission set, review the permissions (AWS managed policies or a custom policy) attached to the permission set, get the users and groups associated with the permission set, and see which AWS accounts the permission set is provisioned to.

List the IAM managed policies for the permission set

Run the following command to list the IAM managed policies that are attached to a specified permission set, as shown in Figure 19.

aws sso-admin list-managed-policies-in-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>'
Figure 19: View the managed policies attached to the permission set

List the assignee of the AWS account with the permission set

Run the following command to list the assignee (the user or group with the respective principal ID) of the specified AWS account with the specified permission set, as shown in Figure 20.

aws sso-admin list-account-assignments --instance-arn '<Instance ARN>' --account-id '<Account ID>' --permission-set-arn '<Permission Set ARN>'
Figure 20: View the permission set and the user or group attached to the AWS account
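For reference, the response lists one entry per assignment, roughly like the following sketch (the account ID, ARN, and principal ID are placeholder values).

{
    "AccountAssignments": [
        {
            "AccountId": "111122223333",
            "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-1234567890abcdef/ps-1111111111111111",
            "PrincipalType": "USER",
            "PrincipalId": "926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
        }
    ]
}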

List the accounts to which the permission set is provisioned

Run the following command to list the accounts that are associated with a specific permission set, as shown in Figure 21.

aws sso-admin list-accounts-for-provisioned-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>'
Figure 21: View AWS accounts to which the permission set is provisioned

In this section of the post, we’ve illustrated how to create a permission set, assign a managed policy to the permission set, and grant access for AWS SSO users or groups to AWS accounts by using this permission set. In the next section, we’ll show you how to do the same using AWS CloudFormation.

Use the AWS SSO API through AWS CloudFormation

In this section, you learn how to use CloudFormation templates to automate the creation of permission sets, attach managed policies, and use permission sets to assign access for a particular user or group to AWS accounts.

Sign in to your AWS Management Console and create a CloudFormation stack by using the following CloudFormation template. For more information on how to create a CloudFormation stack, see Creating a stack on the AWS CloudFormation console.

//start of Template//
{
    "AWSTemplateFormatVersion": "2010-09-09",
  
    "Description": "AWS CloudFormation template to automate multi-account access with AWS Single Sign-On (Entitlement APIs): Create permission sets, assign access for AWS SSO users and groups to AWS accounts using permission sets. Before you use this template, we assume you have enabled AWS SSO for your AWS Organization, added the AWS accounts to which you want to grant AWS SSO access to your organization, signed in to the AWS Management Console with your AWS Organizations management account credentials, and have the required permissions to use the AWS SSO console.",
  
    "Parameters": {
      "InstanceARN" : {
        "Type" : "String",
        "AllowedPattern": "arn:aws:sso:::instance/(sso)?ins-[a-zA-Z0-9-.]{16}",
        "Description" : "Enter AWS SSO InstanceARN. Ex: arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx",
        "ConstraintDescription": "must be the name of an existing AWS SSO InstanceARN associated with the management account."
      },
      "ExampleOrgDevAccountId" : {
        "Type" : "String",
        "AllowedPattern": "\\d{12}",
        "Description" : "Enter 12-digit Developer AWS Account ID. Ex: 123456789012"
        },
      "ExampleOrgTestAccountId" : {
        "Type" : "String",
        "AllowedPattern": "\\d{12}",
        "Description" : "Enter 12-digit AWS Account ID. Ex: 123456789012"
        },
      "AliceDeveloperUserId" : {
        "Type" : "String",
        "AllowedPattern": "^([0-9a-f]{10}-|)[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$",
        "Description" : "Enter Developer UserId. Ex: 926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
        },
        "FrankInfosecUserId" : {
            "Type" : "String",
            "AllowedPattern": "^([0-9a-f]{10}-|)[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$",
            "Description" : "Enter Test UserId. Ex: 926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
            }
    },
    "Resources": {
        "EC2S3Access": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "EC2 and S3 access for developers",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : ["arn:aws:iam::aws:policy/amazonec2fullaccess","arn:aws:iam::aws:policy/amazons3fullaccess"],
                "Name" : "EC2-S3-FullAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "EC2S3Access"
                 } ]
              }
        },  
        "SecurityAuditAccess": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "Audit Access for Infosec team",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : [ "arn:aws:iam::aws:policy/SecurityAudit" ],
                "Name" : "AuditAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "SecurityAuditAccess"
                 } ]
              }
        },    
        "PowerUserAccess": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "Power User Access for Infosec team",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : [ "arn:aws:iam::aws:policy/PowerUserAccess"],
                "Name" : "PowerUserAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "PowerUserAccess"
                 } ]
              }      
        },
        "EC2S3userAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "EC2S3Access",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "AliceDeveloperUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgDevAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          },
          "SecurityAudituserAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "SecurityAuditAccess",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "FrankInfosecUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgDevAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          },
          "PowerUserAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "PowerUserAccess",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "FrankInfosecUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgTestAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          }
    }
}
//End of Template//

When you create the stack, provide the following information for setting the example permission sets for Frank Infosec and Alice Developer, as shown in Figure 22:

  • The Alice Developer and Frank Infosec user IDs
  • The ExampleOrgDev and ExampleOrgTest account IDs
  • The AWS SSO instance ARN

Then launch the CloudFormation stack.

Figure 22: User inputs to launch the CloudFormation template
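If you prefer the AWS CLI over the console, a call similar to the following sketch launches the same stack; the stack name and template file name here are assumptions, and the parameter values are the placeholders described above.

aws cloudformation create-stack \
    --stack-name sso-permission-sets \
    --template-body file://sso-permission-sets.json \
    --parameters ParameterKey=InstanceARN,ParameterValue='<Instance ARN>' \
                 ParameterKey=ExampleOrgDevAccountId,ParameterValue='<Dev Account ID>' \
                 ParameterKey=ExampleOrgTestAccountId,ParameterValue='<Test Account ID>' \
                 ParameterKey=AliceDeveloperUserId,ParameterValue='<Alice User ID>' \
                 ParameterKey=FrankInfosecUserId,ParameterValue='<Frank User ID>'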

AWS CloudFormation creates the resources that are shown in Figure 23.

Figure 23: Resources created from the CloudFormation stack

Cleanup

To delete the resources you created by using the AWS CLI, use these commands.

Run the following command to delete the account assignment.

aws sso-admin delete-account-assignment --instance-arn '<Instance ARN>' --target-id '<AWS Account ID>' --target-type 'AWS_ACCOUNT' --permission-set-arn '<PermissionSet ARN>' --principal-type '<USER/GROUP>' --principal-id '<user/group ID>'

After the account assignment is deleted, run the following command to delete the permission set.

aws sso-admin delete-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<PermissionSet ARN>'

To delete the resources that you created by using the CloudFormation template, go to the AWS CloudFormation console. Select the stack that you created, and then choose Delete. Deleting the CloudFormation stack cleans up the resources that were created.

Summary

In this blog post, we showed how to use the AWS SSO account assignment API to automate the deployment of permission sets, how to add managed policies to permission sets, and how to assign access for AWS users and groups to AWS accounts by using specified permission sets.

To learn more about the AWS SSO APIs available for you, see the AWS Single Sign-On API Reference Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Akhil Aendapally

Akhil is a Solutions Architect at AWS focused on helping customers with their AWS adoption. He holds a master’s degree in Network and Computer Security. Akhil has 8+ years of experience working with different cloud platforms, infrastructure automation, and security.

Author

Yuri Duchovny

Yuri is a New York-based Solutions Architect specializing in cloud security, identity, and compliance. He supports cloud transformations at large enterprises, helping them make optimal technology and organizational decisions. Prior to his AWS role, Yuri’s areas of focus included application and networking security, DoS, and fraud protection. Outside of work, he enjoys skiing, sailing, and traveling the world.

Author

Ballu Singh

Ballu is a principal solutions architect at AWS. He lives in the San Francisco Bay area and helps customers architect and optimize applications on AWS. In his spare time, he enjoys reading and spending time with his family.

Author

Nir Ozeri

Nir is a Solutions Architect Manager with Amazon Web Services, based out of New York City. Nir specializes in application modernization, application delivery, and mobile architecture.

Architecture Patterns for Red Hat OpenShift on AWS

Post Syndicated from Ryan Niksch original https://aws.amazon.com/blogs/architecture/architecture-patterns-for-red-hat-openshift-on-aws/

Editor’s note: Although this blog post and its accompanying code make use of the word “Master,” Red Hat is making open source code more inclusive by eradicating “problematic language.” Read more about this.

Introduction

Red Hat OpenShift is a turnkey application platform that provides much more than simple Kubernetes orchestration.

OpenShift customers choose AWS as their cloud of choice because of the efficiency, security, reliability, scalability, and elasticity it provides. Customers seeking to modernize their business, processes, and application stacks are drawn to the rich AWS service and feature sets.

As such, we see some customers migrate from on-premises to AWS or exist in a hybrid context with application workloads running in various locations. For OpenShift customers, this poses a few questions and considerations:

  • What are the recommendations for the best way to deploy OpenShift on AWS?
  • How is this different from what customers were used to on-premises?
  • How does this ensure resilience and availability?
  • Do customers need a multi-region, multi-account approach?

For hybrid customers, there are assumptions and misconceptions:

  • Where does the control plane exist?
  •  Is there replication, and if so, what are the considerations and ramifications?

In this post I will run through some of the more common questions and patterns for OpenShift on AWS, while looking at some of the terminology and conceptual differences of AWS. I’ll explore migration and hybrid use cases and address some misconceptions.

OpenShift building blocks

On AWS, OpenShift 4.x is the norm. To that effect, I will focus on OpenShift 4, but many of the considerations apply to both OpenShift 3 and OpenShift 4.

Let’s unpack some of the OpenShift building blocks. An OpenShift cluster consists of Master, infrastructure, and worker nodes. The Master forms the control plane and infrastructure nodes cater to a routing layer and additional functions, such as logging, monitoring etc. Worker nodes are the nodes that customer application container workloads will exist on.

When deployed on-premises, OpenShift nodes are placed in separate network subnets. Depending on distance, latency, and similar factors, a single OpenShift cluster may span two data centers, with some nodes in a subnet in one data center and other subnets in a different data center. This applies to customers with data centers within a few miles of each other and high-speed connectivity between them. An alternative would be an OpenShift cluster in each data center.

AWS concepts and terminology

At AWS, the concept of “region” is a geolocation, such as EMEA (Europe, Middle East, and Africa) or APAC (Asian Pacific) rather than a data center or specific building. An Availability Zone (AZ) is the closest construct on AWS that maps to a physical data center. Within each region you will find multiple (typically three or more) AZs. Note that a single AZ will contain multiple physical data centers but we treat it as a single point of failure. For example, an event that impacts an AZ would be expected to impact all the data centers within that AZ. To this effect, customers should deploy workloads spanning multiple AZs to protect against any event that would impact a single AZ.

Read more about Regions, Availability Zones, and Edge Locations.

Deploying OpenShift

When deploying an OpenShift cluster on AWS, we recommend starting with three Master nodes spread across three AWS AZs and three worker nodes spread across three AZs. This lets you combine the resilience and availability constructs provided by AWS with those of Red Hat OpenShift. The OpenShift installer provides a means of deploying the underlying AWS infrastructure in two ways: installer-provisioned infrastructure (IPI) and user-provisioned infrastructure (UPI). Both Red Hat and AWS collect customer feedback and use it to drive recommended patterns that are then included in the OpenShift installer. As such, the OpenShift installer's IPI mode becomes a living reference architecture for deploying OpenShift on AWS.

The installer requires inputs for the environment on which it's being deployed. In this case, since I am deploying on AWS, I need to provide the AWS region, the AZs or the subnets that relate to those AZs, and the EC2 instance types. The installer then generates a set of ignition files that are used during the deployment of OpenShift:

apiVersion: v1
baseDomain: example.com 
controlPlane: 
  hyperthreading: Enabled   
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
      rootVolume:
        iops: 4000
        size: 500
        type: io1
      type: m5.xlarge 
  replicas: 3
compute: 
- hyperthreading: Enabled 
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 
      type: m5.xlarge
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 
    userTags:
      adminContact: jdoe
      costCenter: 7536
pullSecret: '{"auths": ...}' 
fips: false 
sshKey: ssh-ed25519 AAAA... 
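As a rough sketch of the IPI flow that consumes a configuration like the one above (the directory name is only an example), the installer is typically driven with commands along these lines:

./openshift-install create install-config --dir=ocp-aws   # interactively builds install-config.yaml
./openshift-install create cluster --dir=ocp-aws          # provisions the AWS infrastructure and deploys the cluster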

What does this look like at scale?

For larger implementations, we would see additional worker nodes spread across three or more AZs. As more worker nodes are added, use of the control plane increases. Initially, scaling up the Amazon Elastic Compute Cloud (Amazon EC2) instance type used for the control plane is an effective way of addressing this. It's possible to add more Master nodes, and we recommend maintaining an odd number of them. It is more common to see the infrastructure nodes scaled out before there is a need to scale the Masters. For large-scale implementations, infrastructure functions such as the router, monitoring, and logging can be moved to EC2 instances separate from the Master nodes, as well as from each other. It is important to spread the routing layer across multiple AZs, which is critical to maintaining availability and resilience.

The process of resource separation is now controlled by infrastructure machine sets within OpenShift. An infrastructure machine set would need to be defined, then the infrastructure role edited to be moved from the default to this new infrastructure machine set. Read about this in greater detail.

OpenShift in a multi-account context

Using AWS accounts as a means of separation is a common well-architected pattern. AWS Organizations and AWS Control Tower are services that are commonly adopted as part of a multi-account strategy. This is very much the case when looking to enable teams to use their own accounts and when an account vending process is needed to cater for self-service account provisioning.

OpenShift clusters are deployed into multiple accounts. An OpenShift dev cluster is deployed into an AWS Dev account. This account would typically have AWS Developer Support associated with it. A separate production OpenShift cluster would be provisioned into an AWS production account with AWS Enterprise Support. Enterprise support provides for faster support case response times, and you get the benefit of dedicated resources such as a technical account manager and solutions architect.

CI/CD pipelines and processes are then used to control the application life cycle from code to dev to production. The pipelines push the code to different OpenShift cluster endpoints at different stages of the life cycle.

Hybrid use case implementation

A common misconception of hybrid implementations is that there is a single cluster or control plane that has worker nodes in various locations. For example, there could be a cluster where the Master and infrastructure nodes are deployed in one location, with worker nodes registered to this cluster that exist on-premises as well as in the cloud.

Having a single customer control plane for a hybrid implementation, even if technically possible, introduces undesired risks.

There is the potential to take multiple environments with very different resilience characteristics and make them interdependent. This can result in performance and reliability issues, which increase not only the possibility of the risk manifesting, but also its impact or blast radius.

Instead, hybrid implementations will see separate OpenShift clusters deployed into various locations. A customer may deploy clusters on-premises to cater for a workload that can't be migrated to the cloud in the short term. Separate OpenShift clusters can then be deployed into accounts in AWS for workloads in the cloud. Customers can also deploy separate OpenShift clusters in different AWS regions to cater for proximity to the consuming customer.

Though adding multiple clusters doesn't add significant administrative overhead, there is a desire to gain visibility and telemetry for all the deployed clusters from a central location. This may see the OpenShift clusters registered with Red Hat Advanced Cluster Management for Kubernetes.

Summary

Take advantage of the IPI model, not only as a guide but also to save time. Make AWS Organizations, AWS Control Tower, and the AWS Service Catalog part of your cloud and hybrid strategies. These will not only speed up migrations but also form building blocks for a modernized business with a focus on enabling prescriptive self-service. Consider Red Hat Advanced Cluster Management for multi-cluster management.

Field Notes: Building a Shared Account Structure Using AWS Organizations

Post Syndicated from Abhijit Vaidya original https://aws.amazon.com/blogs/architecture/field-notes-building-a-shared-account-structure-using-aws-organizations/

For customers considering the AWS Solution Provider Program, there are challenges to mitigate when building a shared account model with SI partners. AWS Organizations makes it possible to build the right account structure to support a resale arrangement. In this engagement model, the end customer gets an AWS invoice from an AWS authorized partner instead of from AWS directly.

Partners and customers who want to engage in this service resale arrangement need to build a new account structure. This process includes linking or transferring existing customer accounts to the partner master account. This is so that all the billing data from customer accounts is consolidated into the partner master account.

While linking or transferring existing customer accounts to the new master account, the partner must ensure that the new master account structure does not compromise any customer security controls and continues to provide full control of linked accounts to the customer. The new account structure must fulfill the following requirements for both the AWS customer and the partner:

  • The customer maintains full access to the AWS organization and is able to perform all critical security-related tasks except access to billing data.
  • The partner is able to control only billing information and is not able to perform any other task in the root account (master payer account) without approval from the customer.
  • In case of contract breach / termination, the customer is able to gain back full control of all accounts including the Master.

In this post, you will learn about how partners can create a master account with shared ownership. We also show how to link or transfer customer organization accounts to the new organization master account and set up policies that would provide appropriate access to both partner and customer.

Account Structure

The following architecture represents the account structure setup that can fulfill customer and partner requirements as a part of a service resale arrangement.

As illustrated in the preceding diagram, the key architectural components are described below.

Architectural Components

As a part of the resale arrangement, the customer's existing AWS organization and related accounts are linked to the partner's master payer account. The customer can continue to maintain their existing master root account, while all child accounts are linked to the partner's master account (as shown in the preceding diagram).

Customers may have valid concerns about linking or transferring accounts to the partner-owned master payer account and may come up with many "what-if" scenarios, for example, "What if the partner shuts down the environment or servers?" or "What if the partner blocks access to child accounts?"

This account structure provides the right controls to both customer and partner that would address customer concerns around the security of their accounts. This includes the following benefits:

  • Starting with access to the root account, neither customer nor partner can access the root account without the other party’s involvement.
  • The partner controls the ID/password for the root account, while the customer maintains the MFA token for the account. The customer also controls the phone number and security questions associated with the root account. That way, the partner cannot replace the MFA token on their own.
  • The partner only has billing access and does not control any other parts of account including child accounts. Anytime the root account access is needed, both customer and partner team need to collaborate and access the root account.
  • Neither the customer nor the partner can assign new IAM roles to themselves, which protects the initial account setup.

Security considerations in the shared account setup

The following table highlights both customer and partner responsibilities and access controls provided by the architecture in the previous section.

The following points highlight security recommendations to provide adequate access rights to both partner and customers.

  • New master payer/ root account has a joint ownership between the Partner and the Customer.
  • AWS account root users (user id/password) would be with Partner and MFA (multi-factor authentication) device with Customer.
  • IAM (AWS Identity and Access Management) role to be created under the master payer account with the "FullOrganizationAccess", "Amazon S3" (Amazon Simple Storage Service), "CloudTrail" (AWS CloudTrail), and "CloudWatch" (Amazon CloudWatch) policies for the Customer security team to log in and manage the security of the account. Additional permissions can be added to this role as needed in the future. This role does not have ANY billing permissions.
  • Only the partner has access to AWS billing and usage data.
  • IAM role / user would be created under master payer with just billing permission for the Partner team to log in and download all invoices. This role does not have any other permissions except billing and usage reports.
  • Any root login attempt to the master payer account triggers a notification to the Customer's SOC team and the Partner's customer account management team.
  • The Partner’ email address is used to create an account so invoices can be emailed to the partner’s email. The Customer cannot see these invoices.
  • The Customer's phone number is used to create the master account, and the customer maintains the security questions and answers. This prevents replacement of the MFA token by the Partner team without informing the customer. The Customer doesn't need the Partner's help or permission to log in and manage any security.
  • The Partner team cannot log in to the master payer/root account without the Customer providing an MFA token.

Setting up the shared master AWS account structure

Create a playbook for the account transfer activity based on the following tasks. For each task, identify the owner. Make sure that owners have the right permissions to perform the tasks.

Part I – Setting up new partner master account

  1. Create new Partner Master payee Account
  2. Update payment details section with the required details for payment in Partner Master payee Account
  3. Enable MFA in the Partner Master payee Account
  4. Update contact for security and operations in the Partner Master payee Account
  5. Update demographics -Address and contact details in Partner Master payee Account
  6. Create an IAM role for the Customer team in the Partner Master Payee account. The IAM role is created under the master payer account with the "FullOrganizationAccess", "Amazon S3", "CloudTrail", "CloudWatch", and "CloudFormationFullAccess" policies so that the Customer SOC team can log in and manage the security of the account. Additional permissions can be added to this role in the future if needed.

Select the roles:

create role

7. Create an IAM role/user for the Partner billing team in the Partner Master Payee account.

Part II – Setting up customer master account

1. Create an IAM user in the customer's master account. This user assumes a role in the new master payer/root account.

aws management console

 

2. Confirm that when the IAM user from the customer account assumes a role in the new master account, the user does not have billing access.

Billing and cost management dashboard

Part III – Creating an organization structure in partner account

  1. Create an Organization in the Partner Master Payee Account
  2. Create Multiple Organizational Units (OU) in the Partner Master Payee Account

 

3. Enable Service Control Policies from AWS Organization’s Policies menu.

service control policies

5. Copy service control policies into the Partner Master Payee Account from the Customer root account. Any service control policies from the customer root account should be manually copied to the new partner account.

6. If the customer root account has any special software installed, for example security tooling, install the same software in the Partner Master Payee Account.

7. Set alerts in Partner Master Payee root account. Any login to the root account would send alerts to customer and partner teams.

8. It is recommended to keep a copy of all billing history invoices for all accounts to be transferred to the partner organization. This can be achieved by downloading the CSV files or printing all invoices and storing the files in Amazon S3 for long-term archival. Billing history and invoices are found by choosing Orders and Invoices on the Billing & Cost Management dashboard. After accounts are transferred to the new organization, historic billing data will not be available for those accounts.

9. Remove all the member accounts from the current Customer root account/organization. This step is performed by the customer account admin and is required before an account can be transferred to the Partner Account organization.

10. Send an invite from the Partner Master Payee Account to the delinked Member Account

master payee account

11. Member accounts accept the invite from the Partner Master Payee Account.

invitations

12. Move the Customer member account to the appropriate OU in the Partner Master Payee Account. (CLI equivalents for steps 9 through 12 are shown after these steps.)
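If you prefer to script these steps, the following AWS CLI sketch shows equivalent calls for steps 9 through 12; the account, root, OU, and handshake IDs are placeholder values.

# Step 9 (run by the customer admin in the existing customer management account)
aws organizations remove-account-from-organization --account-id 111122223333

# Step 10 (run in the Partner Master Payee account)
aws organizations invite-account-to-organization --target Id=111122223333,Type=ACCOUNT

# Step 11 (run in the invited member account)
aws organizations list-handshakes-for-account
aws organizations accept-handshake --handshake-id h-examplehandshakeid111

# Step 12 (run in the Partner Master Payee account)
aws organizations move-account --account-id 111122223333 --source-parent-id r-examplerootid111 --destination-parent-id ou-examplerootid111-exampleouid111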

Setting the shared security model between partner and customer contact

While setting up the master account, three contacts need to be updated for notification.

  • Billing –  this is owned by the Partner
  • Operations – this is owned by the Customer
  • Security – this is owned by the Customer.

This will trigger a notification of any activity on the root account. The contact details contain Name, Title, Email Address and Phone number.  It is recommended to use the Customer’s SOC team distribution email for security and operations, and a phone number that belongs to the organization, and not the individual.

Alternate contacts

Additionally, before any root account activity takes place, AWS Support will verify using the security challenge questionnaire. These questions and answers are owned by the Customer’s SOC team.

security challenge questions

If a customer is not able to access the AWS account, alternate support options are available at Contact us by expanding the “I’m an AWS customer and I’m looking for billing or account support” menu. While contacting AWS Support, all the details that are listed on the account are needed, including full name, phone number, address, email address, and the last four digits of the credit card.

Clean Up

After recovering the account, the Customer should close any accounts that are not in use. It’s a good idea not to have open accounts in your name that could result in charges. For more information, review Closing an Account in the Billing and Cost Management User Guide.

The shared master root account should be used only for the selected activities that require root account credentials, as described in the AWS documentation.

Conclusion

In this post, you learned how AWS Organizations features can be used to create a shared master account structure. This helps both customer and partner engage in a service resale arrangement. Using AWS Organizations and cross-account access, this solution allows customers to control all key aspects of managing the AWS organization (security, logging, and monitoring) and allows partners to control billing-related data.

Additional Resources

Cross Account Access

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Securing resource tags used for authorization using a service control policy in AWS Organizations

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/securing-resource-tags-used-for-authorization-using-service-control-policy-in-aws-organizations/

In this post, I explain how you can use attribute-based access controls (ABAC) in Amazon Web Services (AWS) to help provision simple, maintainable access controls to different projects, teams, and workloads as your organization grows. ABAC gives you access to granular permissions and employee-attribute based authorization. By using ABAC, you need fewer AWS Identity and Access Management (IAM) roles and have less administrative overhead. The attributes used by ABAC in AWS are called tags, which are metadata associated with the principals (users or roles) and resources within your AWS account. Securing tags designated for authorization is important because they’re used to grant access to your resources. This post describes an approach to secure these tags across all of your AWS accounts through the use of AWS Organizations service control policies (SCPs) so that only authorized users can modify a resource’s tags. Organizations helps you centrally govern your accounts as you grow and scale your workloads on AWS; SCPs are a feature of Organizations that restricts what services and actions are allowed in your accounts. Together they provide flexible account management and security features that help you centrally secure your business.

Let’s say you want to give users a standard way to access their accounts and resources in AWS. For authentication, you want to continue using your existing identity provider (IdP). For authorization, you’ll need a method that can be scaled to meet the needs of your organization. To accomplish this, you can use ABAC in AWS, which requires fewer roles, helps you control permissions for many resources at once, and allows for simple access rules to different projects and teams. For example, all developers in your organization can have a project name assigned to their identities as a tag. The tag will be used as the basis for access to AWS resources via ABAC. Because access is granted based on these tags, they need to be secured from unauthorized changes.

Securing authorization tags

Using ABAC as your access control strategy for AWS depends on securing the tags associated with identities in your AWS organization’s accounts as well as with the resources those identities need to access. When users sign in, the authentication process looks something like this:
 

Figure 1: Using tags for secure access to resources

  1. Their identities are passed into AWS through federation.
  2. Federation is implemented through SAML assertions.
  3. Their identities each assume an AWS IAM role (via AWS Security Token Service).
  4. The project attribute is passed to AWS as a session tag named access-project (the post Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS has full details).
  5. The session tag acts as a principal tag, and if its value matches the access-project authorization tag of the resource, the user is given access to that resource.

To ensure that the role’s session tags persist even after assuming subsequent roles, you also need to set the TransitiveTagKeys SAML attribute.

Now that identity tags are persisted, you need to ensure that the resource tags used for authorization are secured. For the sake of brevity, let's call these authorization tags. Let's assume that you intend to use ABAC for all the accounts in your AWS organization and to control access through an SCP. The SCP ensures that resource authorization tags are initially set to an authorized value, and it secures the tags against modification once they're set.

Authorization tag access control requirements

Before you implement a policy to secure your resource’s authorization tag from unintended modification, you need to define the requirements that the policy will enforce:

  • After the authorization tag is set on a resource:
    • It cannot be modified except by an IAM administrator.
    • Only a principal whose authorization tag key and value pair exactly match the key and value pair of the resource can modify the non-authorization tags for that resource.
  • If no authorization tag has been set on a resource, and if the authorization tag is present on the principal:
    • The resource’s authorization tag key and value pair can only be set if exactly matching the key and value tag of the principal.
    • All other non-authorization tags can be modified.
  • If any authorization tags are missing from a principal, the principal shouldn’t be allowed to perform any tag operations.

Review an SCP for securing resource tags

Let’s take a look at an SCP that fulfills the access control requirements listed above. SCPs are similar to IAM permission policies, however, an SCP never grants permissions. Instead, SCPs are IAM policies that specify the maximum permissions for an organization, organizational unit (OU), or account. Therefore, the recommended solution is to also assign identity policies that allow modification of tags to all roles that need the ability to create and delete tags. You can then assign your SCP to an OU that includes all of the accounts that need resource authorization tags secured. In the example that follows, the AWS Services That Work with IAM documentation page would be used to identify the first set of resources with ABAC support that your organization’s developers will use. These resources include Amazon Elastic Compute Cloud (Amazon EC2), IAM users and roles, Amazon Relational Database Service (Amazon RDS), and Amazon Elastic File System (Amazon EFS). The SCP itself consists of three statements, which I review in the following section.

Deny modification and deletion of tags if a resource’s authorization tags don’t match the principal’s

This first policy statement is below. It addresses what needs to be enforced after the authorization tag is set on a resource. It denies modification of any tag—including the resource’s authorization tags—if the resource’s authorization tag doesn’t match the principal’s tag. Note that the resource’s authorization tag must exist. Let’s review the components of the statement.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags"
            ],
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
                    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
                },
                "Null": {
                    "ec2:ResourceTag/access-project": false
                }
            }
        },
  • Line 6:
    
    "Effect": "Deny",
    

    First, specify the Effect to be Deny to deny the following actions under certain conditions.

  • Lines 7-10:
    
    "Action": [
    	"ec2:CreateTags",
    	"ec2:DeleteTags"
    ],
    

    Specify the policy actions to be denied, which are ec2:CreateTags and ec2:DeleteTags. When combined with line 6, this denies modification of tags. The ec2:CreateTags action is included as it also grants the permission to overwrite tags, and we want to prevent that.

  • Lines 14-22:
    
    "Condition": {
    	"StringNotEquals": {
    		"ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
    		"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
    	},
    	"Null": {
    		"ec2:ResourceTag/access-project": false
    	}
    }
    

    Specify the conditions for which this statement—to deny policy actions—is in effect.

    The first condition operator is StringNotEquals. According to policy evaluation logic, if multiple condition keys are attached to an operator, each needs to evaluate as true in order for StringNotEquals to also evaluate as true.

  • Line 16:
    
    "ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
    

    Evaluate if the access-project EC2 resource tag’s key and value pairs don’t match the principal’s.

  • Line 17:
    
    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
    

    Evaluate if the principal is not the iam-admin role for the account. aws:PrincipalArn is a global condition key for comparing the Amazon Resource Name (ARN) of the principal that made the request with the ARN that you specify in the policy.

  • Line 19:
    
    "Null": {
    

    The second condition operator is a Null condition. It evaluates if the attached condition key’s tag exists in the request context. If the key’s right side value is set to false, the expression will return true if the tag exists and has a value. The following illustrates the evaluation logic:

      • "ec2:ResourceTag/access-project": true (the access-project resource tag is null): the expression evaluates to FALSE if the tag value exists and TRUE if it doesn’t.
      • "ec2:ResourceTag/access-project": false (the access-project resource tag is not null): the expression evaluates to TRUE if the tag value exists and FALSE if it doesn’t.
  • Line 20:
    
    "ec2:ResourceTag/access-project": false
    

    Deny only when the access-project tag is present on the resource. This is needed because tagging actions shouldn’t be denied for new resources that might not yet have the access-project authorization tag set, which is covered next in the DenyModifyResAuthzTagIfPrinTagNotMatched statement.

Note that all elements of this condition are evaluated with a logical AND. For a deny to occur, both the StringNotEquals and Null operators must evaluate as true.

Deny modification and deletion of tags if requesting that a resource’s authorization tag be set to any value other than the principal’s

This second statement addresses what to do if there’s no authorization tag, or if the tag is on the principal but not on the resource. It denies modification of any tag, including the resource’s authorization tags, if the resource’s access-project tag is one of the requested tags to be modified and its value doesn’t match the principal’s access-project tag. The statement is needed when the resource might not have an authorization tag yet, and access shouldn’t be granted without a tag. Fortunately, access can be based on what the principal is requesting the resource authorization tag to be set to, which must be equal to the principal’s tag value.


{   
    "Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "ForAnyValue:StringEquals": {
            "aws:TagKeys": [
                "access-project"
            ]   
        }   
    }       
},
  • Line 36:
    
    "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
    

    Check to see if the desired access-project tag value for your resource, aws:RequestTag/access-project, isn’t equal to the principal’s access-project tag value. aws:RequestTag is a global condition key that can be used for tag actions to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy.

  • Line 39-43:
    
    "ForAnyValue:StringEquals": {
         "aws:TagKeys": [
             "access-project"
         ]               
    }          
    

    This ensures that a deny can only occur if the access-project tag is among the tags in the request context, which would be the case if the resource’s authorization tag was one of the requested tags to be modified.

Deny modification and deletion of tags if the principal’s access-project tag does not exist

The third statement addresses the requirement that if any authorization tags are missing from a principal, the principal shouldn’t be allowed to perform any tag operations. This policy statement will deny modification of any tag if the principal’s access-project tag doesn’t exist. The reason for this is that if authorization is intended to be based on the principal’s authorization tag, it must be present for any tag modification operation to be allowed.


{       
    "Sid": "DenyModifyTagsIfPrinTagNotExists",
    "Effect": "Deny", 
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],      
    "Resource": [
        "*"     
    ],      
    "Condition": {
        "StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },      
        "Null": {
            "aws:PrincipalTag/access-project": true
        }       
    }       
}

You’ve now created a basic SCP that protects against unintended modification of Amazon EC2 resource tags used for authorization. You will also need to create identity policies for your project’s roles that enable users to tag resources. You will also want to test your EC2 instances and ensure that your tags cannot be modified except by authorized principals. The next step is to add IAM, Amazon RDS, and Amazon EFS resources to this SCP.

Adding IAM users and roles to the SCP

Now that you’re confident you’ve done what you can to secure your Amazon EC2 tags, you can do the same for your IAM resources, specifically users and roles. This is important because user and role resources can also be principals, and have authorization based on their tags. For example, not all roles will be assumed by employees and receive session tags. Applications can also assume a role and thus have authorization based on a non-session tag. To account for that possibility, you can adapt the prior statement, which denies modification and deletion of tags if a resource’s authorization tag doesn’t match the principal’s, to cover the IAM tagging actions. Because IAM uses its own iam:ResourceTag condition key rather than ec2:ResourceTag, these actions go in a statement of their own:


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedIam",
    "Effect": "Deny",
    "Action": [
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "iam:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "iam:ResourceTag/access-project": false
        }
    }
}

You can use the following statements to add the IAM tagging actions to the previous statements that (1) deny modification and deletion of tags if the request sets a resource’s authorization tag to a different value than the principal’s, and (2) deny modification and deletion of tags if the principal’s access-project tag doesn’t exist.


{
    "Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "ForAnyValue:StringEquals": {
            "aws:TagKeys": [
                "access-project"
            ]
        }
    }
},
{
    "Sid": "DenyModifyTagsIfPrinTagNotExists",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "aws:PrincipalTag/access-project": true
        }
    }
} 

Adding resources that support the global aws:ResourceTag to the SCP

Now we will add Amazon RDS and Amazon EFS resources to complete the SCP. RDS and EFS differ with regard to tagging because they support the global aws:ResourceTag condition key instead of a service-specific key such as ec2:ResourceTag. Rather than creating multiple statements like the ones above to deny modification and deletion of tags when a resource’s authorization tag doesn’t match the principal’s, you can create a single reusable policy statement and keep adding actions to it, as shown in the following code.


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "elasticfilesystem:TagResource",
        "elasticfilesystem:UntagResource",
        "rds:AddTagsToResource",
        "rds:RemoveTagsFromResource"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "aws:ResourceTag/access-project": false
        }
    }
}, 

Review the full SCP to secure authorization tags

Let’s review the final SCP. It includes all the services you’ve targeted, as shown in the code sample below. This policy is 2,859 characters in length and uses non-Unicode characters, which means that it uses one byte of data for each character. The policy therefore uses 2,859 bytes of the current 5,120-byte quota for an SCP. This means that roughly 4 to 18 more services can be added to this policy, depending on whether the services use their own service-specific resource tag condition key or the global aws:ResourceTag key. Keep these numbers and limits in mind when adding additional resources, and remember that a maximum of five SCPs can be associated with an OU. Avoid consuming all of your available SCP space; you’ll want to be able to make later changes and leave room for future business requirements.

This SCP doesn’t enforce tag-on-create, which lets you require that tags be applied when a resource is created, as shown in Example service control policies. Enforcing tag-on-create is necessary if you need resources to be accessible only from appropriately tagged identities from the time resources are created. Enforcement can be done through an SCP or by creating identity policies for the roles in each account that require it.
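For example, an SCP statement enforcing tag-on-create for EC2 instances could deny ec2:RunInstances when no access-project tag is supplied in the request. The following is only a sketch: the statement name is illustrative, it covers only the instance resource, and a complete policy would typically also cover volumes and other resources created in the same call.


{
    "Sid": "DenyRunInstancesWithoutProjectTag",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
        "Null": {
            "aws:RequestTag/access-project": "true"
        }
    }
}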

If necessary, you can use another employee attribute in addition to access-project, but that will use more of the available quota of bytes. Adding another attribute can potentially double the bytes used in a policy, especially for services that require use of a service-specific resource tag condition key. Using Amazon EC2 as an example, you would need to create separate statements just for the added employee attribute, duplicating the policy statements in the first three code samples.
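For example, adding a hypothetical access-team attribute for Amazon EC2 would require a parallel statement like the following sketch, alongside similar additions to the request-tag and principal-tag statements. The attribute name and statement ID are illustrative.


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2Team",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "ec2:ResourceTag/access-team": "${aws:PrincipalTag/access-team}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "ec2:ResourceTag/access-team": false
        }
    }
}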

Here’s the final SCP in its entirety:


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2",
			"Effect": "Deny",
			"Action": [
				"ec2:CreateTags",
				"ec2:DeleteTags"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"ec2:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedIam",
			"Effect": "Deny",
			"Action": [
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"iam:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"iam:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatched",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"aws:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"ec2:CreateTags",
				"ec2:DeleteTags",
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"ForAnyValue:StringEquals": {
					"aws:TagKeys": [
						"access-project"
					]
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfPrinTagNotExists",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"ec2:CreateTags",
				"ec2:DeleteTags",
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"aws:PrincipalTag/access-project": true
				}
			}
		}
	]
}

Next Steps

Now that you have an SCP that protects your resources’ authorization tags, you’ll also want to make sure that identities cannot be assigned unauthorized tags when assuming roles, as shown in Using SAML Session Tags for ABAC. You’ll also want detective and responsive controls to verify that identities can access only their own resources, and to take action if unintended access occurs.

Summary

You’ve created a multi-account organizational guardrail to protect the tags used for your ABAC implementation. Through the use of employee attributes, transitive session tags, and a service control policy, you’ve created a preventive control that denies anyone except an IAM administrator the ability to modify tags used for authorization once they are set on a resource.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael is a Developer Advocate for AWS Identity. Prior to this, he was a Professional Services Consultant who assisted customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

How to use AWS Organizations to simplify security at enormous scale

Post Syndicated from Pradeep Ramarao original https://aws.amazon.com/blogs/security/how-to-use-aws-organizations-to-simplify-security-at-enormous-scale/

AWS Organizations provides central governance and management across AWS accounts, and AWS Control Tower is the easiest way to set up and govern a new, secure multi-account AWS environment. In this post, we explain how AWS Organizations and AWS Control Tower can make the lives of your Information Security engineers easier, based on our experience in the Information Security team at Amazon. The service control policies (SCPs) feature in AWS Organizations offers you central control over permissions for all accounts in your organization, which helps you to ensure that your accounts stay within your organization’s access control guidelines. This can give your Information Security team an effective mechanism to prevent unsafe actions in your AWS accounts. AWS Control Tower sets up many of these best-practice SCPs automatically, and you can modify or add more SCPs if needed based on your specific business requirements.

In this post, we show you how to implement SCPs to restrict the types of actions that can be performed by your employees within your company’s AWS accounts, and we provide example SCPs that are used by Amazon’s Information Security team for the following restrictions:

  1. Prevent deletion or modification of AWS Identity and Access Management (IAM) roles created by the Information Security team
  2. Prevent creation of IAM users with a login profile
  3. Prevent tampering with AWS CloudTrail
  4. Prevent public Amazon Simple Storage Service (Amazon S3) access
  5. Prevent users from performing any action related to AWS Organizations

All of these rules, and the reasons for implementing them, are described in detail in this post. Note that Amazon uses many more SCPs than those listed above, but these are some examples that we consider to be best practices and that can be shared publicly.

Scope of problem

At Amazon, security is the number one priority, and it’s a common challenge in the technology industry to keep pace with emerging security threats while still encouraging experimentation and innovation. The Information Security team at Amazon is responsible for ensuring that employees can rapidly develop new products and services built on AWS technologies, while enforcing robust security standards. How do we do this? Among other things, we use AWS Organizations and SCPs to uphold our security requirements.

Amazon has thousands of AWS accounts, and before the launch of AWS Organizations, the Information Security team at Amazon had to implement custom-built reactive rules in each account by using AWS Config, AWS CloudTrail, and Amazon CloudWatch to detect and mitigate any possible security concerns. This was effective, but managing all of the rules for thousands of AWS accounts was challenging. The Information Security team needed to tackle two specific challenges:

  1. How can we shift to a preventive approach—that is, blocking unsafe actions from happening in the first place—and then use reactive controls solely as a fallback mechanism?
  2. How can we continue to ensure that our critical infrastructure running in AWS accounts is tamper proof, without building custom monitoring and remediation infrastructure? For example, we create custom IAM roles to broker single sign-on access to our AWS accounts. We need to ensure that employees cannot delete these roles accidentally, and that unauthorized users cannot delete these roles maliciously.

The launch of AWS Organizations provided the Information Security team at Amazon with a way to tackle both challenges.

Replace reactive controls with preventive controls

Let’s take a look at some of the key rules that the Information Security team at Amazon has in place for managing security posture across AWS accounts. These are company-wide security controls, so we defined them in the organization root so that they are enforced across all AWS accounts in the organization. You can also implement these security controls at the organizational unit (OU) or individual account level, which gives you flexibility to implement different sets of guardrails for different parts of your organization.
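As a sketch, attaching an SCP at the organization root or at an OU can be done with the AWS CLI; the policy and target IDs shown here are illustrative.


# Attach an SCP to the organization root (an OU ID or account ID can be passed as the target instead)
aws organizations attach-policy \
    --policy-id p-examplepolicyid111 \
    --target-id r-examplerootid111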

Rule 1: Prevent deletion or modification of IAM roles created by the Information Security team

The Information Security team at Amazon creates IAM roles in every AWS account that is used by Amazon, and we use these IAM roles to centrally monitor and control access in accordance with Amazon security policies. Accordingly, no one other than the Information Security team should be able to delete or modify these roles.

Before the use of AWS Organizations and SCPs, the Information Security team at Amazon used reactive controls to scan every account, to make sure that nobody had deleted these important roles. Reactive controls detect whether someone has made a configuration update that does not comply with company security policies.

What are some possible drawbacks to this approach?

  1. This is a reactive approach that detects a problem only after it has occurred—in this case, after somebody has deleted one of the important roles.
  2. It puts the responsibility for security in the hands of your employees, and you rely on their goodwill and expertise to ensure that they uphold the rules.

Your employees want to focus on their core competencies and getting their jobs done effectively. Requiring that they also be information security experts is not an effective strategy. It’s much easier for everyone in your organization if you simply remove this burden in the first place. SCPs allow Amazon to do this. With SCPs, you can help prevent employees outside your security team from being granted unwanted access. As you will see in the following example, you can ensure that all roles required for the Information Security team begin with the string InfoSec, and you can create an SCP that prevents employees from deleting these roles.

The following is an example of an SCP that prevents the deletion or modification of any IAM role that begins with InfoSec in all accounts, unless the initiator uses an InfoSec role. This example demonstrates how you can use condition statements in your SCPs, which give you a lot of flexibility in determining when your SCP should be applied.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PreventInfoSecRoleActions",
            "Effect": "Deny",
            "Action": [
                "*"
            ],
            "Resource": [
                "arn:aws:iam::*:role/InfoSec*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/InfoSec*"
                }
            }
        }
    ]
}

Rule 2: Prevent creation of IAM users with a login profile

When your employees are authenticated in your corporate network, it’s a security best practice to use identity federation to allow them to sign in to your company-owned AWS accounts, rather than to create individual IAM users for all of your employees, each with their own passwords.

The following are some of the reasons that it’s a best practice to use identity federation instead of creating individual IAM users for all of your employees:

  1. Using identity federation makes it easier for the Information Security team, so that they don’t have to control individual credentials across thousands of accounts and users.
  2. When using identity federation, users log into your AWS accounts with temporary credentials, and there is no need to store long-term, persistent passwords, thus reducing the possibility of an external entity guessing a user’s password.

The following is an example of an SCP you can use to prevent employees from creating individual IAM users.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PreventIAMUserWithLoginprofile",
            "Effect": "Deny",
            "Action": [
                "iam:ChangePassword",
                "iam:CreateLoginProfile",
                "iam:UpdateLoginProfile",
                "iam:UpdateAccountPasswordPolicy"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Rule 3: Prevent tampering with AWS CloudTrail

It’s very important for your Information Security team to be able to track the activities performed by all users in all accounts. In addition to this being a best practice, many compliance standards require you to maintain an accurate history of all user activity in your AWS accounts. You can use AWS CloudTrail to keep this history, and use an SCP to help ensure that unauthorized users cannot disable AWS CloudTrail or modify any of its outputs.

The following is an example of an SCP to prevent tampering with CloudTrail. To use the example, replace exampletrail with the name of the trail that you want to protect.


{
            "Sid": "PreventTamperingWithCloudTrail",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:DeleteTrail",
                "cloudtrail:StopLogging",
                "cloudtrail:UpdateTrail"
            ],
            "Resource": [
                "arn:aws:cloudtrail:*:*:trail/exampletrail"
            ]
}

Rule 4: Prevent public S3 access

It’s a security best practice to only allow public access to an S3 bucket and its contents if you specifically want external entities to be able to publicly access those items. However, you need to make sure that information that might be sensitive is securely protected and not made publicly available. For this reason, the Information Security team at Amazon prevents Amazon employees from making S3 buckets or content publicly available, unless they first go through a security review and get explicit approval from the Information Security team. We use the Amazon S3 Block Public Access feature in all of our accounts, and the following is an example of an SCP that prevents employees from changing those settings.


{
            "Sid": "PreventS3PublicAccess",
            "Action": [
                "s3:PutAccountPublicAccessBlock"
            ],
            "Resource": "*",
            "Effect": "Deny"
}

Rule 5: Prevent users from performing any action related to AWS Organizations

We also want to help ensure that only the Information Security team can perform actions related to AWS Organizations, and to prevent other users from performing activities such as leaving the organization or discovering other accounts in the organization. The following SCP prevents users from taking actions in AWS Organizations.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PreventOrgsActions",
            "Effect": "Deny",
            "Action": [
                "organizations:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

The preceding SCP also prevents the use of AWS Organizations-related features in some AWS services that rely on read permissions, such as AWS Resource Access Manager (AWS RAM). To allow those services read access to AWS Organizations, you can modify the SCP to specify more granular actions in the Action section of the SCP definition. For example, you can modify the SCP to prevent member accounts from leaving the organization while still allowing the use of features such as AWS RAM, as demonstrated in the following SCP. For more information about all of the AWS Organizations API actions that you can access in each account, see AWS Organizations API Reference: API operations by account.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PreventLeavingOrganization",
            "Effect": "Deny",
            "Action": [
                "organizations:LeaveOrganization"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Conclusion

Before the launch of AWS Organizations, the Information Security team at Amazon had to implement custom-built reactive rules to manage security controls across thousands of AWS accounts, and to ensure that all employees were managing their AWS resources in alignment with company policies. AWS Organizations has dramatically simplified these tasks, making it easier to control what Amazon employees are allowed to do in their AWS accounts. We’ve shared this information and examples of SCPs to help you implement similar controls.

Further reading

Learn more about Policy Evaluation Logic in the IAM User Guide.

Try it out for yourself. AWS has published information on how to create an AWS Organization and how to create SCPs.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Organizations forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Pradeep Ramarao

Pradeep is a Senior Manager of Software Development at Amazon. He enables Amazon developers to rapidly innovate on AWS technologies, while enforcing robust security standards. Outside of work, he enjoys watching movies and improving his soccer skills.

Author

Kieran Kavanagh

Kieran is a Principal Solutions Architect at AWS. He works with customers to design and build solutions on AWS across a broad range of technology domains including security, data analytics, and machine learning. In his spare time, he likes hiking, snowboarding, and practicing martial arts.

Author

Casey Falk

Casey is a Software Development Manager at Amazon. His team builds governance and security tooling that protects Amazon’s customer data. Outside of work he enjoys banjo, philosophy, and Dungeons & Dragons.

How to use G Suite as an external identity provider for AWS SSO

Post Syndicated from Yegor Tokmakov original https://aws.amazon.com/blogs/security/how-to-use-g-suite-as-external-identity-provider-aws-sso/

Do you want to control access to your Amazon Web Services (AWS) accounts with G Suite? In this post, we show you how to set up G Suite as an external identity provider in AWS Single Sign-On (SSO). We also show you how to configure permissions for your users, and how they can access different accounts.

Introduction

G Suite is used for common business functions like email, calendar, and document sharing. If your organization is using AWS and G Suite, you can use G Suite as an identity provider (IdP) for AWS. You can connect AWS SSO to G Suite, allowing your users to access AWS accounts with their G Suite credentials.

You can grant access by assigning G Suite users to accounts governed by AWS Organizations. The user’s effective permissions in an account are determined by permission sets defined in AWS SSO. They allow you to define and grant permissions based on the user’s job function (such as administrator, data scientist, or developer). These should follow the least privilege principle, granting only permissions that are necessary to perform the job. This way, you can centrally manage user accounts for your employees in the Google Admin console and have fine-grained control over the access permissions of individual users to AWS resources.

In this post, we walk you through the process of setting up G Suite as an external IdP in AWS SSO.

How it works

AWS SSO authenticates your G Suite users by using Security Assertion Markup Language (SAML) 2.0 authentication. SAML is an open standard for secure exchange of authentication and authorization data between IdPs and service providers without exposing users’ credentials. When you use AWS as a service provider and G Suite as an external IdP, the login process is as follows:

  1. A user with a G Suite account opens the link to the AWS SSO user portal of your AWS Organizations.
  2. If the user isn’t already authenticated, they will be redirected to the G Suite account login. The user will log in using their G Suite credentials.
  3. If the login is successful, a response is created and sent to AWS SSO. It contains three different types of SAML assertions: authentication, authorization, and user attributes.
  4. When AWS SSO receives the response, the user’s access to the AWS SSO user portal is determined. A successful login shows accessible AWS accounts.
  5. The user selects the account to access and is redirected to the AWS Management Console.

This authentication flow is shown in the following diagram.

Figure 1: AWS SSO authentication flow

The user journey starts at the AWS SSO user portal and ends with the access to the AWS Management Console. Your users experience a unified access to the AWS Cloud, and you don’t have to manage user accounts in AWS Identity and Access Management (IAM) or AWS Directory Service.

User permissions in an AWS account are controlled by permission sets and groups in AWS SSO. A permission set is a collection of administrator-defined policies that determine a user’s effective permissions in an account. They can contain AWS managed policies or custom policies that are stored in AWS SSO, and are ultimately created as IAM roles in a given AWS account. Your users assume these roles when they access a given AWS account and get their effective permissions. This gives you fine-grained control over access to your accounts, in line with the shared-responsibility model of the cloud.

When you use G Suite to authenticate and manage your users, you have to create a user entity in AWS SSO. The user entity is not a user account, but a logical object that maps a G Suite user to an AWS SSO user account, using the G Suite user’s primary email address as the username. The user entity in AWS SSO allows you to grant a G Suite user access to AWS accounts and define that user’s permissions in those accounts.

AWS SSO initial setup

The AWS SSO service has some prerequisites. You need to first set up AWS Organizations with All features set to enabled, and then sign in with the AWS Organization’s master account credentials. You also need super administrator privileges in G Suite and access to the Google Admin console.

If you’re already using AWS SSO in your account, refer to Considerations for Changing Your Identity Source before making changes.

To set up an external identity provider in AWS SSO

  1. Open the service page in the AWS Management Console. Then choose Enable AWS SSO.

    Figure 2: AWS SSO service welcome page

  2. After AWS SSO is enabled, you can connect an identity source. On the overview page of the service, select Choose your identity source.

    Figure 3: Choose your identity source

  3. In the Settings, look for Identity source and choose Change.

    Figure 4: Settings

  4. By default, AWS SSO uses its own directory as the identity provider. To use G Suite as your identity provider, you have to switch to an external identity provider. Select External identity provider from the available identity sources.

    Figure 5: AWS SSO identity provider options

  5. Choosing the External identity provider option reveals additional information needed to configure it. Choose Show individual metadata values to show the information you need to configure a custom SAML application.

    Figure 6: AWS SSO SAML metadata

For the next steps, you need to switch to your Google Admin console and use the service provider metadata information to configure AWS SSO as a custom SAML application.

G Suite SAML application setup

Open your Google Admin console in a new browser tab, so that you can easily copy the metadata information from the previous step. Now use the information to configure a custom SAML application.

To configure a custom SAML application in G Suite

  1. Navigate to the SAML Applications section in the Admin console and choose Add a service/App to your domain.

    Figure 7: Add a service or app

  2. In the modal dialog that opens, choose SETUP MY OWN CUSTOM APP.

    Figure 8: Set up a custom app

  3. Go to Option 2 and choose Download to download the Google IdP metadata. It downloads an XML file named GoogleIDPMetadata-your_domain.xml, which you will use to configure G Suite as the IdP in AWS SSO. Choose Next.

    Figure 9: Download IdP metadata

  4. Configure the name and description of the application. Enter AWS SSO as the application name or use a name that clearly identifies this application for your users. Choose Next to continue.

    Figure 10: Name and describe your app

  5. Fill in the Service Provider Details using the metadata information from AWS SSO, then choose Next to create your custom application. The mapping for the metadata is:
    • Enter the AWS SSO Sign in URL as the Start URL
    • Enter the AWS SSO ACS URL as the ACS URL
    • Enter the AWS SSO Issue URL as the Entity ID

     

    Figure 11: Add service provider details

  6. Next is a confirmation screen with a reminder that you have some steps still to do. Choose OK to continue.

    Figure 12: Reminder of remaining steps

  7. The final steps enable the application for your users. Select the application from the list and choose EDIT SERVICE from the top corner.

    Figure 13: Edit service

  8. Change the service status to ON for everyone and choose SAVE. If you want to manage access for particular users you can do this via organizational units (for example, you can enable the AWS SSO application for your engineering department). This doesn’t give access to any resources inside of your AWS accounts. Permissions are granted in AWS SSO.

    Figure 14: Service on for everyone

You’re done configuring AWS SSO in G Suite. Return to the browser tab with the AWS SSO configuration.

AWS SSO configuration

After creating the G Suite application, you can finish SSO setup by uploading Google IdP metadata in the AWS Management Console.

To add identity provider metadata in AWS SSO

  1. When you configured the custom application in G Suite, you downloaded the GoogleIDPMetadata-your_domain.xml file. Choose Browse… on the configuration page and select this file from your download folder. Finish this step by choosing Next: Review.

    Figure 15: Upload IdP SAML metadata file

  2. Type CONFIRM at the bottom of the list of changes and choose Change identity source to complete the setup.

    Figure 16: Confirm changes

  3. Next is a message that your change to the configuration is complete. At this point, you can choose Return to settings and proceed to user provisioning.

    Figure 17: Settings complete

Manage Users and Permissions

AWS SSO supports automatic user provisioning via the System for Cross-domain Identity Management (SCIM). However, this is not yet supported for G Suite custom SAML applications. To add a user to AWS SSO, you have to add the user manually. The username in AWS SSO must be the primary email address of that user, and it must follow the pattern username@gsuite_domain.com.

To add a user to AWS SSO

  1. Select Users from the sidebar of the AWS SSO overview and then choose Add user.

    Figure 18: Add user

  2. Enter the user details and use your user’s primary email address as the username. Choose Next: Groups to add the user to a group.

    Figure 19: Add user

  3. We aren’t going to create user groups in this walkthrough. Skip the Add user to groups step by choosing Add user. You will reach the user list page, which displays your newly created user with a status of Enabled.

    Figure 20: User list

  4. The next step is to assign the user to a particular AWS account in your AWS Organization. This allows the user to access the assigned account. Select the account you want to assign your user to and choose Assign users.

    Figure 21: Assign users

  5. Select the user you just added, then choose Next: Permission sets to continue configuring the effective permissions of the user in the assigned account.

    Figure 22: Select user

  6. Since you didn’t configure a permission set before, you need to configure one now. Choose Create new permission set.

    Figure 23: Create new permission set

  7. AWS SSO has managed permission sets that are similar to the AWS managed policies you already know. Make sure that Use an existing job function policy is selected, then select PowerUserAccess from the list of existing job function policies and choose Create.

    Figure 24: Create permission set

  8. You can now select the created permission set from the list of available sets for the user. Select the PowerUserAccess permission set and choose Finish to assign the user to the account.

    Figure 25: Select permission set

  9. You see a message that the assignment has been successful.

    Figure 26: Assignment complete

Access an AWS Account with G Suite

You can find your user portal URL in the AWS SSO settings, as shown in the following screenshot. Unauthenticated users who use the link will be redirected to the Google account login page and use their G Suite credentials to log in.

Figure 27: The user portal URL

After authenticating, users are redirected to the user portal. They can select from the list of assigned accounts, as shown in the following example, and access the AWS Management Console of these accounts.

Figure 28: Select an assigned account

You’ve successfully set up G Suite as an external identity provider for AWS SSO. Your users can access your AWS accounts using the credentials they already use.

Another way your users can use AWS SSO is by selecting it from their Google Apps to be redirected to the user portal, as shown in the following screenshot. That is one of the quickest ways for users to access accounts.

Figure 29: Apps in the user portal

Using AWS CLI with SSO

You can use the AWS Command Line Interface (CLI) to access AWS resources. AWS CLI version 2 supports access via AWS SSO. You can automatically or manually configure a profile for the CLI to access resources in your AWS accounts. To authenticate your user, it opens the user portal in your default browser. If you aren’t authenticated, you’re redirected to the G Suite login page. After a successful login, you can select the AWS account you want to access from the terminal.
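As a sketch, a manually configured AWS SSO profile in ~/.aws/config might look like the following; the profile name, start URL, account ID, and permission set name are illustrative.


[profile my-dev-profile]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = PowerUserAccess
region = us-east-1


You can then run aws sso login --profile my-dev-profile to authenticate through the browser, and pass --profile my-dev-profile with any CLI command to use the resulting temporary credentials.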

To upgrade to AWS CLI version 2, follow the instructions in the AWS CLI user guide.

Conclusion

You’ve set up G Suite as an external IdP for AWS SSO, granted access to an AWS account for a G Suite user, and enforced fine-grained permission controls for this user. This enables your business to have easy access to the AWS Cloud.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Single Sign-on forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Yegor Tokmakov

Yegor is a solutions architect at AWS, working with startups. Previously, he was Chief Technology Officer at a healthcare startup in Berlin and was responsible for architecture and operations, as well as product development and growth of the tech team. Yegor is passionate about novel AI applications and data analytics. In his free time, he enjoys traveling, surfing, and cycling.

Sebastian Doell

Sebastian is a solutions architect at AWS. He helps startups to execute on their ideas at speed and at scale. Sebastian maintains a number of open source projects and is an advocate of Dart and Flutter.

Best practices for organizing larger serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/best-practices-for-organizing-larger-serverless-applications/

Well-designed serverless applications are decoupled, stateless, and use minimal code. As projects grow, a goal for development managers is to maintain the simplicity of design and low-code implementation. This blog post provides recommendations for designing and managing code repositories in larger serverless projects, and best practices for deploying releases of production systems.

Organizing your code repositories

Many serverless applications begin as monolithic applications. This can occur either because a simple application has grown more complex over time, or because developers are following existing development practices. A monolithic application is represented by a single AWS Lambda function performing multiple tasks, and a mono-repo is a single repository containing the entire application logic.

Monoliths work well for the simplest serverless applications that perform single-purpose functions. These are small applications such as cron jobs, data processing tasks, and some asynchronous processes. As those applications evolve into workflows or develop new features, it becomes important to refactor the code into smaller services.

Using frameworks such as the AWS Serverless Application Model (SAM) or the Serverless Framework can make it easier to group common pieces of functionality into smaller services. Each of these can have a separate code repository. For SAM, the template.yaml file contains all the resources and function definitions needed for an application. Consequently, breaking an application into microservices with separate templates is a simple way to split repos and resource groups.

Separate templates for microservices

In the smallest unit of a serverless application, it’s also possible to create one repository per function. If these functions are independent and do not share other AWS resources, this may be appropriate. Helper functions and simple event processing code are examples of candidates for this kind of repo structure.

In most cases, it makes sense to create repos around groups of functions and resources that define a microservice. In an ecommerce example, “Payment processing” is a microservice with multiple smaller related functions that share common resources.
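As a sketch, the repo for such a microservice might contain a small SAM template of its own; the resource names and runtime here are illustrative.


AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Payment processing microservice (illustrative)

Resources:
  ProcessPaymentFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: processPayment/
      Handler: app.handler
      Runtime: nodejs12.x

  PaymentsTable:
    Type: AWS::Serverless::SimpleTable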

As with any software, the repo design depends upon the use case and the structure of your development teams. One large repo makes it harder for teams to work on different features, and to test and deploy independently. Having too many repos can lead to duplicate code and make it difficult to share resources across repos. Finding the balance for your project is an important step in designing your application architecture.

Using AWS services instead of code libraries

AWS services are important building blocks for your serverless applications. These can frequently provide greater scale, performance, and reliability than bundled code packages with similar functionality.

For example, many web applications that are migrated to Lambda use web frameworks like Flask (for Python) or Express (for Node.js). Both packages support routing and separate user contexts that are well suited if the application is running on a web server. Using these packages in Lambda functions results in architectures like this:

Web servers in Lambda functions

In this case, Amazon API Gateway proxies all requests to the Lambda function to handle routing. As the application develops more routes, the Lambda function grows in size and deployments of new versions replace the entire function. It becomes harder for multiple developers to work on the same project in this context.

This approach is generally unnecessary, and it’s often better to take advantage of the native routing functionality available in API Gateway. In many cases, there is no need for the web framework in the Lambda function, which increases the size of the deployment package. API Gateway is also capable of validating parameters, reducing the need for checking parameters with custom code. It can also provide protection against unauthorized access, and a range of other features more suited to be handled at the service level. When using API Gateway this way, the new architecture looks like this:

Using API Gateway for routing

Additionally, the Lambda functions consist of less code and fewer package dependencies. This makes testing easier and reduces the need to maintain code library versions. Different developers in a team can work on separate routing functions independently, and it becomes simpler to reuse code in future projects. You can configure routes in API Gateway in the application’s SAM template:

Resources:
  GetProducts:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: getProducts/
      Handler: app.handler
      Runtime: nodejs12.x
      Events:
        GetProductsAPI:
          Type: Api 
          Properties:
            Path: /getProducts
            Method: get

Similarly, you should usually avoid performing workflow orchestrations within Lambda functions. These are sections of code that call out to other services and functions, and perform subsequent actions based on successful execution or failure.

Lambda functions with embedded workflow orchestrations

These workflows quickly become fragile and difficult to modify for new requirements. They can cause idling in the Lambda function, meaning that the function is waiting for return values from external sources, which increases the cost of execution.

Often, a better approach is to use AWS Step Functions, which can represent complex workflows as JSON definitions in the application’s SAM template. This service reduces the amount of custom code required, and enables long-lived workflows that minimize idling in Lambda functions. It also manages in-flight executions as workflows are upgraded. The example above, rearchitected with a Step Functions workflow, looks like this:

Using Step Functions for orchestration
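As a sketch, the orchestration logic moves out of the Lambda function and into a state machine definition like the following; the state names and function ARNs are illustrative.


{
  "Comment": "Order workflow (illustrative)",
  "StartAt": "ProcessPayment",
  "States": {
    "ProcessPayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:processPayment",
      "Next": "UpdateInventory"
    },
    "UpdateInventory": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:updateInventory",
      "End": true
    }
  }
}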

Using multiple AWS accounts for development teams

There are many ways to deploy serverless applications to production. As applications grow and become more important to your business, development managers generally want to improve the robustness of the deployment process. You have a number of options within AWS for managing the development and deployment of serverless applications.

First, it is highly recommended to use more than one AWS account. Using AWS Organizations, you can centrally manage the billing, compliance, and security of these accounts. You can attach policies to groups of accounts to avoid custom scripts and manual processes. One simple approach is to provide each developer with an AWS account, and then use separate accounts for a beta deployment stage and production:

Multiple AWS accounts in a deployment pipeline

The developer accounts can contain copies of production resources and provide the developer with admin-level permissions to those resources. Each developer has their own set of limits for the account, so their usage does not impact your production environment. Individual developers can deploy CloudFormation stacks and SAM templates into these accounts with minimal risk to production assets.

This approach allows developers to test Lambda functions locally on their development machines against live cloud resources in their individual accounts. It can help create a robust unit testing process, and developers can then push code to a repository like AWS CodeCommit when ready.

By integrating with AWS Secrets Manager, you can store different sets of secrets in each environment and eliminate any need for credentials stored in code. As code is promoted from developer account through to the beta and production accounts, the correct set of credentials is automatically used. You do not need to share environment-level credentials with individual developers.

It’s also possible to implement a CI/CD process to start build pipelines when code is deployed. To deploy a sample application using a multi-account deployment flow, follow this serverless CI/CD tutorial.

Managing feature releases in serverless applications

As you implement CI/CD pipelines for your production serverless applications, it is best practice to favor safe deployments over entire application upgrades. Unlike traditional software deployments, serverless applications are a combination of custom code in Lambda functions and AWS service configurations.

A feature release may consist of a version change in a Lambda function. It may have a different endpoint in API Gateway, or use a new resource such as a DynamoDB table. Access to the deployed feature may be controlled via user configuration and feature toggles, depending upon the application. AWS SAM has AWS CodeDeploy built-in, which allows you to configure canary deployments in the YAML configuration:

Resources:
 GetProducts:
   Type: AWS::Serverless::Function
   Properties:
     CodeUri: getProducts/
     Handler: app.handler
     Runtime: nodejs12.x

     AutoPublishAlias: live

     DeploymentPreference:
       Type: Canary10Percent10Minutes 
       Alarms:
         # A list of alarms that you want to monitor
         - !Ref AliasErrorMetricGreaterThanZeroAlarm
         - !Ref LatestVersionErrorMetricGreaterThanZeroAlarm
       Hooks:
         # Validation Lambda functions run before/after traffic shifting
         PreTraffic: !Ref PreTrafficLambdaFunction
         PostTraffic: !Ref PostTrafficLambdaFunction

CodeDeploy automatically creates aliases pointing to the old and new versions of a function. The canary deployment enables you to gradually shift traffic from the old to the new alias as you become confident that the new version is working as expected, or to roll back the update if needed. You can also set PreTraffic and PostTraffic hooks to invoke Lambda functions before and after traffic shifting.

Conclusion

As any software application grows in size, it’s important for development managers to organize code repositories and manage releases. There are established patterns in serverless to help manage larger applications. Generally, it’s best to avoid monolithic functions and mono-repos, and you should scope repositories to either the microservice or function level.

Well-designed serverless applications use custom code in Lambda functions to connect with managed services. It’s important to identify libraries and packages that can be replaced with services to minimize the deployment size and simplify the code base. This is especially true in applications that have been migrated from server-based environments.

Using AWS Organizations, you manage groups of accounts to enable your developers to have their own AWS accounts for development. This enables engineers to clone production assets and test against the AWS Cloud when writing and debugging code. You can use a CI/CD pipeline to push code through a beta environment to production, while safeguarding secrets using Secrets Manager. You can also use CodeDeploy to manage canary deployments easily.

To learn more about deploying Lambda functions with SAM and CodeDeploy, follow the steps in this tutorial.

New – Use AWS IAM Access Analyzer in AWS Organizations

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-use-aws-iam-access-analyzer-in-aws-organizations/

Last year at AWS re:Invent 2019, we released AWS Identity and Access Management (IAM) Access Analyzer that helps you understand who can access resources by analyzing permissions granted using policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues.

AWS IAM Access Analyzer uses automated reasoning, a form of mathematical logic and inference, to determine all possible access paths allowed by a resource policy. We call these analytical results provable security, a higher level of assurance for security in the cloud.

Today I am pleased to announce that you can create an analyzer in the AWS Organizations master account or a delegated member account with the entire organization as the zone of trust. Now, for each analyzer, you can set the zone of trust to be either a particular account or an entire organization; the zone of trust defines the logical bounds the analyzer uses to generate findings. This helps you quickly identify when resources in your organization can be accessed from outside of your AWS Organization.

AWS IAM Access Analyzer for AWS Organizations – Getting started
You can enable IAM Access Analyzer in your organization with one click in the IAM console. Once enabled, IAM Access Analyzer analyzes policies and reports, in the IAM console and through APIs, a list of findings for resources that grant public access or access from outside your AWS Organization.

When you create an analyzer on your organization, it recognizes your organization as a zone of trust, meaning all accounts within the organization are trusted to have access to AWS resources. Access analyzer will generate a report that identifies access to your resources from outside of the organization.

For example, if you create an analyzer for your organization, it provides active findings for resources such as S3 buckets in your organization that are accessible publicly or from outside the organization.

When policies change, IAM Access Analyzer automatically triggers a new analysis and reports new findings based on the policy changes. You can also trigger a re-evaluation manually. You can download the details of findings into a report to support compliance audits.

Analyzers are specific to the region in which they are created. You need to create a unique analyzer for each region where you want to enable IAM Access Analyzer.

You can create multiple analyzers for your entire organization in your organization’s master account. Additionally, you can choose a member account in your organization as a delegated administrator for IAM Access Analyzer. When you choose a member account as the delegated administrator, the member account has permission to create analyzers within the organization. Additionally, individual accounts can create analyzers to identify resources accessible from outside those accounts.
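As a sketch, you can also create an organization-scoped analyzer from the AWS CLI; the analyzer name and Region shown here are illustrative.


aws accessanalyzer create-analyzer \
    --analyzer-name my-org-analyzer \
    --type ORGANIZATION \
    --region us-east-1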

IAM Access Analyzer sends an event to Amazon EventBridge for each generated finding, for a change to the status of an existing finding, and when a finding is deleted. You can monitor IAM Access Analyzer findings with EventBridge. Also, all IAM Access Analyzer actions are logged by AWS CloudTrail, and findings are sent to AWS Security Hub. Using the information collected by CloudTrail, you can determine the request that was made to Access Analyzer, the IP address from which the request was made, who made the request, when it was made, and additional details.
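For example, an EventBridge rule that matches Access Analyzer findings can use an event pattern like the following sketch.


{
  "source": ["aws.access-analyzer"],
  "detail-type": ["Access Analyzer Finding"]
}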

Now available!
This integration is available in all AWS Regions where IAM Access Analyzer is available. There is no extra cost for creating an analyzer with organization as the zone of trust. You can learn more through these talks of Dive Deep into IAM Access Analyzer and Automated Reasoning on AWS at AWS re:Invent 2019. Take a look at the feature page and the documentation to learn more.

Please send us feedback either in the AWS forum for IAM or through your usual AWS support contacts.

Channy;

The AWS Serverless Application Repository adds sharing for AWS Organizations

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/the-aws-serverless-application-repository-adds-sharing-for-aws-organizations/

The AWS Serverless Application Repository (SAR) enables builders to package serverless applications and reuse these within their own AWS accounts, or share them with a broader audience. Previously, SAR applications could only be shared with specific AWS account IDs or made publicly available to all users. For organizations with large numbers of AWS accounts, this meant managing a large list of IDs, and tracking when accounts were added to or removed from the organizational group.

This new feature allows developers to share SAR applications with AWS Organizations without specifying a list of AWS account IDs. Additionally, if you add or remove accounts later from AWS Organizations, you do not need to manually maintain a list for sharing your SAR application. This new feature also brings more granular controls over the permissions granted, including the ability for master accounts to unshare applications.

This blog post explains these new features and how you can use them to share your new and existing SAR applications.

Sharing a new SAR application with AWS Organizations

First, find your AWS Organization ID by visiting the AWS Organizations console, and choose Settings from the menu bar. Copy your Organization ID from the Organization details card and save this for later. If you do not currently have an organization configured and want to use this feature, see this tutorial for instructions on how to set up an AWS Organization.

Organization ID

Go to the Serverless Application Repository, choose Publish Application and follow the process for publishing an application to your own account. After the application is published, you will see a new tab on the Application Details page called Sharing.

SAR sharing tab

From the Sharing tab, choose Create Statement in the Application policy statements card. To share the application with an entire organization:

  • Enter a Statement Id, which is a helpful reference for the policy statement.
  • Select “With an organization” from the list of sharing options.
  • Enter the Organization ID from earlier.
  • Check the option to “Enable all actions needed to deploy”.
  • Check the acknowledgment check box.
  • Choose Save.

Statement configuration

Now your application is published to all the account IDs within your AWS Organization. As you add policy statements to define how an application is shared, these appear as cards at the end of the Sharing tab.

Policy statements in the sharing tab
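Conceptually, the statement you just created resembles the following sketch; the statement ID and organization ID are illustrative, and the exact field names and allowed action values are described in the SAR application policy documentation.


{
  "StatementId": "ShareWithMyOrg",
  "Principals": ["*"],
  "PrincipalOrgIDs": ["o-a1b2c3d4e5"],
  "Actions": ["Deploy"]
}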

Existing shared SAR applications

If you have previously shared SAR applications with individual accounts, or shared these applications publicly, the policy statements have already been configured. In this case, the policy statements that are generated automatically reflect the existing scope of sharing, so there is no change in the level of visibility of the application.

For example, for a SAR application that was previously shared with two AWS account IDs, there is now a policy statement showing these two account principals and the permission to deploy the application. A Statement Id value is automatically generated as a random globally unique identifier.

GUID statement ID

To change these automatically generated policy statements, select the policy statement you want to change and choose Edit. The subsequent page allows you to modify the random Statement Id, configure the sharing options, and modify the deployment actions allowed.

Statement configuration

For a SAR application that was previously shared publicly, there is now a policy statement showing all accounts as an asterisk (*), with the permission to deploy. To disable public access, choose Edit in the Public Sharing card, and modify the settings on the panel.

Public sharing

More flexibility for defining permissions

This new feature allows you to define specific API actions allowed in each policy statement, and you can associate different permitted actions with different organizations. This allows you to more precisely control how applications are discovered and used within different accounts.

To learn more, read about the definition of these API actions in the SAR documentation.

Allowed actions

Any changes made to the resource policy sharing statements are effective immediately, and applications shared with AWS Organizations and other AWS accounts are only shared within the Region where the application was initially published. This new ability to share applications with organizations can be used in all Regions where the Serverless Application Repository is available.

In addition to creating and modifying resource policy statements within the AWS Management Console, builders can also choose to manage application sharing via the AWS SDK or AWS CLI.
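For example, here is a minimal boto3 sketch that shares an application with an organization and grants the actions needed to deploy it. The application ARN and Organization ID are placeholders, and the statement fields shown (in particular PrincipalOrgIDs and the Deploy action) should be checked against the current PutApplicationPolicy API reference.

import boto3

sar = boto3.client("serverlessrepo")

# Share the application with every account in the organization.
# Principals "*" combined with PrincipalOrgIDs limits access to the named organization.
sar.put_application_policy(
    ApplicationId="arn:aws:serverlessrepo:us-east-1:111122223333:applications/my-app",
    Statements=[{
        "StatementId": "share-with-my-org",
        "Principals": ["*"],
        "PrincipalOrgIDs": ["o-exampleorgid"],
        # "Deploy" is intended to correspond to the console's
        # "Enable all actions needed to deploy" option.
        "Actions": ["Deploy"],
    }],
)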

Unsharing an application from AWS Organizations

You can also unshare an application from AWS Organizations through the AWS Management Console by doing the following:

  1. From the Serverless Application Repository console, choose Available Applications in the left navigation pane.
  2. In the application’s tile, choose Unshare.
  3. In the unshare confirmation dialog, enter the Organization ID and application name, then choose Save.

To learn more, read about unsharing published applications.
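The same operation is available programmatically. As a sketch, and assuming the UnshareApplication API behaves as described in the SAR documentation, the organization's master account could unshare an application like this (the application ARN and Organization ID are placeholders):

import boto3

sar = boto3.client("serverlessrepo")

# Remove the organization-wide sharing from the application.
sar.unshare_application(
    ApplicationId="arn:aws:serverlessrepo:us-east-1:111122223333:applications/my-app",
    OrganizationId="o-exampleorgid",
)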

Conclusion

This post shows how to enable sharing for SAR applications across AWS Organizations using application policy statements, and how to modify existing resource policies. Additionally, it covers how existing SAR applications that are already shared now expose a resource policy that reflects the previously selected sharing preferences, and how you can also modify this policy statement.

The new sharing interface in the SAR console continues to support the previous capabilities of sharing applications with other AWS accounts, and making applications public. This new feature makes it much easier for builders to share SAR applications across a large number of accounts within an organization, without needing to manually manage a list of account IDs, and provides more granular control over the permissions granted.

For more information about sharing applications using AWS Organization IDs, visit the SAR documentation.

New: Use AWS CloudFormation StackSets for Multiple Accounts in an AWS Organization

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/

Infrastructure-as-code is the process of managing and creating IT infrastructure through machine-readable text files, such as JSON or YAML definitions, or using familiar programming languages, such as Java, Python, or TypeScript. AWS customers typically use AWS CloudFormation or the AWS Cloud Development Kit to automate the creation and management of their cloud infrastructure.

CloudFormation StackSets allow you to roll out CloudFormation stacks over multiple AWS accounts and in multiple Regions with just a couple of clicks. When we launched StackSets, grouping accounts was primarily for billing purposes. Since the launch of AWS Organizations, you can centrally manage multiple AWS accounts across diverse business needs including billing, access control, compliance, security and resource sharing.

Use CloudFormation StackSets with Organizations
Today, we are simplifying the use of CloudFormation StackSets for customers managing multiple accounts with AWS Organizations.

You can now centrally orchestrate any AWS CloudFormation-enabled service across multiple AWS accounts and Regions. For example, you can deploy your centralized AWS Identity and Access Management (IAM) roles, and provision Amazon Elastic Compute Cloud (EC2) instances or AWS Lambda functions across AWS Regions and accounts in your organization. CloudFormation StackSets simplify the configuration of cross-account permissions and allow for automatic creation and deletion of resources when accounts join or are removed from your Organization.

You can get started by enabling data sharing between CloudFormation and Organizations from the StackSets console. Once done, you will be able to use StackSets in the Organizations master account to deploy stacks to all accounts in your organization or in specific organizational units (OUs). A new service managed permission model is available with these StackSets. Choosing Service managed permissions allows StackSets to automatically configure the necessary IAM permissions required to deploy your stack to the accounts in your organization.

In addition to setting permissions, CloudFormation StackSets now offers the option of automatically creating or removing your CloudFormation stacks when a new AWS account joins or leaves your Organization. You do not need to remember to manually connect to the new account to deploy your common infrastructure, or to delete infrastructure when an account is removed from your Organization. When an account leaves the organization, the stack is removed from the management of StackSets. However, you can choose to either delete or retain the resources managed by the stack.

Lastly, you choose whether to deploy a stack to your entire Organization or just to one or more organizational units (OUs). You also choose a couple of deployment options: how many accounts will be prepared in parallel, and how many failures you tolerate before stopping the entire deployment.
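Putting these pieces together, here is a minimal boto3 sketch of a service-managed stack set that deploys a small template to one OU in two Regions. The stack set name, OU ID, Regions, and template are placeholders, and trusted access between CloudFormation StackSets and AWS Organizations must already be enabled.

import boto3

cfn = boto3.client("cloudformation")

# A deliberately tiny template; replace with your own baseline infrastructure.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  NotificationTopic:
    Type: AWS::SNS::Topic
"""

# Service-managed permissions let StackSets configure the IAM roles it needs,
# and AutoDeployment adds or removes stacks as accounts join or leave the OU.
cfn.create_stack_set(
    StackSetName="org-baseline",
    Description="Baseline resources for all accounts in the OU",
    TemplateBody=TEMPLATE,
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Deploy to the target OU, a few accounts at a time, stopping on the first failure.
cfn.create_stack_instances(
    StackSetName="org-baseline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-examplerootid-exampleouid"]},
    Regions=["us-east-1", "eu-west-1"],
    OperationPreferences={"MaxConcurrentCount": 3, "FailureToleranceCount": 0},
)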

For a full description of how StackSets works, you can read the initial blog article from Jeff.

There is no extra cost for using AWS CloudFormation StackSets with AWS Organizations. The integration is available in all AWS Regions where StackSets is available.

— seb

New – Use Tag Policies to Manage Tags Across Multiple AWS Accounts

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-use-tag-policies-to-manage-tags-across-multiple-aws-accounts/

Shortly after we launched EC2, customers started asking for ways to identify, classify, or categorize their instances. We launched tagging for EC2 instances and other EC2 resources way back in 2010, and have added support for many other resource types over the years. We added the ability to tag instances and EBS volumes at creation time a few years ago, and also launched tagging APIs and a Tag Editor. Today, tags serve many important purposes. They can be used to identify resources for cost allocation, and to control access to AWS resources (either directly or via tags on IAM users & roles). In addition to these tools, we have also provided you with comprehensive recommendations on Tag Strategies, which can be used as the basis for the tagging standards that you set up for your own organization.

Beyond Good Intentions
All of these tools and recommendations create a strong foundation, and you might start to use tags with only the very best of intentions. However, as Jeff Bezos often reminds us, “Good intentions don’t work, but mechanisms do.” Standardizing on names, values, capitalization, and punctuation is a great idea, but challenging to put into practice. When tags are used to control access to resources or to divvy up bills, small errors can create big problems!

Today we are giving you a mechanism that will help you to implement a consistent, high-quality tagging discipline that spans multiple AWS accounts and Organizational Units (OUs) within an AWS Organization. You can now create Tag Policies and apply them to any desired AWS accounts or OUs within your Organization, or to the entire Organization. The policies at each level are aggregated into an effective policy for an account.

Each tag policy contains a set of tag rules. Each rule maps a tag key to the allowable values for the key. The tag policies are checked when you perform operations that affect the tags on an existing resource. After you set up your tag policies, you can easily discover tagged resources that do not conform.

Creating Tag Policies
Tag Policies are easy to use. I start by logging in to the AWS account that represents my Organization, and ensure that it has Tag policies enabled in the Settings:

Then I click Policies and Tag policies to create a tag policy for my Organization:

I can see my existing policies, and I click Create policy to make another one:

I enter a name and a description for my policy:

Then I specify the tag key, indicate if capitalization must match, and optionally enter a set of allowable values:

I have three options at this point:

Create Policy – Create a policy that advises me (via a report) of any noncompliant resources in the Root, OUs, and accounts that I designate.

Add Tag Key – Add another tag key to the policy.

Prevent Noncompliant Operations – Strengthen the policy so that it enforces the proper use of the tag by blocking noncompliant operations. To do this, I must also choose the desired resource types:

Then I click Create policy, and I am ready to move ahead. I select my policy, and can then attach it to the Root, or to any desired OUs or Accounts:

Checking Policy Compliance
Once I have created and attached a policy, I can visit the Tag policies page in the Resource Groups console to check on compliance:

I can also download a compliance report (CSV format), or request that a fresh one be generated:

Things to Know
Here are a couple of things to keep in mind:

Policy Inheritance – The examples above used the built-in inheritance system: Organization to OUs to accounts. I can also fine-tune the inheritance model in order to add or remove acceptable tag values, or to limit the changes that child policies are allowed to make. To learn more, read How Policy Inheritance Works.

Tag Remediation – As an account administrator, I can use the Resource Groups console to view the effective tag policy (after inheritance) so that I can take action to fix any non-compliant tags.

Tags for New Resources – I can use Org-level Service Control Policies or IAM policies to prevent the creation of new resources that are not tagged in accord with my organization’s internal standards (see Require a Tag Upon Resource Creation for an example policy).

Access – This new feature is available from the AWS Management Console, AWS Command Line Interface (CLI), and through the AWS SDKs.
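As a sketch of the SDK path, the following boto3 example creates a tag policy and attaches it to an OU. The tag key, allowed values, enforced resource types, and OU ID are illustrative only; check the tag policy syntax in the AWS Organizations documentation before using it.

import json
import boto3

org = boto3.client("organizations")

# Illustrative tag policy: standardize the CostCenter key, restrict its values,
# and enforce compliance for EC2 instances and volumes.
tag_policy = {
    "tags": {
        "costcenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["Finance", "Engineering"]},
            "enforced_for": {"@@assign": ["ec2:instance", "ec2:volume"]},
        }
    }
}

policy_id = org.create_policy(
    Name="costcenter-tagging",
    Description="Standardize the CostCenter tag",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)["Policy"]["PolicySummary"]["Id"]

# Attach the policy to an OU (the target could also be the root or an individual account).
org.attach_policy(PolicyId=policy_id, TargetId="ou-examplerootid-exampleouid")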

Available Now
You can use Tag Policies today in all commercial AWS Regions, at no extra charge.

Jeff;

Use IAM to share your AWS resources with groups of AWS accounts in AWS Organizations

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/iam-share-aws-resources-groups-aws-accounts-aws-organizations/

You can now reference Organizational Units (OUs), which are groups of AWS accounts in AWS Organizations, in AWS Identity and Access Management (IAM) policies, making it easier to define access for your IAM principals (users and roles) to the AWS resources in your organization. AWS Organizations lets you organize your accounts into OUs to align them with your business or security purposes. Now, you can use a new condition key, aws:PrincipalOrgPaths, in your policies to allow or deny access based on a principal’s membership in an OU. This makes it easier than ever to share resources between accounts you own in your AWS environments.

For example, you might have an Amazon S3 bucket you need to share with developers and applications from accounts that are members of a specific OU. To accomplish this, you can specify the aws:PrincipalOrgPaths condition and set the value to the organizational unit ID of the caller in the resource-based policy attached to the bucket. When a principal tries to access the bucket, AWS verifies that their account’s OU matches the one specified in the policy. With this condition, permissions automatically apply when you add accounts to the OU without any additional updates to the policy.

In this post, I introduce the new condition key, and show you how to use it in two examples. In the first example you will see how to use the aws:PrincipalOrgPaths condition key to grant multiple AWS accounts access to a resource, without needing to maintain a list of account IDs in your policy. In the second example, you will see how to add a guardrail to your administrative roles that prevents access to powerful actions unless coming from a protected OU in your organization.

AWS Organizations Concepts

Before I walk through the condition, let’s review some important concepts from AWS Organizations.

AWS Organizations allows you to group a set of AWS accounts into an organization that you can manage centrally. Once the accounts have joined the organization, you can group them into organizational units (OUs), allowing you to set policies that help you meet your security and compliance requirements. You can create multiple OUs within a single organization, and you can create OUs within other OUs to form hierarchical relationships between your accounts. When you create an organization, AWS Organizations creates your first account container automatically. It has a special name, called a root. All OUs you create exist inside the root.

Organizations, roots, and OUs use a different format for their identifiers. You can see the differences in the table below:

Resource | ID format | Example value | Globally unique
Organization | o-exampleorgid | o-p8iu8lkook | Yes
Root | r-examplerootid | r-tkh7 | No
Organizational unit | ou-examplerootid-exampleouid | ou-tkh7-pbevdy6h | No

Organization IDs are globally unique, meaning no two organizations share the same Organization ID. OU and Root IDs are not globally unique, so an OU in another customer’s organization may have the same ID as one in yours; they are only unique within a single organization. Therefore, you should always include the organization identifier when specifying an OU to make sure the path is unique to your organization.
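If you need to look up these identifiers before writing a policy, the following boto3 sketch prints, for each top-level OU, a path of the form organization-id/root-id/ou-id/ that you can use when authoring aws:PrincipalOrgPaths conditions. Nested OUs would require walking the hierarchy recursively, and the assembled output format is only an assumption for illustration.

import boto3

org = boto3.client("organizations")

# Run from the organization's master (management) account.
org_id = org.describe_organization()["Organization"]["Id"]
root_id = org.list_roots()["Roots"][0]["Id"]

# List the top-level OUs under the root (pagination omitted for brevity;
# nested OUs would require recursing with each OU ID as the new parent).
ous = org.list_organizational_units_for_parent(ParentId=root_id)["OrganizationalUnits"]
for ou in ous:
    print(f"{ou['Name']}: {org_id}/{root_id}/{ou['Id']}/")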

Control access to resources based on OU

You use condition keys in the condition element of an IAM policy. A condition is an optional IAM policy element you can use to specify circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition.

Condition key | Description | Operator(s) | Value(s)
aws:PrincipalOrgPaths | The AWS Organizations path of the principal’s OU | All string operators | Paths of AWS organization IDs and organizational unit IDs

The aws:PrincipalOrgPaths condition key is a global condition key, meaning you can use it in conjunction with any AWS action. When you use it in the condition element of your IAM policy, it validates the organization, root, and OUs of the principal performing the action on the resource. For example, let’s say a principal is a member of an OU with the ID ou-abcd-zzyyxxww inside a root r-abcd in the organization o-1122334455. When the principal makes a request on the resource, its aws:PrincipalOrgPaths value is:

["o-1122334455/r-abcd/ou-abcd-zzyyxxww/"]

The path includes the organization ID to ensure global uniqueness. This ensures only principals from your organization can access your AWS resources. You can use any string operator, such as StringEquals, with the condition. You can also use the wildcard characters (* and ?) when providing a path.

aws:PrincipalOrgPaths is a multi-value condition key. Multi-value keys allow you to provide multiple values in a list format. Here’s a sample condition statement from a policy that uses the key to validate that a principal is from either ou-1 or ou-2:


"Condition":{
	"ForAnyValue:StringLike":{
		"aws:PrincipalOrgPaths":[
		  "o-1122334455/r-abcd/ou-1/",
		  "o-1122334455/r-abcd/ou-2/"
		]
	}
}

For all multi-value condition keys, you must provide the value as a JSON-formatted list as shown above, even if you’re only specifying one value. As shown in the example above, you also must use the ForAnyValue qualifier in your conditions to specify that the principal must match at least one of the listed OU paths. For more information, see Creating a Condition That Tests Multiple Key Values in the IAM documentation.

In the next section, I’ll go over an example of how to use the new condition key to protect resources in your account from access outside of a given OU.

Example: Grant S3 bucket access to all principals in an OU in your organization

This example demonstrates how you can use the new condition key to share resources with groups of accounts. By placing the accounts into an OU and granting access based on membership, you can grant targeted access without having to list and maintain all the AWS account IDs in your permission policies.

Consider an example where I want to grant my Machine Learning team permissions to access an S3 bucket training-data that contains images that the team will use to train their machine learning models. I’ve set up my organization such that all AWS accounts owned by my Machine Learning team are part of a specific OU with the ID ou-machinelearn. For the purpose of this example, my organization ID is o-myorganization.

For this example, I want to allow users and applications from the Machine Learning OU or any OU beneath it to have permissions to read the training-data S3 bucket. Any other AWS accounts should not have the ability to view the resource.

To grant these permissions, I author an S3 bucket policy for my training-data resource as shown below.


{
	"Version":"2012-10-17",
	"Statement":{
		"Sid":"TrainingDataS3ReadOnly",
		"Effect":"Allow",
		"Principal": "*",
		"Action":"s3:GetObject",
		"Resource":"arn:aws:s3:::training-data/*",
		"Condition":{
			"ForAnyValue:StringLike":{
				"aws:PrincipalOrgPaths":["o-myorganization/*/ou-machinelearn/*"]
			}
		}
	}
}

In the policy above, I assert that principals trying to read the contents of the training-data bucket must be either a member of the OU that corresponds to the ou-machinelearn ID I provided (my Machine Learning OU Identifier), or a member of any OUs that are children of it. For the aws:PrincipalOrgPaths value, I used two asterisk (*) wildcards. I used the first asterisk (*) between my organization ID and my OU ID because OU IDs are unique within my organization. This means specifying the full path is not necessary to select the OU I need. The second asterisk (*), at the end of the path, is used to specify that I want to allow all child OUs to be included in my string comparison. If I didn’t want to include the child OUs, I could remove the wildcard character.

With this policy on the bucket, any principals in the Machine Learning OU may read objects inside the bucket if the user or role has the appropriate S3 permissions. Note that if this policy did not have the condition statement, the bucket would be accessible to any AWS account. As a best practice, AWS recommends only granting access to the principals that need it. As for next steps, I could edit the Principal section of the policy to restrict access to specific principals in my Machine Learning accounts. For more information, see Specifying a Principal in a Policy in the S3 documentation.
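To apply a policy like this programmatically, a minimal boto3 sketch is shown below; the bucket name, organization ID, and OU ID are the placeholder values used in this example.

import json
import boto3

s3 = boto3.client("s3")

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "TrainingDataS3ReadOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::training-data/*",
        "Condition": {
            "ForAnyValue:StringLike": {
                "aws:PrincipalOrgPaths": ["o-myorganization/*/ou-machinelearn/*"]
            }
        },
    },
}

# Attach the resource-based policy to the bucket.
s3.put_bucket_policy(Bucket="training-data", Policy=json.dumps(bucket_policy))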

Example: Restrict access to an IAM role to only accounts in an OU in my organization

The next example will show how to use aws:PrincipalOrgPaths to add another layer of security to your existing IAM role trust policies, ensuring only members of specific OUs may assume your roles.

For this example, say my company requires that only network security engineers can create or manage AWS Virtual Private Cloud (VPC) resources in my accounts. The network security team has a dedicated OU, ou-netsec, for their workloads. I have the same organization ID as the previous example, o-myorganization.

Each account in my organization has a dedicated IAM role, VPCManager, with the permissions needed to manage VPCs. I want to ensure that only my network security team, who use principals that are tagged as such, has access to the role. To do this, I edited the role trust policy for VPCManager, which defines who can access an IAM role. In this case, I added a condition to the policy to require that anyone assuming the role must come from an account in ou-netsec.

This is the trust policy I created for VPCManager:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "123456789012",
          "345678901234",
          "567890123456"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/JobRole": "NetworkAdmin"
        },
        "ForAnyValue:StringLike": {
          "aws:PrincipalOrgPaths": ["o-myorganization/*/ou-netsec/"]
        }
      }
    }
  ]
}

I started by adding the Effect, Principal, and Action to allow principals from three network security accounts to assume the role. To ensure they have the right job role, I added a condition that requires the JobRole=NetworkAdmin tag to be applied to principals before they can assume the role. Finally, as an added layer of security, I added a second condition that requires anyone assuming the role to come from an account in the network security OU. This final step acts as a check on the account IDs I listed: even if I accidentally provided an account that is not part of my organization, members of that account won’t be able to assume the role because they aren’t part of ou-netsec.

Though only members of the network security team may assume the role, it’s still possible for any principals with IAM permissions to modify it. As next steps, I could apply a Service Control Policy (SCP) that protects the role from modification and prevents other roles in the account from modifying VPCs. For more information, see How to use service control policies to set permission guardrails in the AWS Security Blog.

Summary

AWS offers tools to control access for individual principals, accounts, OUs, or entire organizations—this helps you manage permissions at the appropriate scale for your business. You can now use the aws:PrincipalOrgPaths condition key to control access to your resources based on OUs configured in AWS Organizations. For more information about these global condition keys and policy examples, read the IAM documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Identity and Access Management forum.

Want more AWS Security news? Follow us on Twitter.

Michael Switzer, Senior Product Manager AWS Identity


Mike Switzer is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. Mike holds a master’s degree in computational mathematics from the University of Washington.