Tag Archives: AWS IAM

New IAMCTL tool compares multiple IAM roles and policies

Post Syndicated from Sudhir Reddy Maddulapally original https://aws.amazon.com/blogs/security/new-iamctl-tool-compares-multiple-iam-roles-and-policies/

If you have multiple Amazon Web Services (AWS) accounts with AWS Identity and Access Management (IAM) roles that are supposed to be similar across those accounts, the roles can deviate over time from your intended baseline due to manual, out-of-band changes; this deviation is called drift. As part of regular compliance checks, you should confirm that these roles have no deviations. In this post, we present a tool called IAMCTL that you can use to extract the IAM roles and policies from two accounts, compare them, and report the differences and statistics. We'll explain how to use the tool and describe the key concepts.

Prerequisites

Before you install IAMCTL and start using it, here are a few prerequisites that need to be in place on the computer where you will run it:

To follow along in your environment, clone the files from the GitHub repository, and run the steps in order. You won’t incur any charges to run this tool.
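For example, assuming the aws-samples/aws-iamctl repository path referenced in the install step that follows, the clone command would be:

git clone https://github.com/aws-samples/aws-iamctl.git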

Install IAMCTL

This section describes how to install and run the IAMCTL tool.

To install and run IAMCTL

  1. At the command line, enter the following command:
    pip3 install git+ssh://git@github.com/aws-samples/aws-iamctl.git
    

    You will see output similar to the following.

    Figure 1: IAMCTL tool installation output

  2. To confirm that your installation was successful, enter the following command.
    iamctl -h
    

    You will see results similar to those in figure 2.

    Figure 2: IAMCTL help message

Now that you’ve successfully installed the IAMCTL tool, the next section will show you how to use the IAMCTL commands.

Example use scenario

Here is an example of how IAMCTL can be used to find differences in IAM roles between two AWS accounts.

A system administrator for a product team is trying to accelerate a product launch in the middle of testing cycles. Developers have found that the same version of their application behaves differently in the development environment as compared to the QA environment, and they suspect this behavior is due to differences in IAM roles and policies.

The application called “app1” primarily reads from an Amazon Simple Storage Service (Amazon S3) bucket, and runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance. In the development (DEV) account, the application uses an IAM role called “app1_dev” to access the S3 bucket “app1-dev”. In the QA account, the application uses an IAM role called “app1_qa” to access the S3 bucket “app1-qa”. This is depicted in figure 3.

Figure 3: Showing the “app1” application in the development and QA accounts

Setting up the scenario

To simulate this setup for the purpose of this walkthrough, you don’t need to create the EC2 instance or the S3 bucket; you only need to create the IAM role, its inline policy, and its trust policy.

As noted in the prerequisites, you will switch between the two AWS accounts by using the AWS CLI named profiles “dev-profile” and “qa-profile”, which are configured to point to the DEV and QA accounts respectively.
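If you haven’t configured these profiles yet, a minimal setup might look like the following (each command prompts for that account’s access key, secret key, and default region):

#create the two named profiles used in this walkthrough
aws configure --profile dev-profile
aws configure --profile qa-profile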

Start by using this command:

mkdir -p iamctl_test iamctl_test/dev iamctl_test/qa

The command creates a directory structure that looks like this:
iamctl_test
|– dev
|– qa

Now, switch to the dev folder to run all the following example commands against the DEV account, by using this command:

cd iamctl_test/dev

To create the required policies, first create a file named “app1_s3_access_policy.json” and add the following policy to it. You will use this file’s content as your role’s inline policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::app1-dev/shared/*"
            ]
        }
    ]
}

Second, create a file called “app1_trust_policy.json” and add the following policy to it. You will use this file’s content as your role’s trust policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now use the two files to create an IAM role named “app1_dev” in the DEV account by running the following commands in order:

#create role with trust policy

aws --profile dev-profile iam create-role --role-name app1_dev --assume-role-policy-document file://app1_trust_policy.json

#put inline policy to the role created above
 
aws --profile dev-profile iam put-role-policy --role-name app1_dev --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

In the QA account, the IAM role is named “app1_qa” and the S3 bucket is named “app1-qa”.

Repeat the steps from the prior example against the QA account, changing dev to qa in the file contents and commands as shown in the following samples. Change the directory to qa by using this command:

cd ../qa

To create the required policies, first create a file called “app1_s3_access_policy.json” and add the following policy to it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::app1-qa/shared/*"
            ]
        }
    ]
}

Next, create a file called “app1_trust_policy.json” and add the following policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now, use the two files to create an IAM role named “app1_qa” in your QA account by running the following commands in order:

#create role with trust policy
aws --profile qa-profile iam create-role --role-name app1_qa --assume-role-policy-document file://app1_trust_policy.json

#put inline policy to the role created above
aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

So far, you have two accounts with an IAM role created in each of them for your application. In terms of permissions, there are no differences other than the name of the S3 bucket resource the permission is granted against.

Assuming all other IAM roles and policies are the same, you can expect IAMCTL to generate a minimal set of differences between the DEV and QA accounts. To be sure about the current state of both accounts, you can run an IAMCTL process called baselining.

Through baselining, you will generate an equivalency dictionary of known string patterns that reduces noise in the generated deviations list. You will then introduce a change into one of the IAM roles in your QA account, followed by a final IAMCTL diff to show the deviations.

Baselining

Baselining is the process of bringing two accounts to an “equivalence” state for IAM roles and policies by establishing a baseline that future diff operations can use. The process is as simple as:

  1. Run the iamctl diff command.
  2. Capture all string substitutions into an equivalence dictionary to remove or reduce noise.
  3. Save the generated detailed files as a snapshot.

Now you can go through these steps for your baseline.

Go ahead and initialize IAMCTL for these two accounts by using the following commands.

#change directory from qa to iamctl-test
cd ..

#run iamctl init
iamctl init

The results of running the init command are shown in figure 4.

Figure 4: Output of the iamctl init command

If you look at the iamctl_test directory now, shown in figure 5, you can see that the init command created two files there.

Figure 5: The directory structure after running the init command

These two files are as follows:

  1. iam.json – A reference file that lists all AWS services and actions, among other things. IAMCTL uses this file to map each resource listed in an IAM policy to its corresponding AWS resource, based on Amazon Resource Name (ARN) regular expressions.
  2. equivalency_list.json – The default sample dictionary that IAMCTL uses to suppress false alarms when it compares two accounts. This is where you add the known string patterns that need to be substituted.

Note: A best practice is to make the directory where you store the equivalency dictionary, and from which you run IAMCTL, a Git repository. Doing this lets you capture additions or modifications to the equivalency dictionary as Git commits, which gives you an audit trail of your historical baselines as well as context for each change. However, this is not necessary for the regular functioning of IAMCTL.
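For example, a minimal one-time setup along those lines might look like this (run from the iamctl_test directory):

#optional: track the equivalency dictionary and baselines in Git
git init
git add equivalency_list.json
git commit -m "initial IAMCTL baseline"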

Next, run the iamctl diff command:

#run iamctl diff
iamctl diff dev-profile dev qa-profile qa
Figure 6: Result of diff command

Figure 6 shows the results of running the diff command. You can see that IAMCTL considers the app1_qa and app1_dev roles as unique to the DEV and QA accounts, respectively. This is because IAMCTL uses role names to decide whether to compare the role or tag the role as unique.

You will add the strings “dev” and “qa” to the equivalency dictionary to instruct IAMCTL to substitute occurrences of these two strings with “accountname”, by writing the following JSON to the equivalency_list.json file. This also replaces some defaults that are already present there.

echo '{"accountname":["dev","qa"]}' > equivalency_list.json

Figure 7 shows the equivalency dictionary before you take these actions, and figure 8 shows the dictionary after these actions.

Figure 7: Equivalency dictionary before

Figure 8: Equivalency dictionary after

There’s another thing to notice here. In this example, one common role was flagged as having a difference. To know which role this is and what the difference is, go to the detail reports folder listed at the bottom of the summary report. The directory structure of this folder is shown in figure 9.

Notice that the reports are created under your home directory, with a folder structure that reflects the run’s timestamp down to the second. IAMCTL does this to keep the output of each run unique.

tree /Users/<username>/aws-idt/output/2020/08/24/08/38/49/
Figure 9: Files written to the output reports directory

You can see there is a file called common_roles_in_dev_with_differences.csv, and it lists a role called “AwsSecurity***Audit”.

There is another file called dev_to_qa_common_role_difference_items.csv. It lists the granular IAM items from the DEV account that belong to the “AwsSecurity***Audit” role and that differ from QA. All entries in this file have the DEV account number in the resource ARN, whereas in the qa_to_dev_common_role_difference_items.csv file, all entries for the same “AwsSecurity***Audit” role have the QA account number.

Add both of the account numbers to the equivalency dictionary to substitute them with a placeholder number, because you don’t want this role to get flagged as having differences.

echo '{"accountname":["dev","qa"],"000000000000":["123456789012","987654321098"]}' > equivalency_list.json

Now, re-run the diff command.

#run iamctl diff
iamctl diff dev-profile dev qa-profile qa

As you can see in figure 10, the diff command now reports that the DEV account doesn’t have any differences in IAM roles as compared to the QA account.

Figure 10: Output showing no differences after completion of baselining

This concludes the baselining for your DEV and QA accounts. Now you will introduce a change.

Introducing drift

Drift occurs when the actual definition or configuration of a resource differs from its expected state. Drift happens for several reasons; for this scenario, you will mimic an intentional change made in response to a time-sensitive operational event, and use it to introduce drift into what you have built so far.

To simulate this change, add “s3:PutObject” to the qa app1_s3_access_policy.json file as shown in the following example.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::app1-qa/shared/*"
            ]
        }
    ]
}

Put this new inline policy on your existing role “app1_qa” by using this command:

aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

The following table represents the new drift in the accounts.

Action         DEV account (role: app1_dev)   QA account (role: app1_qa)
s3:Get*        Yes                            Yes
s3:List*       Yes                            Yes
s3:PutObject   No                             Yes

Next, run the iamctl diff command to see how it picks up the drift from your previously baselined accounts.

#change directory from qa to iamctl_test
cd ..
iamctl diff dev-profile dev qa-profile qa
Figure 11: Output showing the one deviation that was introduced

You can see that IAMCTL now shows that the QA account has one difference as compared to DEV, which is what we expect based on the deviation you’ve introduced.

Open the file qa_to_dev_common_role_difference_items.csv to look at the one difference. Again, adjust the following example path to match the output location from the iamctl diff command, shown at the bottom of the summary report in figure 11.

cat /Users/<username>/aws-idt/output/2020/09/18/07/38/15/qa_to_dev_common_role_difference_items.csv

As shown in figure 12, you can see that the file lists the specific S3 action “PutObject” with the role name and other relevant details.

Figure 12: Content of file qa_to_dev_common_role_difference_items.csv showing the one deviation that was introduced

You can use this information to remediate the deviation by performing corrective actions in either your DEV account or QA account. You can confirm the effectiveness of the corrective action by re-baselining to make sure that zero deviations appear.
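For example, if the QA change was unintended, you could revert to the baselined permissions by removing "s3:PutObject" from the QA copy of app1_s3_access_policy.json and re-applying the inline policy:

#re-apply the original (baselined) inline policy to the QA role
aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json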

Conclusion

In this post, you learned how to use the IAMCTL tool to compare IAM roles between two accounts and arrive at a granular list of meaningful differences that can be used for compliance audits or for further remediation actions. If you created your IAM roles by using an AWS CloudFormation stack, you can turn on drift detection to capture drift caused by changes made outside of AWS CloudFormation to those IAM resources. For more information about drift detection, see Detecting unmanaged configuration changes to stacks and resources. Lastly, see the GitHub repository where the tool is maintained, with documentation describing each of the subcommand concepts. We welcome pull requests for issues and enhancements.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sudhir Reddy Maddulapally

Sudhir is a Senior Partner Solution Architect who builds SaaS solutions for partners by day and is a tech tinkerer by night. He enjoys trips to state and national parks, and Yosemite is his favorite thus far!

Author

Soumya Vanga

Soumya is a Cloud Application Architect with AWS Professional Services in New York, NY, helping customers design solutions and workloads and adopt cloud-native services.

Enhance programmatic access for IAM users using a YubiKey for multi-factor authentication

Post Syndicated from Edouard Kachelmann original https://aws.amazon.com/blogs/security/enhance-programmatic-access-for-iam-users-using-yubikey-for-multi-factor-authentication/

Organizations are increasingly providing access to corporate resources from employee laptops, and they must apply the correct permissions to these computing devices so that secrets and sensitive data are adequately protected. For organizations that aren’t yet ready or able to use identity federation, combining Amazon Web Services (AWS) long-term credentials with a YubiKey security token for multi-factor authentication (MFA) is one option for providing secure programmatic access to AWS. For example, a user should be able to list AWS Identity and Access Management (IAM) roles with their default programmatic access, but would be required to provide MFA to assume an IAM role.

In this blog post, we show you how to use a YubiKey token for MFA with the AWS Command Line Interface (AWS CLI) to create temporary credentials with the permissions that developers need to perform tasks. The user will configure the long-term credentials and then temporarily assume a role with broader permissions by using MFA when needed. MFA adds extra security, because it requires users to provide second-factor authentication from an AWS-supported MFA mechanism in addition to static security credentials such as their user name and password.

The goal for any organization is to move to the recommended best practice for individual programmatic access: temporary security credentials that aren’t stored with the user but are generated dynamically and provided when requested, such as through identity federation. If your organization uses AWS Single Sign-On (AWS SSO) along with an identity provider (IdP) such as Okta, Azure Active Directory (AD), or AWS Managed Microsoft AD, you can use the instructions from this earlier blog post to leverage the AWS CLI v2 native integration with AWS SSO and take advantage of the multi-factor authentication support of your IdP.

Overview

This post describes the configuration of IAM users and roles and initialization of the YubiKey token as an MFA device by an administrator, and then how developers can use the YubiKey device to retrieve temporary credentials and assume a role with elevated permissions within the AWS CLI.

The overall process flow looks like this:

  1. Create an IAM user with programmatic access, MFA, and a policy that allows you to assume a more privileged IAM role. The user will retrieve a Time-based One-time Password (TOTP) token code by using a YubiKey as MFA.
  2. Assume the more privileged role, which is restricted by an MFA conditional, by using the TOTP token code.

Figure 1 shows the steps of the process.

Figure 1: A visual overview of the steps to assume roles with elevated permissions by using a YubiKey for MFA

Prerequisites

To get started you need:

  • An AWS account.
  • A YubiKey (available on Amazon.com). YubiKey 4 and 5 series are compatible, because they support the required OATH application.

    Note: The Yubico Security Keys (the blue tokens) aren’t supported, because they lack the OATH application. If you already have a corporate YubiKey device, this capability might have been disabled.

  • To complete the process for:

Notes:

  • AWS CLI v2 doesn’t yet support Universal 2nd Factor (U2F) MFA. As a workaround, we use a YubiKey as a virtual MFA device.
  • OATH (Initiative for Open Authentication) is an organization that specifies two open authentication standards: TOTP and HMAC-based One-time Password (HOTP). For this solution, we use the TOTP standard.
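As a standalone illustration of the TOTP standard (outside of the YubiKey flow), the open source oathtool utility, if you have it installed, derives the same kind of six-digit code from a base32 secret and the current time:

#illustrative only: JBSWY3DPEHPK3PXP is a well-known documentation example secret
oathtool --totp -b JBSWY3DPEHPK3PXP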

Getting started

Initializing YubiKey for MFA

The following steps show you, as cloud administrator, how to initialize the YubiKey as a virtual MFA device and configure an IAM user that can assume a role with elevated permissions, on the condition that the user is using an MFA device. In this example, your developers will assume a role with permissions to access Amazon Elastic Compute Cloud (Amazon EC2).

To configure the IAM user and initialize the YubiKey device as MFA

  1. Create a role with elevated permissions that your developers can assume.
    1. Sign in to the AWS IAM console, and in the right-hand pane, choose Roles. Then choose Create role.

      Figure 2: Create a role in the IAM console

    2. For the type of trusted entity, choose Another AWS account. Enter your account ID, which you can find by using the methods described in the IAM User Guide. Choose Next: Permissions.

      Figure 3: Select the type of trusted entity and provide the account ID

    3. Search for the AmazonEC2FullAccess policy, and select the check box next to it. Choose Next: Tags, and add relevant tags if needed. Choose Next: Review.
    4. Name the role developer-ec2-mfa, and then choose Create role.
    5. Go back to the role you just created. Change the maximum session duration value to limit how long the developer’s session can be valid after assuming the role. For this example, we set the duration to 1 hour (3,600 seconds) by using a custom value. Limit this duration to abide by your organization’s recommended authentication time.
    6. Take note of the Amazon Resource Name (ARN) for the new role as shown on the summary page.

      Figure 4: Summary page of the new role

  2. Create a new IAM policy that provides a limited scope of actions for users when they use their long-term credentials.
    1. Navigate to the AWS IAM console, and in the navigation pane, choose Policies. Choose Create policy.

      Figure 5: Create a policy in the IAM console

    2. Because we’ve already written the policy in JSON, you don’t need to use the Visual Editor; choose the JSON tab and paste the content of the following JSON policy document (remember to replace the placeholder for the role ARN). Following the least privilege approach, add only the Amazon Resource Names (ARNs) of the role or roles with the required elevated permissions that the developer will be able to assume. In this case, use the ARN of the developer-ec2-mfa role that you created previously.
      {
         "Version": "2012-10-17",
         "Statement": [
            {
               "Sid": "",
               "Effect": "Allow",
               "Action": "sts:AssumeRole",
               "Resource": <Elevated Role ARN(s)>,
               "Condition": {
                  "Bool": {
                     "aws:MultiFactorAuthPresent": "true"
                  }
               }
            }
         ]
      }
      
      

      Note: The condition “aws:MultiFactorAuthPresent”: “true” requires that the user who assumes the role has been authenticated with an AWS MFA device.

    3. Choose Review policy.
    4. Name the policy yubi-policy-mfa-level-one. Choose Create policy.
  3. Create a new IAM group that lets you specify permissions for multiple users and makes it easier to manage the permissions for those users.
    1. Navigate to the IAM console, and in the navigation pane, choose Groups. Choose Create New Group.

      Figure 6: Create a group in the IAM console

    2. Enter developers-mfa as the group name. Choose Next Step.
    3. On the Attach Policy screen, in the filter box, search for the policy yubi-policy-mfa-level-one that you created in the previous step. Make sure you select the check box next to the policy, and then choose Next Step.

      Figure 7: Attach the policy to the IAM group

    4. Review the group information, and then choose Create Group.
  4. Create an IAM user for the developer, who will use the AWS CLI.
    1. Navigate to the IAM console and in the navigation pane, choose Users. Choose Add user.
    2. On the Add user screen, enter the name for your user. In this example, our developer is named JohnDoe. For Access type, select the check box next to Programmatic access. Choose Next: Permissions.

      Figure 8: Create an IAM user with programmatic access

    3. For permissions, select Add user to group, and select the developers-mfa group. Choose Next: Tags.
    4. Add the relevant tags if needed, and then choose Next: Review.
    5. Review the user configuration, and then choose Create user.
    6. Make sure you save the access key ID and secret access key to share with your user. Choose Close.
  5. Assign an MFA device to the user.
    1. Go back to the Users section of the IAM console. Choose the IAM user that you created previously, and go to the Security credentials tab. For Assigned MFA device, choose Manage.

      Figure 9: Assign MFA device to the IAM user

    2. Select Virtual MFA device, because the AWS CLI doesn’t yet support U2F MFA. Choose Continue.

      Figure 10: Select the Virtual MFA device type

    3. Instead of using the QR code, choose Show secret key.

      Note: The secret key is a randomly generated string shared between IAM and the physical YubiKey. It is used to generate a one-time password using a hash function with the current timestamp.

       

      Figure 11: Retrieve the secret key on the virtual MFA device configuration page

    4. Copy the secret key to use in the next step as the MFA_SECRET to configure the MFA device.
  6. To obtain the TOTP token codes from the YubiKey to synchronize the key with the IAM user, do the following.
    1. Insert the YubiKey token in your USB port, and verify that the OATH application is enabled for your YubiKey by running the following command and looking for Enabled USB interfaces: OTP+FIDO+CCID in the output.
      $ ykman info
      
      Device type: YubiKey 5 NFC
      Serial number: 123456789
      Firmware version: 5.2.4
      Form factor: Keychain (USB-A)
      Enabled USB interfaces: OTP+FIDO+CCID
      NFC interface is enabled.
      
      Applications USB NFC
      OTP Enabled Enabled
      FIDO U2F Enabled Enabled
      OpenPGP Enabled Enabled
      PIV Enabled Enabled
      OATH Enabled Enabled
      FIDO2 Enabled Enabled
      

    2. For each MFA device, you need to generate a unique identifier that will be used during the process. We recommend that you create this identifier based on the ARN of the IAM user, by using the following template.
      arn:aws:iam::<ACCOUNT_ID>:mfa/<IAM_USERNAME>
      

    3. Add a new credential to your YubiKey based on the MFA device ARN. Use the MFA_SECRET that you copied in the previous step (step 5).
      ykman oath add -t arn:aws:iam::<ACCOUNT_ID>:mfa/<IAM_USERNAME> <MFA_SECRET>
      

    4. Obtain two TOTP token codes by using the following command (remember to replace the placeholder for the <MFA device ARN>). Wait up to 30 seconds for the device to generate the second token code (you will be prompted to touch the token).
      ykman oath code <MFA device ARN>
      

    5. After obtaining each of the TOTP token codes, go back to the IAM console where you were setting up the virtual MFA device, and enter the code in the MFA code box. After entering the two MFA codes, choose Assign MFA.

      Figure 12: Enter the two consecutive YubiKey codes in the virtual MFA device configuration page

  7. You can then provide the following information to your developer:
    1. The YubiKey device along with the generated MFA device ARN
    2. The ARNs for the roles that will be assumed
    3. The long-term AWS credentials

Assuming a role with the YubiKey as MFA

The following steps show how you, as a developer, can retrieve temporary credentials using the YubiKey device as MFA, and assume a role with wider permissions. You can do this after the YubiKey device, one or more role ARNs, and long-term credentials have been shared with you by the cloud administrator.

To assume a role with broader permissions by using YubiKey

  1. As part of the prerequisites, you should have the AWS CLI v2 already installed. Now configure the default profile with the long-term credentials provided by your cloud administrator, by using the following command.
    $ aws configure
    
    AWS Access Key ID [None]: <Enter your AWS access key>
    AWS Secret Access Key [None]: <Enter your AWS secret access key>
    Default region name [None]: <Enter your AWS default region>
    Default output format [None]: <Enter your output default format>
    

  2. Obtain a TOTP code from the YubiKey (you will be prompted to touch the token). Submit your request immediately after generating the code; if you wait too long, the code won’t be valid anymore.
    ykman oath code arn:aws:iam::<ACCOUNT_ID>:mfa/<IAM_USERNAME>
    

  3. Using the MFA token code you obtained by using the YubiKey, assume the relevant role that will provide access to larger permissions. In our example, the ARN is for the role developer-ec2-mfa that was provided by the IAM administrator. Enter a role session name that will uniquely identify a session when the same role is assumed by different principals.
    aws sts assume-role --role-arn <Role ARN> --role-session-name <Role Name> --serial-number <MFA device ARN> --token-code <token code> --duration-seconds 3600 
    

    Note: The user should only have access to sts:AssumeRole for a specific set of roles. Here we chose the session duration of one hour. You can edit the session duration so the developer can authenticate for the duration of a workday (the default value is 1 hour and can be up to 12 hours). Limit this duration to abide by your organization’s recommended authentication time.

    You should see the following output.

    {
       "AssumedRoleUser": {
          "AssumedRoleId": "ABCD123ABCDEFGHIJKLMN:<role-session-name>",
          "Arn": "arn:aws:sts::<ACCOUNT_ID>:assumed-role/developer-ec2-mfa/<role-session-name>"
       },
       "Credentials": {
          "SecretAccessKey": <aws_secret_access_key>,
          "SessionToken": <aws_session_token>,
          "Expiration": "2020-07-13T19:24:20Z",
          "AccessKeyId": <aws_access_key_id>
       }
    }
    

  4. Create a new AWS CLI profile named johndoe-developer-role as shown following. Copy the access key and secret key that were returned as temporary credentials by the assume-role command. Then set the additional parameter aws_session_token, which was returned along with the temporary credentials. Edit your CLI configuration with the information for the new role.
    aws configure --profile johndoe-developer-role
    aws configure --profile johndoe-developer-role set aws_session_token <Session Token> 
    

  5. Attempt to make a call to relevant services that are allowed by the newly assumed role. Here’s an example using the Amazon EC2 API to describe the EC2 instances.
    aws --profile johndoe-developer-role ec2 describe-instances
    

The developer now has access to the larger permissions set through the assumed role for the next hour.

Summary

In this post, we introduced the capability to further secure long-term AWS credentials with a YubiKey for MFA, for organizations that are still using long-term credentials. These credentials are stored in the ~/.aws/credentials file. If an unauthorized user were able to retrieve these long-term credentials, they couldn’t use them to assume a role with broader permissions, because they would also need the physical MFA device. The steps in this blog post can be converted to a script that your developers can use repeatedly to simplify the process.
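For example, a minimal sketch of such a helper script, assuming ykman and jq are installed and with placeholder ARNs for you to replace, might look like this:

#!/usr/bin/env bash
#sketch: get a TOTP code from the YubiKey, assume the role,
#and store the temporary credentials in a named CLI profile
set -euo pipefail

MFA_ARN="arn:aws:iam::<ACCOUNT_ID>:mfa/<IAM_USERNAME>"
ROLE_ARN="arn:aws:iam::<ACCOUNT_ID>:role/developer-ec2-mfa"
PROFILE="johndoe-developer-role"

#-s prints only the code; you will be prompted to touch the token
TOKEN=$(ykman oath code -s "$MFA_ARN")

CREDS=$(aws sts assume-role \
    --role-arn "$ROLE_ARN" \
    --role-session-name "$PROFILE" \
    --serial-number "$MFA_ARN" \
    --token-code "$TOKEN" \
    --duration-seconds 3600 \
    --query Credentials --output json)

aws configure set aws_access_key_id "$(echo "$CREDS" | jq -r .AccessKeyId)" --profile "$PROFILE"
aws configure set aws_secret_access_key "$(echo "$CREDS" | jq -r .SecretAccessKey)" --profile "$PROFILE"
aws configure set aws_session_token "$(echo "$CREDS" | jq -r .SessionToken)" --profile "$PROFILE"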

In general, we recommend that all customers move away from using IAM users and static credentials and instead use IAM roles and temporary credentials wherever possible. An easy way to get started down that road is by using AWS SSO for identity federation.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Edouard Kachelmann

Edouard is an Enterprise Solutions Architect at Amazon Web Services. He is a passionate technology enthusiast who enjoys working with customers and helping them build innovative solutions. Prior to his work at AWS, Edouard worked for the French National Cybersecurity Agency, sharing its security expertise and assisting government departments and operators of vital importance. In his free time, Edouard likes to explore new places to eat, try new French recipes, and play with his kid.

Author

Anthony Pasquariello

Anthony is an Enterprise Solutions Architect based in New York City. He provides technical consultation to customers during their cloud journey, especially around security best practices. He has an MS and BS in electrical & computer engineering from Boston University. In his free time, he enjoys ramen, writing nonfiction, and philosophy.

How to use trust policies with IAM roles

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/

AWS Identity and Access Management (IAM) roles are a significant component of the way customers operate in Amazon Web Services (AWS). In this post, I’ll dive into the details of how cloud security architects and account administrators can protect IAM roles from misuse by using trust policies. By the end of this post, you’ll know how to use trust policies with IAM roles to build guardrails that work at scale and control access to resources in your organization.

In general, there are four different scenarios where you might use IAM roles in AWS:

  • One AWS service accesses another AWS service – When an AWS service needs access to other AWS services or functions, you can create a role that will grant that access.
  • One AWS account accesses another AWS account – This use case is commonly referred to as a cross-account role pattern. This allows human or machine IAM principals from other AWS accounts to assume this role and act on resources in this account.
  • A third-party web identity needs access – This use case allows users with identities in third-party systems like Google and Facebook, or Amazon Cognito, to use a role to access resources in the account.
  • Authentication using SAML2.0 federation – This is commonly used by enterprises with Active Directory that want to connect using an IAM role so that their users can use single sign-on workflows to access AWS accounts.

In all cases, the makeup of an IAM role is the same as that of an IAM user and is only differentiated by the following qualities:

  • An IAM role does not have long-term credentials associated with it; rather, a principal (an IAM user, machine, or other authenticated identity) assumes the IAM role and inherits the permissions assigned to that role.
  • The tokens issued when a principal assumes an IAM role are temporary. Their expiration reduces the risks associated with credentials leaking and being reused.
  • An IAM role has a trust policy that defines which conditions must be met to allow other principals to assume it. This trust policy reduces the risks associated with privilege escalation.

Recommendation: You should make extensive use of IAM roles and their temporary credentials rather than permanent credentials such as IAM users. For more information, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html

While the list of users having access to your AWS accounts can change over time, the roles used to manage your AWS account probably won’t. The use of IAM roles essentially decouples your enterprise identity system (SAML 2.0) from your permission system (AWS IAM policies), simplifying management of each.

Managing access to IAM roles

Let’s dive into how you can create relationships between your enterprise identity system and your permissions system by looking at the policy types you can apply to an IAM role.

An IAM role has three places where it uses policies:

  • Permission policies (inline and attached) – These policies define the permissions that a principal assuming the role is able (or restricted) to perform, and on which resources.
  • Permissions boundary – A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based permission policies and its permissions boundaries.
  • Trust relationship – This policy defines which principals can assume the role, and under which conditions. This is sometimes referred to as a resource-based policy for the IAM role. We’ll refer to this policy simply as the ‘trust policy’.

A role can be assumed by a human user or a machine principal, such as an Amazon Elastic Compute Cloud (Amazon EC2) instance or an AWS Lambda function. Over the rest of this post, you’ll see how you can use trust policies to restrict the conditions under which principals can assume roles.

An example of a simple trust policy

A common use case is when you need to provide security audit access to your account, allowing a third party to review the configuration of that account. After attaching the relevant permission policies to an IAM role, you need to add a cross-account trust policy to allow the third-party auditor to make the sts:AssumeRole API call to elevate their access in the audited account. The following trust policy shows an example policy created through the AWS Management Console:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

As you can see, it has the same structure as other IAM policies with Effect, Action, and Condition components. It also has the Principal parameter, but no Resource attribute. This is because the resource, in the context of the trust policy, is the IAM role itself. For the same reason, the Action parameter will only ever be set to one of the following values: sts:AssumeRole, sts:AssumeRoleWithSAML, or sts:AssumeRoleWithWebIdentity.

Note: The suffix root in the policy’s Principal attribute equates to “authenticated and authorized principals in the account,” not the special and all-powerful root user principal that is created when an AWS account is created.

Using the Principal attribute to reduce scope

In a trust policy, the Principal attribute indicates which other principals can assume the IAM role. In the example above, 111122223333 represents the AWS account number for the auditor’s AWS account. In effect, this allows any principal in the 111122223333 AWS account with sts:AssumeRole permissions to assume this role.

To restrict access to a specific IAM user, you can define the trust policy like the following example, which would allow only the IAM user LiJuan in the 111122223333 account to assume this role. LiJuan would also need to have sts:AssumeRole permissions attached to their IAM user for this to work:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/LiJuan"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

The principals set in the Principal attribute can be any principal defined by the IAM documentation, and can refer to an AWS or a federated principal. You cannot use a wildcard (“*” or “?”) within a Principal for a trust policy, other than one special case, which I’ll come back to in a moment. You must define precisely which principal you are referring to, because when you submit your trust policy a translation occurs that ties it to each principal’s hidden principal ID, and that translation can’t happen if there are wildcards in the principal.

The only scenario where you can use a wildcard in the Principal parameter is where the parameter value is only the “*” wildcard. Use of the global wildcard “*” for the Principal isn’t recommended unless you have clearly defined Conditional attributes in the policy statement to restrict use of the IAM role, since doing so without Conditional attributes permits assumption of the role by any principal in any AWS account, regardless of who that is.

Using identity federation on AWS

Federated users from SAML 2.0 compliant enterprise identity services are given permissions to access AWS accounts through the use of IAM roles. While the user-to-role configuration of this connection is established within the SAML 2.0 identity provider, you should also put controls in the trust policy in IAM to reduce any abuse.

Because the Principal attribute contains configuration information about the SAML mapping (in the case of Active Directory), you need to use the Condition attribute in the trust policy to restrict use of the role from the AWS account management perspective. This can be done by restricting the SourceIp address, as demonstrated later, or by using one or more of the SAML-specific Condition keys available. My recommendation is to be as specific as is practical in reducing the set of principals that can use the role. This is best achieved by adding qualifiers into the Condition attribute of your trust policy.

There’s a very good guide on creating roles for SAML 2.0 federation that contains a basic example trust policy you can use.
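For reference, a basic trust policy for SAML 2.0 federation generally looks like the following; the identity provider name ExampleIdP is a placeholder for your own provider:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:saml-provider/ExampleIdP"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    }
  ]
}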

Using the Condition attribute in a trust policy to reduce scope

The Condition statement in your trust policy sets additional requirements for the Principal trying to assume the role. If you don’t set a Condition attribute, the IAM engine will rely solely on the Principal attribute of this policy to authorize role assumption. Given that it isn’t possible to use wildcards within the Principal attribute, the Condition attribute is a really flexible way to reduce the set of users that are able to assume the role without necessarily specifying the principals.

Limiting role use based on an identifier

Occasionally teams managing multiple roles can become confused as to which role achieves what and can inadvertently assume the wrong role. This is referred to as the Confused Deputy problem. This next section shows you a way to quickly reduce this risk.

The following trust policy requires that principals from the 111122223333 AWS account have provided a special phrase when making their request to assume the role. Adding this condition reduces the risk that someone from the 111122223333 account will assume this role by mistake. This phrase is configured by specifying an ExternalID conditional context key.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "ExampleSpecialPhrase"
        }
      }
    }
  ]
}

In the example trust policy above, the value ExampleSpecialPhrase isn’t a secret or a password. Adding the ExternalID condition prevents the role from being assumed through the console, because the only way to add the ExternalID argument to the role assumption call is to use the AWS Command Line Interface (AWS CLI) or a programming interface. Having this condition doesn’t prevent a user who knows about the relationship and the ExternalID from assuming what might be a privileged set of permissions, but it does help manage risks like the Confused Deputy problem. I see customers using an ExternalID that matches the name of the AWS account, which helps ensure that an operator is working on the account they believe they’re working on.
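For example, the assuming principal would pass the phrase through the --external-id parameter of the AWS CLI (the role ARN shown is a placeholder):

aws sts assume-role --role-arn arn:aws:iam::444455556666:role/cross-account-audit --role-session-name audit-session --external-id ExampleSpecialPhrase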

Limiting role use based on multi-factor authentication

By using the Condition attribute, you can also require that the principal assuming this role has passed a multi-factor authentication (MFA) check before they’re permitted to use this role. This again limits the risk associated with mistaken use of the role and adds some assurances about the principal’s identity.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}

In the example trust policy above, I also introduced the MultiFactorAuthPresent conditional context key. Per the AWS global condition context keys documentation, the MultiFactorAuthPresent conditional context key does not apply to sts:AssumeRole requests in the following contexts:

  • When using access keys in the CLI or with the API
  • When using temporary credentials without MFA
  • When a user signs in to the AWS Console
  • When services (like AWS CloudFormation or Amazon Athena) reuse session credentials to call other APIs
  • When authentication has taken place via federation

In the example above, the use of the BoolIfExists qualifier to the MultiFactorAuthPresent conditional context key evaluates the condition as true if:

  • The principal type can have an MFA attached, and does.
    or
  • The principal type cannot have an MFA attached.

This is a subtle difference but makes the use of this conditional key in trust policies much more flexible across all principal types.

Limiting role use based on time

During activities like security audits, it’s quite common for the activity to be time-bound and temporary. There’s a risk that the IAM role could be assumed even after the audit activity concludes, which might be undesirable. You can manage this risk by adding a time condition to the Condition attribute of the trust policy. This means that rather than having to remember to disable the IAM role immediately after the activity, customers can build the date restriction into the trust policy itself, like so:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-01T12:00:00Z"
        },
        "DateLessThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}

Limiting role use based on IP addresses or CIDR ranges

If the auditor for a security audit is using a known fixed IP address, you can build that information into the trust policy, further reducing the opportunity for the role to be assumed by unauthorized actors calling the assumeRole API function from another IP address or CIDR range:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}

Limiting role use based on tags

IAM tagging capabilities can also help you build flexible and adaptive trust policies that create an attribute-based access control (ABAC) model for IAM management. You can build trust policies that only permit principals that have already been tagged with a specific key and value to assume a specific role. The following example requires that IAM principals in the AWS account 111122223333 be tagged with department = OperationsTeam in order to assume the IAM role.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/department": "OperationsTeam"
        }
      }
    }
  ]
}

If you want to create this effect, I highly recommend the PrincipalTag pattern above, but you must also be cautious about which principals are then given the iam:TagUser, iam:TagRole, iam:UntagUser, and iam:UntagRole permissions. You might even use the aws:PrincipalTag condition within the permissions boundary policy to restrict principals’ ability to retag their own IAM principal or that of another IAM role they can assume.
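For example, an administrator could apply that tag to the IAM user LiJuan with the following AWS CLI command:

aws iam tag-user --user-name LiJuan --tags Key=department,Value=OperationsTeam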

Limiting or extending access to a role based on AWS Organizations

Since its announcement in 2016, almost every enterprise customer I work with uses AWS Organizations. This AWS service allows customers to create an organizational structure for their accounts by creating hard boundaries to manage blast-radius risks, among other advantages. You can use the PrincipalOrgID condition to limit assumption of an organization-wide core IAM role.

Caution: As you’ll see in the example below, you need to set the Principal attribute to “*” to do this, which would, without the conditional restriction, allow all role assumption requests to be accepted for this role, irrespective of the source of that assumption request. For that reason, be especially careful about the use of this pattern.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-abcd12efg1"
        }
      }
    }
  ]
}

It isn’t practical to write out all the AWS account identifiers into a trust policy, and because of the way policies like this are evaluated, you can’t include wildcard characters for the account number in the principal’s account number field. The use of the PrincipalOrgID global condition context key provides us with a neat and dynamic mechanism to create a short policy statement.
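If you don’t know your organization ID, you can retrieve it with the following AWS CLI command:

aws organizations describe-organization --query Organization.Id --output text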

Role chaining

There are instances where a third party might themselves be using IAM roles, or where an AWS service resource that has already assumed a role needs to assume another role (perhaps in another account), and customers might need to allow only specific IAM roles in that remote account to assume the IAM role you create in your account. You can use role chaining to build permitted role escalation routes using role assumption from within the same account or AWS organization, or from third-party AWS accounts.

Consider the following trust policy example where I use a combination of the Principal attribute to scope down to an AWS account, and the aws:UserId global conditional context key to scope down to a specific role using its RoleId. To capture the RoleId for the role you want to be able to assume, you can run the following command using the AWS CLI:


# aws iam get-role --role-name CrossAccountAuditor
{
    "Role": {
        "Path": "/",
        "RoleName": "CrossAccountAuditor",
        "RoleId": "ARO1234567123456D",
        "Arn": "arn:aws:iam::111122223333:role/CrossAccountAuditor",
        "CreateDate": "2017-08-31T14:24:20+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::111122223333:root"
                    },
                    "Action": "sts:AssumeRole",
                    "Condition": {
                        "StringEquals": {
                            "sts:ExternalId": "ExampleSpecialPhrase"
                        }
                    }
                }
            ]
        }
    }
}

Here is the example trust policy that limits to only the CrossAccountAuditor role from AWS Account 111122223333.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:userId": "ARO1234567123456D:*"
        }
      }
    }
  ]
}

If you’re using an IAM user and have assumed the CrossAccountAuditor IAM role, the policy above will work through the AWS CLI with a call to aws sts assume-role and through the console.

This type of trust policy also works for services like Amazon EC2, allowing those instances using their assigned instance profile role to assume a role in another account to perform actions. We’ll touch on this use case later in the post.

Putting it all together

AWS customers can use combinations of all the above Principal and Condition attributes to hone the trust they’re extending out to any third party, or even within their own organization. They might create an accumulated trust policy for an IAM role which achieves the following effect:

Allows only a user named PauloSantos, in AWS account number 111122223333, to assume the role if they have also authenticated with an MFA, are logging in from an IP address in the 203.0.113.0/24 CIDR range, and the date is between noon of September 1, 2020, and noon of September 7, 2020.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:user/PauloSantos"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "true"
        },
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        },
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-01T12:00:00Z"
        },
        "DateLessThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}

I’ve seen customers use this to create IAM users who have no permissions attached besides sts:AssumeRole. Trust relationships are then configured between the IAM users and the IAM roles, creating ultimate flexibility in defining who has access to what roles without needing to update the IAM user identity pool at all.

A word on Effect: Deny and NotPrincipal in IAM role trust policies

I have seen some customers make use of an “Effect”: “Deny” clause in their trust policies. This pattern can help manage a wildcard statement in another “Effect”: “Allow” clause of the same trust policy. However, this isn’t the best approach for most scenarios. You will typically be able to define each principal in your policy as being allowed access. An exception is where you have a clause that uses the global wildcard “*” as a principal, in which case it will be necessary to add Deny statements to further filter the access.

Putting a wildcard into the Principal attribute of an Allow policy statement, particularly in trust policies, can be dangerous if you haven’t done a robust job of managing the Condition attribute in the same statement. Be as specific as possible in your Allow statement, and use Principal attributes first, rather than relying on Deny statements to manage potential security gaps created by your use of wildcards.

The following trust policy allows all IAM principals within the o-abcd12efg1 organization to assume the IAM role, but only if it’s before September 7, 2020:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-abcd12efg1"
        }
      }
    },
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}

The use of NotPrincipal in trust policies

You can also build a NotPrincipal element into your trust policies. Again, this is rarely the best choice, because it can introduce unnecessary complexity and confusion into your policies. You can avoid that problem by using fairly simple and prescriptive Principal statements instead.

Statements with NotPrincipal are typically paired with a Deny effect as well, which can create quite baffling policy logic that, if misunderstood, could create unintended opportunities for misuse or abuse.

Here’s an example where you might think to use Deny and NotPrincipal in a trust policy, but notice that it has the same effect as allowing arn:aws:iam::111122223333:role/CoreAccess in a single Allow statement. In general, Deny with NotPrincipal statements in trust policies create unnecessary complexity, and should be avoided.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::111122223333:role/CoreAccess"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Remember, your Principal attribute should be very specific, to reduce the set of principals able to assume the role, and an IAM role trust policy won’t permit access unless a corresponding Allow statement is explicitly present in the trust policy. It’s better to rely on the default deny policy evaluation logic where you’re able, rather than introducing unnecessary complexity into your policy logic.

Creating trust policies for AWS services that assume roles

There are two types of contexts where AWS services need access to IAM roles to function:

  1. Resources managed by an AWS service (like Amazon EC2 or Lambda, for example) need an IAM role so that they have permissions to execute functions on other AWS resources.
  2. An AWS service that abstracts its functionality from other AWS services, like Amazon Elastic Container Service (Amazon ECS) or Amazon Lex, needs access to execute functions on AWS resources. These are called service-linked roles and are a special case that’s out of the scope of this post.

In both contexts, you have the service itself as an actor. The service assumes your IAM role so it can provide your credentials to your Lambda function (the first context) or use those credentials to do things (the second context). In the same way that IAM roles provide human operators with an escalation mechanism, as in the examples above, AWS resources such as Lambda functions, Amazon EC2 instances, and even AWS CloudFormation require the same mechanism. You can find more information about how to create IAM roles for AWS services here.

An IAM role for a human operator and one for an AWS service are exactly the same, even though they have a different principal defined in the trust policy. The policy’s Principal defines the AWS service that is permitted to assume the role for its function.

Here’s an example trust policy for a role designed for an Amazon EC2 instance to assume. You can see that the principal provided is the ec2.amazonaws.com service:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Every configuration of an AWS resource should be passed a specific role unique to its function. So, if you have two Amazon EC2 launch configurations, you should design two separate IAM roles, even if the permissions they require are currently the same. This allows each configuration to grow or shrink the permissions it requires over time, without needing to reattach IAM roles to configurations, which might create a privilege escalation risk. Instead, you update the permissions attached to each IAM role independently, knowing that it will only be used by that one service resource. This helps reduce the potential impact of risks. Automating your management of roles will help here, too.
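As a sketch of what that separation might look like with the AWS CLI (the role, instance profile, and file names are placeholder values), each launch configuration gets its own role and instance profile:

# Create a dedicated role for a single launch configuration,
# using the EC2 trust policy shown above (saved locally as a file)
aws iam create-role \
    --role-name EC2-Webserver-AppA \
    --assume-role-policy-document file://ec2-trust-policy.json

# Wrap the role in an instance profile so EC2 can use it
aws iam create-instance-profile \
    --instance-profile-name EC2-Webserver-AppA
aws iam add-role-to-instance-profile \
    --instance-profile-name EC2-Webserver-AppA \
    --role-name EC2-Webserver-AppA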

Several customers have asked if it’s possible to design a trust policy for an IAM role such that it can only be passed to a specific Amazon EC2 instance. This isn’t directly possible. You cannot place the Amazon Resource Name (ARN) for an EC2 instance into the Principal of a trust policy, nor can you use tag-based condition statements in the trust policy to limit the ability for the role to be used by a specific resource.

The only option is to manage access to the iam:PassRole action within the permission policy for those IAM principals you expect to be attaching IAM roles to AWS resources. This special Action is evaluated when a principal tries to attach another IAM role to an AWS service or AWS resource.

You should use restrictions on access to the iam:PassRole action with permission policies and permission boundaries. This means that the ability to attach roles to instance profiles for Amazon EC2 is limited, rather than using the trust policy on the role assumed by the EC2 instance to achieve this. This approach makes it much easier to manage scaling for both those principals attaching roles to EC2 instances, and the instances themselves.

For example, you could use the following permission policy to limit the associated principal to attaching to Amazon EC2 instances only those roles whose names are prefixed with EC2-Webserver-:


{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:GetRole",
            "iam:PassRole"
        ],
        "Resource": "arn:aws:iam::111122223333:role/EC2-Webserver-*"
    }]
}
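With that policy in place, a principal could launch an instance with a conforming instance profile, while passing any role outside the EC2-Webserver- prefix would be denied. A minimal sketch, assuming a hypothetical AMI ID and the instance profile created for the role above:

# The iam:PassRole check applies to the role inside this instance profile;
# the AMI ID and profile name below are placeholder values
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --iam-instance-profile Name=EC2-Webserver-AppA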

Conclusion

You now have all the tools you need to build robust and effective trust policies that work at scale, providing guardrails for your users and those who might want to access resources in your account from outside your organization.

Policy logic isn’t always simple, and I encourage you to use sandbox accounts to try out your ideas. In general, simplicity should win over cleverness. IAM policies and statements that are frugal in their use of policy language can still be difficult for other IAM administrators to read, interpret, and update in the future. Keeping your trust policies simple helps to build IAM relationships everyone understands and can manage, and use, effectively.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Securing resource tags used for authorization using a service control policy in AWS Organizations

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/securing-resource-tags-used-for-authorization-using-service-control-policy-in-aws-organizations/

In this post, I explain how you can use attribute-based access controls (ABAC) in Amazon Web Services (AWS) to help provision simple, maintainable access controls to different projects, teams, and workloads as your organization grows. ABAC gives you access to granular permissions and employee-attribute based authorization. By using ABAC, you need fewer AWS Identity and Access Management (IAM) roles and have less administrative overhead. The attributes used by ABAC in AWS are called tags, which are metadata associated with the principals (users or roles) and resources within your AWS account. Securing tags designated for authorization is important because they’re used to grant access to your resources. This post describes an approach to secure these tags across all of your AWS accounts through the use of AWS Organizations service control policies (SCPs) so that only authorized users can modify a resource’s tags. Organizations helps you centrally govern your accounts as you grow and scale your workloads on AWS; SCPs are a feature of Organizations that restricts what services and actions are allowed in your accounts. Together they provide flexible account management and security features that help you centrally secure your business.

Let’s say you want to give users a standard way to access their accounts and resources in AWS. For authentication, you want to continue using your existing identity provider (IdP). For authorization, you’ll need a method that can be scaled to meet the needs of your organization. To accomplish this, you can use ABAC in AWS, which requires fewer roles, helps you control permissions for many resources at once, and allows for simple access rules to different projects and teams. For example, all developers in your organization can have a project name assigned to their identities as a tag. The tag will be used as the basis for access to AWS resources via ABAC. Because access is granted based on these tags, they need to be secured from unauthorized changes.

Securing authorization tags

Using ABAC as your access control strategy for AWS depends on securing the tags associated with identities in your AWS organization’s accounts as well as with the resources those identities need to access. When users sign in, the authentication process looks something like this:
 

Figure 1: Using tags for secure access to resources

  1. Their identities are passed into AWS through federation.
  2. Federation is implemented through SAML assertions.
  3. Their identities each assume an AWS IAM role (via AWS Security Token Service).
  4. The project attribute is passed to AWS as a session tag named access-project (Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS has full details).
  5. The session tag acts as a principal tag, and if its value matches the access-project authorization tag of the resource, the user is given access to that resource.

To ensure that the role’s session tags persist even after assuming subsequent roles, you also need to set the TransitiveTagKeys SAML attribute.
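Outside of federation, the same mechanics can be sketched with the AWS CLI, which accepts session tags and transitive tag keys directly (the role ARN and tag value below are placeholders):

# Pass access-project as a session tag and mark it transitive
# so that it survives any subsequent role chaining
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/developer \
    --role-session-name dev-session \
    --tags Key=access-project,Value=web-app \
    --transitive-tag-keys access-project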

Now that identity tags are persisted, you need to ensure that the resource tags used for authorization are secured. For the sake of brevity, let’s call these authorization tags. Let’s assume that you intend to use ABAC for all the accounts in your AWS organization and to control access through an SCP. The SCP must ensure that resource authorization tags are initially set to an authorized value, and it must secure the tags against modification once set.

Authorization tag access control requirements

Before you implement a policy to secure your resource’s authorization tag from unintended modification, you need to define the requirements that the policy will enforce:

  • After the authorization tag is set on a resource:
    • It cannot be modified except by an IAM administrator.
    • Only a principal whose authorization tag key and value pair exactly match the key and value pair of the resource can modify the non-authorization tags for that resource.
  • If no authorization tag has been set on a resource, and if the authorization tag is present on the principal:
    • The resource’s authorization tag key and value pair can only be set if exactly matching the key and value tag of the principal.
    • All other non-authorization tags can be modified.
  • If any authorization tags are missing from a principal, the principal shouldn’t be allowed to perform any tag operations.

Review an SCP for securing resource tags

Let’s take a look at an SCP that fulfills the access control requirements listed above. SCPs are similar to IAM permission policies, however, an SCP never grants permissions. Instead, SCPs are IAM policies that specify the maximum permissions for an organization, organizational unit (OU), or account. Therefore, the recommended solution is to also assign identity policies that allow modification of tags to all roles that need the ability to create and delete tags. You can then assign your SCP to an OU that includes all of the accounts that need resource authorization tags secured. In the example that follows, the AWS Services That Work with IAM documentation page would be used to identify the first set of resources with ABAC support that your organization’s developers will use. These resources include Amazon Elastic Compute Cloud (Amazon EC2), IAM users and roles, Amazon Relational Database Service (Amazon RDS), and Amazon Elastic File System (Amazon EFS). The SCP itself consists of three statements, which I review in the following section.

Deny modification and deletion of tags if a resource’s authorization tags don’t match the principal’s

This first policy statement is below. It addresses what needs to be enforced after the authorization tag is set on a resource. It denies modification of any tag—including the resource’s authorization tags—if the resource’s authorization tag doesn’t match the principal’s tag. Note that the resource’s authorization tag must exist. Let’s review the components of the statement.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags"
            ],
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
                    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
                },
                "Null": {
                    "ec2:ResourceTag/access-project": false
                }
            }
        },
  • Line 6:
    
    "Effect": "Deny",
    

    First, specify the Effect to be Deny to deny the following actions under certain conditions.

  • Lines 7-10:
    
    "Action": [
    	"ec2:CreateTags",
    	"ec2:DeleteTags"
    ],
    

    Specify the policy actions to be denied, which are ec2:CreateTags and ec2:DeleteTags. When combined with line 6, this denies modification of tags. The ec2:CreateTags action is included as it also grants the permission to overwrite tags, and we want to prevent that.

  • Lines 14-22:
    
    "Condition": {
    	"StringNotEquals": {
    		"ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
    		"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
    	},
    	"Null": {
    		"ec2:ResourceTag/access-project": false
    	}
    }
    

    Specify the conditions for which this statement—to deny policy actions—is in effect.

    The first condition operator is StringNotEquals. According to policy evaluation logic, if multiple condition keys are attached to an operator, each needs to evaluate as true in order for StringNotEquals to also evaluate as true.

  • Line 16:
    
    "ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
    

    Evaluate if the access-project EC2 resource tag’s key and value pairs don’t match the principal’s.

  • Line 17:
    
    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
    

    Evaluate if the principal is not the iam-admin role for the account. aws:PrincipalArn is a global condition key for comparing the Amazon Resource Name (ARN) of the principal that made the request with the ARN that you specify in the policy.

  • Line 19:
    
    "Null": {
    

    The second condition operator is a Null condition. It evaluates whether the attached condition key’s tag exists in the request context. If the key’s right side value is set to false, the expression returns true if the tag exists and has a value. The following list illustrates the evaluation logic:

    • ec2:ResourceTag/access-project: true (“my access-project resource tag is null”): evaluates to FALSE if the tag value exists, and TRUE if it doesn’t.
    • ec2:ResourceTag/access-project: false (“my access-project resource tag is not null”): evaluates to TRUE if the tag value exists, and FALSE if it doesn’t.
  • Line 20:
    
    "ec2:ResourceTag/access-project": false
    

    Deny only when the access-project tag is present on the resource. This is needed because tagging actions shouldn’t be denied for new resources that might not yet have the access-project authorization tag set, which is covered next in the DenyModifyResAuthzTagIfPrinTagNotMatched statement.

Note that all elements of this condition are evaluated with a logical AND. For a deny to occur, both the StringNotEquals and Null operators must evaluate as true.

Deny modification and deletion of tags if requesting that a resource’s authorization tag be set to any value other than the principal’s

This second statement addresses what to do if there’s no authorization tag on the resource, or if the tag is on the principal but not on the resource. It denies modification of any tag, including the resource’s authorization tags, if the resource’s access-project tag is among the requested tag changes and the requested value doesn’t match the principal’s access-project tag. The statement is needed because the resource might not have an authorization tag yet, and access shouldn’t be granted to set one arbitrarily. Instead, access can be based on the value the principal is requesting the resource authorization tag to be set to, which must equal the principal’s own tag value.


{   
    "Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "ForAnyValue:StringEquals": {
            "aws:TagKeys": [
                "access-project"
            ]   
        }   
    }       
},
  • Line 36:
    
    "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
    

    Check to see if the desired access-project tag value for your resource, aws:RequestTag/access-project, isn’t equal to the principal’s access-project tag value. aws:RequestTag is a global condition key that can be used for tag actions to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy.

  • Line 39-43:
    
    "ForAnyValue:StringEquals": {
         "aws:TagKeys": [
             "access-project"
         ]               
    }          
    

    This ensures that a deny can only occur if the access-project tag is among the tags in the request context, which would be the case if the resource’s authorization tag was one of the requested tags to be modified.

Deny modification and deletion of tags if the principal’s access-project tag does not exist

The third statement addresses the requirement that if any authorization tags are missing from a principal, the principal shouldn’t be allowed to perform any tag operations. This policy statement will deny modification of any tag if the principal’s access-project tag doesn’t exist. The reason for this is that if authorization is intended to be based on the principal’s authorization tag, it must be present for any tag modification operation to be allowed.


{       
    "Sid": "DenyModifyTagsIfPrinTagNotExists",
    "Effect": "Deny", 
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],      
    "Resource": [
        "*"     
    ],      
    "Condition": {
        "StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },      
        "Null": {
            "aws:PrincipalTag/access-project": true
        }       
    }       
}

You’ve now created a basic SCP that protects against unintended modification of Amazon EC2 resource tags used for authorization. You will also need to create identity policies for your project’s roles so that users can tag resources, and the next step after that is to add IAM, Amazon RDS, and Amazon EFS resources to this SCP. Before moving on, though, test against your EC2 instances to confirm that your tags cannot be modified except by authorized principals, as sketched below.
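One quick test, sketched here with placeholder values, is to attempt a tag change from a role whose access-project tag doesn’t match the resource’s; the SCP should cause the request to fail with an authorization error:

# From a principal tagged access-project=web-app, attempt to retag
# an instance that belongs to a different project (placeholder IDs)
aws ec2 create-tags \
    --resources i-0abcd1234example \
    --tags Key=access-project,Value=another-project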

Adding IAM users and roles to the SCP

Now that you’re confident you’ve done what you can to secure your Amazon EC2 tags, you can do the same for your IAM resources, specifically users and roles. This is important because user and role resources can also be principals, and can have authorization based on their tags. For example, not all roles will be assumed by employees and receive session tags; applications can also assume a role and thus have authorization based on a non-session tag. To account for that possibility, you can add a parallel statement that uses the IAM tagging actions and the iam:ResourceTag condition key, mirroring the prior statement that denies modification and deletion of tags if a resource’s authorization tag doesn’t match the principal’s:


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedIam",
    "Effect": "Deny",
    "Action": [
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "iam:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "iam:ResourceTag/access-project": false
        }
    }
}

You can use the following statements to add the IAM tagging actions to the previous statements that (1) deny modification and deletion of tags if a request would set a resource’s authorization tag to a value other than the principal’s, and (2) deny modification and deletion of tags if the principal’s access-project tag doesn’t exist.


{
    "Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "ForAnyValue:StringEquals": {
            "aws:TagKeys": [
                "access-project"
            ]
        }
    }
},
{
    "Sid": "DenyModifyTagsIfPrinTagNotExists",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "aws:PrincipalTag/access-project": true
        }
    }
} 

Adding resources that support the global aws:ResourceTag to the SCP

Now we will add Amazon RDS and Amazon EFS resources to complete the SCP. RDS and EFS differ with regard to tagging because they support the global aws:ResourceTag condition key instead of a service-specific key such as ec2:ResourceTag. Rather than creating multiple statements similar to the code used above to deny modification and deletion of tags if a resource’s authorization tags don’t match the principal’s, you can create a single reusable policy statement and continue adding more actions to it, as shown in the following code.


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "elasticfilesystem:TagResource",
        "elasticfilesystem:UntagResource",
        "rds:AddTagsToResource",
        "rds:RemoveTagsFromResource"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "aws:ResourceTag/access-project": false
        }
    }
}, 

Review the full SCP to secure authorization tags

Let’s review the final SCP. It includes all the services you’ve targeted, as shown in the code sample below. This policy is 2,859 characters in length and uses non-Unicode characters, which means that it uses one byte of data per character. The policy therefore uses 2,859 bytes of the current 5,120-byte quota for an SCP, leaving room for roughly 4 to 18 more services, depending on whether each service uses its own service-specific resource tag condition key or the global aws:ResourceTag key. Keep these numbers and limits in mind when adding additional resources, and remember that a maximum of five SCPs can be attached to an OU. It is unwise to consume the entirety of your available SCP policy space, as you’ll want to be able to make later changes and have room to grow for future business requirements.

This SCP doesn’t enforce tag-on-create, which lets you require that tags be applied when a resource is created, as shown in Example service control policies. Enforcing tag-on-create is necessary if you need resources to be accessible only from appropriately tagged identities from the time the resources are created. Enforcement can be done through an SCP or by creating identity policies for the roles in each account that require it; a minimal sketch follows.
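The following statement is a hedged sketch of that pattern for Amazon EC2 only (the equivalent actions would need to be added for each other service you cover): it denies launching an instance unless an access-project tag is included in the request.

{
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
        "Null": {
            "aws:RequestTag/access-project": "true"
        }
    }
}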

If necessary, you can use another employee attribute in addition to access-project, but that will use more of the available quota of bytes. Adding another attribute can potentially double the bytes used in a policy, especially for services that require use of a service-specific resource tag condition key. Using Amazon EC2 as an example, you would need to create separate statements just for the added employee attribute, duplicating the policy statements in the first three code samples.

Here’s the final SCP in its entirety:


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2",
			"Effect": "Deny",
			"Action": [
				"ec2:CreateTags",
				"ec2:DeleteTags"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"ec2:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedIam",
			"Effect": "Deny",
			"Action": [
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"iam:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"iam:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatched",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"aws:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"ec2:CreateTags",
				"ec2:DeleteTags",
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"ForAnyValue:StringEquals": {
					"aws:TagKeys": [
						"access-project"
					]
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfPrinTagNotExists",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"ec2:CreateTags",
				"ec2:DeleteTags",
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"aws:PrincipalTag/access-project": true
				}
			}
		}
	]
}
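To put the finished SCP into effect, you create it in AWS Organizations and attach it to the OU that contains the target accounts. A sketch with the AWS CLI (the file name, policy ID, and OU ID are placeholders):

# Create the SCP from the JSON above, then attach it to an OU
aws organizations create-policy \
    --name SecureAuthorizationTags \
    --type SERVICE_CONTROL_POLICY \
    --description "Protect access-project tags used for ABAC" \
    --content file://secure-authz-tags-scp.json

aws organizations attach-policy \
    --policy-id p-examplepolicyid \
    --target-id ou-exampleouid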

Next Steps

Now that you have an SCP that protects your resources’ authorization tags, you’ll also want to ensure that identities can’t be assigned unauthorized tags when assuming roles, as shown in Using SAML Session Tags for ABAC. Additionally, you’ll want detective and responsive controls to verify that identities are permitted to access only their own resources, and that responsive action can be taken if unintended access occurs.

Summary

You’ve created a multi-account organizational guardrail to protect the tags used for your ABAC implementation. Through the use of employee attributes, transitive session tags, and a service control policy you’ve created a preventive control that will deny anyone except an IAM administrator from modifying tags used for authorization once set on a resource.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael Chan

Michael is a Developer Advocate for AWS Identity. Prior to this, he was a Professional Services Consultant who assisted customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Reduce Cost and Increase Security with Amazon VPC Endpoints

Post Syndicated from Nigel Harris original https://aws.amazon.com/blogs/architecture/reduce-cost-and-increase-security-with-amazon-vpc-endpoints/

Introduction

This blog explains the benefits of using Amazon VPC endpoints and highlights a self-paced workshop that will help you to learn more about them. Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

A VPC endpoint allows you to privately connect your VPC to supported AWS services without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Endpoints are virtual devices that are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

VPC endpoints enable you to reduce data transfer charges resulting from network communication between private VPC resources (such as Amazon Elastic Compute Cloud (Amazon EC2) instances) and AWS services (such as Amazon Quantum Ledger Database, or QLDB). Without VPC endpoints configured, communications that originate from within a VPC and are destined for public AWS services must egress to the public internet in order to access those services. This network path incurs outbound data transfer charges, which vary based on volume. At the time of writing, after the first 1 GB per month (which is free), transfers from Amazon EC2 to the internet are charged at $0.09/GB in US East (N. Virginia). With VPC endpoints configured, communication between your VPC and the associated AWS service does not leave the Amazon network. If your workload requires you to transfer significant volumes of data between your VPC and AWS, you can reduce costs by leveraging VPC endpoints.
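As a rough, illustrative calculation at the quoted rate: a workload that sends 10 TB (about 10,240 GB) per month from EC2 over the internet path would incur roughly 10,240 GB × $0.09/GB, or about $920 per month, in data transfer charges; keeping that traffic on the Amazon network through VPC endpoints avoids that path entirely.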

There are two types of VPC endpoints: interface endpoints and gateway endpoints. Amazon Simple Storage Service (S3) and Amazon DynamoDB are accessed using gateway endpoints. You can configure resource policies on both the gateway endpoint and the AWS resource that the endpoint provides access to. A VPC endpoint policy is an AWS Identity and Access Management (AWS IAM) resource policy that you can attach to an endpoint. It is a separate policy for controlling access from the endpoint to the specified service. This enables granular access control and private network connectivity from within a VPC. For example, you could create a policy that restricts access to a specific DynamoDB table through a VPC endpoint.
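A minimal sketch of such a VPC endpoint policy (the table name, Region, and account ID are placeholders), limiting all requests through the endpoint to a single DynamoDB table and its indexes:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "dynamodb:*",
      "Resource": [
        "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        "arn:aws:dynamodb:us-east-1:111122223333:table/Orders/index/*"
      ]
    }
  ]
}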

Figure 1: Accessing S3 via a Gateway VPC Endpoint

Interface endpoints enable you to connect to services powered by AWS PrivateLink. This includes a large number of AWS services, services hosted by other AWS customers and partners in their own VPCs, and supported AWS Marketplace partner services. Like gateway endpoints, interface endpoints can be secured using resource policies on the endpoint itself and the resource that the endpoint provides access to. Interface endpoints enable the use of security groups to restrict access to the endpoint.
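As a sketch of provisioning both endpoint types with the AWS CLI (all resource IDs, and the Region embedded in the service names, are placeholder values):

# Gateway endpoint for S3, associated with a route table
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc123example \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0abc123example

# Interface endpoint (AWS PrivateLink), secured by a security group
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0abc123example \
    --service-name com.amazonaws.us-east-1.qldb.session \
    --subnet-ids subnet-0abc123example \
    --security-group-ids sg-0abc123example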

Figure 2: Accessing QLDB via an Interface VPC Endpoint

In larger multi-account AWS environments, network design can vary considerably. Consider an organization that has built a hub-and-spoke network with AWS Transit Gateway. VPCs have been provisioned into multiple AWS accounts, perhaps to facilitate network isolation or to enable delegated network administration. When deploying distributed architectures such as this, a popular approach is to build a “shared services” VPC, which provides access to services required by workloads in each of the VPCs. This might include directory services or VPC endpoints. Sharing resources from a central location instead of building them in each VPC may reduce administrative overhead and cost. This approach was outlined by my colleague Bhavin Desai in his blog post Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit Gateway.

Figure 3: Centralized VPC Endpoints (multiple VPCs)

Alternatively, an organization may have centralized its network and chosen to leverage VPC sharing to enable multiple AWS accounts to create application resources (such as Amazon EC2 instances, Amazon Relational Database Service (RDS) databases, and AWS Lambda functions) in a shared, centrally managed network. With either pattern, establishing a granular set of controls to limit access to resources can be critical to supporting organizational security and compliance objectives while maintaining operational efficiency.

Figure 4: Centralized VPC Endpoints (shared VPC)

Learn how with the VPC Endpoint Workshop

Understanding how to appropriately restrict access to endpoints and the services they provide connectivity to is an often-misunderstood topic. I recently authored a hands-on workshop to help customers learn how to provision appropriate levels of access. Continue to learn about Amazon VPC Endpoints by taking the VPC Endpoint Workshop and then improve the security posture of your cloud workloads by leveraging network controls and VPC endpoint policies to manage access to your AWS resources.

How to use resource-based policies in the AWS Secrets Manager console to securely access secrets across AWS accounts

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-use-resource-based-policies-aws-secrets-manager-console-to-securely-access-secrets-aws-accounts/

AWS Secrets Manager now enables you to create and manage your resource-based policies using the Secrets Manager console. With this launch, we are also improving your security posture by both identifying and preventing creation of resource policies that grant overly broad access to your secrets across your Amazon Web Services (AWS) accounts. To achieve this, we use the Zelkova engine to mathematically analyze access granted by your resource policy and alert you if such permissions are found. The analysis verifies access across all resource-policy statements, actions, and the set of condition keys used in your policies. To be considered non-public, the resource policy must grant access only to fixed values (values that don’t contain a wildcard) of one or more of the following: aws:SourceArn, aws:SourceVpc, aws:SourceVpce, aws:SourceAccount, or aws:SourceIp, and it must ensure that the Principal does not include a “*” entry.

If the policy grants Public or overly broad access to your secrets across AWS accounts, Secrets Manager will block you from applying the policy in the console and alert you with a dashboard message. This prevents your policy from accidentally granting broader access to your secrets, instead ensuring you are restricting it to the intended AWS accounts, AWS services, and AWS Identity and Access Management (IAM) entities. Access to AWS Secrets Manager requires AWS credentials. Those credentials must contain permission to access the AWS resources you want to access, such as your Secrets Manager secrets. In this blog post, we use Public or broad access to refer to values (or a combination of values) in the resource policy that result in a wide access across AWS accounts and principals.

With AWS Secrets Manager, you have the option to store, rotate, manage, and retrieve many types of secrets. These can be database usernames and passwords, API keys, string values, and binary data. AWS supports the ability to share these secrets cross-account by applying resource policies via the AWS Command Line Interface (AWS CLI) and now via the Secrets Manager console.

Why would you need to share a secret? There are many reasons. Perhaps you have database credentials managed in a central account that are needed by applications in your production account. Maybe you have the binary stored for an encryption key that other accounts will use to create AWS Key Management Service (AWS KMS) keys in their accounts. To achieve this goal while ensuring a secure transfer of information and least privilege permissions, you will need a resource-based policy on your secret, a resource-based policy on your AWS KMS Customer Managed Key (CMK) used for encrypting the secret, and a user-based policy on your IAM principal.

You can still create a policy using AWS CLI or AWS SDK permitting access to a broader scope of entities if your business needs dictate. If you do permit this type of broader access, AWS Secrets Manager will show a notification in your dashboard, as shown in Figure 2, below.

Figure 1. This shows the warning when you try to create a resource policy that grants broad access to your secrets via the AWS Secrets Manager console.

 

Figure 2. This alert pops up when you click on a secret that has a resource policy attached via the CLI that grants broad access to the secret.

In the example below, you’ll see how to use the AWS Secrets Manager console to attach a resource-based policy and allow access to your secret from a secondary account. A secret in the CENTRAL_SECURITY_ACCOUNT will be set to allow it to be accessed by an IAM role in the PRODUCTION_ACCOUNT.

In this example:

  • SECURITY_SECRET = The secret created in the CENTRAL_SECURITY_ACCOUNT.
  • SECURITY_CMK = The AWS KMS CMK used to encrypt the SECURITY_SECRET.
  • PRODUCTION_ROLE = The AWS IAM role used to access the SECURITY_SECRET.
  • PRODUCTION_ACCOUNT = The AWS account that owns the AWS IAM role used for cross-account access.

Overview of solution

The architecture of the solution can be broken down into four steps, which are outlined in Figure 3. The four main steps are:

  1. Create the resource-based policy via the AWS Secrets Manager console on the SECURITY_SECRET in the CENTRAL_SECURITY_ACCOUNT.
  2. Update the SECURITY_CMK policy in the CENTRAL_SECURITY_ACCOUNT to allow the role from the PRODUCTION account access.
  3. Grant the AWS IAM role in the PRODUCTION_ACCOUNT permissions to access the secret.
  4. Test and verify access from the PRODUCTION_ACCOUNT.

 

Figure 3. A visual overview of the four steps to use the AWS Secrets Manager console to attach a resource-based policy, allow access to your secret from a secondary account, and test and verify the process.

Prerequisites

To use the example in this post, you need:

  • An AWS account.
  • An IAM Role with permissions to make modifications in both the CENTRAL_SECURITY_ACCOUNT and the PRODUCTION_ACCOUNT.
  • An IAM role in the PRODUCTION_ACCOUNT you wish to grant permissions to access the SECURITY_SECRET.

Deploying the solution

Step 1: Create a resource-based policy in your CENTRAL_SECURITY account on the SECURITY_SECRET secret

  1. Log in to the AWS Secrets Manager console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_SECRET.
  3. Choose Edit Permissions next to Resource Permissions (optional).

    Figure 4. The dashboard view where you edit permissions.

  4. This will bring you to the page to add the resource policy. It will give you a basic template as shown in Figure 5, below.

    Figure 5. The basic template to add the resource policy.

  5. Since the full policy is provided for you in this example, delete the template from the text box.
  6. Copy the policy below and paste it in the text box. Make sure to replace PRODUCTION with your AWS account ID. You can also adjust the permissions you grant if needed. This policy allows a specific role in the PRODUCTION_ACCOUNT account to retrieve the current version of your secret. In the example, my IAM Role is called PRODUCTION_ROLE. Note you do not need to replace AWSCURRENT with any other value.
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::PRODUCTION:role/PRODUCTION_ROLE"
                },
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "*",
                "Condition": {
                    "ForAnyValue:StringEquals": {
                        "secretsmanager:VersionStage": "AWSCURRENT"
                    }
                }
            }
        ]
    }
    

    As shown in Figure 6, below, you’ll see this in the resource policy text area (with your AWS account ID in place of PRODUCTION).

    Figure 6. The example resource policy shown in the console.

  7. Choose Save.

Step 2: Update the resource-based policy in your CENTRAL_SECURITY account on the SECURITY_CMK

Note: Secrets in AWS Secrets Manager are encrypted by default. However, it is important for you to provide authorization for IAM Principals that need to access your secrets. Complete authorization requires access to the secret and the KMS CMK used to encrypt it, which prevents accidental public permissions on the secret. It is important to maintain both sets of authorization to provide appropriate access to secrets.

  1. Log in to the AWS KMS console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_CMK.
  3. Next to Key policy choose the Edit button.
  4. Paste the below code snippet into your key policy to allow the PRODUCTION_ACCOUNT access:
    
    {
        "Sid": "AllowUseOfTheKey",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::PRODUCTION:role/PRODUCTION_ROLE"
        },
        "Action": [
            "kms:Decrypt",
            "kms:DescribeKey"
        ],
        "Resource": "arn:aws:kms:us-east-1:CENTRAL_SECURITY:key/SECURITY_CMK"
    }
    

    You will need to replace PRODUCTION with your production account ID, PRODUCTION_ROLE with your production Role name, CENTRAL_SECURITY with your security account ID, and SECURITY_CMK with the CMK key ID of your security CMK. If you forget to swap out the account IDs in the policy with your own, you’ll see an error message similar to the one shown in Figure 7, below.

    Figure 7. Error message that appears if you don’t swap out your account number correctly.

  5. Choose Save changes.

Step 3: Add permissions to the PRODUCTION_ROLE in the PRODUCTION account

  1. Log in to the AWS IAM console in the PRODUCTION_ACCOUNT account.
  2. In the left navigation pane, choose Roles.
  3. Select PRODUCTION_ROLE.
  4. Under the Permissions tab, choose Add inline policy.
  5. Choose the JSON tab and paste the below policy:
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": " arn:aws:secretsmanager:us-east-1:CENTRAL_SECURITY:secret:SECURITY_SECRET"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "kms:Decrypt",
                    "kms:DescribeKey"
                ],
                "Resource": "arn:aws:kms:us-east-1:CENTRAL_SECURITY:key/SECURITY_CMK"
            }
        ]
    }
    

    You will need to replace CENTRAL_SECURITY with your security account id, SECURITY_SECRET with the secret id, and SECURITY_CMK with the CMK key id of your security CMK.

  6. Choose Review policy.
  7. Name the policy Central_Security_Account_Security-Secret-Access, and choose Create policy.

Step 4: Test access to the SECURITY_SECRET from the PRODUCTION account

Verification of access via AWS CLI

  1. From the AWS CLI, use the PRODUCTION_ROLE credentials to run the get-secret-value command.
  2. Returned output should look like the following example.
    
    $ aws secretsmanager get-secret-value --secret-id SECURITY_SECRET --version-stage AWSCURRENT
    
    {
        "ARN": "arn:aws:secretsmanager:us-east-1:CENTRAL_SECURITY:secret:SECURITY_SECRET",
        "Name": "SECURITY_SECRET",
        "SecretString": "TheSecretString",
        "CreatedDate": 123456789,
        "VersionId": "64c4250d-0b81-42e0-9a0c-e189d3c9aea8",
        "VersionStages": [
            "AWSCURRENT"
        ]
    }
    

You can also verify the policy was attached from the CENTRAL_SECURITY_ACCOUNT by following the steps below.

Verification of policy via console

  1. Log into the AWS Secrets Manager console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_SECRET.
  3. Scroll down to where it shows Resource Permissions (optional), and you’ll see your resource policy stored in the console, as shown in Figure 8, below.

    Figure 8. What the example resource policy looks like in the console.

Conclusion

In this post, you saw how to add a resource-based policy on a secret in AWS Secrets Manager using the console, and how to update your AWS KMS CMK resource-based policy to enable access. The example showed how to set up cross-account access, allowing a role from the PRODUCTION_ACCOUNT to use the secret in the CENTRAL_SECURITY_ACCOUNT. By using the AWS Secrets Manager console to set up the resource-based policy, you now have a straightforward, visual way to add and manage resource-based policies for your secrets, and you’ll receive notifications if a policy is too broad.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Secrets Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security and Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

How to use G Suite as an external identity provider for AWS SSO

Post Syndicated from Yegor Tokmakov original https://aws.amazon.com/blogs/security/how-to-use-g-suite-as-external-identity-provider-aws-sso/

Do you want to control access to your Amazon Web Services (AWS) accounts with G Suite? In this post, we show you how to set up G Suite as an external identity provider in AWS Single Sign-On (SSO). We also show you how to configure permissions for your users, and how they can access different accounts.

Introduction

G Suite is used for common business functions like email, calendar, and document sharing. If your organization is using AWS and G Suite, you can use G Suite as an identity provider (IdP) for AWS. You can connect AWS SSO to G Suite, allowing your users to access AWS accounts with their G Suite credentials.

You can grant access by assigning G Suite users to accounts governed by AWS Organizations. The user’s effective permissions in an account are determined by permission sets defined in AWS SSO. They allow you to define and grant permissions based on the user’s job function (such as administrator, data scientist, or developer). These should follow the least privilege principle, granting only permissions that are necessary to perform the job. This way, you can centrally manage user accounts for your employees in the Google Admin console and have fine-grained control over the access permissions of individual users to AWS resources.

In this post, we walk you through the process of setting up G Suite as an external IdP in AWS SSO.

How it works

AWS SSO authenticates your G Suite users by using Security Assertion Markup Language (SAML) 2.0 authentication. SAML is an open standard for secure exchange of authentication and authorization data between IdPs and service providers without exposing users’ credentials. When you use AWS as a service provider and G Suite as an external IdP, the login process is as follows:

  1. A user with a G Suite account opens the link to the AWS SSO user portal of your AWS Organizations.
  2. If the user isn’t already authenticated, they will be redirected to the G Suite account login. The user will log in using their G Suite credentials.
  3. If the login is successful, a response is created and sent to AWS SSO. It contains three different types of SAML assertions: authentication, authorization, and user attributes.
  4. When AWS SSO receives the response, the user’s access to the AWS SSO user portal is determined. A successful login shows accessible AWS accounts.
  5. The user selects the account to access and is redirected to the AWS Management Console.

This authentication flow is shown in the following diagram.

Figure 1: AWS SSO authentication flow

The user journey starts at the AWS SSO user portal and ends with the access to the AWS Management Console. Your users experience a unified access to the AWS Cloud, and you don’t have to manage user accounts in AWS Identity and Access Management (IAM) or AWS Directory Service.

User permissions in an AWS account are controlled by permission sets and groups in AWS SSO. A permission set is a collection of administrator-defined policies that determine a user’s effective permissions in an account. Permission sets can contain AWS managed policies or custom policies that are stored in AWS SSO, and they are ultimately created as IAM roles in a given AWS account. Your users assume these roles when they access a given AWS account and receive their effective permissions. This gives you fine-grained control over access to your accounts, in keeping with the shared-responsibility model established in the cloud.

When you use G Suite to authenticate and manage your users, you still have to create a user entity in AWS SSO. The user entity is not a user account, but a logical object. It maps a G Suite user, via their primary email address as the username, to the user account in AWS SSO. The user entity in AWS SSO allows you to grant a G Suite user access to AWS accounts and define their permissions in those accounts.

AWS SSO initial setup

The AWS SSO service has some prerequisites. You need to first set up AWS Organizations with All features set to enabled, and then sign in with the AWS Organization’s master account credentials. You also need super administrator privileges in G Suite and access to the Google Admin console.

If you’re already using AWS SSO in your account, refer to Considerations for Changing Your Identity Source before making changes.

To set up an external identity provider in AWS SSO

  1. Open the service page in the AWS Management Console. Then choose Enable AWS SSO.

    Figure 2: AWS SSO service welcome page

  2. After AWS SSO is enabled, you can connect an identity source. On the overview page of the service, select Choose your identity source.

    Figure 3: Choose your identity source

  3. In the Settings, look for Identity source and choose Change.

    Figure 4: Settings

  4. By default, AWS SSO uses its own directory as the identity provider. To use G Suite as your identity provider, you have to switch to an external identity provider. Select External identity provider from the available identity sources.

    Figure 5: AWS SSO identity provider options

  5. Choosing the External identity provider option reveals additional information needed to configure it. Choose Show individual metadata values to show the information you need to configure a custom SAML application.

    Figure 6: AWS SSO SAML metadata

For the next steps, you need to switch to your Google Admin console and use the service provider metadata information to configure AWS SSO as a custom SAML application.

G Suite SAML application setup

Open your Google Admin console in a new browser tab, so that you can easily copy the metadata information from the previous step. Now use the information to configure a custom SAML application.

To configure a custom SAML application in G Suite

  1. Navigate to the SAML Applications section in the Admin console and choose Add a service/App to your domain.

    Figure 7: Add a service or app

  2. In the modal dialog that opens, choose SETUP MY OWN CUSTOM APP.

    Figure 8: Set up a custom app

  3. Go to Option 2 and choose Download to download the Google IdP metadata. It downloads an XML file named GoogleIDPMetadata-your_domain.xml, which you will use to configure G Suite as the IdP in AWS SSO. Choose Next.

    Figure 9: Download IdP metadata

  4. Configure the name and description of the application. Enter AWS SSO as the application name or use a name that clearly identifies this application for your users. Choose Next to continue.

    Figure 10: Name and describe your app

  5. Fill in the Service Provider Details using the metadata information from AWS SSO, then choose Next to create your custom application. The mapping for the metadata is:
    • Enter the AWS SSO Sign in URL as the Start URL
    • Enter the AWS SSO ACS URL as the ACS URL
    • Enter the AWS SSO issuer URL as the Entity ID

     

    Figure 11: Add service provider details

  6. Next is a confirmation screen with a reminder that a few configuration steps remain. Choose OK to continue.

    Figure 12: Reminder of remaining steps

  7. The final steps enable the application for your users. Select the application from the list and choose EDIT SERVICE from the top corner.

    Figure 13: Edit service

  8. Change the service status to ON for everyone and choose SAVE. If you want to manage access for particular users you can do this via organizational units (for example, you can enable the AWS SSO application for your engineering department). This doesn’t give access to any resources inside of your AWS accounts. Permissions are granted in AWS SSO.

    Figure 14: Service on for everyone

You’re done configuring AWS SSO in G Suite. Return to the browser tab with the AWS SSO configuration.

AWS SSO configuration

After creating the G Suite application, you can finish SSO setup by uploading Google IdP metadata in the AWS Management Console.

To add identity provider metadata in AWS SSO

  1. When you configured the custom application in G Suite, you downloaded the GoogleIDPMetadata-your_domain.xml file. Choose Browse… on the configuration page and select this file from your download folder. Finish this step by choosing Next: Review.

    Figure 15: Upload IdP SAML metadata file

  2. Type CONFIRM at the bottom of the list of changes and choose Change identity source to complete the setup.

    Figure 16: Confirm changes

  3. Next is a message that your change to the configuration is complete. At this point, you can choose Return to settings and proceed to user provisioning.

    Figure 17: Settings complete

Manage Users and Permissions

AWS SSO supports automatic user provisioning via the System for Cross-domain Identity Management (SCIM). However, this is not yet supported for G Suite custom SAML applications. To add a user to AWS SSO, you have to add the user manually. The username in AWS SSO must be the primary email address of that user, and it must follow the pattern username@gsuite_domain.com.

To add a user to AWS SSO

  1. Select Users from the sidebar of the AWS SSO overview and then choose Add user.

    Figure 18: Add user

  2. Enter the user details and use your user’s primary email address as the username. Choose Next: Groups to add the user to a group.

    Figure 19: Add user

  3. We aren’t going to create user groups in this walkthrough. Skip the Add user to groups step by choosing Add user. You will reach the user list page, which displays your newly created user with a status of Enabled.

    Figure 20: User list

  4. The next step is to assign the user to a particular AWS account in your AWS Organization. This allows the user to access the assigned account. Select the account you want to assign your user to and choose Assign users.

    Figure 21: Assign users

  5. Select the user you just added, then choose Next: Permission sets to continue configuring the effective permissions of the user in the assigned account.

    Figure 22: Select user

  6. Since you didn’t configure a permission set before, you need to configure one now. Choose Create new permission set.

    Figure 23: Create new permission set

  7. AWS SSO has managed permission sets that are similar to the AWS managed policies you already know. Make sure that Use an existing job function policy is selected, then select PowerUserAccess from the list of existing job function policies and choose Create.

    Figure 24: Create permission set

  8. You can now select the created permission set from the list of available sets for the user. Select the PowerUserAccess permission set and choose Finish to assign the user to the account.

    Figure 25: Select permission set

  9. You see a message that the assignment has been successful.

    Figure 26: Assignment complete

Access an AWS Account with G Suite

You can find your user portal URL in the AWS SSO settings, as shown in the following screenshot. Unauthenticated users who follow the link are redirected to the Google account login page, where they sign in with their G Suite credentials.

Figure 27: The user portal URL

After authenticating, users are redirected to the user portal. They can select from the list of assigned accounts, as shown in the following example, and access the AWS Management Console of these accounts.

Figure 28: Select an assigned account

You’ve successfully set up G Suite as an external identity provider for AWS SSO. Your users can access your AWS accounts using the credentials they already use.

Another way your users can use AWS SSO is by selecting it from their Google Apps to be redirected to the user portal, as shown in the following screenshot. That is one of the quickest ways for users to access accounts.

Figure 29: Apps in the user portal

Using AWS CLI with SSO

You can use the AWS Command Line Interface (CLI) to access AWS resources. AWS CLI version 2 supports access via AWS SSO. You can automatically or manually configure a profile for the CLI to access resources in your AWS accounts. To authenticate your user, the CLI opens the user portal in your default browser. If you aren’t authenticated, you’re redirected to the G Suite login page. After a successful login, you can select the AWS account you want to access from the terminal.

To upgrade to AWS CLI version 2, follow the instructions in the AWS CLI user guide.
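
The setup looks something like the following minimal sketch, which assumes AWS CLI version 2 is installed; the profile name is an arbitrary placeholder, and the wizard prompts you for your own user portal URL, Region, account, and role.

# Create a named profile through the interactive AWS SSO wizard
aws configure sso --profile my-sso-profile

# Sign in through your default browser (redirects to G Suite if unauthenticated)
aws sso login --profile my-sso-profile

# Call AWS services with the temporary credentials from the profile
aws sts get-caller-identity --profile my-sso-profile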

Conclusion

You’ve set up G Suite as an external IdP for AWS SSO, granted access to an AWS account for a G Suite user, and enforced fine-grained permission controls for this user. This enables your business to have easy access to the AWS Cloud.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Single Sign-on forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Yegor Tokmakov

Yegor is a solutions architect at AWS, working with startups. Previously, he was Chief Technology Officer at a healthcare startup in Berlin and was responsible for architecture and operations, as well as product development and growth of the tech team. Yegor is passionate about novel AI applications and data analytics. In his free time, he enjoys traveling, surfing, and cycling.

Sebastian Doell

Sebastian is a solutions architect at AWS. He helps startups to execute on their ideas at speed and at scale. Sebastian maintains a number of open source projects and is an advocate of Dart and Flutter.

Tighten S3 permissions for your IAM users and roles using access history of S3 actions

Post Syndicated from Mathangi Ramesh original https://aws.amazon.com/blogs/security/tighten-s3-permissions-iam-users-and-roles-using-access-history-s3-actions/

Customers tell us that when their teams and projects are just getting started, administrators may grant broad access to inspire innovation and agility. Over time administrators need to restrict access to only the permissions required and achieve least privilege. Some customers have told us they need information to help them determine the permissions an application really needs, and which permissions they can remove without impacting applications. To help with this, AWS Identity and Access Management (IAM) reports the last time users and roles used each service, so you can know whether you can restrict access. This helps you to refine permissions to specific services, but we learned that customers also need to set more granular permissions to meet their security requirements.

We are happy to announce that we now include action-level last accessed information for Amazon Simple Storage Service (Amazon S3). This means you can tighten permissions to only the specific S3 actions that your application requires. The action-level last accessed information is available for S3 management actions. As you try it out, let us know how you’re using action-level information and what additional information would be valuable as we consider supporting more services.

The following is an example snapshot of S3 action last accessed information.
 

Figure 1: S3 action last accessed information snapshot

You can use the new action last accessed information for Amazon S3 in conjunction with other features that help you to analyze access and tighten S3 permissions. AWS IAM Access Analyzer generates findings when your resource policies allow access to your resources from outside your account or organization. Specifically for Amazon S3, when an S3 bucket policy changes, Access Analyzer alerts you if the bucket is accessible by users from outside the account, which helps you to protect your data from unintended access. You can use action last accessed information for your user or role, in combination with Access Analyzer findings, to improve the security posture of your S3 permissions. You can review the action last accessed information in the IAM console, or programmatically using the AWS Command Line Interface (AWS CLI) or a programmatic client.

Example use case for reviewing action last accessed details

Now I’ll walk you through an example to demonstrate how to identify unused S3 actions and reduce permissions for your IAM principals. In this example, a system administrator, Martha Rivera, is responsible for managing access for her IAM principals. She periodically reviews permissions to ensure that teams follow security best practices. Specifically, she ensures that the team has only the minimum S3 permissions required to work on their application and achieve their use cases. To do this, Martha reviews the last accessed timestamp for each supported S3 action that the roles in her account have access to. Martha then uses this information to identify the S3 actions that are not used, and she restricts access to those actions by updating the policies.

To view action last accessed information in the AWS Management Console

  1. Open the IAM Console.
  2. In the navigation pane, select Roles, then choose the role that you want to analyze (for example, PaymentAppTestRole).
  3. Select the Access Advisor tab. This tab displays all the AWS services to which the role has permissions, as shown in Figure 2.
     
    Figure 2: List of AWS services to which the role has permissions

  4. On the Access Advisor tab, select Amazon S3 to view all the supported actions to which the role has permissions, when each action was last used by the role, and the AWS Region in which it was used, as shown in Figure 3.
     
    Figure 3: List of S3 actions with access data

In this example, Martha notices that PaymentAppTestRole has read and write S3 permissions. From the information in Figure 3, she sees that the role is using read actions for GetBucketLogging, GetBucketPolicy, and GetBucketTagging. She also sees that the role hasn’t used write permissions for CreateAccessPoint, CreateBucket, PutBucketPolicy, and others in the last 30 days. Based on this information, Martha updates the policies to remove write permissions. To learn more about updating permissions, see Modifying a Role in the AWS IAM User Guide.

At launch, you can review 50 days of access data; that is, any use of S3 actions in the preceding 50 days will show up as a last accessed timestamp. As this tracking period continues to increase, you can start making permissions decisions that apply to use cases with longer period requirements (for example, when 60 or 90 days of data is available).

Martha sees that the GetAccessPoint action shows Not accessed in the tracking period, which means that the action has not been used since IAM started tracking access for the service, action, and AWS Region. Based on this information, Martha confidently removes this permission to further reduce permissions for the role.

Additionally, Martha notices that an action she expected does not show up in the list in Figure 3. This can happen for one of two reasons: either PaymentAppTestRole does not have permissions for the action, or IAM doesn’t yet track access for the action. In such a situation, do not update permissions for those actions based on action last accessed information. To learn more, see Refining Permissions Using Last Accessed Data in the AWS IAM User Guide.

To view action last accessed information programmatically

The action last accessed data is available through updates to the following existing APIs. These APIs now generate action last accessed details, in addition to service last accessed details:

  • generate-service-last-accessed-details: Call this API to generate the service and action last accessed data for a user or role. You call this API first to start a job that generates the action last accessed data for a user or role. This API returns a JobID that you will then use with get-service-last-accessed-details to determine the status of the job completion.
  • get-service-last-accessed-details: Call this API to retrieve the service and action last accessed data for a user or role based on the JobID you pass in. This API is paginated at the service level.

To learn more, see GenerateServiceLastAccessedDetails in the AWS IAM User Guide.
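
As a rough sketch of this workflow, the following AWS CLI calls pair up; the account ID, role name, and placeholder job ID are illustrative.

# Start a job that generates action-level last accessed data for a role
aws iam generate-service-last-accessed-details \
    --arn arn:aws:iam::111122223333:role/PaymentAppTestRole \
    --granularity ACTION_LEVEL

# Retrieve the results once the job completes, using the returned JobId
aws iam get-service-last-accessed-details --job-id <job-id>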

Conclusion

By using action last accessed information for S3, you can review access for supported S3 actions, remove unused actions, and restrict access to S3 to achieve least privilege. To learn more about how to use action last accessed information, see Refining Permissions Using Last Accessed Data in the AWS IAM User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Mathangi Ramesh

Mathangi is the product manager for AWS Identity and Access Management. She enjoys talking to customers and working with data to solve problems. Outside of work, Mathangi is a fitness enthusiast and a Bharatanatyam dancer. She holds an MBA degree from Carnegie Mellon University.

AWS IAM introduces updated policy defaults for IAM user passwords

Post Syndicated from Mark Burr original https://aws.amazon.com/blogs/security/aws-iam-introduces-updated-policy-defaults-for-iam-user-passwords/

To improve the default security for all AWS customers, we are adding a default password policy for AWS Identity and Access Management (IAM) users in AWS accounts. This update will be made globally to the IAM service on August 3rd, 2020. You can implement this change today by creating an IAM password policy in your AWS account. AWS accounts with an existing IAM password policy will not be affected by this change, but it is important to review the details below so you can evaluate any necessary changes to your environment.

What is an IAM password policy?

The IAM password policy is an account-level setting that applies to all IAM users, excluding the root user. You can create a policy to do things like require a minimum password length and specific character types, along with setting mandatory rotation periods. These password settings apply only to passwords assigned to IAM users and do not affect any access keys they might have.

What is the new default policy?

The new default IAM policy has the following minimum requirements. Passwords must:

  • be a minimum of 8 characters
  • include at least three of the following character types: uppercase, lowercase, numbers, and non-alphanumeric symbols (for example, !@#$%^&*()_+-[]{}|‘)
  • not be identical to your AWS account name or email address

You can determine your own password requirements by setting a custom policy. Please note that this change does not apply to the root user, which has a separate password policy.
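
As a minimal sketch, here’s how you might set a custom policy with the AWS CLI; the values shown are illustrative and should reflect your own security requirements.

# Set a custom account password policy (example values only)
aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --require-numbers \
    --require-symbols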

What should customers do to prepare for this update?

For AWS accounts with no password policy applied — the experience will be unchanged until you update user passwords. The new password will need to align with the minimum requirements of the default policy. Likewise, when you create new IAM users in these AWS accounts, the passwords must meet the new minimum requirements of the default policy. A default password policy will be set for all AWS accounts that do not currently have one.

For AWS accounts with an existing password policy — there is no change for any new and existing user passwords, and they will not be affected by this update. If you disable the existing password policy, then any new IAM users created from that point onward will require passwords that meet the minimum requirements of the default policy.

For AWS accounts using automation workflows that create IAM users — if you have implemented an automated user creation workflow that does not produce passwords meeting the new complexity requirements, and you have not implemented your own custom policy, you will be affected. Inspect and evaluate your existing workflows, and either update them to meet the default password policy or set a custom policy prior to August 3rd to ensure continued operation.

When will these changes happen?

To give you time to evaluate the potential impact of this change, AWS is updating the default password policy in 90 days, with the change taking effect at the beginning of August 2020. We encourage all customers to be proactive about assessing and modifying any automation workflows that create IAM users and passwords without a corresponding password policy.

How do I check if a policy is already set?

Navigate to the AWS IAM console and choose Account settings, which states whether a password policy has been set for the account. You can also check this via the AWS Command Line Interface (AWS CLI), as sketched below. For further information, and to learn how to check this using the API, please refer to the documentation.
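
As a sketch, the following AWS CLI command returns the current policy, or a NoSuchEntity error if no policy has been set.

# Show the account password policy, if one exists
aws iam get-account-password-policy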

AWS Single Sign-On (AWS SSO)

Note: If you are primarily using IAM users as the source of your identities across multiple accounts, you may want to evaluate AWS SSO, which simplifies the user experience and improves security by eliminating individual passwords in each account. It also allows you to quickly and easily assign your employees access to AWS accounts managed with AWS Organizations, business cloud applications, and custom applications that support Security Assertion Markup Language (SAML) 2.0. To learn more, visit the AWS Single Sign-on page.

Need more assistance?

AWS IQ enables AWS customers to find, securely collaborate with, and pay AWS Certified third-party experts for on-demand project work. Visit the AWS IQ page for information about how to submit a request, get responses from experts, and choose the expert with the right skills and experience. Log into your console and select Get Started with AWS IQ to start a request.

The AWS Technical Support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support does not include code development for client applications.

If you have any questions or issues, please start a new thread on the AWS IAM forum, or contact AWS Support or your Technical Account Manager (TAM). If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mark Burr

Mark is a Principal Consultant with the Worldwide Public Sector Professional Services team. He specializes in security, automation, large-scale migrations, enterprise transformation, and executive strategy. Mark enjoys helping global customers achieve amazing outcomes in AWS. When he’s not in the cloud, he’s on a bicycle or drinking a Belgian ale.

IAM Access Analyzer flags unintended access to S3 buckets shared through access points

Post Syndicated from Andrea Nedic original https://aws.amazon.com/blogs/security/iam-access-analyzer-flags-unintended-access-to-s3-buckets-shared-through-access-points/

Customers use Amazon Simple Storage Service (S3) buckets to store critical data and manage access to data at scale. With Amazon S3 Access Points, customers can easily manage shared data sets by creating separate access points for individual applications. Access points are unique hostnames attached to a bucket and customers can set distinct permissions using access point policies. To help you identify buckets that can be accessed publicly or from other AWS accounts or organizations, AWS Identity and Access Management (IAM) Access Analyzer mathematically analyzes resource policies. Now, Access Analyzer analyzes access point policies in addition to bucket policies and bucket ACLs. This helps you find unintended access to S3 buckets that use access points. Access Analyzer makes it easier to identify and remediate unintended public, cross-account, or cross-organization sharing of your S3 buckets that use access points. This enables you to restrict bucket access and adhere to the security best practice of least privilege.

In this post, first I review Access Analyzer and how to enable it. Then I walk through an example of how to use Access Analyzer to identify an S3 bucket that is shared through an access point. Finally, I show you how to view Access Analyzer bucket findings in the S3 Management Console.

IAM Access Analyzer overview

Access Analyzer helps you determine which resources can be accessed publicly or from other accounts or organizations. Access Analyzer determines this by mathematically analyzing access control policies attached to resources. This form of analysis, called automated reasoning, applies logic and mathematical inference to determine all possible access paths allowed by a resource policy. This is how IAM Access Analyzer uses provable security to deliver comprehensive findings for unintended bucket access. You can enable Access Analyzer by navigating to the IAM console. From there, select Access Analyzer to create an analyzer for an account or an organization.
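
If you prefer the command line, here’s a minimal sketch of creating an account-level analyzer with the AWS CLI; the analyzer name is an arbitrary placeholder.

# Create an analyzer whose zone of trust is the current account
aws accessanalyzer create-analyzer \
    --analyzer-name my-account-analyzer \
    --type ACCOUNT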

How to use IAM Access Analyzer to identify an S3 bucket shared through an access point

Once you’ve created your analyzer, you can view findings for resources that can be accessed publicly or from other AWS accounts or organizations. For your S3 bucket findings, the Shared through column indicates whether a bucket is shared through its S3 bucket policy, one of its access points, or the bucket ACL. Looking at the Shared through column in the image below, we see the first finding is shared through an Access point.

Figure 1: IAM Access Analyzer report of findings for resources shared outside of my account

If you use access points to manage bucket access and one of your buckets is shared through an access point, you will see the bucket finding indicate ‘Access Point’. In this example, I select the first finding to learn more. In the detail image below, you can see that the Shared through field lists the Amazon Resource Name (ARN) of the access point that grants access to the bucket, along with details of the resources and principals. If this access wasn’t your intent, you can review the access point details in the S3 console. There you can modify the access point policy to remove access.

Figure 2: IAM Access Analyzer finding details for a bucket shared through an access point

How to use Access Analyzer for S3 to identify an S3 bucket shared through an access point

You can also view Access Analyzer findings for S3 buckets in the S3 Management Console with Access Analyzer for S3. This view reports S3 buckets that are configured to allow access to anyone on the internet or to other AWS accounts, including accounts outside of your AWS organization. For each public or shared bucket, Access Analyzer for S3 displays whether the bucket is shared through the bucket policy, access points, or the bucket ACL. In the example below, we see that the my-test-public-bucket bucket is set to public access through a bucket policy and bucket ACL. Additionally, the my-test-bucket bucket is shared with other AWS accounts through a bucket policy and one or more access points. After you identify a bucket with unintended access using Access Analyzer for S3, you can Block Public Access to the bucket. Amazon S3 block public access settings override the bucket policies that are applied to the bucket. The settings also override the access point policies applied to the bucket’s access points.

Figure 3: Access Analyzer for S3 findings report in the S3 Management Console
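
As a sketch, the block public access settings mentioned above can also be applied with the AWS CLI; the bucket name is a placeholder.

# Turn on all four block public access settings for a bucket
aws s3api put-public-access-block \
    --bucket my-test-public-bucket \
    --public-access-block-configuration \
        BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true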

Next steps

To turn on IAM Access Analyzer at no additional cost, head over to the IAM console. IAM Access Analyzer is available in the IAM console and through APIs in all commercial AWS Regions and AWS GovCloud (US). To learn more about IAM Access Analyzer, visit the feature page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM Forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Andrea Nedic

Andrea is a Senior Technical Program Manager in the AWS Automated Reasoning Group. She enjoys hearing from customers about how they build on AWS. Outside of work, Andrea likes to ski, dance, and be outdoors. She holds a PhD from Princeton University.

Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/identify-unintended-resource-access-with-aws-identity-and-access-management-iam-access-analyzer/

Today I get to share my favorite kind of announcement. It’s the sort of thing that will improve security for just about everyone that builds on AWS, it can be turned on with almost no configuration, and it costs nothing to use. We’re launching a new, first-of-its-kind capability called AWS Identity and Access Management (IAM) Access Analyzer. IAM Access Analyzer mathematically analyzes access control policies attached to resources and determines which resources can be accessed publicly or from other accounts. It continuously monitors all policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues. With IAM Access Analyzer, you have visibility into the aggregate impact of your access controls, so you can be confident your resources are protected from unintended access from outside of your account.

Let’s look at a couple of examples. An IAM Access Analyzer finding might indicate an S3 bucket named my-bucket-1 is accessible to an AWS account with the ID 123456789012 when originating from the source IP 11.0.0.0/15. Or IAM Access Analyzer may detect a KMS key policy that allows users from another account to delete the key, identifying a data loss risk you can fix by adjusting the policy. If the findings show intentional access paths, they can be archived.

So how does it work? Using the kind of math that shows up on unexpected final exams in my nightmares, IAM Access Analyzer evaluates your policies to determine how a given resource can be accessed. Critically, this analysis is not based on historical events or pattern matching or brute force tests. Instead, IAM Access Analyzer understands your policies semantically. All possible access paths are verified by mathematical proofs, and thousands of policies can be analyzed in a few seconds. This is done using a field of computer science called automated reasoning. IAM Access Analyzer is the first service powered by automated reasoning available to builders everywhere, offering functionality unique to AWS. To start learning about automated reasoning, I highly recommend this short video explainer. If you are interested in diving a bit deeper, check out this re:Invent talk on automated reasoning from Byron Cook, Director of the AWS Automated Reasoning Group. And if you’re really interested in understanding the methodology, make yourself a nice cup of chamomile tea, grab a blanket, and get cozy with a copy of Semantic-based Automated Reasoning for AWS Access Policies using SMT.

Turning on IAM Access Analyzer is way less stressful than an unexpected nightmare final exam. There’s just one step. From the IAM Console, select Access analyzer from the menu on the left, then click Create analyzer.

Creating an Access Analyzer

Analyzers generate findings in the account from which they are created. Analyzers also work within the region defined when they are created, so create one in each region for which you’d like to see findings.

Once our analyzer is created, findings that show accessible resources appear in the Console. My account has a few findings that are worth looking into, such as KMS keys and IAM roles that are accessible by other accounts and federated users.

Viewing Access Analyzer Findings

I’m going to click on the first finding and take a look at the access policy for this KMS key.

An Access Analyzer Finding

From here we can see the open access paths and details about the resources and principals involved. I went over to the KMS console and confirmed that this is intended access, so I archived this particular finding.

All IAM Access Analyzer findings are visible in the IAM Console, and can also be accessed using the IAM Access Analyzer API. Findings related to S3 buckets can be viewed directly in the S3 Console. Bucket policies can then be updated right in the S3 Console, closing the open access pathway.

An Access Analyzer finding in S3

You can also see high-priority findings generated by IAM Access Analyzer in AWS Security Hub, ensuring a comprehensive, single source of truth for your compliance and security-focused team members. IAM Access Analyzer also integrates with CloudWatch Events, making it easy to automatically respond to or send alerts regarding findings through the use of custom rules.

Now that you’ve seen how IAM Access Analyzer provides a comprehensive overview of cloud resource access, you should probably head over to IAM and turn it on. One of the great advantages of building in the cloud is that the infrastructure and tools continue to get stronger over time and IAM Access Analyzer is a great example. Did I mention that it’s free? Fire it up, then send me a tweet sharing some of the interesting things you find. As always, happy building!

— Brandon

Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/rely-employee-attributes-from-corporate-directory-create-fine-grained-permissions-aws/

In my earlier post Simplify granting access to your AWS resources by using tags on AWS IAM users and roles, I explained how to implement attribute-based access control (ABAC) in AWS to simplify permissions management at scale. In that scenario, I talked about relying on attributes on your IAM users and roles for access control in AWS. But more often, customers manage workforce user identities with an identity provider (IdP) and want to use identity attributes from their IdP for fine-grained permissions in AWS. In this post I introduce a new capability that enables you to do just that.

In AWS, you can configure your IdP to allow workforce users federated access to AWS resources using credentials from your corporate directory. Along with user credentials, your directory also stores user attributes such as cost center, department, and email address. Now you can configure your IdP to pass in user attributes as tags in federated AWS sessions. These are called session tags. You can then control access to AWS resources based on these session tags. Moreover, when user attributes change or new users are added to your directory, permissions automatically apply based on these attributes. For example, developers can federate into AWS using an IAM role, but can only access resources specific to their project. This is because you define permissions that require the project attribute from their IdP to match the project tag on AWS resources. Additionally, AWS logs these attributes in AWS CloudTrail, enabling security administrators to track the user identity for a given role session.

In this post, I introduce session tags and walk you through an example of how to use session tags for ABAC and tracking user activity.

What are session tags?

Session tags are attributes passed in the AWS session. You can use session tags for access control in IAM policies and for monitoring. These tags are not stored in AWS and are valid only for the duration of the session. You define session tags just like tags in AWS—consisting of a customer-defined key and an optional value.

How do I pass session tags in the AWS session?

One of the most widely used mechanisms for requesting a session in AWS is by assuming an IAM role. For user identities stored in an external directory, you can configure your SAML IdP in IAM to allow your users federated access to AWS using IAM roles. To understand how to set up SAML federation using an IdP, read AWS Federated Authentication with Active Directory Federation Services (ADFS). If you’re using IAM users, you can also request a session in AWS using AssumeRole and GetFederationToken APIs or using AssumeRoleWithWebIdentity API for applications that require access to AWS resources.

For session tags, you can use all of the above-mentioned APIs to pass tags into your AWS session based on your use case. For details on how to use these APIs to pass session tags, please visit Tags in AWS Sessions.
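
For example, here’s a minimal sketch of passing session tags through the AssumeRole API with the AWS CLI; the role ARN, session name, and tag values are illustrative.

# Assume a role and attach project and jobfunction session tags
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/MyProjectResources \
    --role-session-name example-session \
    --tags Key=project,Value=Automation Key=jobfunction,Value=SystemsEngineer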

What permissions do I need to use session tags?

To perform any action in AWS, developers need permissions. For example, to assume a role, your developers need sts:AssumeRole permission. Similarly with session tags, we’re introducing a new action, sts:TagSession, that is required to pass session tags in the session. Additionally, you can require and control session tags using existing AWS conditions:

  • Action sts:TagSession: Required to pass attributes as session tags when using the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, or GetFederationToken API. Add it to the role’s trust policy or the IAM user’s permissions policy, based on the API you are using to pass session tags.
  • Condition key aws:RequestTag: Use this condition key to require specific tags in the session. Supported by sts:TagSession.
  • Condition key aws:TagKeys: Use this condition key to control the tag keys that are allowed in the session. Supported by sts:TagSession.
  • Condition key aws:PrincipalTag: Use this condition key in IAM policies to compare tags on AWS resources with tags on the principal. Supported by all actions across all services (an AWS global condition key).

Note: The list above explains only the additional use cases that the action and condition keys now support. Support for existing use cases, such as IAM users and roles, remains unchanged. For details please visit AWS Global Condition Keys.

Now, I’ll show you how to create fine-grained permissions based on user attributes from your directory and how permissions automatically apply based on attributes when employees switch projects within your organization.

Example: Grant employees access to their project resources in AWS based on their job function

Consider a scenario where your organization deployed AWS EC2 and RDS instances in your AWS account for your company’s production web applications. Your systems engineers manage the EC2 instances and database engineers manage the RDS instances. They both access AWS by federating into your AWS account from a SAML IdP. Your organization’s security policy requires employees to have access to manage only the resources related to their job function and project they work on.

To meet these requirements, your cloud administrator, Michelle, implements attribute-based access control (ABAC) using the jobfunction and project attributes as session tags by following three steps:

  1. Michelle tags all existing EC2 and RDS instances with the corresponding project attribute.
  2. She creates a MyProjectResources IAM role and an IAM permission policy for this role such that employees can access resources with their jobfunction and project tags.
  3. She then configures your SAML IdP to pass the jobfunction and project attributes in the federated session when employees federate into AWS using the MyProjectResources role.

Let’s have a look at these steps in detail.

Step 1: Tag all the project resources

Michelle tags all the project resources with the appropriate project tag. This is important since she wants to create permission rules based on this tag to implement ABAC. To learn how to tag resources in EC2 and RDS, read tagging your Amazon EC2 resources and tagging Amazon RDS resources.

Step 2: Create an IAM role with permissions based on attributes

Next, Michelle creates an IAM role called MyProjectResources using the AWS Management Console or CLI. This is the role that your systems engineers and database engineers will assume when they federate into AWS to access and manage the EC2 and RDS instances respectively. To grant this role permissions, Michelle creates the following IAM policy and attaches it to the MyProjectResources role.

IAM Permissions Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": "rds:DescribeDBInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"rds:RebootDBInstance",
				"rds:StartDBInstance",
				"rds:StopDBInstance"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "DatabaseEngineer",
					"rds:db-tag/project": "${aws:PrincipalTag/project}"
				}
			}
		},
		{
			"Effect": "Allow",
			"Action": "ec2:DescribeInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"ec2:StartInstances",
				"ec2:StopInstances",
				"ec2:RebootInstances",
				"ec2:TerminateInstances"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "SystemsEngineer",
					"ec2:ResourceTag/project": "${aws:PrincipalTag/project}"
				}
			}
		}
	]
}

In the policy above, Michelle allows specific actions related to EC2 and RDS that the systems engineers and database engineers need to manage their project instances. In the condition element of the policy statements, Michelle adds a condition based on the jobfunction and project attributes to ensure engineers can access only the instances which belong to their jobfunction and have a matching project tag.

To ensure your systems engineers and database engineers can assume this role when they federate into AWS from your IdP, Michelle modifies the role’s trust policy to trust your SAML IdP as shown in the policy statement below. Since we also want to include session tags when engineers federate in, Michelle adds the new action sts:TagSession in the policy statement as shown below. She also adds a condition that requires the jobfunction and project attributes to be included as session tags when engineers assume this role.

Role Trust Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"Federated": "arn:aws:iam::999999999999:saml-provider/ExampleCorpProvider"
			},
			"Action": [
				"sts:AssumeRoleWithSAML",
				"sts:TagSession"
			],
			"Condition": {
				"StringEquals": {
					"SAML:aud": "https://signin.aws.amazon.com/saml"
				},
				"StringLike": {
					"aws:RequestTag/project": "*",
					"aws:RequestTag/jobfunction": [
						"SystemsEngineer",
						"DatabaseEngineer"
					]
				}
			}
		}
	]
}

Step 3: Configuring your SAML IdP to pass the jobfunction and project attributes as session tags

Once Michelle creates the role and permissions policy in AWS, she configures her SAML IdP to include the jobfunction and project attributes as session tags in the SAML assertion when engineers federate into AWS using this role.

To pass attributes as session tags in the federated session, the SAML assertion must contain the attributes with the following prefix:

https://aws.amazon.com/SAML/Attributes/PrincipalTag

The example given below shows a part of the SAML assertion generated from my IdP with two attributes (project:Automation and jobfunction:SystemsEngineer) that we want to pass as session tags.


<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:project">
    <AttributeValue>Automation</AttributeValue>
</Attribute>
<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:jobfunction">
    <AttributeValue>SystemsEngineer</AttributeValue>
</Attribute>

Note: This sample only contains the new properties in the SAML assertion. There are additional required fields in the SAML assertion that must be present to successfully federate into AWS. To learn more about creating SAML assertions with session tags, visit configuring SAML assertions for the authentication response.

AWS identity partners such as Ping Identity, OneLogin, Auth0, ForgeRock, IBM, Okta, and RSA have validated the end-to-end experience for this new capability with their identity solutions, and we look forward to additional partners validating this capability. To learn more about how to use these identity providers for configuring session tags, please visit integrating third-party SAML solution providers with AWS. If you are using Active Directory Federation Services (ADFS) for SAML federation with AWS, then please visit Configuring ADFS to start using session tags for attribute-based access control.

Now, when your systems engineers and database engineers federate into AWS using the MyProjectResources role, they only get access to their project resources based on the project and jobfunction attributes passed in their federated session. Session tags enabled Michelle to define unique permissions based on user attributes without having to create and manage multiple roles and policies. This helps simplify permissions management in her company.

Permissions automatically apply when employees change projects

Consider the same example with a scenario where your systems engineer, Bob, switches from the automation project to the integration project. Due to this switch, Michelle sets Bob’s project attribute in the IdP to integration. Now, the next time Bob federates into AWS, he automatically has access to resources in the integration project. Using session tags, permissions apply automatically when you update attributes or create new AWS resources with the appropriate attributes, without requiring any permissions updates in AWS.

Track user identity using session tags

When developers federate into AWS with session tags, AWS CloudTrail logs these tags to make it easier for security administrators to track the user identity of the session. To view session tags in CloudTrail, your administrator Michelle looks for the AssumeRoleWithSAML event in the eventName filter of CloudTrail. In the example below, Michelle has configured the SAML IdP to pass three session tags: project, jobfunction, and userID. When developers federate into your account, Michelle views the AssumeRoleWithSAML event in CloudTrail to track the user identity of the session using the session tags project, jobfunction, and userID as shown below:
 

Figure 1: Search for the logged events

Figure 1: Search for the logged events

Note: You can use session tags in conjunction with the instructions to track account activity to its origin using AWS CloudTrail to trace the identity of the session.
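
If you’d rather search from the command line, the following sketch uses the CloudTrail lookup-events call to filter on the same event name; you can then inspect each returned event’s JSON for the session tags.

# Find recent AssumeRoleWithSAML events in the current Region
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRoleWithSAML \
    --max-results 10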

Summary

You can use session tags to rely on employee attributes from your corporate directory to create fine-grained permissions at scale in AWS and simplify your permissions management workflows. To learn more about session tags, please visit Tags in AWS Sessions.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon IAM forum.

Want more AWS Security news? Follow us on Twitter.

Sulay Shah

Sulay is a Senior Product Manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Use attribute-based access control with AD FS to simplify IAM permissions management

Post Syndicated from Louay Shaat original https://aws.amazon.com/blogs/security/attribute-based-access-control-ad-fs-simplify-iam-permissions-management/

AWS Identity and Access Management (IAM) allows customers to provide granular access control to resources in AWS. One approach to granting access to resources is to use attribute-based access control (ABAC) to centrally govern and manage access to your AWS resources across accounts. Using ABAC lets you scale your authorization strategy by granting access to groups of resources, as specified by tags, instead of managing long lists of individual resources. The new ability to include tags in sessions—combined with the ability to tag IAM users and roles—means that you can now incorporate user attributes from your AD FS environment as part of your tagging and authorization strategy.

In other words, you can use ABAC to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, as an administrator you can use a single IAM policy that grants developers in your organization access to AWS resources that match the developers’ project tag. As the team of developers adds resources to projects, permissions are automatically applied based on attributes (tags, in this case). As a result, each new resource that gets added requires no update to the IAM permissions policy.

In this blog post, I walk you through how to enable AD FS to pass tags as part of the SAML 2.0 token, so that you can enable ABAC for your AWS resources.

AD FS federated authentication process

The following diagram describes the process that a user follows to authenticate to AWS by using Active Directory and AD FS as the identity provider:
 

Figure 1: AD FS federation to AWS

  1. A corporate user accesses the corporate Active Directory Federation Services (AD FS) portal sign-in page and provides their Active Directory authentication credentials.
  2. AD FS authenticates the user against Active Directory.
  3. Active Directory returns the user’s information, including Active Directory group membership information.
  4. AD FS dynamically builds a list of Amazon Resource Names (ARNs) for IAM Roles in one or more AWS accounts; these mappings are defined in advance by the administrator and rely on user attributes and Active Directory group memberships.
  5. AD FS sends a signed SAML 2.0 token to the user’s browser with a redirect to post the token to AWS Security Token Service (STS), including the attributes that you define in the claim rules.
  6. Temporary credentials are returned using STS AssumeRoleWithSAML.
  7. The user is authenticated and provided access to the AWS Management Console.

Prerequisites

Attribute-based access control in AWS relies on the use of tags for access-control decisions, so it’s important to have a tagging strategy in place for your resources. Please see AWS Tagging Strategies.

Implementing ABAC enables organizations to enhance the use of tags from an operational and billing construct to a security construct. Ensuring that tagging is enforced and secure is essential to an enterprise-wide strategy.

For more information about enforcing a tagging policy, see the blog post Enforce Centralized Tag Compliance Using AWS Service Catalog, DynamoDB, Lambda, and CloudWatch Events.

AD FS session tagging setup

After you’ve set up AD FS federation to AWS, you can enable additional attributes to be sent as part of the SAML token. For information about how to enable AD FS, see the blog post AWS Federated Authentication with Active Directory Federation Services (AD FS).

Follow these steps to send standard Active Directory attributes to AWS in the SAML token:

  1. Open Server Manager, choose Tools, then choose AD FS Management.
  2. Under Relying Party Trusts, choose AWS.
  3. Choose Edit Claim Issuance Policy, choose Add Rule, choose Send LDAP Attributes as Claims, then choose Next.
  4. On the Edit Rule page, add the requested details.
    For example, to create a Department Attribute claim rule, add the following details:

    • Claim rule name: Department Attribute
    • Attribute Store: Active Directory
    • LDAP Attribute: Department
    • Outgoing Claim Type:
      https://aws.amazon.com/SAML/Attributes/PrincipalTag:<department> (where <department> is the tag that will be passed in the session)

     

    Figure 2: Claim Rule

  5. Repeat the previous step for each attribute you want to send, modifying the details as necessary. For more information about how to configure claim rules, see Configuring Claim Rules in the Windows Server 2012 AD FS Deployment Guide.

Send custom attributes to AWS as part of a federated session

Session tags in AWS can be derived from Active Directory custom attributes as well as standard attributes, as we demonstrated in the example above. For more information on custom attributes, please see How to Create a Custom Attribute in Active Directory.

To ensure that your setup is correct, you should confirm that AD FS is configured correctly:

  1. Perform an identity provider (IdP) initiated authentication. Go to the following URL: https://<your.domain.name>/adfs/ls/idpinitiatedsignon.aspx, where <your.domain.name> is the DNS name of your AD FS server.
  2. Select Sign in to one of the following sites, then choose AWS from the dropdown:
     
    Figure 3: Choose AWS

  3. Choose Sign in, then enter your credentials.
  4. After you’ve successfully logged into AWS, navigate to AWS CloudTrail events within 15 minutes of your login event.
  5. Filter on AssumeRoleWithSAML, then locate your login event and look under principalTags to ensure you see the tags that you configured.
     
    Figure 4: CloudTrail – AssumeRoleWithSAML Event

    After you see the tags were sent, you’re ready to use the tags to build your IAM policies.

The following is an example policy that uses multiple tags.

Example: grant IAM users access to your AWS resources by using tags

In this example, I will assume that two tags are passed from AD FS to enable you to build your ABAC tagging strategy: Project and Department. This example assumes that you have multiple teams of developers who need permissions to start and stop specific Amazon Elastic Compute Cloud (Amazon EC2) instances, based on their project allocation. Because these EC2 instances are part of Amazon EC2 Auto Scaling groups, they come and go depending on scaling conditions; it would therefore be impractical to write policies that refer to lists of individual EC2 instances, and writing policies against the tags on the EC2 instances is a more manageable approach. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element, and all resources in the Resource element of the policy. In the Condition element of the policy, I use two conditions:

  • A matching statement where the resource tag project (ec2:ResourceTag/project) matches the principal tag key aws:PrincipalTag/project.
  • A matching statement where the resource tag department (ec2:ResourceTag/department) matches the principal tag key aws:PrincipalTag/department.

This ensures that the principal is able to start and stop an instance only if the project and department tags match value of the tags on the principal. Attaching this policy to your developer roles or groups simplifies permissions management, because you only need to manage a single policy for all your development teams that require permissions to start and stop instances, and you can rely on tag values to specify the resources.


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"ec2:StartInstances",
				"ec2:StopInstances"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"ec2:ResourceTag/project": "${aws:PrincipalTag/project}",
					"ec2:ResourceTag/department": "${aws:PrincipalTag/department}"
				}
			}
		}
	]
}

This policy will ensure that the user can only start and stop EC2 instances for the resources that are assigned to their department and project.

Conclusion

In this post, I’ve shown how you can enable AD FS to pass tags as part of the SAML token, so that you can enable ABAC for your AWS resources to simplify permissions management at scale.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Single Sign-On forum.

Want more AWS Security news? Follow us on Twitter.

Louay Shaat

Louay is a Solutions Architect with AWS based out of Melbourne. He spends his days working with customers, from startups to the largest of enterprises helping them build cool new capabilities and accelerating their cloud journey. He has a strong focus on Security and Automation helping customers improve their security, risk, and compliance in the cloud.

Continuously monitor unused IAM roles with AWS Config

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/continuously-monitor-unused-iam-roles-aws-config/

Developing in the cloud encourages you to iterate frequently as your applications and resources evolve. You should also apply this iterative approach to the AWS Identity and Access Management (IAM) roles you create. Periodically ensuring that all the resources you’ve created are still being used can reduce operational complexity by eliminating the need to track unnecessary resources. It also improves security: identifying unused IAM roles helps reduce the potential for improper or unintended access to your critical infrastructure and workloads.

The IAM API now provides you with information about when a role has last been used to make an AWS request. In this post, I demonstrate how you can identify inactive roles using role last used information. Additionally, I’ll show you how to implement continuous monitoring of role activity using AWS Config.

AWS services and features used

This solution uses the following services and features:

  • AWS IAM: This service enables you to manage access to AWS services and resources securely. It provides an API to retrieve the timestamp of an IAM role’s last use when making an AWS request, and the region where the request was made.
  • AWS Config: This service allows you to continuously monitor and record your AWS resource configurations. It will periodically trigger your AWS Config rule (see next bullet) and will record compliance status.
  • AWS Config Rule: This resource represents your desired configuration settings for specific AWS resources or for an entire AWS account. This resource will check the compliance status of your AWS resources. You can provide the logic that determines compliance, which enables you to mark IAM roles in use as “compliant” and inactive roles as “non-compliant.”
  • AWS Lambda: This service lets you run code without provisioning or managing servers. Lambda will be used to execute API calls to retrieve role last used information and to provide compliance evaluations to AWS Config.
  • Amazon Simple Storage Service (Amazon S3): This is a highly available and durable object store. You’ll use it to store your Lambda code in .zip format prior to deploying your Lambda function.
  • AWS CloudFormation: This service provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. You’ll use it to provision all the resources described in this solution.

Solution logic

This solution identifies unused IAM roles within your account. First, you’ll identify unused roles based on a time window (last number of days) you set. I use 60 days in my example, but this range is configurable. Second, you’ll use AWS Lambda to process all the roles in your account. Third, you’ll determine if they’re compliant based on their creation time and role last used information. Last, you’ll send your evaluations to AWS Config, which records the results and reports if each role is compliant or not. If not, you can take steps to remediate, such as denying all actions that the role can perform.

Prerequisites

This solution has the following prerequisites:

Solution architecture

 

Figure 1: Solution architecture

As shown in the diagram, AWS Config (1) executes the AWS Config custom rule daily, and this frequency is configurable (2), which in turn invokes the Lambda function (3). The Lambda function enumerates each role and determines its creation date and role last used timestamp, both of which are provided via IAM’s GetAccountAuthorizationDetails API (4). When the Lambda function has determined the compliance of all your roles, the function returns the compliance results to AWS Config (5). AWS Config retains the history of compliance changes evaluated by the rule. If configured, compliance notifications can be sent to an Amazon Simple Notification Service (Amazon SNS) topic. Compliance status is viewable either in the AWS Management Console or through use of the AWS CLI or AWS SDK.

Deploying the solution

The resources for this solution are deployed through AWS CloudFormation. You must prepare the Lambda function’s source code for packaging before AWS CloudFormation can deploy the complete solution into your account.

Step 1: Prepare the Lambda deployment

First, make sure you’re running a *nix prompt (Linux, Mac, or Windows Subsystem for Linux). Follow the commands below to create an empty folder named iam-role-last-used where you’ll place your Lambda source code.


mkdir iam-role-last-used
cd iam-role-last-used
touch lambda_function.py

Note that the directory you create and the code it contains will later be compressed into a .zip file by the AWS CLI’s cloudformation package command. This command also uploads the deployment .zip file to your S3 bucket. The cloudformation deploy command will reference this bucket when deploying the solution.

Next, create a Lambda layer with the latest boto3 package. This ensures that your Lambda function is using an up-to-date boto3 SDK and allows you to control the dependencies in your function’s deployment package. You can do this by following steps 1 through 4 in these directions. Be sure to record the Lambda layer ARN that you create because you will use it later.

Finally, open the lambda_function.py file in your favorite editor or integrated development environment (IDE), and place the following code into the lambda_function.py file:


import boto3
from botocore.exceptions import ClientError
from botocore.config import Config
import datetime
import fnmatch
import json
import os
import re
import logging


logger = logging.getLogger()
logging.basicConfig(
    format="[%(asctime)s] %(levelname)s [%(module)s.%(funcName)s:%(lineno)d] %(message)s", datefmt="%H:%M:%S"
)
logger.setLevel(os.getenv('log_level', logging.INFO))

# Configure boto retries
BOTO_CONFIG = Config(retries=dict(max_attempts=5))

# Define the default resource to report to Config Rules
DEFAULT_RESOURCE_TYPE = 'AWS::IAM::Role'

CONFIG_ROLE_TIMEOUT_SECONDS = 60

# Set to True to get the lambda to assume the Role attached on the Config service (useful for cross-account).
ASSUME_ROLE_MODE = False

# Evaluation strings for Config evaluations
COMPLIANT = 'COMPLIANT'
NON_COMPLIANT = 'NON_COMPLIANT'


# This gets the client after assuming the Config service role either in the same AWS account or cross-account.
def get_client(service, execution_role_arn):
    if not ASSUME_ROLE_MODE:
        return boto3.client(service)
    credentials = get_assume_role_credentials(execution_role_arn)
    return boto3.client(service, aws_access_key_id=credentials['AccessKeyId'],
                        aws_secret_access_key=credentials['SecretAccessKey'],
                        aws_session_token=credentials['SessionToken'],
                        config=BOTO_CONFIG
                        )


def get_assume_role_credentials(execution_role_arn):
    sts_client = boto3.client('sts')
    try:
        assume_role_response = sts_client.assume_role(RoleArn=execution_role_arn,
                                                      RoleSessionName="configLambdaExecution",
                                                      DurationSeconds=CONFIG_ROLE_TIMEOUT_SECONDS)
        return assume_role_response['Credentials']
    except ClientError as ex:
        if 'AccessDenied' in ex.response['Error']['Code']:
            ex.response['Error']['Message'] = "AWS Config does not have permission to assume the IAM role."
        else:
            ex.response['Error']['Message'] = "InternalError"
            ex.response['Error']['Code'] = "InternalError"
        raise ex


# Validates the role pathname whitelist as passed via AWS Config parameters and returns a list of pipe-separated patterns.
def validate_whitelist(unvalidated_role_pattern_whitelist):
    # Names of users, groups, roles must be alphanumeric, including the following common
    # characters: plus (+), equal (=), comma (,), period (.), at (@), underscore (_), and hyphen (-).

    if not unvalidated_role_pattern_whitelist:
        return None

    # Search for any character outside the allowed set; raise if one is found.
    regex = re.compile('[^-a-zA-Z0-9+=,.@_/|*]')
    if regex.search(unvalidated_role_pattern_whitelist):
        raise ValueError("[Error] Provided whitelist has invalid characters")

    return unvalidated_role_pattern_whitelist.split('|')


# This uses Unix filename pattern matching (as opposed to regular expressions), as documented here:
# https://docs.python.org/3.7/library/fnmatch.html.  Please note that if using a wildcard, e.g. "*", you should use
# it sparingly/appropriately.
# If the rolename matches the pattern, then it is whitelisted
def is_whitelisted_role(role_pathname, pattern_list):
    if not pattern_list:
        return False

    # If role_pathname matches pattern, then return True, else False
    # eg. /service-role/aws-codestar-service-role matches pattern /service-role/*
    # https://docs.python.org/3.7/library/fnmatch.html
    for pattern in pattern_list:
        if fnmatch.fnmatch(role_pathname, pattern):
            # whitelisted
            return True

    # not whitelisted
    return False


# Form an evaluation as a dictionary. Suited to report on scheduled rules.  More info here:
#   https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/config.html#ConfigService.Client.put_evaluations
def build_evaluation(resource_id, compliance_type, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=None):
    evaluation = {}
    if annotation:
        evaluation['Annotation'] = annotation
    evaluation['ComplianceResourceType'] = resource_type
    evaluation['ComplianceResourceId'] = resource_id
    evaluation['ComplianceType'] = compliance_type
    evaluation['OrderingTimestamp'] = notification_creation_time
    return evaluation


# Determine when a role was last used to make an AWS request and build the corresponding evaluation
def determine_last_used(role_name, role_last_used, max_age_in_days, notification_creation_time):

    last_used_date = role_last_used.get('LastUsedDate', None)
    used_region = role_last_used.get('Region', None)

    if not last_used_date:
        compliance_result = NON_COMPLIANT
        reason = "No record of usage"
        logger.info(f"NON_COMPLIANT: {role_name} has never been used")
        return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)


    days_unused = (datetime.datetime.now() - last_used_date.replace(tzinfo=None)).days

    if days_unused > max_age_in_days:
        compliance_result = NON_COMPLIANT
        reason = f"Was used {days_unused} days ago in {used_region}"
        logger.info(f"NON_COMPLIANT: {role_name} has not been used for {days_unused} days, last use in {used_region}")
        return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)

    compliance_result = COMPLIANT
    reason = f"Was used {days_unused} days ago in {used_region}"
    logger.info(f"COMPLIANT: {role_name} used {days_unused} days ago in {used_region}")
    return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)


# Returns a list of dicts, each of which has authorization details of one role.  More info here:
#   https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iam.html#IAM.Client.get_account_authorization_details
def get_role_authorization_details(iam_client):

    roles_authorization_details = []
    roles_list = iam_client.get_account_authorization_details(Filter=['Role'])

    while True:
        roles_authorization_details += roles_list['RoleDetailList']
        if 'Marker' in roles_list:
            roles_list = iam_client.get_account_authorization_details(Filter=['Role'], MaxItems=100, Marker=roles_list['Marker'])
        else:
            break

    return roles_authorization_details


# Check the compliance of each role by determining if its last use is older than max_days_for_last_used
def evaluate_compliance(event, context):

    # Initialize our AWS clients
    iam_client = get_client('iam', event["executionRoleArn"])
    config_client = get_client('config', event["executionRoleArn"])

    # List of resource evaluations to return back to AWS Config
    evaluations = []

    # List of dicts of each role's authorization details as returned by boto3
    all_roles = get_role_authorization_details(iam_client)

    # Timestamp of when AWS Config triggered this evaluation
    notification_creation_time = str(json.loads(event['invokingEvent'])['notificationCreationTime'])

    # ruleParameters is received from AWS Config's user-defined parameters
    rule_parameters = json.loads(event["ruleParameters"])

    # Maximum allowed days that a role can be unused, or has been last used for an AWS request
    max_days_for_last_used = int(os.environ.get('max_days_for_last_used', '60'))
    if 'max_days_for_last_used' in rule_parameters:
        max_days_for_last_used = int(rule_parameters['max_days_for_last_used'])

    whitelisted_role_pattern_list = []
    if 'role_whitelist' in rule_parameters:
        whitelisted_role_pattern_list = validate_whitelist(rule_parameters['role_whitelist'])

    # Iterate over all our roles.  If a role's age in days is <= max_days_for_last_used, it is compliant
    for role in all_roles:

        role_name = role['RoleName']
        role_path = role['Path']
        role_creation_date = role['CreateDate']
        role_last_used = role['RoleLastUsed']
        role_age_in_days = (datetime.datetime.now() - role_creation_date.replace(tzinfo=None)).days

        if is_whitelisted_role(role_path + role_name, whitelisted_role_pattern_list):
            compliance_result = COMPLIANT
            reason = "Role is whitelisted"
            evaluations.append(
                build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason))
            logger.info(f"COMPLIANT: {role} is whitelisted")
            continue

        if role_age_in_days <= max_days_for_last_used:
            compliance_result = COMPLIANT
            reason = f"Role age is {role_age_in_days} days"
            evaluations.append(
                build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason))
            logger.info(f"COMPLIANT: {role_name} - {role_age_in_days} is newer or equal to {max_days_for_last_used} days")
            continue

        evaluation_result = determine_last_used(role_name, role_last_used, max_days_for_last_used, notification_creation_time)
        evaluations.append(evaluation_result)

    # Iterate over our evaluations 100 at a time, as put_evaluations only accepts a max of 100 evals.
    evaluations_copy = evaluations[:]
    while evaluations_copy:
        config_client.put_evaluations(Evaluations=evaluations_copy[:100], ResultToken=event['resultToken'])
        del evaluations_copy[:100]

Here’s how the above code works. The AWS Config custom rule invokes the Lambda function, calling the evaluate_compliance() method. evaluate_compliance() does the following:

  1. Retrieves information on all roles from IAM using the GetAccountAuthorizationDetails API as mentioned previously. This includes each role’s creation date and role last used timestamp.
  2. Marks each role as compliant if the role name matches one of the patterns in your whitelisted_role_pattern_list. This pattern list is passed to your rule via a user-configurable AWS CloudFormation parameter named RolePatternWhitelist. “Whitelisting roles,” below, provides instructions about how to do this.
  3. Marks each role as compliant if the age of the role in days (role_age_in_days) is less than or equal to the parameter MaxDaysForLastUsed (max_days_for_last_used). This is set via a user-configurable parameter in your CloudFormation stack. You’ll use this parameter to set the time window for how long a role can be inactive.
  4. If neither of the above conditions are met, then determine_last_used() is called, and each role will be marked as non-compliant if days_unused is greater than max_age_in_days.
  5. Finally, evaluate_compliance() calls put_evaluations() against AWS Config to store your evaluations of each role.
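
Before deploying, you can sanity-check the handler locally. Below is a sketch of the event shape that AWS Config passes to a scheduled custom rule; all values are illustrative placeholders of my own, and actually invoking the handler would make real IAM and AWS Config calls against your account:

import json

sample_event = {
    "invokingEvent": json.dumps({
        "messageType": "ScheduledNotification",
        "notificationCreationTime": "2019-07-15T00:00:00.000Z"
    }),
    "ruleParameters": json.dumps({
        "max_days_for_last_used": "60",
        "role_whitelist": "/breakglass-role|/security-*"
    }),
    "resultToken": "example-result-token",                            # placeholder
    "executionRoleArn": "arn:aws:iam::111122223333:role/config-role"  # placeholder
}

# evaluate_compliance(sample_event, None)  # uncomment to run against your account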

Step 2: Deploy the AWS CloudFormation template

Next, create an AWS CloudFormation template file named iam-role-last-used.yml. This template uses the AWS Serverless Application Model (AWS SAM), which is an extension of CloudFormation. AWS SAM simplifies the deployment so that you don’t have to manually upload your deployment .zip file to your Amazon S3 bucket. To ensure that your template knows the location of your code .zip file, place the file on the same directory level as the iam-role-last-used directory that you created above. Then copy and paste the code below and save it to the iam-role-last-used.yml file.


AWSTemplateFormatVersion: '2010-09-09'
Description: "Creates an AWS Config rule and Lambda to check all roles' last used compliance"
Transform: 'AWS::Serverless-2016-10-31'
Parameters:

  MaxDaysForLastUsed:
    Description: Checks the number of days allowed for a role to not be used before being non-compliant
    Type: Number
    Default: 60
    MaxValue: 365

  NameOfSolution:
    Type: String
    Default: iam-role-last-used
    Description: The name of the solution - used for naming of created resources

  RolePatternWhitelist:
    Description: Pipe separated whitelist of role pathnames using simple pathname matching
    Type: String
    Default: ''
    AllowedPattern: '[-a-zA-Z0-9+=,.@_/|*]+|^$'

  LambdaLayerArn:
    Type: String
    Description: The ARN for the Lambda Layer you will use.
  
Resources:
  LambdaInvokePermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: CheckRoleLastUsedLambda
    Properties: 
      FunctionName: !GetAtt CheckRoleLastUsedLambda.Arn
      Action: lambda:InvokeFunction
      Principal: config.amazonaws.com
      SourceAccount: !Ref 'AWS::AccountId'

  LambdaExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Sub '${NameOfSolution}-${AWS::Region}'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: /
      Policies:
      - PolicyName: !Sub '${NameOfSolution}'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - config:PutEvaluations
            Resource: '*'
          - Effect: Allow
            Action:
            - iam:GetAccountAuthorizationDetails
            Resource: '*'
          - Effect: Allow
            Action:
            - logs:CreateLogStream
            - logs:PutLogEvents
            Resource:
            - !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:*:log-group:/aws/lambda/${NameOfSolution}:log-stream:*'

  CheckRoleLastUsedLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: "Checks IAM roles' last used info for AWS Config"
      FunctionName: !Sub '${NameOfSolution}'
      Handler: lambda_function.evaluate_compliance
      MemorySize: 256
      Role: !GetAtt LambdaExecutionRole.Arn
      Runtime: python3.7
      Timeout: 300
      CodeUri: ./iam-role-last-used
      Layers:
      - !Ref LambdaLayerArn

  LambdaLogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties: 
      LogGroupName: !Sub '/aws/lambda/${NameOfSolution}'
      RetentionInDays: 30

  ConfigCustomRule:
    Type: 'AWS::Config::ConfigRule'
    DependsOn:
    - LambdaInvokePermission
    - LambdaExecutionRole
    Properties:
      ConfigRuleName: !Sub '${NameOfSolution}'
      Description: Checks the number of days that an IAM role has not been used to make a service request. If the number of days exceeds the specified threshold, it is marked as non-compliant.
      InputParameters: !Sub '{"role_whitelist":"${RolePatternWhitelist}","max_days_for_last_used":"${MaxDaysForLastUsed}"}'
      Source: 
        Owner: CUSTOM_LAMBDA
        SourceDetails: 
        - EventSource: aws.config
          MaximumExecutionFrequency: TwentyFour_Hours
          MessageType: ScheduledNotification
        SourceIdentifier: !GetAtt CheckRoleLastUsedLambda.Arn

For your reference, below is a summary of the template.

  • Parameters (these are user-configurable variables):
    • MaxDaysForLastUsed—maximum number of days allowed for a role that has not been used to make an AWS request before becoming non-compliant
    • NameOfSolution—the name of the solution, used for naming of created resources
    • RolePatternWhitelist—a pipe (“|”) separated whitelist of role pathnames using simple pathname matching (see Whitelisting roles below)
    • LambdaLayerArn—the unique ARN for your Lambda layer
  • Resources (these are the AWS resources that will be created within your account):
    • LambdaInvokePermission—allows AWS Config to invoke your Lambda function
    • LambdaExecutionRole—the role and permissions that Lambda will assume to process your roles. The policies assigned to this role allow you to perform the iam:GetAccountAuthorizationDetails, config:PutEvaluations, logs:CreateLogStream, and logs:PutLogEvents actions. The PutEvaluations action allows you to send evaluation results back to AWS Config. The CreateLogStream and PutLogEvents actions allow you to write the Lambda execution logs to AWS CloudWatch Logs.
    • CheckRoleLastUsedLambda—defines your Lambda function and its attributes
    • LambdaLogGroup—logs from Lambda will be written to this CloudWatch Log Group
    • ConfigCustomRule—defines your custom AWS Config rule and its attributes

With the CloudFormation template you created above, use the AWS CLI’s cloudformation package command to zip the deployment package and upload it to the S3 bucket that you specify, as shown below. Make sure to replace <YOUR S3 BUCKET> with your bucket name only. Do not include the s3:// prefix:


aws cloudformation package --region <YOUR REGION> --template-file iam-role-last-used.yml \
--s3-bucket <YOUR S3 BUCKET> \
--output-template-file iam-role-last-used-transformed.yml

This will create the file iam-role-last-used-transformed.yml, which adds a reference to the S3 bucket and the pathname needed by CloudFormation to deploy your Lambda function.

Finally, deploy the solution into your AWS account using the cloudformation deploy command below. You can provide different values for NameOfSolution, MaxDaysForLastUsed, or RolePatternWhitelist by using the --parameter-overrides option. Otherwise, defaults will be used. These are specified at the top of the AWS CloudFormation template pasted above, under the Parameters section.


aws cloudformation deploy --region <YOUR REGION> --template-file iam-role-last-used-transformed.yml \
--stack-name iam-role-last-used \
--parameter-overrides NameOfSolution='iam-role-last-used' \
MaxDaysForLastUsed=60 \
RolePatternWhitelist='/breakglass-role|/security-*' \
LambdaLayerArn='<YOUR LAMBDA LAYER ARN>' \
--capabilities CAPABILITY_NAMED_IAM

The deployment is complete after the AWS CLI indicates success. This typically takes only a few minutes:


Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - iam-role-last-used

Step 3: View your findings

Now that your deployment is complete, you can view your compliance findings by going to the AWS Config console.

  1. Select the same region where you deployed the CloudFormation template.
  2. Select Rules in the left pane, which brings up the current list of rules in your account.
  3. Select the iam-role-last-used rule to view the rule’s details, as shown in Figure 2.

When a successful evaluation is indicated in the Overall rule status field, the compliance evaluation is complete. You may need to wait a few minutes for the function to complete successfully as results may not be available yet. You can periodically refresh your web browser to check for completion.
 

Figure 2: AWS Config custom rule details

After the rule completes its evaluations of your roles, you’ll be able to view your compliance results on the same page. In the screenshot below, you can see that there are multiple non-compliant roles. You can switch between viewing compliant and non-compliant resources by selecting the dropdown menu under Compliance status.
 

Figure 3: Viewing the compliance status

For more insight, you can hover over the “i” symbol, which provides additional information about the role’s non-compliant status (see Figure 4).
 

Figure 4: Hover over the information icon

Step 4: Export a report of your compliance

Once a successful evaluation has completed, you may want to create an exportable report of compliance. You can use the AWS CLI to programmatically script and automatically generate reports for your application, infrastructure, and security teams. They can use these reports to review non-compliant roles and take action if the role is no longer needed. The AWS CLI command below demonstrates how you can achieve this. Note that the command below encompasses a single line:

aws configservice get-compliance-details-by-config-rule --config-rule-name iam-role-last-used --output text --query 'EvaluationResults[*].{A:EvaluationResultIdentifier.EvaluationResultQualifier.ResourceId,B:ComplianceType,C:Annotation}'

The output is tab-delimited and will be similar to the lines below. The first column displays the role name. The second column states the compliance status. The last column explains the reason for the compliance status:

AdminRole   COMPLIANT      Was last used in us-west-2 46 days ago
Ec2DevRole  NON_COMPLIANT  No record of usage
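
If you’d rather script the report in Python, here is a minimal boto3 sketch (not part of the original walkthrough) that pages through the same results using NextToken:

import boto3

config = boto3.client('config')

kwargs = {'ConfigRuleName': 'iam-role-last-used'}
while True:
    resp = config.get_compliance_details_by_config_rule(**kwargs)
    for result in resp['EvaluationResults']:
        qualifier = result['EvaluationResultIdentifier']['EvaluationResultQualifier']
        # Role name, compliance status, and the annotation explaining the status
        print(qualifier['ResourceId'], result['ComplianceType'],
              result.get('Annotation', ''), sep='\t')
    if 'NextToken' in resp:
        kwargs['NextToken'] = resp['NextToken']
    else:
        break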

Remediation

Now that you have a report of non-compliant roles, you must decide what to do with them. If your teams agree that a role is not necessary, the remediation can be to simply delete the role. If unsure, you can retain the role but deny it from performing any action. You can do this by attaching a new permissions policy that will deny all actions for all resources. Re-enabling the role would be as easy as removing the added policy. Otherwise, if the role is necessary but not frequently used, you can whitelist the role through the method below.
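
As a sketch of the “retain but deny” option described above (the role and policy names here are placeholders of my own), you can attach an inline deny-all policy with boto3 and remove it later to re-enable the role:

import json
import boto3

iam = boto3.client('iam')

deny_all = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
}

# Attach the deny-all policy to the unused role instead of deleting it.
iam.put_role_policy(
    RoleName='Ec2DevRole',                # placeholder unused role
    PolicyName='DenyAllPendingDeletion',  # placeholder policy name
    PolicyDocument=json.dumps(deny_all),
)

# To re-enable the role later, remove the inline policy:
# iam.delete_role_policy(RoleName='Ec2DevRole', PolicyName='DenyAllPendingDeletion')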

Whitelisting roles

Whitelisted roles will be reported as compliant by the custom rule even if left unused. You might have roles such as a security incident response or a break-glass role that require whitelisting.

The whitelist is supplied via the CloudFormation parameter RolePatternWhitelist and is stored as an AWS Config rule parameter. The syntax uses UNIX filename pattern matching. If you need to specify multiple patterns, you can use the | (pipe) character as a delimiter between each pattern. Each delimited pattern will then be matched against the role name, including the path. For example, if you wish to whitelist the breakglass-role, security-incident-response-role and security-audit-role roles, the whitelist patterns you provide to the AWS CloudFormation template might be:

/breakglass-role|/security-*

Important: Use wildcards (*) thoughtfully, as they will match anything.
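
To see how a given pattern behaves before you deploy it, you can exercise the same fnmatch logic that the rule’s Lambda function uses; the role pathnames below are examples only:

import fnmatch

patterns = '/breakglass-role|/security-*'.split('|')

for pathname in ['/breakglass-role', '/security-incident-response-role', '/app1-role']:
    whitelisted = any(fnmatch.fnmatch(pathname, p) for p in patterns)
    print(pathname, '-> whitelisted' if whitelisted else '-> evaluated normally')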

Enhancements

In this walkthrough, I’ve kept the architecture and code simple to make the solution easier to follow. You can further customize the solution through the following enhancements:

Conclusion

In this post, I’ve shown you how to use AWS IAM and AWS Config to implement a detective security control that provides visibility into your IAM roles and their last time of use. I’ve also shown how you can view the results in the AWS Management Console and export them using the AWS CLI. Finally, I’ve presented different options for remediation and a means to whitelist roles that are necessary but infrequently used. These techniques can augment your security and compliance program by preventing unintended access through your IAM roles.

Additional resources

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Michael Chan

Michael is a Developer Advocate for AWS Identity. Prior to this, he was a Professional Services Consultant who assisted customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Roland AbiHanna

Roland is a Sr. Solutions Architect with Amazon Web Services. He’s focused on helping enterprise customers realize their business needs through cloud solutions, specializing in DevOps and automation. Prior to AWS, Roland ran DevOps for a variety of start-ups in Europe and the Middle East. Outside of work, Roland enjoys hiking and searching for the perfect blend of hops, barley, and water.

Identify unused IAM roles and remove them confidently with the last used timestamp

Post Syndicated from Mathangi Ramesh original https://aws.amazon.com/blogs/security/identify-unused-iam-roles-remove-confidently-last-used-timestamp/

As you build on AWS, you create AWS Identity and Access Management (IAM) roles to enable teams and applications to use AWS services. As those teams and applications evolve, you might only rely on a subset of your original roles to meet your needs. This can leave unused roles in your AWS account. To help you identify these unused roles, IAM now reports the last-used timestamp that represents when a role was last used to make an AWS request. You or your security team can use this information to identify, analyze, and then confidently remove unused roles. This helps you improve the security posture of your AWS environments. Additionally, by removing unused roles, you can simplify your monitoring and auditing efforts by focusing only on roles that are in use. You can review when a role was last used to access your AWS environment in the IAM console, using the AWS Command Line Interface (AWS CLI), or AWS SDK.

In this post, I demonstrate how to identify and remove roles that your team or applications don’t use by viewing the last-used timestamp in the IAM console.  Before I share an example, I’ll describe the existing IAM APIs where we now also report the last-used timestamp:

  • Get-role: Returns role details, including the path, ARN (Amazon Resource Name), and trust policy. You can now use this API to retrieve the last-used timestamp (see the sketch after this list).
  • Get-account-authorization-details: Retrieves information about all the IAM users, groups, roles, and policies in your AWS account. You can now view the last-used timestamp along with the other role details.
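
As a quick sketch of the first API (the role name is a placeholder), the last-used information appears in the RoleLastUsed field of the get-role response:

import boto3

iam = boto3.client('iam')
role = iam.get_role(RoleName='MigrationRole')['Role']  # placeholder role name

# RoleLastUsed is empty if the role was not used within the 400-day tracking period.
last_used = role.get('RoleLastUsed', {})
print(last_used.get('LastUsedDate'), last_used.get('Region'))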

How to use the AWS Management Console to view last-used information for roles

Imagine you’re a system administrator for Example Inc. and your development team is working on a new application. To enable them to get started with AWS quickly, you create roles for the team and their application. As the application goes through final review, you learn the team and application now rely on a smaller set of roles to access AWS services. This leaves unused roles in your AWS accounts that you might want to remove. You’re going to check the last time each role made a request to AWS and use this information to determine whether the team is using the role. If they aren’t, you plan to remove it knowing the team doesn’t need it for the application.

To view role-last-used information in the IAM Console, select Roles in the IAM navigation pane, then look for the Last activity column (see Figure 1 below). This displays the number of days that have passed since each role made an AWS service request. AWS records last-used information for the trailing 400 days. This is referred to as the tracking period. You can sort the column to identify the roles your team has not used recently.

In the case of Example Inc., let’s say you want to get rid of any roles that have been inactive for 90 days or more. From the information in Figure 1, you see that your team is using ApplicationEC2Access, TestRole, and CodeDeployRole. You also see they haven’t used AdminAccess, EC2FullAccess, and InfraSetupRole in the last 90 days. You can now delete these roles confidently. (Last activity “None”, as seen for the AdminAccess role, means that the role was not used within the trailing 400-day tracking period to make any service request.)
 

Figure 1: "Last activity" column in IAM console

Figure 1: “Last activity” column in IAM console

While analyzing the last-used timestamp for each role, you notice that the MigrationRole role was last active two months ago. You want to gather more information about the role’s access patterns to determine whether you ought to delete it. To do this, select the name of the role. From the role detail page, navigate to the Access Advisor tab and investigate the list of accessed services and verify what the role was used for. Access advisor provides a report that displays a list of services and timestamps that indicate when the selected IAM principal last accessed each of the services that it has permissions to. Based on this report, you can decide to follow up with the development team to see if they still need this role. Thus, you have reduced the number of roles in your account from 9 to 6, making it easier to monitor active roles and restrict access to your AWS environments.
 

Figure 2: Access Advisor report

Summary

In this post, I showed you how to use role-last-used information to identify and remove unused roles. By removing unused roles, you can simplify monitoring and improve your security posture. To learn more about deleting roles, visit the deleting roles or instance profiles documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Mathangi Ramesh

Mathangi is the product manager for AWS Identity and Access Management. She enjoys talking to customers and working with data to solve problems. Outside of work, Mathangi is a fitness enthusiast and a Bharatanatyam dancer. She holds an MBA degree from Carnegie Mellon University.

New! Set permission guardrails confidently by using IAM access advisor to analyze service-last-accessed information for accounts in your AWS organization

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/set-permission-guardrails-using-iam-access-advisor-analyze-service-last-accessed-information-aws-organization/

You can use AWS Organizations to centrally govern and manage multiple accounts as you scale your AWS workloads. With AWS Organizations, central security administrators can use service control policies (SCPs) to establish permission guardrails that all IAM users and roles in the organization’s accounts adhere to. When teams and projects are just getting started, administrators may allow access to a broader range of AWS services to inspire innovation and agility. However, as developers and applications settle into common access patterns, administrators need to set permission guardrails to remove permissions for services that have not or should not be accessed by their accounts. Whether you’re just getting started with SCPs or have existing SCPs, you can now use AWS Identity and Access Management (IAM) access advisor to help you restrict permissions confidently.

IAM access advisor uses data analysis to help you set permission guardrails confidently by providing you with service-last-accessed information for accounts in your organization. By analyzing last-accessed information, you can determine the services not used by IAM users and roles. You can implement permission guardrails using SCPs that restrict access to those services. For example, you can identify services not accessed in an organizational unit (OU) for the last 90 days, create an SCP that denies access to these services, and attach it to the OU to restrict access to all IAM users and roles across the accounts in the OU. You can view service-last-accessed information for your accounts, OUs, and your organization using the IAM console in the account you used to create your organization. You can access this information programmatically using IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.
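
For instance, here is a minimal boto3 sketch of my own (the entity path is a placeholder) for retrieving the report for an OU programmatically; like the console view, this must run in the account you used to create your organization, with SCPs enabled:

import time
import boto3

iam = boto3.client('iam')

# The entity path has the form organization-id/root-id/ou-id; values are placeholders.
job = iam.generate_organizations_access_report(
    EntityPath='o-exampleorgid/r-exampleroot/ou-exampleouid'
)

# The report is generated asynchronously; poll until the job finishes.
while True:
    report = iam.get_organizations_access_report(JobId=job['JobId'])
    if report['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(2)

for detail in report.get('AccessDetails', []):
    print(detail['ServiceName'], detail.get('LastAuthenticatedTime', 'not accessed'))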

In this post, I first review the service-last-accessed information provided by IAM access advisor using the IAM console. Next, I walk through an example to demonstrate how you can use this information to remove permissions for services not accessed by IAM users and roles within your production OU by creating an SCP.

Use IAM access advisor to view service-last-accessed information using the AWS management console

Access advisor provides an access report that displays a list of services and the last-accessed timestamps for when an IAM principal accessed each service. To view the access report in the console, sign in to the IAM console using the account you used to create your organization. Additionally, you need to enable SCPs on your organization root to view the access report. You can view the service-last-accessed information in two ways. First, you can use the Organization activity view to review the service-last-accessed information for an organizational entity such as an account or OU. Second, you can use the SCP view to review the service-last-accessed information for services allowed by existing SCPs attached to your organizational entities.

The Organization activity view lists your OUs and accounts. You can select an OU or account to view the services that the entity is allowed to access and the service-last-accessed information for those services. This tells you which services have not been accessed in an organizational entity. Using this information, you can remove permissions for these services by creating a new SCP and attaching it to the organizational entity, or by updating an existing SCP attached to the entity.

The SCP view lists all the SCPs in your organization. You can select an SCP to view the services allowed by the SCP and the service-last-accessed information for those services. The service-last-accessed information is the last-accessed timestamp across all the organizational entities that the SCP is attached to. This tells you which services are allowed by the SCP but have not been accessed. Using this information, you can refine your existing permission guardrails to remove permissions for services that are not being accessed.

Figure 1 shows an example of the access report for an OU. You can see the service-last-accessed information for all services that IAM users and roles can access in all the accounts in ProductionOU. You can see that services such as AWS Ground Station and Amazon GameLift have not been used in the last year. You can also see that Amazon DynamoDB was last accessed in account Application1 10 days ago.
 

Figure 1: An example access report for an OU

Now that I’ve described how to view service-last-accessed information, I will walk through an example.

Example: Restrict access to services not accessed in production by creating an SCP

For this example, assume ExampleCorp uses AWS Organizations to organize their development, test, and production environments into organizational units (OUs). Alice is a central security administrator responsible for managing the accounts in the production OU for ExampleCorp. She wants to ensure that her production OU called ProductionOU has permissions to only the services that are required to run existing workloads. Currently, Alice hasn’t set any permission guardrails on her production OU. I will show you how you can help Alice review the service-last-accessed information for her production OU and set a permission guardrail confidently using an SCP to restrict access to services not accessed by ExampleCorp developers and applications in production.

Prerequisites

  1. Ensure that the SCP policy type is enabled for the organization. If you haven’t enabled SCPs, you can enable it for your organization root by following the steps mentioned in Enabling and Disabling a Policy Type on a Root.
  2. Ensure that your IAM roles or users have appropriate permissions to view the access report; you can do so by attaching the IAMAccessAdvisorReadOnly managed policy.

How to review service-last-accessed information for ProductionOU in the IAM console

In this section, you’ll review the service-last-accessed information using IAM access advisor to determine the services that have not been accessed across all the accounts in ProductionOU.

  1. Start by signing in to the IAM console in the account that you used to create the organization.
  2. In the left navigation pane, under the AWS Organizations section, select the Organization activity view.

    Note: Enabling the SCP policy type does not set any permission guardrails for your organization unless you start attaching SCPs to accounts and OUs in your organization.

  3. In the Organization activity view, select ProductionOU from the organization structure displayed on the console so you can review the service last accessed information across all accounts in that OU.
     
    Figure 2: Select ‘ProductionOU’ from the organizational structure

  4. Selecting ProductionOU opens the Details and activity tab, which displays the access report for this OU. In this example, I have no permission guardrail set on the ProductionOU, so the default FullAWSAccess SCP is attached, allowing the ProductionOU to have access to all services. The access report displays all AWS services along with their last-accessed timestamps across accounts in the OU.
     
    Figure 3: The service access report

  5. Review the access report for ProductionOU to determine services that have not been accessed across accounts in this OU. In this example, there are multiple accounts in ProductionOU. Based on the report, you can identify that AWS Ground Station and Amazon GameLift have not been used in 365 days. Using this information, you can confidently set a permission guardrail by creating and attaching a new SCP that removes permissions for these services from ProductionOU. You can use a different time period, such as 90 days or 6 months, to determine if a service is not accessed, based on your preference.
     
    Figure 4: Amazon GameLift and AWS Ground Station are not accessed

Create and attach a new SCP to ProductionOU in the AWS Organizations console

In this section, you’ll use the access insights you gained from using IAM access advisor to create and attach a new SCP to ProductionOU that removes permissions to Ground Station and GameLift.

  1. In the AWS Organizations console, select the Policies tab, and then select Create policy.
  2. In the Create new policy window, give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: ProductionGuardrail
    • Description: Restricts permissions to services not accessed in ProductionOU.
  3. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.
     
    Figure 5: SCP editor tool

  4. Next, add the services you want to restrict. Using the left panel, select the Ground Station and GameLift services. Denying access to services by using SCPs is a powerful action if those services are in use; from the service last-accessed information I reviewed in step 5 of the previous section, I know these services haven’t been used for more than 365 days, so it is safe to remove access to them. In this example, I’m not adding any resource or condition to my policy statement.
     
    Figure 6: Add the services you want to restrict

  5. Next, use the Resource policy element, which allows you to provide specific resources. In this example, I select the resource type as All Resources.
     
    Figure 7: Select resource type as “All Resources”

  6. Select the Create Policy button to create your policy. You can see the new policy in the Policies tab.
     
    Figure 8: The new policy on the “Policies” tab

  7. Finally, attach the policy to ProductionOU where you want to apply the permission guardrail.

Alice can now review the service-last-accessed information for the ProductionOU and set permission guardrails for her production accounts. This ensures that the permission guardrail Alice set for her production accounts provides permissions to only the services that are required to run existing workloads.

Summary

In this post, I reviewed how access advisor provides service-last-accessed information for AWS organizations. Then, I demonstrated how you can use the Organization activity view to review service-last-accessed information and set permission guardrails to restrict access only to the services that are required to run existing workloads. You can also retrieve service-last-accessed information programmatically. To learn more, visit the documentation for retrieving service last accessed information using APIs.

If you have comments about using IAM access advisor for your organization, submit them in the Comments section below. For questions related to reviewing the service last accessed information through the console or programmatically, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

Working backward: From IAM policies and principal tags to standardized names and tags for your AWS resources

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/working-backward-from-iam-policies-and-principal-tags-to-standardized-names-and-tags-for-your-aws-resources/

When organizations first adopt AWS, they have to make many decisions that will lay the foundation for their future footprint in the cloud. Part of this includes making decisions about the number of AWS accounts you choose to operate, but another fundamental task is constructing practical access control policies so that your application teams can’t affect each other’s resources within the same account. With AWS Identity and Access Management (IAM), you can customize granular access control policies that are appropriate for your organization, helping you follow security best practices such as separation-of-duties and least-privilege. As every organization is different, you’ll need to carefully consider what your cloud security policies are and how they relate to your cloud engineering teams. Things to consider include who should be authorized to perform which actions, how your teams operate with one another, and which IAM mechanisms are suitable for ensuring that only authorized access is allowed.

In this blog post, I’ll show you an approach that works backwards, starting with a set of customer requirements, then utilizing AWS features such as IAM conditions and principal tagging. Combined with an AWS resource naming and tagging strategy, this approach can help you meet your access control objectives. AWS recently enabled tags on IAM principals (users and roles), which allows you to create a single reusable policy that provides access based on the tags of the IAM principal. When you combine this feature with a standardized resource naming and tagging convention, you can craft a set of IAM roles and policies suitable for your organization.

AWS features used in this approach

To follow along, you should have a working knowledge of IAM and tagging, and familiarity with the following concepts:

Introducing Example Corporation

To illustrate the strategies I discuss, I’ll refer to a fictitious customer throughout my post: Example Corporation is a large organization that wants to use their existing Microsoft Active Directory (AD) as their identity store, with Active Directory Federation Services (AD FS) as the means to federate into their AWS accounts. They also have multiple business projects, some of which will need their own AWS accounts, and others that will share AWS accounts due to the dependencies of the applications within those projects. Each project has multiple application teams who do not need to access each other’s AWS resources.

Example Corporation’s access control requirements

Example Corporation doesn’t always dedicate a single AWS account to one team or one environment. Sometimes, multiple project teams work within the same account, and sometimes they have more than one environment in an account. Figure 1 shows how the Website Marketing and Customer Marketing project teams (each of which has multiple application teams) share two AWS accounts: a development and staging AWS account and a production AWS account. Although production has a dedicated AWS account, Example Corporation has decided that a shared development and staging account is acceptable.
 

Figure 1: AWS accounts shared by Example Corp’s teams

The development and staging environments share an AWS account, and the two teams do work closely together. All projects within an account will be allowed access to the read-only metadata of other resources, such as EC2 instance names, tags, and IAM information. However, each project team wants to prevent their application resources from being modified by the other team’s members.

Initial decisions for supporting shared account access control

Example Corporation decides to continue using their existing identity federation solution for access to AWS, as the existing processes for handling joiners, movers, and leavers can be extended to manage identities within AWS. They will enable this via Security Assertion Markup Language (SAML) provided by AD FS to allow Example Corporation’s AD users to access AWS by assuming IAM roles. Initially, they will create three IAM roles—project administrator, application administrator, and application operator—with additional roles to come later.

The company knows they need to implement access controls through IAM, and they’ve created an initial list of AWS services (EC2, RDS, S3, SNS, and Amazon CloudWatch) to secure. Infrastructure as code (IaC) is a new concept at Example Corporation, so they want to keep initial IAM roles and policies as simple as possible. IAM principal tags will help them reuse standard policies across accounts. Principal tags are global condition keys assigned to a user or role. They can be used within a condition to ensure that a new resource is tagged on creation with a value that matches your principal. They can also be used to verify that an existing resource has a matching tag prior to allowing an action against that resource.

Many, but not all, AWS services support tag-based authorization of AWS resources. For services that don’t support tag-based authorization, Example Corporation will enable access control by utilizing ARN paths with wildcards (ARN matching). The name of the resource and its ARN path will explicitly state which projects, applications, and operators have access to that resource. This will require the company to design and enforce a mandatory naming convention.

Please see the IAM user guide for an up-to-date list of resources that support tag-based authorization.

Using multiple tags to meet access control requirements

The web and marketing teams have settled on three common roles and have decided their access levels as follows:

  • Project administrator: Able to access and modify all resources for a specific project, including all the resources belonging to application teams under the project.
  • Application administrator: Able to access and modify only the resources owned by a particular application team.
  • Application operator: Able to access and modify only the resources owned by a specific application team, plus those that reside within one of three environments: development, staging, or production.

 

Figure 2: Example Corp’s teams - administrators and operators with AWS access

As for the principal tags, there will be three unique tags named with the prefix access-, with tag values that differentiate the roles and their resources from other projects, applications, and environments.

Finally, because the AWS account is shared, Example Corporation needs to account for the service usage costs of the two teams. By adding a mandatory tag for “cost center,” they can determine the costs of the web team’s resources versus the marketing team’s resources in AWS Cost Explorer and AWS Cost and Usage Report.

Below is an example of the web team’s tags.

IAM principal tags used for the website project administrator role:

Tag name              Tag value
access-project        web
cost-center           123456

Tags for the website application administrator role:

Tag name              Tag value
access-project        web
access-application    nginx
cost-center           123456

Tags for the website application operator role—specifically for developer access to the dev environment:

Tag name              Tag value
access-project        web
access-application    nginx
access-environment    dev
cost-center           123456
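
These principal tags can be applied when a role is created or afterward. As a sketch (the role name is a placeholder of my own), tagging the operator role with boto3 might look like this:

import boto3

iam = boto3.client('iam')

# Attach the operator's principal tags to the role so the aws:PrincipalTag
# condition keys resolve when the role is assumed.
iam.tag_role(
    RoleName='website-nginx-operator-dev',  # placeholder role name
    Tags=[
        {'Key': 'access-project', 'Value': 'web'},
        {'Key': 'access-application', 'Value': 'nginx'},
        {'Key': 'access-environment', 'Value': 'dev'},
        {'Key': 'cost-center', 'Value': '123456'},
    ],
)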

Access control for AWS services and resources that support tag-based authorization

Example Corporation now needs to write IAM policies for their targeted resources. They begin with EC2, as that will be their most widely used service. The IAM documentation for EC2 shows that most write actions (create, modify, delete) support tag-based authorization, allowing the principal to execute the action only if the resource’s tag matches a predefined value.

For example, the following policy statement will only allow EC2 instances to be started or stopped if the resource tag value matches the “web” project name:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"web"
        }
    }
}         

However, if Example Corporation uses a policy variable instead of hardcoding the project name, the company can reuse the policy by taking advantage of the aws:PrincipalTag condition key:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"${aws:PrincipalTag/access-project}"
        }
    }
}    

Without policy variables, every IAM policy for every project would need a unique value to control access to the resource. Because the text of every policy document would be different, Example Corporation wouldn’t be able to reuse policies from one account to another or from one environment to another. Variables allow them to deploy the same policy file to all of their accounts, while allowing the effect of the policy to differ based on the tags that are used in each account.

As a result, Example Corporation will base the right to manipulate resources like EC2 on resource tags as much as possible. It is important, then, for their teams to tag each resource at the time of creation, if the resource supports it. Untagged resources won’t be manageable, but resources tagged properly will become automatically manageable. The company will use the aws:RequestTag IAM condition key to ensure that the requested access tags and cost allocation tags are assigned at the time of EC2 creation. The IAM policy associated with the application-operator role will therefore be:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "ec2:CreateAction": "RunInstances"
        }
    }
}

If someone tries to create an EC2 instance without setting proper tags, the RunInstances API call will fail. The application-administrator policy will be similar, with the added ability to create a resource in any environment:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-zone": [ "dev", "stg", "prd" ],   
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}    

And finally, the project-administrator policy will have the most access. Note that even though this policy is for a project administrator, the user is still limited to creating resources in only the three approved environments. In addition, to ensure that all resources carry the required access-application tag, Example Corporation has added a Null condition to verify that the tag is present with a non-empty value:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        },
        "Null": {
            "aws:RequestTag/access-application": false
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}

Access control for AWS services and resources without tag-based authorization

Some services don’t support tag-based authorization. In those cases, Example Corporation will use ARN pattern matching. Many AWS resources use ARNs that contain a user-created name, so the company proposes a naming convention of the form [project]-[application]-[environment]-myresourcename. For resources whose names must be globally unique, such as S3 buckets, Example Corporation additionally requires its abbreviated company name, “exco,” at the beginning of the name to avoid collisions with other organizations’ buckets:


arn:aws:s3:::exco-web-nginx-dev-staticassets

To enforce this naming convention, they craft a reusable IAM policy that ensures that only intended users with matching access-project, access-application, and access-environment tag values can modify their resources. For example, for a principal tagged with web, nginx, and dev, the ARN pattern below resolves to arn:aws:sns:*:*:web-nginx-dev-*. In addition, by using * wildcard matches, they are able to allow custom resource name suffixes such as staticassets in the above example. Using an Amazon SNS topic as an example, a snippet of the IAM policy associated with the application-operator role will look like this:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-${aws:PrincipalTag/access-environment}-*"
    ]
} 

And here’s an IAM policy for an application-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],            
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-dev-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-stg-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-prd-*"
    ]
}

And finally, here’s the IAM policy for a project-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:*" 
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-*"
    ]
}

The above policies have two caveats, however. First, principal tag values must not include a hyphen, because the hyphen serves as the delimiter in Example Corporation’s new access-control naming convention. Second, tag values must not include a forward slash, because many AWS resources, such as Amazon S3 objects, use the forward slash within their ARNs:


arn:aws:s3:::awsexamplebucket/exampleobject.png

It is important that the company doesn’t let users create resources with disallowed or invalid tags. The following snippet from the application admin permissions boundary policy uses conditions to permit IAM users and roles to be tagged, but only with approved, well-formed tag values. Please note that these are just snippets of the boundary policy for the sake of illustration:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*"
    ]
}

And likewise, this permissions boundary policy attached to the project admin will do the same:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-*"
    ]
}

Note that the above boundary policies can also be crafted using allow statements and multiple explicit deny statements.

Example Corporation’s resource naming convention requirements

As shown in the above examples, Example Corporation has given project teams the ability to create resources with name-based access control for services that currently do not support tag-based authorization (such as Amazon SQS and Amazon S3). Through the use of wildcards, teams can still give their resources custom names that differentiate them from other resources created within the same team.

AWS resources have various limits on the structure and composition of names, so the company restricts the character length of its access tags. For example, Amazon ElastiCache cluster names must be 20 alphanumeric characters or less, including hyphens. Most AWS resources have higher character limits, but Example Corporation limits its [project]-[application]-[environment] prefix to a 3-character project ID, a 5-character application ID, and a 3-character environment name. Together with the three hyphens, this prefix totals 14 characters (for example, web-nginx-prd-), which leaves 6 characters for the user-specified cluster name.
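
As a quick illustration (an addition for this article, not part of Example Corporation's stated tooling), the character budget can be verified with a few lines of Python:

def remaining_chars(project, application, environment, limit=20):
    # Prefix format: [project]-[application]-[environment]-
    prefix = "{}-{}-{}-".format(project, application, environment)
    return limit - len(prefix)

# "web-nginx-prd-" is 14 characters, which leaves 6 for the resource name.
print(remaining_chars("web", "nginx", "prd"))  # prints 6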

Summary of Key Decisions

  • Services that support tag-based authorization (TBA) must have resources that follow a tagging convention for access control. Tagging on resource creation will be enforced where possible.
  • Services that do not support TBA must have resources that follow a naming convention. The cost center tag will still be required and will be applied after resource creation.
  • Services that do not support TBA, and cannot have user-specified names in their ARN (less common), will be addressed on a case-by-case basis. They will either need to allow access for all projects and application teams sharing the same account, or allow access via a custom IAM policy created on a case-by-case basis so that only the desired team can access the resource. Each IAM role should stay a few policies short of the maximum number of policies allowed per role, in order to leave room for such custom policies.
  • It is acceptable to allow basic List* and Describe* IAM permissions for AWS resources for all users who log in to the account, as the company’s project teams work closely together.
  • IAM user and role names created by project and application admins must adhere to the approved resource naming conventions. Admins themselves will have a permissions boundary policy applied to their roles. This policy, in turn, will require that all users and roles the admins create have a permissions boundary policy. This is especially important for roles associated with resources that can potentially create or modify IAM resources, such as EC2 and Lambda.
  • Active Directory users who need access to AWS resources must assume different IAM roles in order to utilize the different levels of access that the project admin, application admin, and application operator each provide. Users must also assume a different role if they need access to a different project. This is because each role’s tag has a single value. In this scheme, a single role cannot be assigned to multiple projects or application teams.

Conclusion

Example Corporation was able to allow their project teams to share the same AWS account while still limiting access to a majority of the account’s AWS resources. Through the use of IAM principal tagging, combined with a resource naming and tagging convention, they created a reusable set of IAM policies that restricted access not only between project and application admins and users, but also between different development, stage, and production users.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the IAM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael is a Professional Services Consultant who has assisted commercial and Federal customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Setting permissions to enable accounts for upcoming AWS Regions

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/setting-permissions-to-enable-accounts-for-upcoming-aws-regions/

The AWS Cloud spans 61 Availability Zones within 20 geographic Regions around the world, and AWS has announced plans to expand to 12 more Availability Zones and four more Regions: Hong Kong, Bahrain, Cape Town, and Milan. Customers have told us that they want an easier way to control the Regions where their AWS accounts operate. Based on this feedback, AWS is changing the default behavior for these four and all future Regions, so that customers must opt in each account they want to operate in a new Region. For new AWS Regions, Identity and Access Management (IAM) resources such as users and roles will only be propagated to the Regions that you enable. When the next Region launches, you can enable it for your account using the AWS Regions setting under My Account in the AWS Management Console. You will need to enable a new Region for your account before you can create and manage resources in that Region. At this time, there are no changes to existing AWS Regions.

We recommend that you review who in your account will have access to enable and disable AWS Regions. Additionally, you can prepare for this change by setting permissions so that only approved account administrators can enable and disable AWS Regions. Starting today, you can use IAM permissions policies to control which IAM principals (users and roles) can perform these actions.

In this post, I describe the new account permissions for enabling and disabling new AWS Regions. I also describe the updates we’ve made to deny these permissions in the AWS-managed PowerUserAccess policy that many customers use to restrict access to administrative actions. For customers who use custom policies to manage administrative access, I show how to secure access to enable and disable new AWS Regions using IAM permissions policies and Service Control Policies in AWS Organizations. Finally, I explain the compatibility of Security Token Service (STS) session tokens with Regions.

IAM Permissions to enable and disable new AWS Regions for your account

To control access to enable and disable new AWS Regions for your account, you can set IAM permissions using two new account actions. By default, IAM denies access to new actions unless you have explicitly allowed these permissions in an existing policy. You can use IAM permissions policies to allow or deny the enable and disable Region actions for IAM principals in your account. The new actions are:

  • account:EnableRegion: Allows you to opt in an account to a new AWS Region (for Regions launched after March 20, 2019). This action propagates your IAM resources, such as users and roles, to the Region.
  • account:DisableRegion: Allows you to opt out an account from a new AWS Region (for Regions launched after March 20, 2019). This action removes your IAM resources, such as users and roles, from the Region.

When granting permissions using IAM policies, some administrators may have granted full access to AWS services except for administrative services such as IAM and Organizations. These IAM principals will automatically get access to the new administrative actions in your account to enable and disable AWS Regions. If you prefer not to provide account permissions to enable or disable AWS Regions to these principals, we recommend that you add a statement to your policies to deny access to account permissions. To do this, you can add a deny statement for account:*. As new Regions launch, you will be able to specify the Regions where these permissions are granted or denied.
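
As one illustration of that recommendation, the following boto3 sketch creates a standalone customer managed policy containing such a deny statement. The policy name is a placeholder, and you could just as well add the statement to an existing policy:

import json
import boto3

# A deny statement covering the entire account namespace, which includes
# account:EnableRegion and account:DisableRegion.
deny_account_actions = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "account:*", "Resource": "*"}
    ]
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="DenyAccountNamespaceActions",  # placeholder name
    PolicyDocument=json.dumps(deny_account_actions)
)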

At this time, the account actions to enable and disable AWS Regions apply to all upcoming AWS Regions launched after March 20, 2019. To learn more about managing access to existing AWS Regions, review my post, Easier way to control access to AWS regions using IAM policies.

Updates to AWS managed PowerUserAccess Policy

If you’re using the AWS managed PowerUserAccess policy to grant permissions to AWS services without granting access to administrative actions for IAM and Organizations, we have updated this policy as shown below to exclude access to account actions to enable and disable new AWS Regions. You do not need to take further action to restrict these actions for any IAM principals for which this policy applies. We updated the first policy statement, which now allows access to all existing and future AWS service actions except for IAM, AWS Organizations, and account. We also updated the second policy statement to allow the read-only action for listing Regions. The rest of the policy remains unchanged.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "NotAction": [
                "iam:*",
                "organizations:*",
				"account:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "iam:DeleteServiceLinkedRole",
                "iam:ListRoles",
                "organizations:DescribeOrganization",
	  			"account:ListRegions"
            ],
            "Resource": "*"
        }
    ]
}

Restrict Region permissions across multiple accounts using Service Control Policies in AWS Organizations

You can also centrally restrict access to enable and disable Regions for all principals across all accounts in AWS Organizations using Service Control Policies (SCPs). You would use SCPs to restrict this access if you do not anticipate using new Regions. SCPs enable administrators to set permission guardrails that apply to accounts in your organization or an organization unit. To learn more about SCPs and how to create and attach them, read About Service Control Policies.

Next, I show how to restrict the Region enable and disable actions for accounts in an AWS organization by using an SCP. In the policy below, the Effect element is set to Deny, and the Action element lists the two new permissions, account:EnableRegion and account:DisableRegion.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "account:EnableRegion",
                "account:DisableRegion"
            ],
            "Resource": "*"
        }
    ]
}

Once you create the policy, you can attach this policy to the root of your organization. This will restrict permissions across all accounts in your organization.
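
For reference, here is a minimal boto3 sketch of attaching the SCP to the organization root; the policy ID and root ID are placeholders that you would replace with the values from your organization:

import boto3

org = boto3.client("organizations")

# Attach the SCP to the organization root so it applies to all accounts.
org.attach_policy(
    PolicyId="p-examplepolicyid",  # placeholder SCP ID
    TargetId="r-examplerootid"     # placeholder organization root ID
)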

Check if users have permissions to enable or disable new AWS Regions in your account

You can use the IAM Policy Simulator to check if any IAM principal in your account has access to the new account actions for enabling and disabling Regions. The simulator evaluates the policies that you choose for a user or role and determines the effective permissions for each of the actions that you specify. Learn more about using the IAM Policy Simulator.
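
The same check can also be scripted. Here is a minimal sketch that calls the policy simulator API through boto3; the role ARN is a placeholder for a principal in your account:

import boto3

iam = boto3.client("iam")

# Ask the simulator whether the principal can enable or disable Regions.
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111122223333:role/ExampleRole",  # placeholder
    ActionNames=["account:EnableRegion", "account:DisableRegion"]
)

for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])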

Region compatibility of AWS STS session tokens

For new AWS Regions, we’re also changing region compatibility for session tokens from the AWS Security Token Service (STS) global endpoint. As a best practice, we recommend using the regional STS endpoints to reduce latency. If you’re using regional STS endpoints or don’t plan to operate in new AWS Regions, then the following change doesn’t apply to you and no action is required.

If you’re using the global STS endpoint (https://sts.amazonaws.com) for session tokens and plan to operate in new AWS Regions, the size of session tokens will increase. This may impact functionality if you store session tokens in any of your systems. To ensure that your systems work with this change, we recommend that you update them to call regional STS endpoints through the AWS SDK.
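
For example, here is a minimal boto3 sketch of calling a regional STS endpoint instead of the global one; the Region shown is only an example:

import boto3

# Create an STS client pinned to a regional endpoint.
sts = boto3.client(
    "sts",
    region_name="us-west-2",
    endpoint_url="https://sts.us-west-2.amazonaws.com"
)

print(sts.get_caller_identity()["Account"])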

Summary

AWS is changing the default behavior for all new Regions going forward. For new AWS Regions, you will opt in each account that should operate in those Regions, which makes it easier to control where you create and manage AWS resources. To prepare for upcoming Region launches, we recommend that you review who can enable and disable AWS Regions, to ensure that only approved IAM principals can perform these actions for your account.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the AWS forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

How to rotate Amazon DocumentDB and Amazon Redshift credentials in AWS Secrets Manager

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/how-to-rotate-amazon-documentdb-and-amazon-redshift-credentials-in-aws-secrets-manager/

Using temporary credentials is an AWS Identity and Access Management (IAM) best practice. Even Dilbert is learning to set up temporary credentials. Today, AWS Secrets Manager made it easier to follow this best practice by launching support for rotating credentials for Amazon DocumentDB and Amazon Redshift automatically. Now, with a few clicks, you can configure Secrets Manager to rotate these credentials automatically, turning a typical, long-term credential into a temporary credential.

In this post, I summarize the key features of AWS Secrets Manager. Then, I show you how to store a database credential for an Amazon DocumentDB cluster and how your applications can access this secret. Finally, I show you how to configure AWS Secrets Manager to rotate this secret automatically.

Key features of Secrets Manager

These features include the ability to:

  • Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications, turning long-term secrets into temporary secrets. Secrets Manager natively supports rotating secrets for all Amazon database services—Amazon RDS, Amazon DocumentDB, and Amazon Redshift—that require a user name and password. You can extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets.
  • Manage access with fine-grained policies. You can store all your secrets centrally and control access to these securely using fine-grained AWS Identity and Access Management (IAM) policies and resource-based policies. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  • Audit and monitor secrets centrally. Secrets Manager integrates with AWS logging and monitoring services to enable you to meet your security and compliance requirements. For example, you can audit AWS CloudTrail logs to see when Secrets Manager rotated a secret, or configure Amazon CloudWatch Events to alert you when an administrator deletes a secret.
  • Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts or licensing fees.
  • Compliance. You can use AWS Secrets Manager to manage secrets for workloads that are subject to U.S. Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry Data Security Standard (PCI-DSS), and ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, or ISO 9001.

Phase 1: Store a secret in Secrets Manager

Now that you’re familiar with the key features, I’ll show you how to store the credential for a DocumentDB cluster. To demonstrate how to retrieve and use the secret, I use a Python application running on Amazon EC2 that requires this database credential to access the DocumentDB cluster. Finally, I show how to configure Secrets Manager to rotate this database credential automatically.

  1. In the Secrets Manager console, select Store a new secret.
     
    Figure 1: Select "Store a new secret"

    Figure 1: Select “Store a new secret”

  2. Next, select Credentials for DocumentDB database. For this example, I store the credentials for the database masteruser. I start by securing the masteruser because it’s the most powerful database credential and has full access to the database.
     
    Figure 2: Select "Credentials for DocumentDB database"

    Figure 2: Select “Credentials for DocumentDB database”

    Note: To follow along, you need the AWSSecretsManagerReadWriteAccess managed policy because this policy grants permissions to store secrets in Secrets Manager. Read the AWS Secrets Manager Documentation for more information about the minimum IAM permissions required to store a secret.

  3. By default, Secrets Manager creates a unique encryption key for each AWS region and AWS account where you use Secrets Manager. I chose to encrypt this secret with the default encryption key.
     
    Figure 3: Select the default or your CMK

    Figure 3: Select the default or your CMK

  4. Next, I view the list of DocumentDB clusters in my account and select the database that this credential accesses. For this example, I select the DB instance documentdb-instance, and then select Next.
     
    Figure 4: Select the instance you created

    Figure 4: Select the instance you created

  5. In this step, specify values for Secret Name and Description. Based on where you will use this secret, give it a hierarchical name, such as Applications/MyApp/Documentdb-instance, and then select Next.
     
    Figure 5: Provide a name and description

    Figure 5: Provide a name and description

  6. For the next step, I chose to keep the Disable automatic rotation default setting because in my example my application that uses the secret is running on Amazon EC2. I’ll enable rotation after I’ve updated my application (see Phase 2 below) to use Secrets Manager APIs to retrieve secrets. Select Next.
     
    Figure 6: Choose to either enable or disable automatic rotation

    Figure 6: Choose to either enable or disable automatic rotation

    Note: If you’re storing a secret that you’re not using in your application, select Enable automatic rotation. See the AWS Secrets Manager getting started guide on rotation for details.

  7. Review the information on the next screen and, if everything looks correct, select Store. You’ve now successfully stored a secret in Secrets Manager.
  8. Next, select See sample code in Python.
     
    Figure 7: Select the "See sample code" button

    Figure 7: Select the “See sample code” button

  9. Finally, take note of the code samples provided. You will use this code to update your application to retrieve the secret using Secrets Manager APIs.
     
    Figure 8: Copy the code sample for use in your application

    Figure 8: Copy the code sample for use in your application

Phase 2: Update an application to retrieve a secret from Secrets Manager

Now that you’ve stored the secret in Secrets Manager, you can update your application to retrieve the database credential from Secrets Manager instead of hard-coding this information in a configuration file or source code. For this example, I show how to configure a Python application to retrieve this secret from Secrets Manager.

  1. I connect to my Amazon EC2 instance via Secure Shell (SSH).
  2. Previously, I configured my application to retrieve the database user name and password from a configuration file. Below is the source code for my application.
    
        import DocumentDB
        import config
        
        def no_secrets_manager_sample():
        
            # Get the user name, password, and database connection information from a config file.
            database = config.database
            user_name = config.user_name
            password = config.password
        
            # Use the user name, password, and database connection information to connect to the database
            db = DocumentDB.connect(database.endpoint, user_name, password, database.db_name, database.port)
        

  3. I use the sample code from Phase 1 above and update my application to retrieve the user name and password from Secrets Manager. This code sets up the client, then retrieves and decrypts the secret Applications/MyApp/Documentdb-instance. I’ve added comments to the code to make the code easier to understand. (A short sketch of how the parsed secret might be used appears after these steps.)
    
        # Use this code snippet in your app.
        # If you need more information about configurations or implementing the sample code, visit the AWS docs:   
        # https://aws.amazon.com/developers/getting-started/python/
        
        import boto3
        import base64
        from botocore.exceptions import ClientError
        
        
        def get_secret():
        
            secret_name = "Applications/MyApp/Documentdb-instance"
            region_name = "us-west-2"
        
            # Create a Secrets Manager client
            session = boto3.session.Session()
            client = session.client(
                service_name='secretsmanager',
                region_name=region_name
            )
        
            # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
            # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
            # We rethrow the exception by default.
        
            try:
                get_secret_value_response = client.get_secret_value(
                    SecretId=secret_name
                )
            except ClientError as e:
                if e.response['Error']['Code'] == 'DecryptionFailureException':
                    # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InternalServiceErrorException':
                    # An error occurred on the server side.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidParameterException':
                    # You provided an invalid value for a parameter.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidRequestException':
                    # You provided a parameter value that is not valid for the current state of the resource.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'ResourceNotFoundException':
                    # We can't find the resource that you asked for.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
            else:
                # Decrypts secret using the associated KMS CMK.
                # Depending on whether the secret is a string or binary, one of these fields will be populated.
                if 'SecretString' in get_secret_value_response:
                    secret = get_secret_value_response['SecretString']
                else:
                    decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                    
            # Your code goes here.                          
        

  4. Applications require permissions to access Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I will attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permissions to read a secret from Secrets Manager. This policy also uses the resource element to limit my application to read only the Applications/MyApp/Documentdb-instance secret from Secrets Manager. You can visit the AWS Secrets Manager documentation to understand the minimum IAM permissions required to retrieve a secret.
    
        {
            "Version": "2012-10-17",
            "Statement": {
                "Sid": "RetrieveDbCredentialFromSecretsManager",
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/Documentdb-instance"
            }
        }
        

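Before enabling rotation, here is a short sketch of how the retrieved secret might be consumed. This is an illustration added for this article, not part of the AWS sample; it assumes that get_secret() from step 3 is modified to return the SecretString value, and that the secret JSON contains the standard username and password keys that Secrets Manager stores for database credentials:

import json

# Parse the JSON secret returned by Secrets Manager and extract the credentials.
creds = json.loads(get_secret())
user_name = creds["username"]
password = creds["password"]
# These values can now replace the config-file lookups shown in step 2 above.
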
Phase 3: Enable rotation for your secret

Rotating secrets regularly is a security best practice. Secrets Manager makes it easier to follow this security best practice by offering built-in integrations and supporting extensibility with Lambda. When you enable rotation, Secrets Manager creates a Lambda function and attaches an IAM role to this function to execute rotations on a schedule you define.

Note: Configuring rotation is a privileged action that requires several IAM permissions, and you should only grant this access to trusted individuals. To grant these permissions, you can use the AWS managed policy IAMFullAccess.

Now, I show you how to configure Secrets Manager to rotate the secret
Applications/MyApp/Documentdb-instance automatically.

  1. From the Secrets Manager console, I go to the list of secrets and choose the secret I created in phase 1, Applications/MyApp/Documentdb-instance.
     
    Figure 9: Choose the secret from Phase 1

    Figure 9: Choose the secret from Phase 1

  2. Scroll to Rotation configuration, and then select Edit rotation.
     
    Figure 10: Select the Edit rotation configuration

    Figure 10: Select the Edit rotation configuration

  3. To enable rotation, select Enable automatic rotation, and then choose how frequently Secrets Manager rotates this secret. For this example, I set the rotation interval to 30 days. Then, choose Create a new Lambda function to perform rotation, and give the function an easy-to-remember name. For this example, I choose the name RotationFunctionforDocumentDB.
     
    Figure 11: Chose to enable automatic rotation, select a rotation interval, create a new Lambda function, and give it a name

    Figure 11: Chose to enable automatic rotation, select a rotation interval, create a new Lambda function, and give it a name

  4. Next, Secrets Manager requires permissions to rotate this secret on your behalf. Because I’m storing the masteruser database credential, Secrets Manager can use this credential to perform rotations. Therefore, I select Use this secret, and then select Save.
     
    Figure 12: Select credentials for Secrets Manager to use

    Figure 12: Select credentials for Secrets Manager to use

  5. The banner on the next screen confirms that I successfully configured rotation and that the first rotation is in progress, which enables you to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 30 days. (A boto3 equivalent of this rotation configuration is sketched after these steps.)
     
    Figure 13: The banner at the top of the screen will show the status of the rotation

    Figure 13: The banner at the top of the screen will show the status of the rotation

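For reference, the same rotation configuration can be expressed programmatically. The following boto3 sketch is an illustration added for this article, not part of the original walkthrough; the Lambda ARN is a placeholder for the rotation function that the console created:

import boto3

sm = boto3.client("secretsmanager")

# Enable 30-day automatic rotation using the rotation Lambda function.
sm.rotate_secret(
    SecretId="Applications/MyApp/Documentdb-instance",
    RotationLambdaARN="arn:aws:lambda:us-west-2:111122223333:function:RotationFunctionforDocumentDB",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 30}
)
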
Summary

I explained the key benefits of AWS Secrets Manager and showed how you can use temporary credentials to access your Amazon DocumentDB clusters and Amazon Redshift clusters securely. You can follow similar steps to rotate credentials for Amazon Redshift.

Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and on-going maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, read the Secrets Manager documentation. If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Apurv Awasthi

Apurv is the product manager for credentials management services at AWS, including AWS Secrets Manager and IAM Roles. He enjoys the “Day 1” culture at Amazon because it aligns with his experience building startups in the sports and recruiting industries. Outside of work, Apurv enjoys hiking. He holds an MBA from UCLA and an MS in computer science from University of Kentucky.

How to centralize and automate IAM policy creation in sandbox, development, and test environments

Post Syndicated from Mahmoud ElZayet original https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-environments/

To keep pace with AWS innovation, many customers allow their application teams to experiment with AWS services in sandbox environments as they move toward production-ready architecture. These teams need timely access to various sets of AWS services and resources, which means they also need a mechanism to help ensure least privilege is granted. In other words, your application team generally shouldn’t have access to administrative resources, such as an AWS Lambda function that takes periodic Amazon Elastic Block Store snapshot backups, or an Amazon CloudWatch Events rule that sends events to a centralized information security account managed by your security team.

In this blog post, I’ll show you how to create a centralized and automated workflow that creates and validates AWS Identity and Access Management (IAM) policies for application teams working in various sandbox, development, and test environments. Your security developers can customize this workflow according to the specific requirements of your security team. They can create logic to limit the allowed permission sets based on account type or owning team. I’ll use AWS CodePipeline to create and manage a workflow containing various stages and spanning multiple AWS accounts that I’ll describe in more detail in the next section.

Solution overview

I’ll start with this scenario: Alice is an administrator for an AWS sandbox account used by her organization’s data scientists to try out AWS analytics services such as Amazon Athena and Amazon EMR. The data scientists assess the suitability of these services for their production use cases by running sample analytics jobs on portions of real data sets after any sensitive information has been taken out. The data sets are stored in an existing Amazon Simple Storage Service (Amazon S3) bucket. For every new project, Alice authors a new IAM policy that allows the project team to access their requested Amazon S3 bucket and create their analytics clusters. However, Alice must follow a company guideline that sandbox accounts can only launch specific Amazon Elastic Compute Cloud (Amazon EC2) instance types. She must also restrict access to all administrative AWS Lambda functions and CloudWatch Events rules that the security team use to monitor sandbox account compliance. Below is the solution that meets these requirements and makes it easier for Alice and other administrators to perform their tasks.
 

Figure 1: Solution architecture

Figure 1: Solution architecture

  1. Alice uses the IAM visual editor to author a template that gives the data science team access to launch and manage EMR clusters that analyze S3-based data sets. She then uploads the IAM JSON policy document into an existing S3 bucket, encrypting it with an AWS Key Management Service (AWS KMS) key. The key and the S3 bucket were already created by the security team as part of account baselining, which I will detail later in this post.
  2. AWS CodePipeline automatically fetches the IAM JSON policy document and invokes a sequence of validation checks that use a single and central Lambda function hosted in an AWS account managed by the security team.
  3. If the IAM JSON policy adheres to all account and general security requirements coded by the security developers, the central Lambda function automatically creates the policy in Alice’s account and the pipeline will succeed. The central validation Lambda function will also attach a set of predefined explicit denies to the IAM policy to ensure that it limits undesired user capabilities in the sandbox account. If the IAM JSON policy fails the checks, the pipeline will fail and provide Alice the specific reason for non-compliance. Alice must then modify the policy and resubmit. When the policy has been successfully created, Alice will attach it to the right IAM user, group, or role.

Solution deployment

This solution includes the following three steps:

Prerequisites

As this solution manages permissions granted to AWS services or IAM entities, I highly recommend that you try the solution first in an isolated test environment to make sure it meets all your security requirements.

  1. You’ll need administrator access in two AWS accounts to set up the solution. The deployment of this solution is typically done by one of your organization’s administrators while setting up new AWS accounts. These are the two account types you’ll need access to:
     

    • A sandbox account. This lets application teams experiment with various AWS architectures. This could be a development or test account, as mentioned earlier.
    • A central information security account. Typically, this is owned by an information security team who monitors and enforces security compliance within a multi-account structure.


Important: Because the Lambda function that you’ll create in the information security account has highly privileged permissions, it’s important to strictly follow best practices for securing the account. You need to limit account access to security team members. Sandbox account administrators should also not give this central Lambda function any IAM permissions in their sandbox account beyond IAM policy creation.

  2. Because you’ll use the AWS Management Console for both AWS accounts, I strongly recommend that you have roles in both AWS accounts and use the console’s Switch Role feature. You can attach an alias to each account and give each a different color code so that you always know which one you’re logged into.
  3. Make sure to use the same AWS region for all the resources that you create for this solution.

Step 1: Deploy the solution prerequisites

Before building the pipeline across the two AWS accounts, you must first configure the required resources in both accounts, such as IAM roles and encryption keys. This configuration is typically done according to your security team’s guidelines when your organization first sets up the sandbox, development, or test environment.

Important

  • In addition to the initial setup you’ll create in this section, your security team must explicitly deny sandbox, development, or test account administrators from attaching IAM policies that do not meet the allowed security policies for that account type, such as the AdministratorAccess IAM policy. Moreover, your security team must ensure that no current or future users, groups, or roles in the account have permissions to directly set or update IAM policies, such as CreatePolicy, CreatePolicyVersion, PutRolePolicy, PutUserPolicy, PutGroupPolicy, or UpdateAssumeRolePolicy. You want to ensure that IAM policies can only be created through the automation pipeline, which I’ll show you how to build shortly.
  • Because the solution I’ll be describing focuses on the creation of least privilege permissions, it’s highly advisable that your security team combines the solution with IAM permission boundaries to make sure that any permissions defined in this solution are scoped by a set of pre-defined permissions for every type of account in the organization. For example, your account administrators might only be allowed to create IAM users or roles with a pre-defined set of permission boundaries that limit the permissions attached to those principals. For more information about permission boundaries, please refer to this AWS Security blog post.

Create the sandbox account prerequisites

Follow the steps below to deploy an AWS CloudFormation template that will create the following resources in the sandbox account:

  • An S3 bucket where your sandbox administrators will upload IAM policies
  • An IAM role that your automated pipeline will use to access the S3 bucket that stores the IAM policies
  • An AWS KMS key that you will use to encrypt the IAM policies in your S3 bucket
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
     
    Figure 2: CloudFormation console

    Figure 2: CloudFormation console with prepopulated URL

  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. The template defines an input parameter called CentralAccount that you can populate with the AWS account ID of your security account. For more information on how to find the account ID of your security account, check here.
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles that your pipeline will use, select the check box that says I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Select the Stack info tab and refresh periodically while watching the Stack Status field value. After your stack reaches the state CREATE_COMPLETE, navigate to the CloudFormation Outputs tab and copy the following output values to the text editor of your choice. You’ll use these values in subsequent CloudFormation stacks.
     
    Figure 3: CloudFormation Outputs tab

    Figure 3: CloudFormation Outputs tab

Create the information security account prerequisites

Follow the steps below to deploy a CloudFormation template that will create the following resources in your information security account:

  • An IAM role used by your automated pipeline to invoke your central Lambda function and to provide access to the sandbox account KMS key
  • An IAM role used by the central Lambda function to assume a role in the sandbox account and manage IAM policies
  1. While logged in to your security account in your default browser, select this link to launch an AWS stack with the security environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. Populate the following input parameter fields:
    • SandboxAccount: The AWS account ID for the sandbox account.
    • ArtifactBucket: The bucket name that you noted in your text editor from the previous stack run in the sandbox account
    • CMKARN: The Amazon Resource Name (ARN) of the KMS key that you noted in your text editor from the previous stack run in the sandbox account
    • PolicyCheckerFunctionName: The name of the Lambda function to be created later. The default value is PolicyChecker
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles used by your pipeline, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait for your stack until it reaches the state CREATE_COMPLETE.

Create the sandbox account pipeline

Now, switch back to your sandbox account and deploy the CloudFormation template that will create the following resources in the sandbox account:

  • An AWS CodePipeline automation pipeline that fetches the IAM policy document from S3 and sends it to the security account for centralized validation. If valid, a Lambda function in the information security account will also create the IAM policy in the sandbox account.
  • An S3 bucket policy to allow your central Lambda function to fetch the IAM policy JSON document from your bucket
  • An IAM role that will be assumed by the Lambda function in the central information security account and used to create IAM policies in the sandbox account. Sandbox account administrators can then attach those IAM policies to the required entities, such as an IAM user or role.
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Pipeline, should already be populated.
  3. Populate the following input parameter fields:
    • CentralAccount: The AWS account ID of the information security account, without hyphens.
    • ArtifactBucket: The same bucket name that you noted in your text editor earlier and used in the previous stack in the information security account.
    • CMKARN: The ARN of the KMS key that you noted in your text editor earlier and used in the previous stack in the information security account.
    • PolicyCheckerFunctionName: Again, the name of the Lambda function to be created later. It must be the same value you provided to the information security account template.
  4. Select Next, and then select Next again.
  5. To have the stack create the required IAM roles, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait for your stack until it reaches the state CREATE_COMPLETE.

Step 2: Set up the policy validation Lambda function in the central information security account

In the central information security account, create the Lambda function to validate the IAM policies created in sandbox environment.

  1. In the AWS Lambda console, select Create Function and then select Author from scratch. Provide values for the following fields:
    • Name. This must be the same function name defined as input parameter PolicyCheckerFunctionName to CloudFormation in step 1, when you set up the information security account prerequisites. If you did not change the default value in step 1, the default is still PolicyChecker.
    • Runtime. Python 2.7.
    • Role. To set the role, select Choose an existing role, and then select the role named policy-checker-lambda-role. This is the role you created in step 1, when you set up the information security account prerequisites.

    Choose Create Function, scroll down to Function Code, and then paste the following code into the editor (replacing the existing code):

    
    #  Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #  Licensed under the Apache License, Version 2.0 (the "License"). You may not
    #  use this file except in compliance with
    #  the License. A copy of the License is located at
    #      http://aws.amazon.com/apache2.0/
    #  or in the "license" file accompanying this file. This file is distributed
    #  on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
    #  either express or implied. See the License for the
    #  specific language governing permissions and
    #  limitations under the License.
    from __future__ import print_function
    import json
    import boto3
    import zipfile
    import tempfile
    import os
    
    print('Loading function')
    PERMISSIVE_ERROR_MSG = """Policy creation request rejected: * permissions not
                             allowed in both actions and resources"""
    GENERAL_ERROR_MSG = """An error has occurred while validating policy.
                            Please contact admin"""
    
    
    def get_template(event, s3, artifact, file_in_zip):
        bucket = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['bucketName']
        key = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['objectKey']
    
        with tempfile.NamedTemporaryFile() as tmp_file:
            s3.download_file(bucket, key, tmp_file.name)
            with zipfile.ZipFile(tmp_file.name, 'r') as zip:
                return zip.read(file_in_zip)
    
    
    def get_sts_session(event, account, rolename):
        sts = boto3.client("sts")
        RoleArn = str("arn:aws:iam::" + account + ":role/" + rolename)
        response = sts.assume_role(
            RoleArn=RoleArn,
            RoleSessionName='SecurityManageAccountPermissions',
            DurationSeconds=900)
        sts_session = boto3.Session(
            aws_access_key_id=response['Credentials']['AccessKeyId'],
            aws_secret_access_key=response['Credentials']['SecretAccessKey'],
            aws_session_token=response['Credentials']['SessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        return (sts_session)
    
    
    def ManagePolicy(event, context):
        # Set boto session to get pipeline artifact from sandbox/dev/test account
        artifact_session = boto3.Session(
            aws_access_key_id=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['accessKeyId'],
            aws_secret_access_key=event['CodePipeline.job']['data']
                                       ['artifactCredentials']['secretAccessKey'],
            aws_session_token=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['sessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        # Fetch pipeline artifact from S3
        s3 = artifact_session.client('s3')
        permission_doc = get_template(event, s3, '', 'policy.json')
        metadata_doc = json.loads(get_template(event, s3, '', 'metadata.json'))
        permission_doc_json = json.loads(permission_doc)
        # Assume the central account role in sandbox/dev/test account
        global STS_SESSION
        STS_SESSION = ''  
        STS_SESSION = get_sts_session(
            event, event['CodePipeline.job']['accountId'], 'central-account-role')
        iam = STS_SESSION.client('iam')
        codepipeline = STS_SESSION.client('codepipeline')
        policy_arn = 'arn:aws:iam::' + event['CodePipeline.job']['accountId'] + ':policy/' + metadata_doc['PolicyName']
    
        try:
            # 1.Sample code - Validate policy sent from sandbox/dev/test account:
            # look for * actions and * resources
            for statement in permission_doc_json['Statement']:
                if statement['Action'] == '*' and statement['Resource'] == '*':
                    return codepipeline.put_job_failure_result(
                                        jobId=event['CodePipeline.job']['id'],
                                        failureDetails={
                                            'type': 'JobFailed',
                                            'message': PERMISSIVE_ERROR_MSG})
            # 2. Sample code - Append the deny statements from the central
            # pre-defined policy to the submitted policy
            iam_local = boto3.client('iam')
            account_id = context.invoked_function_arn.split(":")[4]
            local_policy_arn = 'arn:aws:iam::' + account_id + ':policy/central-deny-policy-sandbox'
            policy_response = iam_local.get_policy(PolicyArn=local_policy_arn)
            policy_version_id = policy_response['Policy']['DefaultVersionId']
            policy_version_doc = iam_local.get_policy_version(
                PolicyArn=local_policy_arn,
                VersionId=policy_version_id)
            for statement in policy_version_doc['PolicyVersion']['Document']['Statement']:
                permission_doc_json['Statement'].append(statement)
            # 3. If validated successfully, create policy in
            # sandbox/dev/test account
            iam.create_policy(
                PolicyName=metadata_doc['PolicyName'],
                PolicyDocument=json.dumps(permission_doc_json),
                Description=metadata_doc['PolicyDescription'])
    
            # successful creation, put result back to
            # sandbox/dev/test account pipeline
            codepipeline.put_job_success_result(
                jobId=event['CodePipeline.job']['id'])
        except Exception as e:
            print('Error: ' + str(e))
            codepipeline.put_job_failure_result(
                jobId=event['CodePipeline.job']['id'],
                failureDetails={'type': 'JobFailed', 'message': GENERAL_ERROR_MSG})
    
    def lambda_handler(event, context):
        print(event)
        ManagePolicy(event, context)
    

    This sample code shows how the Lambda function checks the IAM JSON policy submitted by Alice for statements that are too permissive because they allow all IAM actions on all account resources. It also shows how deny statements from the central, pre-defined policy (central-deny-policy-sandbox in the sample) are appended to the submitted policy; in this walkthrough, that policy denies the launch of any Amazon EC2 instance that is not part of the T2 instance family, so the explicit deny ensures that only T2 instances can be launched (a sketch of such a deny statement appears just after this list of steps). Your security developers should author similar code to meet the security policies of each account type and to control the IAM policies created in your various sandbox, development, and test environments.

  2. Before saving your new Lambda function code, scroll further down to the Basic Settings section and increase the function timeout to 10 seconds.
  3. Select Save.
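
For illustration, here is a minimal sketch of what a deny statement inside central-deny-policy-sandbox could look like. The Sid and the condition values are assumptions made for this sketch; the ec2:InstanceType condition key matches the instance type requested by ec2:RunInstances.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyNonT2Launches",
                "Effect": "Deny",
                "Action": "ec2:RunInstances",
                "Resource": "arn:aws:ec2:*:*:instance/*",
                "Condition": {
                    "StringNotLike": {
                        "ec2:InstanceType": "t2.*"
                    }
                }
            }
        ]
    }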

Step 3: Test the sandbox account pipeline

Now it’s time to deploy the solution in your sandbox account.

  1. Create the following two files and compress them into an archive named policy.zip (the name your pipeline expects). If you prefer to package and upload from a terminal, see the command-line sketch after this list.
    • metadata.json: This file contains metadata like the name and description of the IAM policy to be created.
      
      {
          "PolicyDescription": "ec2 start permission policy",
          "PolicyName": "Ec2RunTeamA"
      }
                      

    • policy.json: This file contains the JSON body of the IAM policy to be created.
      
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "EC2Run",
                  "Effect": "Allow",
                  "Action": "ec2:RunInstances",
                  "Resource": "*"
              }
          ]
      }
                      

  2. To upload your policy.zip file, go to the Amazon S3 console in the sandbox account and, in the search box at the top of the page, search for the bucket name you noted earlier as ArtifactBucket.
  3. When you locate your bucket, select the bucket name, and then select Upload. The upload dialog will appear.
  4. Select Add Files and navigate to the folder with the policy.zip file. Select the file, select Open, select Next, and then select Next again.
     
    Figure 4: S3 upload dialog

  5. Select the AWS KMS master-key radio button, and then select the KMS key that has the alias codepipeline-policy-crossaccounts.
     
    Figure 5: Selecting the KMS key

  6. Select Next, and then select Upload.
  7. Go to the AWS CodePipeline console, select your sandbox pipeline, and wait for the pipeline to start running. It can take up to a minute to start.
     
    Figure 6: AWS CodePipeline console

  8. Wait for your pipeline to complete. There should be no validation errors for the IAM policy you just uploaded, and the IAM policy should be created successfully. To view the newly created policy, open the AWS IAM console.
  9. Select Policies on the left and search for the policy with the name defined in the metadata.json file.
     
    Figure 7: Viewing your new policy

  10. Select the policy name. Note the deny statements that were automatically appended to the policy you defined.
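
If you prefer the command line, steps 1 through 6 can also be done with the zip utility and the AWS CLI, as in the following sketch. The bucket name is a placeholder for the ArtifactBucket value you noted earlier, and the --sse-kms-key-id value assumes the key alias from this walkthrough resolves in your sandbox account.

    zip policy.zip metadata.json policy.json

    aws s3 cp policy.zip s3://<artifact-bucket>/policy.zip \
        --sse aws:kms \
        --sse-kms-key-id alias/codepipeline-policy-crossaccounts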

If you’d like to test the pipeline further, you can modify the policy to permit all actions on all resources. When policy.zip is uploaded again, the pipeline should return the following error:


Policy creation request rejected: * permissions not allowed in both actions and resources
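
For reference, a policy.json with a statement like the following minimal sketch triggers that rejection, because it matches the all-actions, all-resources check in the sample Lambda code (the Sid is arbitrary):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "TooPermissive",
                "Effect": "Allow",
                "Action": "*",
                "Resource": "*"
            }
        ]
    }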

If you encounter any errors as you modify your Lambda function code, you can always go back to the Lambda function logs in the central information security account. For more information on how to access Lambda function logs, please refer to the documentation.
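
For example, with AWS CLI version 2 and credentials for the central information security account, you can tail the function's logs with a command like the following; the log group name is a placeholder based on your Lambda function's name.

    aws logs tail /aws/lambda/<your-policy-validation-function> --follow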

The same logic used here can be extended to other sandbox, development, or test environments. However, for each newly added sandbox, development, or test account, the existing roles will need to be updated so that the central information security account is trusted and has access to the new account's resources.
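
For example, the central-account-role deployed into a newly added account needs a trust policy that lets the central account's Lambda execution role assume it. A minimal sketch follows, where the account ID and role name are placeholders:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::<central-account-id>:role/<lambda-execution-role>"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }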

Summary

In this blog post, I showed you how to centralize the validation and creation of IAM policies across multiple AWS accounts. This allows your security developers to codify your security best practices, permitting automatic creation and validation of IAM policies across your various sandbox, development, and test accounts. Account administrators can then attach those validated IAM policies to the required IAM users, groups, or roles. This process strikes a balance between agility and control: it empowers your account administrators to create compliant, least-privilege IAM policies, while allowing your application teams to keep experimenting and innovating quickly. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mahmoud ElZayet

Mahmoud is a Global Accounts Solutions Architect at AWS. He works with large enterprise customers providing guidance and technical assistance for building cloud solutions. Mahmoud is passionate about DevOps and Cloud Compliance topics. Outside of work, he enjoys exploring new places with his wife and two kids.