Tag Archives: IAM policies

IAM Access Analyzer simplifies inspection of unused access in your organization

Post Syndicated from Achraf Moussadek-Kabdani original https://aws.amazon.com/blogs/security/iam-access-analyzer-simplifies-inspection-of-unused-access-in-your-organization/

AWS Identity and Access Management (IAM) Access Analyzer offers tools that help you set, verify, and refine permissions. You can use IAM Access Analyzer external access findings to continuously monitor your AWS Organizations organization and Amazon Web Services (AWS) accounts for public and cross-account access to your resources, and verify that only intended external access is granted. Now, you can use IAM Access Analyzer unused access findings to identify unused access granted to IAM roles and users in your organization.

If you lead a security team, your goal is to manage security for your organization at scale and make sure that your team follows best practices, such as the principle of least privilege. When your developers build on AWS, they create IAM roles for applications and team members to interact with AWS services and resources. They might start with broad permissions while they explore AWS services for their use cases. To identify unused access, you can review the IAM last accessed information for a given IAM role or user and refine permissions gradually. If your company has a multi-account strategy, your roles and policies are created in multiple accounts. You then need visibility across your organization to make sure that teams are working with just the required access.

Now, IAM Access Analyzer simplifies inspection of unused access by reporting unused access findings across your IAM roles and users. IAM Access Analyzer continuously analyzes the accounts in your organization to identify unused access and creates a centralized dashboard with findings. From a delegated administrator account for IAM Access Analyzer, you can use the dashboard to review unused access findings across your organization and prioritize the accounts to inspect based on the volume and type of findings. The findings highlight unused roles, unused access keys for IAM users, and unused passwords for IAM users. For active IAM users and roles, the findings provide visibility into unused services and actions. With the IAM Access Analyzer integration with Amazon EventBridge and AWS Security Hub, you can automate and scale rightsizing of permissions by using event-driven workflows.

In this post, we’ll show you how to set up and use IAM Access Analyzer to identify and review unused access in your organization.

Generate unused access findings

To generate unused access findings, you need to create an analyzer. An analyzer is an IAM Access Analyzer resource that continuously monitors your accounts or organization for a given finding type. You can create an analyzer for either of the following finding types:

  • External access findings — identify resources in your organization and accounts that are shared with an external entity.
  • Unused access findings — identify unused access granted to IAM roles and users in your organization and accounts.

An analyzer for unused access findings is a new analyzer that continuously monitors roles and users, looking for permissions that are granted but not actually used. This analyzer is different from an analyzer for external access findings; you need to create a new analyzer for unused access findings even if you already have an analyzer for external access findings.

You can centrally view unused access findings across your accounts by creating an analyzer at the organization level. If you operate a standalone account, you can get unused access findings by creating an analyzer at the account level. This post focuses on the organization-level analyzer setup and management by a central team.

Pricing

IAM Access Analyzer charges for unused access findings based on the number of IAM roles and users analyzed per analyzer per month. You can still use IAM Access Analyzer external access findings at no additional cost. For more details on pricing, see IAM Access Analyzer pricing.

Create an analyzer for unused access findings

To enable unused access findings for your organization, you need to create your analyzer by using the IAM Access Analyzer console or APIs in your management account or a delegated administrator account. A delegated administrator is a member account of the organization to which you delegate administrator access for IAM Access Analyzer. A best practice is to use your management account only for tasks that require the management account and to use a delegated administrator for other tasks. For steps on how to add a delegated administrator for IAM Access Analyzer, see Delegated administrator for IAM Access Analyzer.
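
If you prefer to script this step, you can register the delegated administrator from your management account by using the AWS Organizations CLI. The following is a minimal sketch; the account ID is a placeholder for your chosen member account.

# Run from the management account: delegate IAM Access Analyzer
# administration to a member account (111122223333 is a placeholder)
aws organizations register-delegated-administrator \
    --account-id 111122223333 \
    --service-principal access-analyzer.amazonaws.com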

To create an analyzer for unused access findings (console)

  1. From the delegated administrator account, open the IAM Access Analyzer console, and in the left navigation pane, select Analyzer settings.
  2. Choose Create analyzer.
  3. On the Create analyzer page, do the following, as shown in Figure 1:
    1. For Findings type, select Unused access analysis.
    2. Provide a Name for the analyzer.
    3. Select a Tracking period. The tracking period is the threshold beyond which IAM Access Analyzer considers access to be unused. For example, if you select a tracking period of 90 days, IAM Access Analyzer highlights the roles that haven’t been used in the last 90 days.
    4. Set your Selected accounts. For this example, we select Current organization to review unused access across the organization.
    5. Select Create.
       
    Figure 1: Create analyzer page
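
If you want to script this setup instead of using the console, you can create the analyzer with the AWS CLI. The following is a minimal sketch that assumes a 90-day tracking period fits your needs; the analyzer name is a placeholder.

# Create an organization-wide analyzer for unused access findings
# (run from the delegated administrator account)
aws accessanalyzer create-analyzer \
    --analyzer-name OrgUnusedAccessAnalyzer \
    --type ORGANIZATION_UNUSED_ACCESS \
    --configuration 'unusedAccess={unusedAccessAge=90}'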

Now that you’ve created the analyzer, IAM Access Analyzer starts reporting findings for unused access across the IAM users and roles in your organization. IAM Access Analyzer periodically scans your IAM roles and users to update unused access findings. Additionally, if one of your roles, users, or policies is updated or deleted, IAM Access Analyzer automatically updates existing findings or creates new ones. IAM Access Analyzer uses a service-linked role to review last accessed information for all roles, user access keys, and user passwords in your organization. For active IAM roles and users, IAM Access Analyzer uses IAM service and action last accessed information to identify unused permissions.

Note: Although IAM Access Analyzer is a regional service (that is, you enable it for a specific AWS Region), unused access findings are linked to IAM resources that are global (that is, not tied to a Region). To avoid duplicate findings and costs, enable your analyzer for unused access in the single Region where you want to review and operate findings.

IAM Access Analyzer findings dashboard

Your analyzer aggregates findings from across your organization and presents them on a dashboard. The dashboard aggregates, in the selected Region, findings for both external access and unused access—although this post focuses on unused access findings only. You can use the dashboard for unused access findings to centrally review the breakdown of findings by account or finding types to identify areas to prioritize for your inspection (for example, sensitive accounts, type of findings, type of environment, or confidence in refinement).

Unused access findings dashboard – Findings overview

Review the findings overview to identify the total findings for your organization and the breakdown by finding type. Figure 2 shows an example of an organization with 100 active findings. The finding type Unused access keys appears in every account and represents the largest share of findings. To move toward least privilege and to avoid long-term credentials, the security team should clean up the unused access keys.

Figure 2: Unused access finding dashboard

Unused access findings dashboard – Accounts with most findings

Review the dashboard to identify the accounts with the highest number of findings and the distribution per finding type. In Figure 2, the Audit account has the highest number of findings and might need attention. The account has five unused access keys and six roles with unused permissions. The security team should prioritize this account based on volume of findings and review the findings associated with the account.

Review unused access findings

In this section, we’ll show you how to review findings. We’ll walk through two examples: an unused access key finding and an unused permissions finding.

Finding example: unused access keys

As shown previously in Figure 2, the IAM Access Analyzer dashboard showed that accounts with the most findings were primarily associated with unused access keys. Let’s review a finding linked to unused access keys.

To review the finding for unused access keys

  1. Open the IAM Access Analyzer console, and in the left navigation pane, select Unused access.
  2. Select your analyzer to view the unused access findings.
  3. In the search dropdown list, select the property Findings type, the Equals operator, and the value Unused access key to get only Findings type = Unused access key, as shown in Figure 3.
     
    Figure 3: List of unused access findings

  4. Select one of the findings to get a view of the available access keys for an IAM user, their status, creation date, and last used date. Figure 4 shows an example in which one of the access keys has never been used, and the other was used 137 days ago.
     
    Figure 4: Finding example – Unused IAM user access keys

From here, you can investigate further with the development teams to identify whether the access keys are still needed. If they aren’t needed, you should delete the access keys.
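
To act on these findings at scale, you can also query them programmatically and then remove keys that the owning teams confirm are no longer needed. The following is a hedged sketch: the analyzer ARN, user name, and access key ID are placeholders, and the findingType filter value shown is our assumption, so verify it against the ListFindingsV2 API reference.

# List unused access key findings for the analyzer
# (filter value "UnusedIAMUserAccessKey" is an assumption; check the API reference)
aws accessanalyzer list-findings-v2 \
    --analyzer-arn arn:aws:access-analyzer:eu-west-1:111122223333:analyzer/OrgUnusedAccessAnalyzer \
    --filter '{"findingType": {"eq": ["UnusedIAMUserAccessKey"]}}'

# After the owning team confirms the key is unneeded, delete it
aws iam delete-access-key \
    --user-name example-user \
    --access-key-id AKIAIOSFODNN7EXAMPLE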

Finding example: unused permissions

Another goal that your security team might have is to make sure that the IAM roles and users across your organization are following the principle of least privilege. Let’s walk through an example with findings associated with unused permissions.

To review findings for unused permissions

  1. On the list of unused access findings, apply the filter on Findings type = Unused permissions.
  2. Select a finding, as shown in Figure 5. In this example, the IAM role has 148 unused actions on Amazon Relational Database Service (Amazon RDS) and has not used a service action for 200 days. Similarly, the role has unused actions for other services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon DynamoDB.
     
    Figure 5: Finding example – Unused permissions

The security team now has a view of the unused actions for this role and can investigate with the development teams to check if those permissions are still required.

The development team can then refine the permissions granted to the role to remove the unused permissions.

Unused access findings notify you about unused permissions at the service level for all services, and at the action level for 200 services. For the list of supported actions, see IAM action last accessed information services and actions.
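
If you want to inspect action-level activity for a specific role yourself, you can use the same IAM last accessed information that IAM Access Analyzer relies on. The following is a minimal sketch; the role ARN is a placeholder, and the job ID comes from the first command’s output.

# Start an action-level last accessed report for a role
aws iam generate-service-last-accessed-details \
    --arn arn:aws:iam::111122223333:role/example-role \
    --granularity ACTION_LEVEL

# Retrieve the report by using the JobId returned by the previous command
aws iam get-service-last-accessed-details --job-id <job-id>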

Take actions on findings

IAM Access Analyzer categorizes findings as active, resolved, and archived. In this section, we’ll show you how you can act on your findings.

Resolve findings

You can resolve unused access findings by deleting unused IAM roles, IAM users, IAM user credentials, or permissions. After you’ve completed this, IAM Access Analyzer automatically resolves the findings on your behalf.

To speed up the process of removing unused permissions, you can use IAM Access Analyzer policy generation to generate a fine-grained IAM policy based on your access analysis. For more information, see the blog post Use IAM Access Analyzer to generate IAM policies based on access activity found in your organization trail.
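
As a rough sketch, you could start a policy generation job for a role and retrieve the generated policy once the job completes. The role ARN here is a placeholder; depending on your setup, additional CloudTrail details may be needed, so check the policy generation documentation.

# Start generating a policy based on the role's access activity
# (you may also need --cloud-trail-details with your trail ARN, access role,
# and time period; see the policy generation documentation)
aws accessanalyzer start-policy-generation \
    --policy-generation-details principalArn=arn:aws:iam::111122223333:role/example-role

# Retrieve the generated policy by using the jobId returned above
aws accessanalyzer get-generated-policy --job-id <job-id>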

Archive findings

You can suppress a finding by archiving it, which moves the finding from the Active tab to the Archived tab in the IAM Access Analyzer console. To archive a finding, open the IAM Access Analyzer console, select a Finding ID, and in the Next steps section, select Archive, as shown in Figure 6.

Figure 6: Archive finding in the AWS Management Console

You can automate this process by creating archive rules that archive findings based on their attributes. An archive rule is linked to an analyzer, which means that you can have archive rules exclusively for unused access findings.

To illustrate this point, imagine that you have a subset of IAM roles that you don’t expect to use in your tracking period. For example, you might have an IAM role that is used exclusively for break glass access during your disaster recovery processes—you shouldn’t need to use this role frequently, so you can expect some unused access findings. For this example, let’s call the role DisasterRecoveryRole. You can create an archive rule to automatically archive unused access findings associated with roles named DisasterRecoveryRole, as shown in Figure 7.
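
A minimal sketch of creating that rule from the CLI follows. The analyzer and rule names are placeholders, and the filter key used here is our assumption, so check the archive rule documentation for the exact criteria supported for unused access findings.

# Automatically archive unused access findings whose resource contains
# "DisasterRecoveryRole" (filter key is an assumption; verify in the docs)
aws accessanalyzer create-archive-rule \
    --analyzer-name OrgUnusedAccessAnalyzer \
    --rule-name ArchiveDisasterRecoveryRole \
    --filter '{"resource": {"contains": ["DisasterRecoveryRole"]}}'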

Figure 7: Example of an archive rule

Automation

IAM Access Analyzer exports findings to both Amazon EventBridge and AWS Security Hub. Security Hub also forwards events to EventBridge.

Using an EventBridge rule, you can match the incoming events associated with IAM Access Analyzer unused access findings and send them to targets for processing. For example, you can notify the account owners so that they can investigate and remediate unused IAM roles, user credentials, or permissions.

For more information, see Monitoring AWS Identity and Access Management Access Analyzer with Amazon EventBridge.
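
As a rough sketch, an EventBridge rule that matches unused access finding events might look like the following. The detail-type string shown is our assumption, so confirm it against the event samples in the documentation before relying on it; the rule name and target ARN are placeholders.

# Match IAM Access Analyzer unused access finding events
# (detail-type value is an assumption; verify against the documentation)
aws events put-rule \
    --name unused-access-findings \
    --event-pattern '{"source": ["aws.access-analyzer"], "detail-type": ["Unused Access Finding for IAM entities"]}'

# Then attach a target, for example an SNS topic
aws events put-targets \
    --rule unused-access-findings \
    --targets 'Id=1,Arn=arn:aws:sns:eu-west-1:111122223333:security-notifications'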

Conclusion

With IAM Access Analyzer, you can centrally identify, review, and refine unused access across your organization. As summarized in Figure 8, you can use the dashboard to review findings and prioritize which accounts to review based on the volume of findings. The findings highlight unused roles, unused access keys for IAM users, and unused passwords for IAM users. For active IAM roles and users, the findings provide visibility into unused services and actions. By reviewing and refining unused access, you can improve your security posture and get closer to the principle of least privilege at scale.

Figure 8: Process to address unused access findings

The new IAM Access Analyzer unused access findings and dashboard are available in all AWS Regions, excluding the AWS GovCloud (US) Regions and AWS China Regions. To learn more about how to use IAM Access Analyzer to detect unused access, see the IAM Access Analyzer documentation.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Achraf Moussadek-Kabdani

Achraf is a Senior Security Specialist at AWS. He works with global financial services customers to assess and improve their security posture. He is both a builder and advisor, supporting his customers to meet their security objectives while making security a business enabler.

Yevgeniy Ilyin

Yevgeniy is a Solutions Architect at AWS. He has over 20 years of experience working at all levels of software development and solutions architecture and has used programming languages from COBOL and Assembler to .NET, Java, and Python. He develops and codes cloud-native solutions with a focus on big data, analytics, and data engineering.

Mathangi Ramesh

Mathangi is the product manager for IAM. She enjoys talking to customers and working with data to solve problems. Outside of work, Mathangi is a fitness enthusiast and a Bharatanatyam dancer. She holds an MBA degree from Carnegie Mellon University.

Introducing IAM Access Analyzer custom policy checks

Post Syndicated from Mitch Beaumont original https://aws.amazon.com/blogs/security/introducing-iam-access-analyzer-custom-policy-checks/

AWS Identity and Access Management (IAM) Access Analyzer was launched in late 2019. Access Analyzer guides customers toward least-privilege permissions across Amazon Web Services (AWS) by using analysis techniques, such as automated reasoning, to make it simpler for customers to set, verify, and refine IAM permissions. Today, we are excited to announce the general availability of IAM Access Analyzer custom policy checks, a new IAM Access Analyzer feature that helps customers accurately and proactively check IAM policies for critical permissions and increases in policy permissiveness.

In this post, we’ll show how you can integrate custom policy checks into builder workflows to automate the identification of overly permissive IAM policies and IAM policies that contain permissions that you decide are sensitive or critical.

What is the problem?

Although security teams are responsible for the overall security posture of the organization, developers are the ones creating the applications that require permissions. To enable developers to move fast while maintaining high levels of security, organizations look for ways to safely delegate the authoring of IAM policies to developers. Many AWS customers implement manual IAM policy reviews before deploying developer-authored policies to production environments. Customers follow this practice to try to prevent excessive or unwanted permissions from finding their way into production. Depending on the volume and complexity of the policies that need to be reviewed, these reviews can be intensive and take time. The result is a slowdown in development and potential delays in deploying applications and services. Some customers write custom tooling to remove the manual burden of policy reviews, but this can be costly to build and maintain.

How do custom policy checks solve that problem?

Custom policy checks are a new IAM Access Analyzer capability that helps security teams accurately and proactively identify critical permissions in their policies. Custom policy checks can also tell you if a new version of a policy is more permissive than the previous version. Custom policy checks use automated reasoning, a form of static analysis, to provide a higher level of security assurance in the cloud. For more information, see Formal Reasoning About the Security of Amazon Web Services.

Custom policy checks can be embedded in a continuous integration and continuous delivery (CI/CD) pipeline so that checks can be run against policies without having to deploy the policies. In addition, developers can run custom policy checks from their local development environments and get fast feedback about whether or not the policies they are authoring are in line with your organization’s security standards.

How to analyze IAM policies with custom policy checks

In this section, we provide step-by-step instructions for using custom policy checks to analyze IAM policies.

Prerequisites

To complete the examples in our walkthrough, you will need the following:

  1. An AWS account, and an identity with permissions to use the AWS services and create the resources used in the following examples. For more information, see the full sample code used in this blog post on GitHub.
  2. An installed and configured AWS CLI. For more information, see Configure the AWS CLI.
  3. The AWS Cloud Development Kit (AWS CDK). For installation instructions, refer to Install the AWS CDK.

Example 1: Use custom policy checks to compare two IAM policies and check that one does not grant more access than the other

In this example, you will create two IAM identity policy documents, NewPolicyDocument and ExistingPolicyDocument. You will use the new CheckNoNewAccess API to compare these two policies and check that NewPolicyDocument does not grant more access than ExistingPolicyDocument.

Step 1: Create two IAM identity policy documents

  1. Use the following command to create ExistingPolicyDocument.
    cat << EOF > existing-policy-document.json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:StartInstances",
                    "ec2:StopInstances"
                ],
                "Resource": "arn:aws:ec2:*:*:instance/*",
                "Condition": {
                    "StringEquals": {
                        "aws:ResourceTag/Owner": "\${aws:username}"
                    }
                }
            }
        ]
    }
    EOF

  2. Use the following command to create NewPolicyDocument.
    cat << EOF > new-policy-document.json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:StartInstances",
                    "ec2:StopInstances"
                ],
                "Resource": "arn:aws:ec2:*:*:instance/*"
            }
        ]
    }
    EOF

Notice that ExistingPolicyDocument grants access to the ec2:StartInstances and ec2:StopInstances actions only if the aws:ResourceTag/Owner condition is satisfied; in other words, only if the value of the resource’s Owner tag matches the policy variable aws:username. NewPolicyDocument grants access to the same actions, but does not include a condition key.

Step 2: Check the policies by using the AWS CLI

  1. Use the following command to call the CheckNoNewAccess API to check whether NewPolicyDocument grants more access than ExistingPolicyDocument.
    aws accessanalyzer check-no-new-access \
    --new-policy-document file://new-policy-document.json \
    --existing-policy-document file://existing-policy-document.json \
    --policy-type IDENTITY_POLICY

After a moment, you will see a response from Access Analyzer. The response will look similar to the following.

{
    "result": "FAIL",
    "message": "The modified permissions grant new access compared to your existing policy.",
    "reasons": [
        {
            "description": "New access in the statement with index: 1.",
            "statementIndex": 1
        }
    ]
}

In this example, the validation returned a result of FAIL. This is because NewPolicyDocument is missing the condition key, potentially granting any principal with this identity policy attached more access than intended or needed.
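
To see the passing case, you can run the same check with a policy compared against itself; a policy can never grant more access than itself, so this is a quick way to sanity-check your setup. The following sketch reuses the file created earlier.

aws accessanalyzer check-no-new-access \
--new-policy-document file://existing-policy-document.json \
--existing-policy-document file://existing-policy-document.json \
--policy-type IDENTITY_POLICY

This time, the response should contain a result of PASS.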

Example 2: Use custom policy checks to check that an IAM policy does not contain sensitive permissions

In this example, you will create an IAM identity-based policy that contains a set of permissions. You will use the CheckAccessNotGranted API to check that the new policy does not give permissions to disable AWS CloudTrail or delete any associated trails.

Step 1: Create a new IAM identity policy document

  • Use the following command to create IamPolicyDocument.
    cat << EOF > iam-policy-document.json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "cloudtrail:StopLogging",
                    "cloudtrail:Delete*"
                ],
                "Resource": ["*"] 
            }
        ]
    }
    EOF

Step 2: Check the policy by using the AWS CLI

  • Use the following command to call the CheckAccessNotGranted API to check if the new policy grants permission to the set of sensitive actions. In this example, you are asking Access Analyzer to check that IamPolicyDocument does not contain the actions cloudtrail:StopLogging or cloudtrail:DeleteTrail (passed as a list to the access parameter).
    aws accessanalyzer check-access-not-granted \
    --policy-document file://iam-policy-document.json \
    --access actions=cloudtrail:StopLogging,cloudtrail:DeleteTrail \
    --policy-type IDENTITY_POLICY

Because the policy that you created contains both cloudtrail:StopLogging and cloudtrail:DeleteTrail actions, Access Analyzer returns a FAIL.

{
    "result": "FAIL",
    "message": "The policy document grants access to perform one or more of the listed actions.",
    "reasons": [
        {
            "description": "One or more of the listed actions in the statement with index: 0.",
            "statementIndex": 0
        }
    ]
}
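
Conversely, a policy that doesn’t include the listed actions passes the check. As a quick sketch, the following hypothetical read-only policy should return a result of PASS when checked against the same two CloudTrail actions.

cat << EOF > readonly-policy-document.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudtrail:LookupEvents",
            "Resource": "*"
        }
    ]
}
EOF

aws accessanalyzer check-access-not-granted \
--policy-document file://readonly-policy-document.json \
--access actions=cloudtrail:StopLogging,cloudtrail:DeleteTrail \
--policy-type IDENTITY_POLICY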

Example 3: Integrate custom policy checks into the developer workflow

Building on the previous two examples, in this example, you will automate the analysis of the IAM policies defined in an AWS CloudFormation template. Figure 1 shows the workflow that will be used. The workflow will initiate each time a pull request is created against the main branch of an AWS CodeCommit repository called my-iam-policy (the commit stage in Figure 1). The first check uses the CheckNoNewAccess API to determine if the updated policy is more permissive than a reference IAM policy. The second check uses the CheckAccessNotGranted API to automatically check for critical permissions within the policy (the validation stage in Figure 1). In both cases, if the updated policy is more permissive, or contains critical permissions, a comment with the results of the validation is posted to the pull request. This information can then be used to decide whether the pull request is merged into the main branch for deployment (the deploy stage in Figure 1).

Figure 1: Diagram of the pipeline that will check policies

Step 1: Deploy the infrastructure and set up the pipeline

  1. Use the following command to clone the AWS Cloud Development Kit (AWS CDK) project associated with this blog post.
    git clone https://github.com/aws-samples/access-analyzer-automated-policy-analysis-blog.git
    cd ./access-analyzer-automated-policy-analysis-blog

  2. Create a virtual Python environment to contain the project dependencies by using the following command.
    python3 -m venv .venv

  3. Activate the virtual environment with the following command.
    source .venv/bin/activate

  4. Install the project requirements by using the following command.
    pip install -r requirements.txt

  5. Use the following command to update the CDK CLI to the latest major version.
    npm install -g aws-cdk@2 --force

  6. Before you can deploy the CDK project, use the following command to bootstrap your AWS environment. Bootstrapping is the process of creating resources needed for deploying CDK projects. These resources include an Amazon Simple Storage Service (Amazon S3) bucket for storing files and IAM roles that grant permissions needed to perform deployments.
    cdk bootstrap

  7. Finally, use the following command to deploy the pipeline infrastructure.
    cdk deploy --require-approval never

    The deployment will take a few minutes to complete. Feel free to grab a coffee and check back shortly.

    When the deployment completes, there will be two stack outputs listed: one with a name that contains CodeCommitRepo and another with a name that contains ConfigBucket. Make a note of the values of these outputs, because you will need them later.

    The deployed pipeline is displayed in the AWS CodePipeline console and should look similar to the pipeline shown in Figure 2.

    Figure 2: AWS CodePipeline and CodeBuild Management Console view

    In addition to initiating when a pull request is created, the newly deployed pipeline can also be initiated when changes to the main branch of the AWS CodeCommit repository are detected. The pipeline has three stages: CheckoutSource, IAMPolicyValidation, and Deploy. The CheckoutSource stage checks out the contents of the my-iam-policy repository when the pipeline is triggered due to a change in the main branch.

    The IAMPolicyValidation stage, which runs after the CheckoutSource stage or when a pull request has been created against the main branch, has two actions. The first action, Check no new access, verifies that changes to the IAM policies in the CloudFormation template do not grant more access than a pre-defined reference policy. The second action, Check access not granted, verifies that those same updates do not grant access to API actions that are deemed sensitive or critical. Finally, the Deploy stage deploys the resources defined in the CloudFormation template, if the actions in the IAMPolicyValidation stage are successful.

    To analyze the IAM policies, the Check no new access and Check access not granted actions depend on a reference policy and a predefined list of API actions, respectively.

  8. Use the following command to create the reference policy.
    cd ../ 
    cat << EOF > cnna-reference-policy.json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "*",
                "Resource": "*"
            },
            {
                "Effect": "Deny",
                "Action": "iam:PassRole",
                "Resource": "arn:aws:iam::*:role/my-sensitive-roles/*"
            }
        ]
    }	
    EOF

    This reference policy sets out the maximum permissions for policies that you plan to validate with custom policy checks. The iam:PassRole permission allows an IAM principal to pass an IAM role to an AWS service, like Amazon Elastic Compute Cloud (Amazon EC2) or AWS Lambda. The reference policy says that the only way a policy can be more permissive is if it allows iam:PassRole on this group of sensitive resources: arn:aws:iam::*:role/my-sensitive-roles/*.

    Why might a reference policy be useful? A reference policy helps ensure that a particular combination of actions, resources, and conditions is not allowed in your environment. Reference policies typically allow actions and resources in one statement, then deny the problematic permissions in a second statement. This means that a policy that is more permissive than the reference policy allows access to a permission that the reference policy has denied.

    In this example, a developer who is authorized to create IAM roles could, intentionally or unintentionally, create an IAM role for an AWS service (like Amazon EC2 or AWS Lambda) that has permission to pass a privileged role to another service or principal, leading to an escalation of privilege. You can also run the same check locally against the reference policy before you push changes; see the sketch after this list.

  9. Use the following command to create a list of sensitive actions. This list will be parsed during the build pipeline and passed to the CheckAccessNotGranted API. If the policy grants access to one or more of the sensitive actions in this list, a result of FAIL will be returned. To keep this example simple, add a single API action, as follows.
    cat << EOF > sensitive-actions.file
    dynamodb:DeleteTable
    EOF

  10. So that the CodeBuild projects can access the dependencies, use the following command to copy the cnna-reference-policy.json and sensitive-actions.file files to an S3 bucket. Refer to the stack outputs you noted earlier and replace <ConfigBucket> with the name of the S3 bucket created in your environment.
    aws s3 cp ./cnna-reference-policy.json s3://<ConfigBucket>/cnna-reference-policy.json
    aws s3 cp ./sensitive-actions.file s3://<ConfigBucket>/sensitive-actions.file
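
Before handing validation off to the pipeline, you can run the same check locally against the reference policy you just created. This is a sketch; candidate-policy.json stands for whichever policy document you’re authoring.

# Compare a policy you're drafting against the reference policy
aws accessanalyzer check-no-new-access \
--new-policy-document file://candidate-policy.json \
--existing-policy-document file://cnna-reference-policy.json \
--policy-type IDENTITY_POLICY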

Step 2: Create a new CloudFormation template that defines an IAM policy

With the pipeline deployed, the next step is to clone the repository that was created and populate it with a CloudFormation template that defines an IAM policy.

  1. Install git-remote-codecommit by using the following command.
    pip install git-remote-codecommit

    For more information on installing and configuring git-remote-codecommit, see the AWS CodeCommit User Guide.

  2. With git-remote-codecommit installed, use the following command to clone the my-iam-policy repository from AWS CodeCommit.
    git clone codecommit://my-iam-policy && cd ./my-iam-policy

    If you’ve configured a named profile for use with the AWS CLI, use the following command, replacing <profile> with the name of your named profile.

    git clone codecommit://<profile>@my-iam-policy && cd ./my-iam-policy

  3. Use the following command to create the CloudFormation template in the local clone of the repository.
    cat << EOF > ec2-instance-role.yaml
    ---
    AWSTemplateFormatVersion: 2010-09-09
    Description: CloudFormation Template to deploy base resources for access_analyzer_blog
    Resources:
      EC2Role:
        Type: AWS::IAM::Role
        Properties:
          AssumeRolePolicyDocument:
            Version: 2012-10-17
            Statement:
            - Effect: Allow
              Principal:
                Service: ec2.amazonaws.com
              Action: sts:AssumeRole
          Path: /
          Policies:
          - PolicyName: my-application-permissions
            PolicyDocument:
              Version: 2012-10-17
              Statement:
              - Effect: Allow
                Action:
                  - 'ec2:RunInstances'
                  - 'lambda:CreateFunction'
                  - 'lambda:InvokeFunction'
                  - 'dynamodb:Scan'
                  - 'dynamodb:Query'
                  - 'dynamodb:UpdateItem'
                  - 'dynamodb:GetItem'
                Resource: '*'
              - Effect: Allow
                Action:
                  - iam:PassRole 
                Resource: "arn:aws:iam::*:role/my-custom-role"
            
      EC2InstanceProfile:
        Type: AWS::IAM::InstanceProfile
        Properties:
          Path: /
          Roles:
            - !Ref EC2Role
    EOF

The actions in the IAMPolicyValidation stage are run by a CodeBuild project. CodeBuild environments run arbitrary commands that are passed to the project using a buildspec file. Each project has already been configured to use an inline buildspec file.

You can inspect the buildspec file for each project by opening the project’s Build details page as shown in Figure 3.

Figure 3: AWS CodeBuild console and build details

Step 3: Run analysis on the IAM policy

The next step involves committing the first version of the CloudFormation template to the repository and checking two things: first, that the policy does not grant more access than the reference policy; and second, that the policy does not contain any of the sensitive actions defined in the sensitive-actions.file.

  1. To begin tracking the CloudFormation template created earlier, use the following command.
    git add ec2-instance-role.yaml 

  2. Commit the changes you have made to the repository.
    git commit -m 'committing a new CFN template with IAM policy'

  3. Finally, push these changes to the remote repository.
    git push

  4. Pushing these changes will initiate the pipeline. After a few minutes the pipeline should complete successfully. To view the status of the pipeline, do the following:
    1. Navigate to https://<region>.console.aws.amazon.com/codesuite/codepipeline/pipelines (replacing <region> with your AWS Region).
    2. Choose the pipeline called accessanalyzer-pipeline.
    3. Scroll down to the IAMPolicyValidation stage of the pipeline.
    4. For both the check no new access and check access not granted actions, choose View Logs to inspect the log output.
  5. If you inspect the build logs for both the check no new access and check access not granted actions within the pipeline, you should see that there were no blocking or non-blocking findings, similar to what is shown in Figure 4. This indicates that the policy was validated successfully. In other words, the policy was not more permissive than the reference policy, and it did not include any of the critical permissions.
    Figure 4: CodeBuild log entry confirming that the IAM policy was successfully validated

Step 4: Create a pull request to merge a new update to the CloudFormation template

In this step, you will make a change to the IAM policy in the CloudFormation template. The change deliberately makes the policy grant more access than the reference policy. The change also includes a critical permission.

  1. Use the following command to create a new branch called add-new-permissions in the local clone of the repository.
    git checkout -b add-new-permissions

  2. Next, edit the IAM policy in ec2-instance-role.yaml to include an additional API action, dynamodb:Delete*, and update the resource property of the inline policy to use an IAM role in the /my-sensitive-roles/* path. You can copy the following example, if you’re unsure of how to do this.
    ---
    AWSTemplateFormatVersion: 2010-09-09
    Description: CloudFormation Template to deploy base resources for access_analyzer_blog
    Resources:
      EC2Role:
        Type: AWS::IAM::Role
        Properties:
          AssumeRolePolicyDocument:
            Version: 2012-10-17
            Statement:
            - Effect: Allow
              Principal:
                Service: ec2.amazonaws.com
              Action: sts:AssumeRole
          Path: /
          Policies:
          - PolicyName: my-application-permissions
            PolicyDocument:
              Version: 2012-10-17
              Statement:
              - Effect: Allow
                Action:
                  - 'ec2:RunInstances'
                  - 'lambda:CreateFunction'
                  - 'lambda:InvokeFunction'
                  - 'dynamodb:Scan'
                  - 'dynamodb:Query'
                  - 'dynamodb:UpdateItem'
                  - 'dynamodb:GetItem'
                  - 'dynamodb:Delete*'
                Resource: '*'
              - Effect: Allow
                Action:
                  - iam:PassRole 
                Resource: "arn:aws:iam::*:role/my-sensitive-roles/my-custom-admin-role"
            
      EC2InstanceProfile:
        Type: AWS::IAM::InstanceProfile
        Properties:
          Path: /
          Roles:
            - !Ref EC2Role

  3. Commit the policy change and push the updated policy document to the repo by using the following commands.
    git add ec2-instance-role.yaml 
    git commit -m "adding new permission and allowing my ec2 instance to assume a pass sensitive IAM role"

  4. The add-new-permissions branch is currently a local branch. Use the following command to push the branch to the remote repository. This action will not initiate the pipeline, because the pipeline only runs when changes are made to the repository’s main branch.
    git push -u origin add-new-permissions

  5. With the new branch and changes pushed to the repository, follow these steps to create a pull request:
    1. Navigate to https://console.aws.amazon.com/codesuite/codecommit/repositories (don’t forget to switch to the correct Region).
    2. Choose the repository called my-iam-policy.
    3. Choose the branch add-new-permissions from the drop-down list at the top of the repository screen.
      Figure 5: my-iam-policy repository with new branch available

    4. Choose Create pull request.
    5. Enter a title and description for the pull request.
    6. (Optional) Scroll down to see the differences between the current version and new version of the CloudFormation template highlighted.
    7. Choose Create pull request.
  6. The creation of the pull request will initiate the pipeline to fetch the CloudFormation template from the repository and run the check no new access and check access not granted analysis actions.
  7. After a few minutes, choose the Activity tab for the pull request. You should see a comment from the pipeline that contains the results of the failed validation.
    Figure 6: Results from the failed validation posted as a comment to the pull request

Why did the validations fail?

The updated IAM role and inline policy failed validation for two reasons. First, the reference policy said that no policy should grant more permissions than the reference policy does. The reference policy in this example included a deny statement for the iam:PassRole permission with a resource of arn:aws:iam::*:role/my-sensitive-roles/*. The newly created inline policy included an allow statement for the iam:PassRole permission with a resource of arn:aws:iam::*:role/my-sensitive-roles/my-custom-admin-role. In other words, the new policy had more permissions than the reference policy.

Second, the list of critical permissions included the dynamodb:DeleteTable permission. The inline policy included a statement that would allow the EC2 instance to perform the dynamodb:DeleteTable action.

Cleanup

Use the following command to delete the infrastructure that was provisioned as part of the examples in this blog post.

cdk destroy 

Conclusion

In this post, I introduced you to two new IAM Access Analyzer APIs: CheckNoNewAccess and CheckAccessNotGranted. The main example in the post demonstrated one way in which you can use these APIs to automate security testing throughout the development lifecycle. The example did this by integrating both APIs into the developer workflow and validating the developer-authored IAM policy when the developer created a pull request to merge changes into the repository’s main branch. The automation helped the developer to get feedback about the problems with the IAM policy quickly, allowing the developer to take action in a timely way. This is often referred to as shifting security left — identifying misconfigurations early and automatically supporting an iterative, fail-fast model of continuous development and testing. Ultimately, this enables teams to make security an inherent part of a system’s design and architecture and can speed up product development workflow.

You can find the full sample code used in this blog post on GitHub.

To learn more about IAM Access Analyzer and the new custom policy checks feature, see the IAM Access Analyzer documentation.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Mitch Beaumont

Mitch is a Principal Solutions Architect for AWS, based in Sydney, Australia. Mitch works with some of Australia’s largest financial services customers, helping them to continually raise the security bar for the products and features that they build and ship. Outside of work, Mitch enjoys spending time with his family, photography, and surfing.

Matt Luttrell

Matt is a Principal Solutions Architect on the AWS Identity Solutions team. When he’s not spending time chasing his kids around, he enjoys skiing, cycling, and the occasional video game.

Use scalable controls for AWS services accessing your resources

Post Syndicated from James Greenwood original https://aws.amazon.com/blogs/security/use-scalable-controls-for-aws-services-accessing-your-resources/

Sometimes you want to configure an AWS service to access your resource in another service. For example, you can configure AWS CloudTrail, a service that monitors account activity across your AWS infrastructure, to write log data to your bucket in Amazon Simple Storage Service (Amazon S3). When you do this, you want assurance that the service will only access your resource on your behalf—you don’t want an untrusted entity to be able to use the service to access your resource. Before today, you could achieve this by using the two AWS Identity and Access Management (IAM) condition keys, aws:SourceAccount and aws:SourceArn. You can use these condition keys to help make sure that a service accesses your resource only on behalf of specific accounts or resources that you trust. However, because these condition keys require you to specify individual accounts and resources, they can be difficult to manage at scale, especially in larger organizations.

Recently, IAM launched two new condition keys that can help you achieve this in a more scalable way that is simpler to manage within your organization:

  • aws:SourceOrgID — use this condition key to make sure that an AWS service can access your resources only when the request originates from a particular organization ID in AWS Organizations.
  • aws:SourceOrgPaths — use this condition key to make sure that an AWS service can access your resources only when the request originates from one or more organizational units (OUs) in your organization.

In this blog post, we describe how you can use the four available condition keys, including the two new ones, to help you control how AWS services access your resources.

Background

Imagine a scenario where you configure an AWS service to access your resource in another service. Let’s say you’re using Amazon CloudWatch to observe resources in your AWS environment, and you create an alarm that activates when certain conditions occur. When the alarm activates, you want it to publish messages to a topic that you create in Amazon Simple Notification Service (Amazon SNS) to generate notifications.

Figure 1 depicts this process.

Figure 1: Amazon CloudWatch publishing messages to an SNS topic

In this scenario, there’s a resource-based policy controlling access to your SNS topic. For CloudWatch to publish messages to it, you must configure the policy to allow access by CloudWatch. When you do this, you identify CloudWatch using an AWS service principal, in this case cloudwatch.amazonaws.com.

Cross-service access

This is an example of a common pattern known as cross-service access. With cross-service access, a calling service accesses your resource in a called service, and a resource-based policy attached to your resource grants access to the calling service. The calling service is identified using an AWS service principal in the form <SERVICE-NAME>.amazonaws.com, and it accesses your resource on behalf of an originating resource, such as a CloudWatch alarm.

Figure 2 shows cross-service access.

Figure 2: Cross-service access

When you configure cross-service access, you want to make sure that the calling service will access your resource only on your behalf. That means you want the originating resource to be controlled by someone whom you trust. If an untrusted entity creates their own CloudWatch alarm in their AWS environment, for example, then their alarm should not be able to publish messages to your SNS topic.

If an untrusted entity could use a calling service to access your resource on their behalf, it would be an example of what’s known as the confused deputy problem. The confused deputy problem is a security issue in which an entity that doesn’t have permission to perform an action coerces a more privileged entity (in this case, a calling service) to perform the action instead.

Use condition keys to help prevent cross-service confused deputy issues

AWS provides global condition keys to help you prevent cross-service confused deputy issues. You can use these condition keys to control how AWS services access your resources.

Before today, you could use the aws:SourceAccount or aws:SourceArn condition keys to make sure that a calling service accesses your resource only when the request originates from a specific account (with aws:SourceAccount) or a specific originating resource (with aws:SourceArn). However, there are situations where you might want to allow multiple resources or accounts to use a calling service to access your resource. For example, you might want to create many VPC flow logs in an organization that publish to a central S3 bucket. To achieve this using the aws:SourceAccount or aws:SourceArn condition keys, you must enumerate all the originating accounts or resources individually in your resource-based policies. This can be difficult to manage, especially in large organizations, and can potentially cause your resource-based policy documents to reach size limits.
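
For example, returning to the CloudWatch-to-SNS scenario from earlier, a topic policy that uses the account-level and resource-level keys might look like the following sketch (the account ID, Region, and names are placeholders).

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudWatchAlarmsPublish",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudwatch.amazonaws.com"
            },
            "Action": "sns:Publish",
            "Resource": "arn:aws:sns:eu-west-1:111122223333:my-topic",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "111122223333"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:cloudwatch:eu-west-1:111122223333:alarm:*"
                }
            }
        }
    ]
}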

Now, you can use the new aws:SourceOrgID or aws:SourceOrgPaths condition keys to make sure that a calling service accesses your resource only when the request originates from a specific organization (with aws:SourceOrgID) or a specific organizational unit (with aws:SourceOrgPaths). This helps avoid the need to update policies when accounts are added or removed, reduces the size of policy documents, and makes it simpler to create and review policy statements.

The following summarizes the four condition keys that you can use to help prevent cross-service confused deputy issues. These keys work in a similar way, but with different levels of granularity.

  • aws:SourceOrgID — Allow a calling service to access your resource only on behalf of an organization that you trust. The value is the AWS organization ID of the resource making the cross-service access request. Single-valued key; use string operators. Example value: o-a1b2c3d4e5
  • aws:SourceOrgPaths — Allow a calling service to access your resource only on behalf of an organizational unit (OU) that you trust. The value is the organization entity path of the resource making the cross-service access request. Multivalued key; use set operators and string operators. Example value: o-a1b2c3d4e5/r-ab12/ou-ab12-11111111/ou-ab12-22222222/
  • aws:SourceAccount — Allow a calling service to access your resource only on behalf of an account that you trust. The value is the AWS account ID of the resource making the cross-service access request. Single-valued key; use string operators. Example value: 111122223333
  • aws:SourceArn — Allow a calling service to access your resource only on behalf of a resource that you trust. The value is the Amazon Resource Name (ARN) of the resource making the cross-service access request. Single-valued key; use ARN operators (recommended) or string operators. Example value: arn:aws:cloudwatch:eu-west-1:111122223333:alarm:myalarm

When to use the condition keys

AWS recommends that you use these condition keys in any resource-based policy statements that allow access by an AWS service, except where the relevant condition key is not yet supported by the service. To find out whether a condition key is supported by a particular service, see AWS global condition context keys in the AWS Identity and Access Management User Guide.

Note: Only use these condition keys in resource-based policies that allow access by an AWS service. Don’t use them in other use cases, including identity-based policies and service control policies (SCPs), where these condition keys won’t be populated.

Use condition keys for defense in depth

AWS services use a variety of mechanisms to help prevent cross-service confused deputy issues, and the details vary by service. For example, where a calling service accesses an S3 bucket, some services use S3 prefixes to help prevent confused deputy issues. For more information, see the relevant service documentation.

Where supported by the service, AWS recommends that you use the condition keys we describe in this post regardless of whether the service has another mechanism in place to help prevent cross-service confused deputy issues. This helps to make your intentions explicit, provide defense in depth, and guard against misconfigurations.

Example use cases

Let’s walk through some example use cases to learn how to use these condition keys in practice.

First, imagine you’re using Amazon Virtual Private Cloud (Amazon VPC) to manage logically isolated virtual networks. In Amazon VPC, you can configure flow logs, which capture information about your network traffic. Let’s say you want a flow log to write data into an S3 bucket for later analysis. This process is depicted in Figure 3.

Figure 3: Amazon VPC writing flow logs to an S3 bucket

This constitutes another cross-service access scenario. In this case, Amazon VPC is the calling service, Amazon S3 is the called service, the VPC flow log is the originating resource, and the S3 bucket is your resource in the called service.

To allow access, the resource-based policy for your S3 bucket (known as a bucket policy) must allow Amazon VPC to put objects there. The Principal element in this policy specifies the AWS service principal of the service that will access the resource, which for VPC flow logs is delivery.logs.amazonaws.com.

Initial policy without confused deputy prevention

The following is an initial version of the bucket policy that allows Amazon VPC to put objects in the bucket but doesn’t yet provide confused deputy prevention. We’re showing this policy for illustration purposes; don’t use it in its current form.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PARTIAL-EXAMPLE-DO-NOT-USE",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<DOC-EXAMPLE-BUCKET>/*"
        }
    ]
}

Note: For simplicity, we only show one of the policy statements that you need to allow VPC flow logs to write to a bucket. In a real-life bucket policy for flow logs, you need two policy statements: one allowing actions on the bucket, and one allowing actions on the bucket contents. These are described in Publish flow logs to Amazon S3. Both policy statements work in the same way with respect to confused deputy prevention.

This policy statement allows Amazon VPC to put objects in the bucket. However, it allows Amazon VPC to do that on behalf of any flow log in any account. There’s nothing in the policy to tell Amazon VPC that it should access this bucket only if the flow log belongs to a specific organization, OU, account, or resource that you trust.

Let’s now update the policy to help prevent cross-service confused deputy issues. For the rest of this post, the remaining policy samples provide confused deputy protection, but at different levels of granularity.

Specify a trusted organization

Continuing with the previous example, imagine that you now have an organization in AWS Organizations, and you want to create VPC flow logs in various accounts within your organization that publish to a central S3 bucket. You want Amazon VPC to put objects in the bucket only if the request originates from a flow log that resides in your organization.

You can achieve this by using the new aws:SourceOrgID condition key. In a cross-service access scenario, this condition key evaluates to the ID of the organization that the request came from. You can use this condition key in the Condition element of a resource-based policy to allow actions only if aws:SourceOrgID matches the ID of a specific organization, as shown in the following example. In your own policy, make sure to replace <DOC-EXAMPLE-BUCKET> and <MY-ORGANIZATION-ID> with your own information.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VPCLogsDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<DOC-EXAMPLE-BUCKET>/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceOrgID": "<MY-ORGANIZATION-ID>"
                }
            }
        }
    ]
}

The revised policy states that Amazon VPC can put objects in the bucket only if the request originates from a flow log in your organization. Now, if someone creates a flow log outside your organization and configures it to access your bucket, they will get an access denied error.

You can use aws:SourceOrgID in this way to allow a calling service to access your resource only if the request originates from a specific organization, as shown in Figure 4.

Figure 4: Specify a trusted organization using aws:SourceOrgID

Specify a trusted OU

What if you don’t want to trust your entire organization, but only part of it? Let’s consider a different scenario. Imagine that you want to send messages from Amazon SNS into a queue in Amazon Simple Queue Service (Amazon SQS) so they can be processed by consumers. This is depicted in Figure 5.

Figure 5: Amazon SNS sending messages to an SQS queue

Now imagine that you want your SQS queue to receive messages only if they originate from an SNS topic that resides in a specific organizational unit (OU) in your organization. For example, you might want to allow messages only if they originate from a production OU that is subject to change control.

You can achieve this by using the new aws:SourceOrgPaths condition key. As before, you use this condition key in a resource-based policy attached to your resource. In a cross-service access scenario, this condition key evaluates to the AWS Organizations entity path that the request came from. An entity path is a text representation of an entity within an organization.

You build an entity path for an OU by using the IDs of the organization, root, and all OUs in the path down to and including the OU. For example, consider the organizational structure shown in Figure 6.

Figure 6: Example organization structure

In this example, you can specify the Prod OU by using the following entity path:

o-a1b2c3d4e5/r-ab12/ou-ab12-11111111/ou-ab12-22222222/

For more information about how to construct an entity path, see Understand the AWS Organizations entity path.

Let’s now match the aws:SourceOrgPaths condition key against a specific entity path in the Condition element of a resource-based policy for an SQS queue. In your own policy, make sure to replace <MY-QUEUE-ARN> and <MY-ENTITY-PATH> with your own information.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow-SNS-SendMessage",
            "Effect": "Allow",
            "Principal": {
                "Service": "sns.amazonaws.com"
            },
            "Action": "sqs:SendMessage",
            "Resource": "<MY-QUEUE-ARN>",
            "Condition": {
                "Null": {
                    "aws:SourceOrgPaths": "false"
                },
                "ForAllValues:StringEquals": {
                    "aws:SourceOrgPaths": "<MY-ENTITY-PATH>"
                }
            }
        }
    ]
}

Note: aws:SourceOrgPaths is a multivalued condition key, which means it’s capable of having multiple values in the request context. At the time of writing, it contains a single entity path if the request originates from an account in an organization, and a null value if the request originates from an account that’s not in an organization. Because this key is multivalued, you need to use both a set operator and a string operator to compare values.

In this policy, there are two conditions in the Condition block. The first uses the Null condition operator and compares with a false value to confirm that the condition key’s value is not null. The second uses set operator ForAllValues, which returns true if every condition key value in the request matches at least one value in your policy condition, and string operator StringEquals, which requires an exact match with a value specified in your policy condition.

Note: The reason for the null check is that set operator ForAllValues returns true when a condition key resolves to null. With an Allow effect and the null check in place, access is denied if the request originates from an account that’s not in an organization.

With this policy applied to your SQS queue, Amazon SNS can send messages to your queue only if the message came from an SNS topic in a specific OU.

You can use aws:SourceOrgPaths in this way to allow a calling service to access your resource only if the request originates from a specific organizational unit, as shown in Figure 7.

Figure 7: Specify a trusted OU using aws:SourceOrgPaths

Specify a trusted OU and its children

In the previous example, we specified a trusted OU, but that didn’t include its child OUs. What if you want to include its children as well?

You can achieve this by replacing the string operator StringEquals with StringLike. This allows you to use wildcards in the entity path. Using the organization structure from the previous example, the following Condition evaluates to true only if the condition key value is not null and the request originates from the Prod OU or any of its child OUs.

"Condition": {
    "Null": {
        "aws:SourceOrgPaths": "false"
    },
    "ForAllValues:StringLike": {
        "aws:SourceOrgPaths": "o-a1b2c3d4e5/r-ab12/ou-ab12-11111111/ou-ab12-22222222/*"
    }
}

Specify a trusted account

If you want to be more granular, you can allow a service to access your resource only if the request originates from a specific account. You can achieve this by using the aws:SourceAccount condition key. In a cross-service access scenario, this condition key evaluates to the ID of the account that the request came from. 

The following Condition evaluates to true only if the request originates from the account that you specify in the policy. In your own policy, make sure to replace <MY-ACCOUNT-ID> with your own information.

"Condition": {
    "StringEquals": {
        "aws:SourceAccount": "<MY-ACCOUNT-ID>"
    }
}

You can use this condition element within a resource-based policy to allow a calling service to access your resource only if the request originates from a specific account, as shown in Figure 8.

Figure 8: Specify a trusted account using aws:SourceAccount

Specify a trusted resource

If you want to be even more granular, you can allow a service to access your resource only if the request originates from a specific resource. For example, you can allow Amazon SNS to send messages to your SQS queue only if the request originates from a specific topic within Amazon SNS.

You can achieve this by using the aws:SourceArn condition key. In a cross-service access scenario, this condition key evaluates to the Amazon Resource Name (ARN) of the originating resource. This provides the most granular form of cross-service confused deputy prevention.

The following Condition evaluates to true only if the request originates from the resource that you specify in the policy. In your own policy, make sure to replace <MY-RESOURCE-ARN> with your own information.

"Condition": {
    "ArnEquals": {
        "aws:SourceArn": "<MY-RESOURCE-ARN>"
    }
}

Note: AWS recommends that you use an ARN operator rather than a string operator when comparing ARNs. This example uses ArnEquals to match the condition key value against the ARN specified in the policy.

You can use this condition element within a resource-based policy to allow a calling service to access your resource only if the request comes from a specific originating resource, as shown in Figure 9.

Figure 9: Specify a trusted resource using aws:SourceArn

Specify multiple trusted resources, accounts, OUs, or organizations

Each of the four condition keys accepts an array of values, so you can specify multiple trusted resources, accounts, OUs, or organizations in a single policy condition, as shown in the following sketch.
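
For example, here’s a minimal sketch of a condition that trusts requests originating from either of two accounts; the account IDs are placeholders:

"Condition": {
    "StringEquals": {
        "aws:SourceAccount": [
            "111122223333",
            "444455556666"
        ]
    }
}

With a single-valued key such as aws:SourceAccount, the condition matches when the key equals any one of the listed values. The same array form works with aws:SourceArn (using ArnEquals) and aws:SourceOrgID, and you can list multiple entity paths for aws:SourceOrgPaths.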

Conclusion

In this post, you learned about cross-service access, in which an AWS service communicates with another AWS service to access your resource. You saw that it’s important to make sure that such services access your resources only on your behalf, to help avoid cross-service confused deputy issues.

We showed you how to help prevent cross-service confused deputy issues by using two new condition keys, aws:SourceOrgID and aws:SourceOrgPaths, together with the existing condition keys aws:SourceAccount and aws:SourceArn. You should use these condition keys in any resource-based policy statements that allow access by an AWS service, provided the calling service supports the condition key. This helps make sure that a calling service can access your resource only when the request originates from a specific organization, OU, account, or resource that you trust.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on AWS IAM re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

James Greenwood

James is a Principal Security Solutions Architect who helps AWS Financial Services customers meet their security and compliance objectives in the AWS Cloud. James has a background in identity and access management, authentication, credential management, and data protection with more than 20 years of experience in the financial services industry.

Sophia Yang

Sophia is a Senior Product Manager on the AWS Identity and Access Management (IAM) service. She is passionate about enabling customers to build and innovate in AWS in a secure manner.

How to control access to AWS resources based on AWS account, OU, or organization

Post Syndicated from Rishi Mehrotra original https://aws.amazon.com/blogs/security/how-to-control-access-to-aws-resources-based-on-aws-account-ou-or-organization/

AWS Identity and Access Management (IAM) recently launched new condition keys to make it simpler to control access to your resources along your Amazon Web Services (AWS) organizational boundaries. AWS recommends that you set up multiple accounts as your workloads grow, and you can use multiple AWS accounts to isolate workloads or applications that have specific security requirements. By using the new conditions, aws:ResourceOrgID, aws:ResourceOrgPaths, and aws:ResourceAccount, you can define access controls based on an AWS resource’s organization, organizational unit (OU), or account. These conditions make it simpler to require that your principals (users and roles) can only access resources inside a specific boundary within your organization. You can combine the new conditions with other IAM capabilities to restrict access to and from AWS accounts that are not part of your organization.

This post will help you get started using the new condition keys. We’ll show the details of the new condition keys and walk through a detailed example based on the following scenario. We’ll also provide references and links to help you learn more about how to establish access control perimeters around your AWS accounts.

Consider a common scenario where you would like to prevent principals in your AWS organization from adding objects to Amazon Simple Storage Service (Amazon S3) buckets that don’t belong to your organization. To accomplish this, you can configure an IAM policy to deny access to S3 actions unless aws:ResourceOrgID matches your unique AWS organization ID. Because the policy references your entire organization, rather than individual S3 resources, you have a convenient way to maintain this security posture across any number of resources you control. The new conditions give you the tools to create a security baseline for your IAM principals and help you prevent unintended access to resources in accounts that you don’t control. You can attach this policy to an IAM principal to apply this rule to a single user or role, or use service control policies (SCPs) in AWS Organizations to apply the rule broadly across your AWS accounts. IAM principals that are subject to this policy will only be able to perform S3 actions on buckets and objects within your organization, regardless of their other permissions granted through IAM policies or S3 bucket policies.

New condition key details

You can use the aws:ResourceOrgID, aws:ResourceOrgPaths, and aws:ResourceAccount condition keys in IAM policies to place controls on the resources that your principals can access. The following list explains the new condition keys and the values these keys can take.

  • aws:ResourceOrgID: The AWS organization ID of the resource being accessed. Supports all string operators. Single-value key; the value is an AWS organization ID.
  • aws:ResourceOrgPaths: The organization path of the resource being accessed. Supports all string operators. Multi-value key; the values are organization paths built from AWS organization and organizational unit IDs.
  • aws:ResourceAccount: The AWS account ID of the resource being accessed. Supports all string operators. Single-value key; the value is an AWS account ID.

Note: Of the three keys, only aws:ResourceOrgPaths is a multi-value condition key, while aws:ResourceAccount and aws:ResourceOrgID are single-value keys. For information on how to use multi-value keys, see Creating a condition with multiple keys or values in the IAM documentation.

Resource owner keys compared to principal owner keys

The new IAM condition keys complement the existing principal condition keys aws:PrincipalAccount, aws:PrincipalOrgPaths, and aws:PrincipalOrgID. The principal condition keys help you define which AWS accounts, organizational units (OUs), and organizations are allowed to access your resources. For more information on the principal conditions, see Use IAM to share your AWS resources with groups of AWS accounts in AWS Organizations on the AWS Security Blog.

Using the principal and resource keys together helps you establish permission guardrails around your AWS principals and resources, and makes it simpler to keep your data inside the organization boundaries you define as you continue to scale. For example, you can define identity-based policies that prevent your IAM principals from accessing resources outside your organization (by using the aws:ResourceOrgID condition). Next, you can define resource-based policies that prevent IAM principals outside your organization from accessing resources that are inside your organization boundary (by using the aws:PrincipalOrgID condition). The combination of both policies prevents any access to and from AWS accounts that are not part of your organization. In the next sections, we’ll walk through an example of how to configure the identity-based policy in your organization. For the resource-based policy, you can follow along with the example in An easier way to control access to AWS resources by using the AWS organization of IAM principals on the AWS Security blog.
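
For illustration, here’s a minimal sketch of the resource-based side for an S3 bucket; the bucket name and organization ID are placeholders. Note that this is a simplified example: a blanket Deny like this also affects AWS service principals, which don’t carry an organization ID, so in practice you might need to add exceptions for services that need access to the bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessFromOutsideMyOrg",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::<DOC-EXAMPLE-BUCKET>",
                "arn:aws:s3:::<DOC-EXAMPLE-BUCKET>/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalOrgID": "<MY-ORGANIZATION-ID>"
                }
            }
        }
    ]
}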

Setup for the examples

In the following sections, we’ll show an example IAM policy for each of the new conditions. To follow along with Example 1, which uses aws:ResourceAccount, you’ll just need an AWS account.

To follow along with Examples 2 and 3 that use aws:ResourceOrgPaths and aws:ResourceOrgID respectively, you’ll need to have an organization in AWS Organizations and at least one OU created. This blog post assumes that you have some familiarity with the basic concepts in IAM and AWS Organizations. If you need help creating an organization or want to learn more about AWS Organizations, visit Getting Started with AWS Organizations in the AWS documentation.

Which IAM policy type should I use?

You can implement the following examples as identity-based policies, or in SCPs that are managed in AWS Organizations. If you want to establish a boundary for some of your IAM principals, we recommend that you use identity-based policies. If you want to establish a boundary for an entire AWS account or for your organization, we recommend that you use SCPs. Because SCPs apply to an entire AWS account, you should take care when you apply the following policies to your organization, and account for any exceptions to these rules that might be necessary for some AWS services to function properly.

Example 1: Restrict access to AWS resources within a specific AWS account

Let’s look at an example IAM policy that restricts access along the boundary of a single AWS account. For this example, say that you have an IAM principal in account 222222222222, and you want to prevent the principal from accessing S3 objects outside of this account. To create this effect, you could attach the following IAM policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": " DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": [
            "222222222222"
          ]
        }
      }
    }
  ]
}

Note: This policy is not meant to replace your existing IAM access controls, because it does not grant any access. Instead, it can act as an additional guardrail for your other IAM permissions. You can use a policy like this to prevent your principals from accessing resources in AWS accounts that you don’t know or control, regardless of the permissions granted through other IAM policies.

This policy uses a Deny effect to block access to S3 actions unless the S3 resource being accessed is in account 222222222222, which prevents S3 access outside the boundary of a single AWS account. You can use a policy like this one to limit your IAM principals to accessing only the resources inside your trusted AWS accounts. To implement this example yourself, replace account ID 222222222222 in the policy with your own AWS account ID. For a policy that you can apply across multiple accounts while maintaining this restriction, you could instead compare against the aws:PrincipalAccount policy variable, which requires that the principal and the resource be in the same account, as in the following sketch (Example 3 uses a similar technique at the organization level).
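
Here’s a sketch of that variation, which uses the aws:PrincipalAccount policy variable in place of a hard-coded account ID:

"Condition": {
    "StringNotEquals": {
        "aws:ResourceAccount": "${aws:PrincipalAccount}"
    }
}

Because this condition sits in a Deny statement, access is denied whenever the account that owns the resource differs from the account of the principal making the request.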

Organization setup: Welcome to AnyCompany

For the next two examples, we’ll use an example organization called AnyCompany that we created in AWS Organizations. You can create a similar organization to follow along directly with these examples, or adapt the sample policies to fit your own organization. Figure 1 shows the organization structure for AnyCompany.

Figure 1: Organization structure for AnyCompany

Like all organizations, AnyCompany has an organization root. Under the root are three OUs: Media, Sports, and Governance. Under the Sports OU, there are three more OUs: Baseball, Basketball, and Football. AWS accounts in this organization are spread across all the OUs based on their business purpose. In total, there are six OUs in this organization.

Example 2: Restrict access to AWS resources within my organizational unit

Now that you’ve seen what the AnyCompany organization looks like, let’s walk through another example IAM policy that you can use to restrict access to a specific part of your organization. For this example, let’s say you want to restrict S3 object access within the following OUs in the AnyCompany organization:

  • Media
  • Sports
  • Baseball
  • Basketball
  • Football

To define a boundary around these OUs, you don’t need to list all of them in your IAM policy. Instead, you can use the organization structure to your advantage. The Baseball, Basketball, and Football OUs share a parent, the Sports OU. You can use the new aws:ResourceOrgPaths key to prevent access outside of the Media OU, the Sports OU, and any OUs under it. Here’s the IAM policy that achieves this effect.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": " DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringNotLike": {
          "aws:ResourceOrgPaths": [
            "o-acorg/r-acroot/ou-acroot-mediaou/",
            "o-acorg/r-acroot/ou-acroot-sportsou/*"
          ] 
        }
      }
    }
  ]
}

Note: Like the earlier example, this policy does not grant any access. Instead, this policy provides a backstop for your other IAM permissions, preventing your principals from accessing S3 objects outside an OU-defined boundary. If you want to require that your IAM principals consistently follow this rule, we recommend that you apply this policy as an SCP. In this example, we attached this policy to the root of our organization, applying it to all principals across all accounts in the AnyCompany organization.

The policy denies access to S3 actions unless the S3 resource being accessed is in a specific set of OUs in the AnyCompany organization. This policy is identical to Example 1, except for the condition block: The condition requires that aws:ResourceOrgPaths contains any of the listed OU paths. Because aws:ResourceOrgPaths is a multi-value condition, the policy uses the ForAllValues:StringNotLike operator to compare the values of aws:ResourceOrgPaths to the list of OUs in the policy.

The first OU path in the list is for the Media OU. The second OU path is the Sports OU, but it also adds the wildcard character * to the end of the path. The wildcard * matches any combination of characters, and so this condition matches both the Sports OU and any other OU further down its path. Using wildcards in the OU path allows you to implicitly reference other OUs inside the Sports OU, without having to list them explicitly in the policy. For more information about wildcards, refer to Using wildcards in resource ARNs in the IAM documentation.

Example 3: Restrict access to AWS resources within my organization

Finally, we’ll look at a simple example of a boundary defined at the level of an entire organization. This is the same use case as the preceding two examples (restricting S3 object access), but scoped to an organization instead of an account or a collection of OUs.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::*/*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceOrgID": "${aws:PrincipalOrgID}"
        }
      }
    }
  ]
}

Note: Like the earlier examples, this policy does not grant any access. Instead, this policy provides a backstop for your other IAM permissions, preventing your principals from accessing S3 objects outside your organization regardless of their other access permissions. If you want to require that your IAM principals consistently follow this rule, we recommend that you apply this policy as an SCP. As in the previous example, we attached this policy to the root of our organization, applying it to all accounts in the AnyCompany organization.

The policy denies access to S3 actions unless the S3 resource being accessed is in the same organization as the IAM principal that is accessing it. This policy is identical to Example 1, except for the condition block: The condition requires that aws:ResourceOrgID and aws:PrincipalOrgID must be equal to each other. With this requirement, the principal making the request and the resource being accessed must be in the same organization. This policy also applies to S3 resources that are created after the policy is put into effect, so it is simple to maintain the same security posture across all your resources.

For more information about aws:PrincipalOrgID, refer to AWS global condition context keys in the IAM documentation.

Learn more

In this post, we explored the new conditions and walked through a few examples that show you how to restrict access to S3 objects across the boundary of an account, OU, or organization. These tools work for more than just S3, though: you can use the new conditions to help you protect a wide variety of AWS services and actions.

If you have any questions, comments, or concerns, contact AWS Support or start a new thread on the AWS Identity and Access Management forum. Thanks for reading about this new feature. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Rishi Mehrotra

Rishi is a Product Manager in AWS IAM. He enjoys working with customers and influencing product decisions. Prior to Amazon, Rishi worked with enterprise IT customers after receiving an engineering degree in computer science. He recently earned an MBA from The University of Chicago Booth School of Business. Outside of work, Rishi enjoys biking, reading, and playing with his kids.

Author

Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.

Techniques for writing least privilege IAM policies

Post Syndicated from Ben Potter original https://aws.amazon.com/blogs/security/techniques-for-writing-least-privilege-iam-policies/

In this post, I’m going to share two techniques I’ve used to write least privilege AWS Identity and Access Management (IAM) policies. If you’re not familiar with IAM policy structure, I highly recommend that you read Understanding how IAM works and Policies and permissions in the IAM documentation.

Least privilege is a principle of granting only the permissions required to complete a task. Least privilege is also one of many Amazon Web Services (AWS) Well-Architected best practices that can help you build securely in the cloud. For example, if you have an Amazon Elastic Compute Cloud (Amazon EC2) instance that needs to access an Amazon Simple Storage Service (Amazon S3) bucket to get configuration data, you should only allow read access to the specific S3 bucket that contains the relevant data.

There are a number of ways to grant access to different types of resources, as some resources support both resource-based policies and IAM policies. This blog post will focus on demonstrating how you can use IAM policies to grant restrictive permissions to IAM principals to meet least privilege standards.

In AWS, an IAM principal can be a user, role, or group. These identities start with no permissions, and you add permissions by using a policy. In AWS, there are different types of policies that are used for different reasons. In this blog, I only give examples of identity-based policies, which attach to IAM principals to grant permissions to an identity. You can create and attach multiple identity-based policies to your IAM principals, and you can reuse them across your AWS accounts. There are two types of managed policies: customer managed policies, which you create and manage yourself, and AWS managed policies, which are provided as examples and cannot be modified, but can be copied, enhanced, and saved as customer managed policies. The main elements of a policy statement are as follows (a short example follows the list):

  • Effect: Specifies whether the statement will Allow or Deny an action.
  • Action: Describes a specific action or actions that will either be allowed or denied to run based on the Effect entered. API actions are unique to each service. For example, s3:ListAllMyBuckets is an Amazon S3 API action that enables an IAM principal to list all S3 buckets in the same account.
  • NotAction: Can be used as an alternative to Action. This element allows an IAM principal to invoke all API actions for a specific AWS service except those actions specified in this list.
  • Resource: Specifies the resources—for example, an S3 bucket or objects—that the policy applies to in Amazon Resource Name (ARN) format.
  • NotResource: Can be used instead of the Resource element to explicitly match every AWS resource except those specified.
  • Condition: Allows you to build expressions to match the condition keys and values in the policy against keys and values in the request context sent by the IAM principal. Condition keys can be service-specific or global. A global condition key can be used with any service. For example, a key of aws:CurrentTime can be used to allow access based on date and time.
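
Putting these elements together, here’s a minimal sketch of a single-statement identity-based policy; the action and date are illustrative placeholders. It allows listing the account’s S3 buckets, but only after a given point in time:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListBucketsAfterDate",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*",
            "Condition": {
                "DateGreaterThan": {
                    "aws:CurrentTime": "2024-01-01T00:00:00Z"
                }
            }
        }
    ]
}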

Starting with the visual editor

The visual editor is my default starting place for building policies as I like the wizard and seeing all available services, actions, and conditions without looking at the documentation. If there is a complex policy with many services, I often look at the AWS managed policies as a starting place for the actions that are required, then use the visual editor to fine tune and check the resources and conditions.

The policy I’m going to walk you through creating is to grant an AWS Lambda function permission to get specific objects from Amazon S3, and put items in a specific table in Amazon DynamoDB. You can access the visual editor when you choose Create policy under policies in the IAM console, or add policies when viewing a role, group, or user as shown in Figure 1. If you’re not familiar with creating policies, you can follow the full instructions in the IAM documentation.

Figure 1: Use the visual editor to create a policy

Begin by choosing the first service—S3—to grant access to as shown in Figure 2. You can only choose one service at a time, so you’ll need to add DynamoDB after.

Figure 2: Select S3 service

Now you will see a list of access levels with the option to manually add actions. Expand the read access level to show all read actions that are supported by the Amazon S3 service. You can now see all read access level actions. For getting an object, check the box for GetObject. Selecting the ? next to an action expands information including a description, supported resource types, and supported condition keys as shown in Figure 3.

Figure 3: Expand Read in Access level, select GetObject, and select the ? next to GetObject

Expand Resources, and you will see that the visual editor has listed object, because that is the only resource type supported by the GetObject action, as shown in Figure 4.

Figure 4: Expand Resources

Select Add ARN, which opens a dialogue to help you specify the ARN for the objects. Enter a bucket name, such as doc-example-bucket, and then the object name. For the object name, you can use a wildcard (*) as a suffix. For example, to allow objects beginning with alpha you would enter alpha*. This is an important step: for this least privilege policy, you are restricting access to a specific bucket and object prefix. You could even specify an individual object, depending on your use case.
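
For example, with a bucket named doc-example-bucket and the object prefix alpha, the resulting object ARN would be:

arn:aws:s3:::doc-example-bucket/alpha*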

Figure 5: Enter bucket name and object name

If you have multiple ARNs (bucket and objects) to allow, you can repeat the step.

Figure 6: ARN added for S3 object

The final step is to expand Request conditions and choose Add condition. The Add request condition dialogue will open. Select the drop-down next to Condition key to list the global condition keys, followed by the service-level condition keys. You’ll see that there’s an s3:ExistingObjectTag condition that, as the name suggests, matches an existing object tag. You can use this condition key to allow the GetObject request only when the object tag meets your condition. That means you can tag your objects with a specific tag key and value pair, and your policy condition must match this key-value pair to allow the action to execute. When you’re using condition keys with multiple keys or values, you can use condition operators and evaluation logic. As shown in Figure 7, tag-key is entered directly below the condition key; this is the key of the tag to match. For the Operator, select StringEquals to match the tag value exactly. Checking If exists means the condition is evaluated only when the condition key is present in the request; if the key is absent, the condition evaluates to true. The Value to enter is the actual tag value: tag-value, as shown in Figure 7.

Figure 7: Add request condition

That’s it for adding the S3 action, as shown in Figure 8.

Figure 8: S3 GetObject action with resource and conditions configured

Now you need to add the DynamoDB permissions by selecting Add additional permissions. Select Choose a service and then select DynamoDB. For actions, expand the Write access level, then choose PutItem.

Figure 9: Choose write access level

Expand Resources and then select Add ARN. The dialogue that appears will help you build the ARN just like it did for the Amazon S3 service. Enter the Region, for example the ap-southeast-2 (Sydney) Region, the account ID, and the table name. Choosing Add will add the resource ARN to your policy.
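
For example, with the ap-southeast-2 Region and placeholder values for the account ID and table name, the table ARN takes this form:

arn:aws:dynamodb:ap-southeast-2:<account-id>:table/<table-name>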

Figure 10: Enter Region, account, and table name

Now it’s time to add conditions. Expand Request conditions and then choose Add condition.

There are many DynamoDB condition keys that you could use; however, for this policy you can choose dynamodb:LeadingKeys to represent the partition key of the table. You can see from the documentation that the qualifier For all values in request is recommended. Because the Value uses a wildcard prefix, such as alpha*, choose the StringLike operator so that the wildcard is evaluated, as shown in Figure 11.

Figure 11: Add request conditions

Choosing Add will take you back to the main visual editor where you can choose Review policy to continue. Enter a name and description for the policy, and then choose Create policy.

You can now attach it to a role to test.
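
For reference, the policy produced by this walkthrough would look roughly like the following sketch. The bucket name, object prefix, Region, account ID, table name, and tag key and value are the placeholder values used above; the visual editor may generate slightly different statement IDs and formatting:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GetTaggedObjects",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::doc-example-bucket/alpha*",
            "Condition": {
                "StringEqualsIfExists": {
                    "s3:ExistingObjectTag/tag-key": "tag-value"
                }
            }
        },
        {
            "Sid": "PutItemsWithAllowedLeadingKeys",
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": "arn:aws:dynamodb:ap-southeast-2:<account-id>:table/<table-name>",
            "Condition": {
                "ForAllValues:StringLike": {
                    "dynamodb:LeadingKeys": [
                        "alpha*"
                    ]
                }
            }
        }
    ]
}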

You can see in this example that a policy can use least privilege by using specific resources and conditions. Note that sometimes when you use the AWS Management Console, it requires additional permissions to provide information for the console experience.

Starting with AWS managed policies

AWS managed policies can be a good starting place to see the actions typically associated with a particular service or job function. For example, you can attach the AmazonS3ReadOnlyAccess policy to a role used by an Amazon EC2 instance to allow read-only access to all Amazon S3 buckets. It has an effect of Allow, and two actions that use wildcards (*) to allow all Get and List actions for S3—for example, s3:GetObject and s3:ListBucket. The resource is a wildcard that covers all S3 buckets the account has access to. A useful feature of this policy is that it only allows read and list access to S3, and not to any other services or types of actions.
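
As a simplified rendering of that managed policy (the current version may include additional actions, such as the S3 Object Lambda variants), it looks roughly like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "*"
        }
    ]
}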

Let’s make our own custom IAM policy that is least privilege. Starting with the Action element, you can use the reference for Amazon S3 to see all actions, a description of what each action does, the resource types for each action, and the condition keys for each action. Now let’s imagine this policy is used by an Amazon EC2 instance to fetch an application configuration object from within an S3 bucket. Looking at the descriptions for actions starting with Get, you can see that the only action we really need is s3:GetObject. You can then use the Resource element to restrict the action to a set of objects prefixed with config within a specific bucket.

         "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3::: <doc-example-bucket>/<config*>"

Now that you’ve reduced the scope of what this policy can do for service actions and resources, you can add a Condition element that uses attribute-based access control (ABAC) to define conditions based on attributes—in this case, a resource tag. In this example, when you’re reading objects from a single bucket, you can set specific conditions to further reduce the scope of permissions given to an IAM principal. There’s an s3:ExistingObjectTag condition that you can use to allow the GetObject request only when the object tag meets your condition. That means you can tag your objects with a specific tag key and value pair, and your IAM policy condition must match this key-value pair to allow the API action to successfully run. When you’re using condition keys with multiple keys or values, you can use condition operators and evaluation logic. The ForAnyValue qualifier tests whether at least one member of the set of request values matches at least one member of the set of condition key values. Alternatively, you can use global condition keys that apply to all services:

         "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::<doc-example-bucket>/<config*>",
         "Condition": {
                "ForAnyValue:StringEquals": {
                    "s3:ExistingObjectTag/<tag-key>": "<tag-value>"
            }

In the preceding policy example, the condition element only allows s3:GetObject permissions if the object is tagged with a key of tag-key and a value of tag-value. While you’re experimenting, you can identify errors in your custom policies by using the IAM policy simulator or by reviewing the error messages recorded in AWS CloudTrail logs.

Conclusion

In this post, I’ve shown two different techniques that you can use to create least privilege policies for IAM. You can adapt these methods to create AWS Single Sign-On permission sets and AWS Organizations service control policies (SCPs). Starting with AWS managed policies is a useful strategy when a managed policy already exists for your use case; you can then reduce the scope of what it can do through more restrictive permissions. I tend to use the visual editor the most for editing policies because it saves looking up the resources and conditions for each action. I suggest that you start by reviewing the policies you’re already using. Start with policies that grant excessive permissions—like the example Administrator policy—and tie them back to the use case of the users or things that need the access. Use the IAM last accessed information and IAM best practices, and look at the AWS Well-Architected security best practices and the AWS Well-Architected Tool.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Potter

Ben is the global security leader for the AWS Well-Architected Framework and is responsible for sharing best practices in security with customers and partners. Ben is also an ambassador for the No More Ransom initiative helping fight cyber crime with Europol, McAfee, and law enforcement across the globe. You can learn more about him in this interview.