Tag Archives: AWS IAM

Implement tenant isolation for Amazon S3 and Aurora PostgreSQL by using ABAC

Post Syndicated from Ashutosh Upadhyay original https://aws.amazon.com/blogs/security/implement-tenant-isolation-for-amazon-s3-and-aurora-postgresql-by-using-abac/

In software as a service (SaaS) systems, which are designed to be used by multiple customers, isolating tenant data is a fundamental responsibility for SaaS providers. The practice of isolating tenant data in a multi-tenant application platform is called tenant isolation. In this post, we describe an approach you can use to achieve tenant isolation in Amazon Simple Storage Service (Amazon S3) and Amazon Aurora PostgreSQL-Compatible Edition databases by implementing attribute-based access control (ABAC). You can also adapt the same approach to achieve tenant isolation in other AWS services.

ABAC in Amazon Web Services (AWS), which uses tags to store attributes, offers advantages over the traditional role-based access control (RBAC) model. You can use fewer permissions policies, update your access control more efficiently as you grow, and last but not least, apply granular permissions for various AWS services. These granular permissions help you to implement an effective and coherent tenant isolation strategy for your customers and clients. Using the ABAC model helps you scale your permissions and simplify the management of granular policies. The ABAC model reduces the time and effort it takes to maintain policies that allow access to only the required resources.

The solution we present here uses the pool model of data partitioning. The pool model helps you avoid the higher costs of duplicated resources for each tenant and the specialized infrastructure code required to set up and maintain those copies.

Solution overview

In a typical customer environment where this solution is implemented, the tenant request for access might land at Amazon API Gateway, together with the tenant identifier, which in turn calls an AWS Lambda function. The Lambda function operates with a basic Lambda execution role, which should also have permissions to assume the tenant roles. As the request progresses, the Lambda function assumes the tenant role and makes the necessary calls to Amazon S3 or to an Aurora PostgreSQL-Compatible database. This solution helps you to achieve tenant isolation for objects stored in Amazon S3 and data elements stored in an Aurora PostgreSQL-Compatible database cluster.

Figure 1 shows the tenant isolation architecture for both Amazon S3 and Amazon Aurora PostgreSQL-Compatible databases.

Figure 1: Tenant isolation architecture diagram

As shown in the numbered diagram steps, the workflow for Amazon S3 tenant isolation is as follows:

  1. AWS Lambda sends an AWS Security Token Service (AWS STS) assume role request to AWS Identity and Access Management (IAM).
  2. IAM validates the request and returns the tenant role.
  3. Lambda sends a request to Amazon S3 with the assumed role.
  4. Amazon S3 sends the response back to Lambda.

The diagram also shows the workflow steps for tenant isolation for Aurora PostgreSQL-Compatible databases, as follows:

  1. Lambda sends an STS assume role request to IAM.
  2. IAM validates the request and returns the tenant role.
  3. Lambda sends a request to IAM for database authorization.
  4. IAM validates the request and returns the database password token.
  5. Lambda sends a request to the Aurora PostgreSQL-Compatible database with the database user and password token.
  6. Aurora PostgreSQL-Compatible database returns the response to Lambda.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  1. An AWS account for your workload.
  2. An Amazon S3 bucket.
  3. An Aurora PostgreSQL-Compatible cluster with a database created.

    Note: Make sure to note down the default master database user and password, and make sure that you can connect to the database from your desktop or from another server (for example, from Amazon Elastic Compute Cloud (Amazon EC2) instances).

  4. A security group and inbound rules that are set up to allow an inbound PostgreSQL TCP connection (Port 5432) from Lambda functions. This solution uses regular non-VPC Lambda functions, and therefore the security group of the Aurora PostgreSQL-Compatible database cluster should allow an inbound PostgreSQL TCP connection (Port 5432) from anywhere (0.0.0.0/0).

Make sure that you’ve completed the prerequisites before proceeding with the next steps.

Deploy the solution

The following sections describe how to create the IAM roles, IAM policies, and Lambda functions that are required for the solution. These steps also include guidelines on the changes that you’ll need to make to the prerequisite components Amazon S3 and the Aurora PostgreSQL-Compatible database cluster.

Step 1: Create the IAM policies

In this step, you create two IAM policies with the required permissions for Amazon S3 and the Aurora PostgreSQL database.

To create the IAM policies

  1. Open the AWS Management Console.
  2. Choose IAM, choose Policies, and then choose Create policy.
  3. Use the following JSON policy document to create the policy. Replace the placeholder <111122223333> with your AWS account number so that the resource ARN matches your bucket name (this example assumes a bucket named sts-ti-demo-<account number>).
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:Get*",
                    "s3:List*"
                ],
                "Resource": "arn:aws:s3:::sts-ti-demo-<111122223333>/${aws:PrincipalTag/s3_home}/*"
            }
        ]
    }
    

  4. Save the policy with the name sts-ti-demo-s3-access-policy.

    Figure 2: Create the IAM policy for Amazon S3 (sts-ti-demo-s3-access-policy)

  5. Open the AWS Management Console.
  6. Choose IAM, choose Policies, and then choose Create policy.
  7. Use the following JSON policy document to create a second policy. This policy grants an IAM role permission to connect to an Aurora PostgreSQL-Compatible database through a database user that is IAM authenticated. Replace the placeholders with the appropriate Region, account number, and cluster resource ID of the Aurora PostgreSQL-Compatible database cluster, respectively.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
                "arn:aws:rds-db:<us-west-2>:<111122223333>:dbuser:<cluster- ZTISAAAABBBBCCCCDDDDEEEEL4>/${aws:PrincipalTag/dbuser}"
            ]
        }
    ]
}
  8. Save the policy with the name sts-ti-demo-dbuser-policy.

    Figure 3: Create the IAM policy for Aurora PostgreSQL database (sts-ti-demo-dbuser-policy)

Note: Make sure that you use the cluster resource ID for the clustered database. However, if you intend to adapt this solution for your Aurora PostgreSQL-Compatible non-clustered database, you should use the instance resource ID instead.
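
If you prefer to script this step, the following minimal Boto3 sketch creates both policies. The account number, Region, and cluster resource ID are placeholder assumptions; replace them with your own values.

    import json
    import boto3

    iam = boto3.client("iam")

    # Placeholder values; replace with your own.
    account_id = "111122223333"
    region = "us-west-2"
    cluster_resource_id = "cluster-ZTISAAAABBBBCCCCDDDDEEEEL4"

    s3_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": f"arn:aws:s3:::sts-ti-demo-{account_id}/${{aws:PrincipalTag/s3_home}}/*"
        }]
    }

    db_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["rds-db:connect"],
            "Resource": f"arn:aws:rds-db:{region}:{account_id}:dbuser:{cluster_resource_id}/${{aws:PrincipalTag/dbuser}}"
        }]
    }

    # Create the two customer managed policies used by the tenant roles.
    iam.create_policy(PolicyName="sts-ti-demo-s3-access-policy",
                      PolicyDocument=json.dumps(s3_policy))
    iam.create_policy(PolicyName="sts-ti-demo-dbuser-policy",
                      PolicyDocument=json.dumps(db_policy))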

Step 2: Create the IAM roles

In this step, you create two IAM roles for the two different tenants, and also apply the necessary permissions and tags.

To create the IAM roles

  1. In the IAM console, choose Roles, and then choose Create role.
  2. On the Trusted entities page, choose the EC2 service as the trusted entity.
  3. On the Permissions policies page, select sts-ti-demo-s3-access-policy and sts-ti-demo-dbuser-policy.
  4. On the Tags page, add two tags with the following keys and values.

    Tag key Tag value
    s3_home tenant1_home
    dbuser tenant1_dbuser
  5. On the Review screen, name the role assumeRole-tenant1, and then choose Save.
  6. In the IAM console, choose Roles, and then choose Create role.
  7. On the Trusted entities page, choose the EC2 service as the trusted entity.
  8. On the Permissions policies page, select sts-ti-demo-s3-access-policy and sts-ti-demo-dbuser-policy.
  9. On the Tags page, add two tags with the following keys and values.

    Tag key Tag value
    s3_home tenant2_home
    dbuser tenant2_dbuser
  10. On the Review screen, name the role assumeRole-tenant2, and then choose Save.
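
If you want to script this step instead of using the console, the following minimal Boto3 sketch creates both tagged tenant roles and attaches the Step 1 policies. The initial trust policy is a stand-in that you replace in Step 3.

    import json
    import boto3

    iam = boto3.client("iam")
    account_id = boto3.client("sts").get_caller_identity()["Account"]

    # Stand-in trust policy; Step 3 replaces it with the Lambda role as the principal.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }]
    }

    for tenant in ("tenant1", "tenant2"):
        role_name = f"assumeRole-{tenant}"
        # Create the role with the two ABAC tags that the policies reference.
        iam.create_role(
            RoleName=role_name,
            AssumeRolePolicyDocument=json.dumps(trust_policy),
            Tags=[
                {"Key": "s3_home", "Value": f"{tenant}_home"},
                {"Key": "dbuser", "Value": f"{tenant}_dbuser"},
            ],
        )
        # Attach the two policies created in Step 1.
        for policy_name in ("sts-ti-demo-s3-access-policy", "sts-ti-demo-dbuser-policy"):
            iam.attach_role_policy(
                RoleName=role_name,
                PolicyArn=f"arn:aws:iam::{account_id}:policy/{policy_name}",
            )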

Step 3: Create and apply the IAM policies for the tenants

In this step, you create a policy and a role for the Lambda functions. You also create two separate tenant roles, and establish a trust relationship with the role that you created for the Lambda functions.

To create and apply the IAM policies for tenant1

  1. In the IAM console, choose Policies, and then choose Create policy.
  2. Use the following JSON policy document to create the policy. Replace the placeholder <111122223333> with your AWS account number.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "sts:AssumeRole",
                "Resource": [
                    "arn:aws:iam::<111122223333>:role/assumeRole-tenant1",
                    "arn:aws:iam::<111122223333>:role/assumeRole-tenant2"
                ]
            }
        ]
    }
    

  3. Save the policy with the name sts-ti-demo-assumerole-policy.
  4. In the IAM console, choose Roles, and then choose Create role.
  5. On the Trusted entities page, select the Lambda service as the trusted entity.
  6. On the Permissions policies page, select sts-ti-demo-assumerole-policy and AWSLambdaBasicExecutionRole.
  7. On the review screen, name the role sts-ti-demo-lambda-role, and then choose Save.
  8. In the IAM console, go to Roles, and enter assumeRole-tenant1 in the search box.
  9. Select the assumeRole-tenant1 role and go to the Trust relationship tab.
  10. Choose Edit the trust relationship, and replace the existing value with the following JSON document. Replace the placeholder <111122223333> with your AWS account number, and choose Update trust policy to save the policy.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<111122223333>:role/sts-ti-demo-lambda-role"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    

To verify that the policies are applied correctly for tenant1

In the IAM console, go to Roles, and enter assumeRole-tenant1 in the search box. Select the assumeRole-tenant1 role and on the Permissions tab, verify that sts-ti-demo-dbuser-policy and sts-ti-demo-s3-access-policy appear in the list of policies, as shown in Figure 4.

Figure 4: The assumeRole-tenant1 Permissions tab

On the Trust relationships tab, verify that sts-ti-demo-lambda-role appears under Trusted entities, as shown in Figure 5.

Figure 5: The assumeRole-tenant1 Trust relationships tab

On the Tags tab, verify that the following tags appear, as shown in Figure 6.

Tag key Tag value
dbuser tenant1_dbuser
s3_home tenant1_home

 

Figure 6: The assumeRole-tenant1 Tags tab

To create and apply the IAM policies for tenant2

  1. In the IAM console, go to Roles, and enter assumeRole-tenant2 in the search box.
  2. Select the assumeRole-tenant2 role and go to the Trust relationship tab.
  3. Edit the trust relationship, replacing the existing value with the following JSON document. Replace the placeholder <111122223333> with your AWS account number.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<111122223333>:role/sts-ti-demo-lambda-role"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    

  4. Choose Update trust policy to save the policy.

To verify that the policies are applied correctly for tenant2

In the IAM console, go to Roles, and enter assumeRole-tenant2 in the search box. Select the assumeRole-tenant2 role and on the Permissions tab, verify that sts-ti-demo-dbuser-policy and sts-ti-demo-s3-access-policy appear in the list of policies, as you did for tenant1. On the Trust relationships tab, verify that sts-ti-demo-lambda-role appears under Trusted entities.

On the Tags tab, verify that the following tags appear, as shown in Figure 7.

Tag key Tag value
dbuser tenant2_dbuser
s3_home tenant2_home
Figure 7: The assumeRole-tenant2 Tags tab

Step 4: Set up an Amazon S3 bucket

Next, you’ll set up an S3 bucket that you’ll use as part of this solution. You can either create a new S3 bucket or re-purpose an existing one. The following steps show you how to create two user homes (that is, S3 prefixes, which are also known as folders) in the S3 bucket.

  1. In the AWS Management Console, go to Amazon S3 and select the S3 bucket you want to use.
  2. Create two prefixes (folders) with the names tenant1_home and tenant2_home.
  3. Place two test objects with the names tenant.info-tenant1_home and tenant.info-tenant2_home in the prefixes that you just created, respectively.
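
If you prefer to script this step, the following minimal Boto3 sketch creates the two tenant prefixes and test objects. The bucket name is an assumption; replace it with the name of your bucket.

    import boto3

    s3 = boto3.client("s3")
    bucket_name = "sts-ti-demo-111122223333"  # assumed bucket name; replace with yours

    for tenant in ("tenant1", "tenant2"):
        prefix = f"{tenant}_home"
        # Writing the object implicitly creates the prefix (folder).
        s3.put_object(
            Bucket=bucket_name,
            Key=f"{prefix}/tenant.info-{prefix}",
            Body=f"Test data for {tenant}".encode("utf-8"),
        )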

Step 5: Set up test objects in Aurora PostgreSQL-Compatible database

In this step, you create a table in Aurora PostgreSQL-Compatible Edition, insert tenant metadata, create a row-level security (RLS) policy, create tenant users, and grant permissions for testing purposes.

To set up Aurora PostgreSQL-Compatible

  1. Connect to Aurora PostgreSQL-Compatible through a client of your choice, using the master database user and password that you obtained at the time of cluster creation.
  2. Run the following commands to create a table for testing purposes and to insert a couple of testing records.
    CREATE TABLE tenant_metadata (
        tenant_id VARCHAR(30) PRIMARY KEY,
        email     VARCHAR(50) UNIQUE,
        status    VARCHAR(10) CHECK (status IN ('active', 'suspended', 'disabled')),
        tier      VARCHAR(10) CHECK (tier IN ('gold', 'silver', 'bronze')));
    
    INSERT INTO tenant_metadata (tenant_id, email, status, tier) 
    VALUES ('tenant1_dbuser','[email protected]','active','gold');
    INSERT INTO tenant_metadata (tenant_id, email, status, tier) 
    VALUES ('tenant2_dbuser','[email protected]','suspended','silver');
    ALTER TABLE tenant_metadata ENABLE ROW LEVEL SECURITY;
    

  3. Run the following command to query the newly created database table.
    SELECT * FROM tenant_metadata;
    

    Figure 8: The tenant_metadata table content

  4. Run the following command to create the row level security policy.
    CREATE POLICY tenant_isolation_policy ON tenant_metadata
    USING (tenant_id = current_user);
    

  5. Run the following commands to establish two tenant users and grant them the necessary permissions.
    CREATE USER tenant1_dbuser WITH LOGIN;
    CREATE USER tenant2_dbuser WITH LOGIN;
    GRANT rds_iam TO tenant1_dbuser;
    GRANT rds_iam TO tenant2_dbuser;
    
    GRANT select, insert, update, delete ON tenant_metadata to tenant1_dbuser, tenant2_dbuser;
    

  6. Run the following commands to verify the newly created tenant users.
    SELECT usename AS role_name,
      CASE
         WHEN usesuper AND usecreatedb THEN
           CAST('superuser, create database' AS pg_catalog.text)
         WHEN usesuper THEN
            CAST('superuser' AS pg_catalog.text)
         WHEN usecreatedb THEN
            CAST('create database' AS pg_catalog.text)
         ELSE
            CAST('' AS pg_catalog.text)
      END role_attributes
    FROM pg_catalog.pg_user
    WHERE usename LIKE ('tenant%')
    ORDER BY role_name desc;
    

    Figure 9: Verify the newly created tenant users output

Step 6: Set up the AWS Lambda functions

Next, you’ll create two Lambda functions: one for Amazon S3 and one for Aurora PostgreSQL-Compatible. You also need to create a Lambda layer for the Python package pg8000.

To set up the Lambda function for Amazon S3

  1. Navigate to the Lambda console, and choose Create function.
  2. Choose Author from scratch. For Function name, enter sts-ti-demo-s3-lambda.
  3. For Runtime, choose Python 3.7.
  4. Change the default execution role to Use an existing role, and then select sts-ti-demo-lambda-role from the drop-down list.
  5. Keep Advanced settings as the default value, and then choose Create function.
  6. Copy the following Python code into the lambda_function.py file that is created in your Lambda function.
    import json
    import os
    import time 
    
    def lambda_handler(event, context):
        import boto3
        bucket_name     =   os.environ['s3_bucket_name']
    
        try:
            login_tenant_id =   event['login_tenant_id']
            data_tenant_id  =   event['s3_tenant_home']
        except:
            return {
                'statusCode': 400,
                'body': 'Error in reading parameters'
            }
    
        prefix_of_role  =   'assumeRole'
        file_name       =   'tenant.info' + '-' + data_tenant_id
    
        # create an STS client object that represents a live connection to the STS service
        sts_client = boto3.client('sts')
        account_of_role = sts_client.get_caller_identity()['Account']
        role_to_assume  =   'arn:aws:iam::' + account_of_role + ':role/' + prefix_of_role + '-' + login_tenant_id
    
        # Call the assume_role method of the STSConnection object and pass the role
        # ARN and a role session name.
        RoleSessionName = 'AssumeRoleSession' + str(time.time()).split(".")[0] + str(time.time()).split(".")[1]
        try:
            assumed_role_object = sts_client.assume_role(
                RoleArn         = role_to_assume, 
                RoleSessionName = RoleSessionName, 
                DurationSeconds = 900) #15 minutes
    
        except:
            return {
                'statusCode': 400,
                'body': 'Error in assuming the role ' + role_to_assume + ' in account ' + account_of_role
            }
    
        # From the response that contains the assumed role, get the temporary 
        # credentials that can be used to make subsequent API calls
        credentials=assumed_role_object['Credentials']
        
        # Use the temporary credentials that AssumeRole returns to make a connection to Amazon S3  
        s3_resource=boto3.resource(
            's3',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
    
        try:
            obj = s3_resource.Object(bucket_name, data_tenant_id + "/" + file_name)
            return {
                'statusCode': 200,
                'body': obj.get()['Body'].read()
            }
        except:
            return {
                'statusCode': 400,
                'body': 'error in reading s3://' + bucket_name + '/' + data_tenant_id + '/' + file_name
            }
    

  7. Under Basic settings, edit Timeout to increase the timeout to 29 seconds.
  8. Edit Environment variables to add a key called s3_bucket_name, with the value set to the name of your S3 bucket.
  9. Configure a new test event with the following JSON document, and save it as testEvent.
    {
      "login_tenant_id": "tenant1",
      "s3_tenant_home": "tenant1_home"
    }
    

  10. Choose Test to test the Lambda function with the newly created test event testEvent. You should see status code 200, and the body of the results should contain the data for tenant1.

    Figure 10: The result of running the sts-ti-demo-s3-lambda function

Next, create another Lambda function for Aurora PostgreSQL-Compatible. To do this, you first need to create a new Lambda layer.

To set up the Lambda layer

  1. Use the following commands to create a .zip file for Python package pg8000.

    Note: This example is created by using an Amazon EC2 instance running the Amazon Linux 2 Amazon Machine Image (AMI). If you’re using another version of Linux or don’t have the Python 3 or pip3 packages installed, install them by using the following commands.

    sudo yum update -y 
    sudo yum install python3 
    sudo pip3 install pg8000 -t build/python/
    cd build 
    sudo zip -r pg8000.zip python/
    

  2. Download the pg8000.zip file you just created to your local desktop machine or into an S3 bucket location.
  3. Navigate to the Lambda console, choose Layers, and then choose Create layer.
  4. For Name, enter pgdb, and then upload pg8000.zip from your local desktop machine or from the S3 bucket location.

    Note: For more details, see the AWS documentation for creating and sharing Lambda layers.

  5. For Compatible runtimes, choose python3.6, python3.7, and python3.8, and then choose Create.

To set up the Lambda function with the newly created Lambda layer

  1. In the Lambda console, choose Functions, and then choose Create function.
  2. Choose Author from scratch. For Function name, enter sts-ti-demo-pgdb-lambda.
  3. For Runtime, choose Python 3.7.
  4. Change the default execution role to Use an existing role, and then select sts-ti-demo-lambda-role from the drop-down list.
  5. Keep Advanced settings as the default value, and then choose Create function.
  6. Choose Layers, and then choose Add a layer.
  7. Choose Custom layer, select pgdb with Version 1 from the drop-down list, and then choose Add.
  8. Copy the following Python code into the lambda_function.py file that was created in your Lambda function.
    import boto3
    import pg8000
    import os
    import time
    import ssl
    
    connection = None
    assumed_role_object = None
    rds_client = None
    
    def assume_role(event):
        global assumed_role_object
        try:
            RolePrefix  = os.environ.get("RolePrefix")
            LoginTenant = event['login_tenant_id']
        
            # create an STS client object that represents a live connection to the STS service
            sts_client      = boto3.client('sts')
            # Prepare input parameters
            role_to_assume  = 'arn:aws:iam::' + sts_client.get_caller_identity()['Account'] + ':role/' + RolePrefix + '-' + LoginTenant
            RoleSessionName = 'AssumeRoleSession' + str(time.time()).split(".")[0] + str(time.time()).split(".")[1]
        
            # Call the assume_role method of the STSConnection object and pass the role ARN and a role session name.
            assumed_role_object = sts_client.assume_role(
                RoleArn         =   role_to_assume, 
                RoleSessionName =   RoleSessionName,
                DurationSeconds =   900) #15 minutes 
            
            return assumed_role_object['Credentials']
        except Exception as e:
            print({'Role assumption failed!': {'role': role_to_assume, 'Exception': 'Failed due to :{0}'.format(str(e))}})
            return None
    
    def get_connection(event):
        global rds_client
        creds = assume_role(event)
    
        try:
            # create an RDS client using assumed credentials
            rds_client = boto3.client('rds',
                aws_access_key_id       = creds['AccessKeyId'],
                aws_secret_access_key   = creds['SecretAccessKey'],
                aws_session_token       = creds['SessionToken'])
    
            # Read the environment variables and event parameters
            DBEndPoint   = os.environ.get('DBEndPoint')
            DatabaseName = os.environ.get('DatabaseName')
            DBUserName   = event['dbuser']
    
            # Generates an auth token used to connect to a database with IAM credentials.
            pwd = rds_client.generate_db_auth_token(
                DBHostname=DBEndPoint, Port=5432, DBUsername=DBUserName, Region='us-west-2'
            )
    
            ssl_context             = ssl.SSLContext()
            ssl_context.verify_mode = ssl.CERT_REQUIRED
            ssl_context.load_verify_locations('rds-ca-2019-root.pem')
    
            # create a database connection
            conn = pg8000.connect(
                host        =   DBEndPoint,
                user        =   DBUserName,
                database    =   DatabaseName,
                password    =   pwd,
                ssl_context =   ssl_context)
            
            return conn
        except Exception as e:
            print ({'Database connection failed!': {'Exception': "Failed due to :{0}".format(str(e))}})
            return None
    
    def execute_sql(connection, query):
        try:
            cursor = connection.cursor()
            cursor.execute(query)
            columns = [str(desc[0]) for desc in cursor.description]
            results = []
            for res in cursor:
                results.append(dict(zip(columns, res)))
            cursor.close()
            retry = False
            return results    
        except Exception as e:
            print ({'Execute SQL failed!': {'Exception': "Failed due to :{0}".format(str(e))}})
            return None
    
    
    def lambda_handler(event, context):
        global connection
        try:
            connection = get_connection(event)
            if connection is None:
                return {'statusCode': 400, "body": "Error in database connection!"}
    
            response = {'statusCode':200, 'body': {
                'db & user': execute_sql(connection, 'SELECT CURRENT_DATABASE(), CURRENT_USER'), \
                'data from tenant_metadata': execute_sql(connection, 'SELECT * FROM tenant_metadata')}}
            return response
        except Exception as e:
            try:
                connection.close()
            except Exception as e:
                connection = None
            return {'statusCode': 400, 'statusDesc': 'Error!', 'body': 'Unhandled error in Lambda Handler.'}
    

  9. Add a certificate file called rds-ca-2019-root.pem into the Lambda project root by downloading it from https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem.
  10. Under Basic settings, edit Timeout to increase the timeout to 29 seconds.
  11. Edit Environment variables to add the following keys and values.

    Key Value
    DBEndPoint Enter the database cluster endpoint URL
    DatabaseName Enter the database name
    RolePrefix assumeRole
    Figure 11: Example of environment variables display

  12. Configure a new test event with the following JSON document, and save it as testEvent.
    {
      "login_tenant_id": "tenant1",
      "dbuser": "tenant1_dbuser"
    }
    

  13. Choose Test to test the Lambda function with the newly created test event testEvent. You should see status code 200, and the body of the results should contain the data for tenant1.

    Figure 12: The result of running the sts-ti-demo-pgdb-lambda function

Step 7: Perform negative testing of tenant isolation

You already performed positive tests of tenant isolation during the Lambda function creation steps. However, it’s also important to perform some negative tests to verify the robustness of the tenant isolation controls.

To perform negative tests of tenant isolation

  1. In the Lambda console, navigate to the sts-ti-demo-s3-lambda function. Update the test event to the following, to mimic a scenario where tenant1 attempts to access other tenants’ objects.
    {
      "login_tenant_id": "tenant1",
      "s3_tenant_home": "tenant2_home"
    }
    

  2. Choose Test to test the Lambda function with the updated test event. You should see status code 400, and the body of the results should contain an error message.

    Figure 13: The results of running the sts-ti-demo-s3-lambda function (negative test)

  3. Navigate to the sts-ti-demo-pgdb-lambda function and update the test event to the following, to mimic a scenario where tenant1 attempts to access other tenants’ data elements.
    {
      "login_tenant_id": "tenant1",
      "dbuser": "tenant2_dbuser"
    }
    

  4. Choose Test to test the Lambda function with the updated test event. You should see status code 400, and the body of the results should contain an error message.

    Figure 14: The results of running the sts-ti-demo-pgdb-lambda function (negative test)
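
You can also run both negative tests from a script. The following minimal sketch invokes the two functions with the cross-tenant test events and prints the status code that each function returns; the function names are the ones used in Step 6.

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    negative_tests = {
        "sts-ti-demo-s3-lambda": {"login_tenant_id": "tenant1", "s3_tenant_home": "tenant2_home"},
        "sts-ti-demo-pgdb-lambda": {"login_tenant_id": "tenant1", "dbuser": "tenant2_dbuser"},
    }

    for function_name, event in negative_tests.items():
        response = lambda_client.invoke(
            FunctionName=function_name,
            Payload=json.dumps(event).encode("utf-8"),
        )
        result = json.loads(response["Payload"].read())
        # Expect statusCode 400, because tenant1 must not reach tenant2 data.
        print(function_name, result.get("statusCode"))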

Cleaning up

To de-clutter your environment, remove the roles, policies, Lambda functions, Lambda layers, Amazon S3 prefixes, database users, and the database table that you created as part of this exercise. You can choose to delete the S3 bucket, as well as the Aurora PostgreSQL-Compatible database cluster that we mentioned in the Prerequisites section, to avoid incurring future charges.

Update the security group of the Aurora PostgreSQL-Compatible database cluster to remove the inbound rule that you added to allow a PostgreSQL TCP connection (Port 5432) from anywhere (0.0.0.0/0).

Conclusion

By taking advantage of attribute-based access control (ABAC) in IAM, you can more efficiently implement tenant isolation in SaaS applications. The solution we presented here helps to achieve tenant isolation in Amazon S3 and Aurora PostgreSQL-Compatible databases by using ABAC with the pool model of data partitioning.

If you run into any issues, you can use Amazon CloudWatch and AWS CloudTrail to troubleshoot. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ashutosh Upadhyay

Ashutosh works as a Senior Security, Risk, and Compliance Consultant in AWS Professional Services (GFS) and is based in Singapore. Ashutosh started off his career as a developer, and for the past many years has been working in the Security, Risk, and Compliance field. Ashutosh loves to spend his free time learning and playing with new technologies.

Author

Chirantan Saha

Chirantan works as a DevOps Engineer in AWS Professional Services (GFS) and is based in Singapore. Chirantan has 20 years of development and DevOps experience working for large banking, financial services, and insurance organizations. Outside of work, Chirantan enjoys traveling, spending time with family, and helping his children with their science projects.

Building fine-grained authorization using Amazon Cognito, API Gateway, and IAM

Post Syndicated from Artem Lovan original https://aws.amazon.com/blogs/security/building-fine-grained-authorization-using-amazon-cognito-api-gateway-and-iam/

June 5, 2021: We’ve updated Figure 1: User request flow.


Authorizing functionality of an application based on group membership is a best practice. If you’re building APIs with Amazon API Gateway and you need fine-grained access control for your users, you can use Amazon Cognito. Amazon Cognito allows you to use groups to create a collection of users, which is often done to set the permissions for those users. In this post, I show you how to build fine-grained authorization to protect your APIs using Amazon Cognito, API Gateway, and AWS Identity and Access Management (IAM).

As a developer, you’re building a customer-facing application where your users log in to your web or mobile application, so you expose your APIs through API Gateway with upstream services. The APIs could be deployed on Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), or AWS Lambda, or behind Elastic Load Balancing, which forwards requests to your Amazon Elastic Compute Cloud (Amazon EC2) instances. Additionally, you can use on-premises services that are connected to your Amazon Web Services (AWS) environment over AWS VPN or AWS Direct Connect. It’s important to have fine-grained controls for each API endpoint and HTTP method. For instance, a user might be allowed to make a GET request to an endpoint, but not a POST request to the same endpoint. As a best practice, you should assign users to groups and use group membership to allow or deny access to your API services.

Solution overview

In this blog post, you learn how to use an Amazon Cognito user pool as a user directory and let users authenticate and acquire the JSON Web Token (JWT) to pass to API Gateway. The JWT is used to identify the group that the user belongs to; each group is mapped to an IAM policy that defines the access rights granted to that group.

Note: The solution works similarly if Amazon Cognito federates users with an external identity provider (IdP)—such as Ping, Active Directory, or Okta—instead of acting as the IdP itself. To learn more, see Adding User Pool Sign-in Through a Third Party. Additionally, if you want to use groups from an external IdP to grant access, Role-based access control using Amazon Cognito and an external identity provider outlines how to do so.

The following figure shows the basic architecture and information flow for user requests.

Figure 1: User request flow

Let’s go through the request flow to understand what happens at each step, as shown in Figure 1:

  1. A user logs in and acquires an Amazon Cognito JWT ID token, access token, and refresh token. To learn more about each token, see using tokens with user pools.
  2. A RestAPI request is made and a bearer token—in this solution, an access token—is passed in the headers.
  3. API Gateway forwards the request to a Lambda authorizer—also known as a custom authorizer.
  4. The Lambda authorizer verifies the Amazon Cognito JWT using the Amazon Cognito public key. On initial Lambda invocation, the public key is downloaded from Amazon Cognito and cached. Subsequent invocations will use the public key from the cache.
  5. The Lambda authorizer looks up the Amazon Cognito group that the user belongs to in the JWT and does a lookup in Amazon DynamoDB to get the policy that’s mapped to the group.
  6. Lambda returns the policy and—optionally—context to API Gateway. The context is a map containing key-value pairs that you can pass to the upstream service. It can be additional information about the user, the service, or anything that provides additional information to the upstream service.
  7. The API Gateway policy engine evaluates the policy.

    Note: Lambda isn’t responsible for understanding and evaluating the policy. That responsibility falls on the native capabilities of API Gateway.

  8. The request is forwarded to the service.

Note: To further optimize Lambda authorizer, the authorization policy can be cached or disabled, depending on your needs. By enabling cache, you could improve the performance as the authorization policy will be returned from the cache whenever there is a cache key match. To learn more, see Configure a Lambda authorizer using the API Gateway console.
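
The GitHub repository for this solution contains the full Lambda authorizer implementation. As a simplified illustration of steps 4 and 5, the following minimal sketch verifies the access token against the user pool JSON Web Key Set (JWKS) by using the PyJWT library and then looks up the group policy in DynamoDB. The user pool ID, table name, and attribute names are assumptions for illustration, not values taken from the sample repository.

    import json
    import boto3
    import jwt  # PyJWT
    from jwt import PyJWKClient

    # Assumed configuration; the sample repository wires these up differently.
    REGION = "us-east-1"
    USER_POOL_ID = "us-east-1_EXAMPLE"
    JWKS_URL = f"https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}/.well-known/jwks.json"

    jwks_client = PyJWKClient(JWKS_URL)  # downloads and caches the public keys
    table = boto3.resource("dynamodb").Table("authorizer-policies")  # assumed table name

    def lambda_handler(event, context):
        # For a TOKEN authorizer, the bearer token arrives in authorizationToken.
        token = event["authorizationToken"].replace("Bearer ", "")

        # Step 4: verify the JWT signature with the Amazon Cognito public key.
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        claims = jwt.decode(token, signing_key.key, algorithms=["RS256"])

        # Step 5: look up the IAM policy mapped to the user's group.
        group = claims["cognito:groups"][0]
        item = table.get_item(Key={"Group": group}).get("Item")
        if not item:
            raise Exception("Unauthorized")

        return {
            "principalId": claims["username"],
            "policyDocument": json.loads(item["Policy"]),  # policy stored as a JSON string
            "context": {"group": group},
        }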

Let’s have a closer look at the following example policy that is stored as part of an item in DynamoDB.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"PetStore-API",
         "Effect":"Allow",
         "Action":"execute-api:Invoke",
         "Resource":[
            "arn:aws:execute-api:*:*:*/*/*/petstore/v1/*",
            "arn:aws:execute-api:*:*:*/*/GET/petstore/v2/status"
         ],
         "Condition":{
            "IpAddress":{
               "aws:SourceIp":[
                  "192.0.2.0/24",
                  "198.51.100.0/24"
               ]
            }
         }
      }
   ]
}

Based on this example policy, the user is allowed to make calls to the petstore API. For version v1, the user can make requests with any HTTP method to any path, which is expressed by an asterisk (*). For v2, the user is only allowed to make a GET request for the path /status. To learn more about how the policies work, see Output from an Amazon API Gateway Lambda authorizer.

Getting started

For this solution, you need the following prerequisites:

  • The AWS Command Line Interface (CLI) installed and configured for use.
  • Python 3.6 or later, to package Python code for Lambda

    Note: We recommend that you use a virtual environment or virtualenvwrapper to isolate the solution from the rest of your Python environment.

  • An IAM role or user with enough permissions to create Amazon Cognito User Pool, IAM Role, Lambda, IAM Policy, API Gateway and DynamoDB table.
  • The GitHub repository for the solution. You can download it, or you can use the following Git command to download it from your terminal.

    Note: This sample code should be used to test out the solution and is not intended to be used in a production account.

     $ git clone https://github.com/aws-samples/amazon-cognito-api-gateway.git
     $ cd amazon-cognito-api-gateway
    

    Use the following command to package the Python code for deployment to Lambda.

     $ bash ./helper.sh package-lambda-functions
     …
     Successfully completed packaging files.
    

To implement this reference architecture, you will use several AWS services, which are described in the following sections.

Note: This solution was tested in the us-east-1, us-east-2, us-west-2, ap-southeast-1, and ap-southeast-2 Regions. Before selecting a Region, verify that the necessary services—Amazon Cognito, API Gateway, and Lambda—are available in those Regions.

Let’s review each service, and how those will be used, before creating the resources for this solution.

Amazon Cognito user pool

A user pool is a user directory in Amazon Cognito. With a user pool, your users can log in to your web or mobile app through Amazon Cognito. You use the Amazon Cognito user directory directly, as this sample solution creates an Amazon Cognito user. However, your users can also log in through social IdPs, OpenID Connect (OIDC), and SAML IdPs.

Lambda as backing API service

Initially, you create a Lambda function that serves your APIs. API Gateway forwards all requests to the Lambda function to serve up the requests.

An API Gateway instance and integration with Lambda

Next, you create an API Gateway instance and integrate it with the Lambda function you created. This API Gateway instance serves as an entry point for the upstream service. The following bash command creates an Amazon Cognito user pool, a Lambda function, and an API Gateway instance. The command then configures proxy integration with Lambda and deploys an API Gateway stage.

Deploy the sample solution

From within the directory where you downloaded the sample code from GitHub, run the following command to generate a random Amazon Cognito user password and create the resources described in the previous section.

 $ bash ./helper.sh cf-create-stack-gen-password
 ...
 Successfully created CloudFormation stack.

When the command is complete, it returns a message confirming successful stack creation.

Validate Amazon Cognito user creation

To validate that an Amazon Cognito user has been created successfully, run the following command to open the Amazon Cognito UI in your browser and then log in with your credentials.

Note: When you run this command, it returns the user name and password that you should use to log in.

 $ bash ./helper.sh open-cognito-ui
  Opening Cognito UI. Please use following credentials to login:
  Username: cognitouser
  Password: xxxxxxxx

Alternatively, you can open the CloudFormation stack and get the Amazon Cognito hosted UI URL from the stack outputs. The URL is the value assigned to the CognitoHostedUiUrl variable.

Figure 2: CloudFormation Outputs – CognitoHostedUiUrl

Validate Amazon Cognito JWT upon login

Since we haven’t installed a web application that would respond to the redirect request, Amazon Cognito will redirect to localhost, which might look like an error. The key point is that after a successful login, there is a URL similar to the following in the navigation bar of your browser:

http://localhost/#id_token=eyJraWQiOiJicVhMYWFlaTl4aUhzTnY3W...

Test the API configuration

Before you protect the API with Amazon Cognito so that only authorized users can access it, let’s verify that the configuration is correct and the API is served by API Gateway. The following command makes a curl request to API Gateway to retrieve data from the API service.

 $ bash ./helper.sh curl-api
{"pets":[{"id":1,"name":"Birds"},{"id":2,"name":"Cats"},{"id":3,"name":"Dogs"},{"id":4,"name":"Fish"}]}

The expected result is that the response will be a list of pets. In this case, the setup is correct: API Gateway is serving the API.

Protect the API

To protect your API, the following is required:

  1. DynamoDB to store the policy that will be evaluated by the API Gateway to make an authorization decision.
  2. A Lambda function to verify the user’s access token and look up the policy in DynamoDB.

Let’s review all the services before creating the resources.

Lambda authorizer

A Lambda authorizer is an API Gateway feature that uses a Lambda function to control access to an API. You use a Lambda authorizer to implement a custom authorization scheme that uses a bearer token authentication strategy. When a client makes a request to one of the API operations, API Gateway calls the Lambda authorizer. The Lambda authorizer takes the identity of the caller as input and returns an IAM policy as the output. The output is the policy that is retrieved from DynamoDB and evaluated by API Gateway. If no policy is mapped to the caller identity, the Lambda authorizer generates a deny policy and the request is denied.

DynamoDB table

DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. This is ideal for this use case to ensure that the Lambda authorizer can quickly process the bearer token, look up the policy, and return it to API Gateway. To learn more, see Control access for invoking an API.

The final step is to create the DynamoDB table for the Lambda authorizer to look up the policy, which is mapped to an Amazon Cognito group.

Figure 3 illustrates an item in DynamoDB. Key attributes are:

  • Group, which is used to look up the policy.
  • Policy, which is returned to API Gateway to evaluate the policy.

 

Figure 3: DynamoDB item

Based on this policy, the user that is part of the Amazon Cognito group pet-veterinarian is allowed to make API requests to endpoints https://<domain>/<api-gateway-stage>/petstore/v1/* and https://<domain>/<api-gateway-stage>/petstore/v2/status for GET requests only.
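
As an illustration of the item structure in Figure 3, the following minimal sketch stores a group-to-policy mapping with Boto3. The table name is an assumption; the sample solution creates and populates its own table through CloudFormation.

    import json
    import boto3

    # Assumed table name; the sample's CloudFormation stack creates its own table.
    table = boto3.resource("dynamodb").Table("authorizer-policies")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PetStore-API",
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": [
                "arn:aws:execute-api:*:*:*/*/*/petstore/v1/*",
                "arn:aws:execute-api:*:*:*/*/GET/petstore/v2/status"
            ]
        }]
    }

    # One item per Amazon Cognito group: the group name is the key,
    # and the policy document is stored as a JSON string.
    table.put_item(Item={"Group": "pet-veterinarian", "Policy": json.dumps(policy)})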

Update and create resources

Run the following command to update existing resources and create a Lambda authorizer and DynamoDB table.

 $ bash ./helper.sh cf-update-stack
Successfully updated CloudFormation stack.

Test the custom authorizer setup

Begin your testing with the following request, which doesn’t include an access token.

$ bash ./helper.sh curl-api
{"message":"Unauthorized"}

The request is denied with the message Unauthorized. At this point, API Gateway expects a header named Authorization (case sensitive) in the request. If there’s no Authorization header, the request is denied before it reaches the Lambda authorizer. This is a way to filter out requests that don’t include the required information.

Use the following command for the next test. In this test, you pass the required header but the token is invalid because it wasn’t issued by Amazon Cognito but is a simple JWT-format token stored in ./helper.sh. To learn more about how to decode and validate a JWT, see decode and verify an Amazon Cognito JSON token.

$ bash ./helper.sh curl-api-invalid-token
{"Message":"User is not authorized to access this resource"}

This time the message is different. The Lambda authorizer received the request and identified the token as invalid and responded with the message User is not authorized to access this resource.

To make a successful request to the protected API, your code will need to perform the following steps:

  1. Use a user name and password to authenticate against your Amazon Cognito user pool.
  2. Acquire the tokens (id token, access token, and refresh token).
  3. Make an HTTPS (TLS) request to API Gateway and pass the access token in the headers.
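
The helper script performs these steps for you, but the following minimal sketch shows one way to do the same thing in Python. The app client ID, password, and API URL are placeholder assumptions, and the sketch assumes that the app client allows the USER_PASSWORD_AUTH flow.

    import boto3
    import urllib.request

    # Placeholder values; replace with your own.
    CLIENT_ID = "your-user-pool-app-client-id"
    API_URL = "https://abcd1234.execute-api.us-east-1.amazonaws.com/prod/petstore/v1/pets"

    cognito = boto3.client("cognito-idp")

    # Steps 1 and 2: authenticate and acquire the tokens.
    auth = cognito.initiate_auth(
        ClientId=CLIENT_ID,
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": "cognitouser", "PASSWORD": "your-password"},
    )
    access_token = auth["AuthenticationResult"]["AccessToken"]

    # Step 3: call the protected API with the access token in the Authorization header.
    request = urllib.request.Request(API_URL, headers={"Authorization": access_token})
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))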

Before the request is forwarded to the API service, API Gateway receives the request and passes it to the Lambda authorizer. The authorizer performs the following steps. If any of the steps fail, the request is denied.

  1. Retrieve the public keys from Amazon Cognito.
  2. Cache the public keys so the Lambda authorizer doesn’t have to make additional calls to Amazon Cognito as long as the Lambda execution environment isn’t shut down.
  3. Use public keys to verify the access token.
  4. Look up the policy in DynamoDB.
  5. Return the policy to API Gateway.

The access token has claims such as Amazon Cognito assigned groups, user name, token use, and others, as shown in the following example (some fields removed).

{
    "sub": "00000000-0000-0000-0000-0000000000000000",
    "cognito:groups": [
        "pet-veterinarian"
    ],
...
    "token_use": "access",
    "scope": "openid email",
    "username": "cognitouser"
}

Finally, let’s programmatically log in to Amazon Cognito UI, acquire a valid access token, and make a request to API Gateway. Run the following command to call the protected API.

$ bash ./helper.sh curl-protected-api
{"pets":[{"id":1,"name":"Birds"},{"id":2,"name":"Cats"},{"id":3,"name":"Dogs"},{"id":4,"name":"Fish"}]}

This time, you receive a response with data from the API service. Let’s examine the steps that the example code performed:

  1. Lambda authorizer validates the access token.
  2. Lambda authorizer looks up the policy in DynamoDB based on the group name that was retrieved from the access token.
  3. Lambda authorizer passes the IAM policy back to API Gateway.
  4. API Gateway evaluates the IAM policy and the final effect is an allow.
  5. API Gateway forwards the request to Lambda.
  6. Lambda returns the response.

Let’s continue to test our policy from Figure 3. In the policy document, arn:aws:execute-api:*:*:*/*/GET/petstore/v2/status is the only endpoint for version V2, which means requests to endpoint /GET/petstore/v2/pets should be denied. Run the following command to test this.

 $ bash ./helper.sh curl-protected-api-not-allowed-endpoint
{"Message":"User is not authorized to access this resource"}

Note: Now that you understand fine-grained access control using an Amazon Cognito user pool, API Gateway, and a Lambda authorizer, and you have finished testing it out, you can run the following command to clean up all the resources associated with this solution:

 $ bash ./helper.sh cf-delete-stack

Advanced IAM policies to further control your API

With IAM, you can create advanced policies to further refine access to your APIs. You can learn more about condition keys that can be used in API Gateway, their use in an IAM policy with conditions, and how policy evaluation logic determines whether to allow or deny a request.

Summary

In this post, you learned how IAM and Amazon Cognito can be used to provide fine-grained access control for your API behind API Gateway. You can use this approach to transparently apply fine-grained control to your API, without having to modify the code in your API, and create advanced policies by using IAM condition keys.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Artem Lovan

Artem is a Senior Solutions Architect based in New York. He helps customers architect and optimize applications on AWS. He has been involved in IT at many levels, including infrastructure, networking, security, DevOps, and software development.

How to delegate management of identity in AWS Single Sign-On

Post Syndicated from Louay Shaat original https://aws.amazon.com/blogs/security/how-to-delegate-management-of-identity-in-aws-single-sign-on/

In this blog post, I show how you can use AWS Single Sign-On (AWS SSO) to delegate administration of user identities. Delegation is the process of providing your teams permissions to manage accounts and identities associated with their teams. You can achieve this by using the existing integration that AWS SSO has with AWS Organizations, and by using tags and conditions in AWS Identity and Access Management (IAM).

AWS SSO makes it easy to centrally manage access to multiple Amazon Web Services (AWS) accounts and business applications, and to provide users with single sign-on access to all their assigned accounts and applications from one place.

AWS SSO uses permission sets—a collection of administrator-defined policies—to determine a user’s effective permissions to access a given AWS account. Permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. Policies are documents that act as containers for one or more permission statements. These statements represent individual access controls (allow or deny) for various tasks, which determine what tasks users can or cannot perform within the AWS account. Permission sets are provisioned as IAM roles in your organizational accounts, and are managed centrally using AWS SSO.

AWS SSO is tightly integrated with AWS Organizations, and runs in your AWS Organizations management account. This integration enables AWS SSO to retrieve and manage permission sets across your AWS Organizations configuration.

As you continue to build more of your workloads on AWS, managing access to AWS accounts and services becomes more time consuming for team members that manage identities. With a centralized identity approach that uses AWS SSO, there’s an increased need to delegate control of permission sets and accounts to domain and application owners. Although this is a valid use case, access to the management account in Organizations should be tightly guarded as a security best practice. As an administrator in the management account of an organization, you can control how teams and users access your AWS accounts and applications.

This post shows how you can build comprehensive delegation models in AWS SSO to securely and effectively delegate control of identities to various teams.

Solution overview

Suppose you’ve implemented AWS SSO in Organizations to manage identity across your entire AWS environment. Your organization is growing and the number of accounts and teams that need access to your AWS environment is also growing. You have a small Identity team that is constantly adding, updating, or deleting users or groups and permission sets to enable your teams to gain access to their required services and accounts.

Note: You can learn how to enable AWS SSO from the Introducing AWS Single Sign-On blog post.

As the number of teams grows, you want to start using a delegation model to enable account and application owners to manage access to their resources, in order to reduce the heavy lifting that is done by teams that manage identities.

Figure 1 shows a simple organizational structure that your organization implemented.
 

Figure 1: AWS SSO with AWS Organizations

In this scenario, you’ve already built a collection of organizational-approved permission sets that are used across your organization. You have a tagging strategy for permission sets, and you’ve implemented two tags across all your permission sets:

  • Environment: The values for this tag are Production or Development. You only apply Production permission sets to Production accounts.
  • OU: This tag identifies the organizational unit (OU) that the permission set belongs to.

A value of All can be assigned to either tag to identify organization-wide use of the permission set.
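
As an illustration of how these tags could be applied, the following minimal Boto3 sketch tags an existing permission set with the Environment and OU keys. The instance and permission set ARNs are placeholders; use the values that you identify later in this post.

    import boto3

    sso_admin = boto3.client("sso-admin")

    # Placeholder ARNs; replace with the values from your environment.
    instance_arn = "arn:aws:sso:::instance/ssoins-1111111111"
    permission_set_arn = "arn:aws:sso:::permissionSet/ssoins-1111111111/ps-112233abcdef123"

    sso_admin.tag_resource(
        InstanceArn=instance_arn,
        ResourceArn=permission_set_arn,
        Tags=[
            {"Key": "Environment", "Value": "Development"},
            {"Key": "OU", "Value": "Test"},
        ],
    )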

Based on the setup just described, your Identity team has identified three delegation use cases that they want to implement:

  • A simple delegation model for a team to manage all permission sets for a set of accounts.
  • A delegation model for support teams to apply read-only permission sets to all accounts.
  • A delegation model based on AWS Organizations, where a team can manage only the permission sets intended for a specific OU.

The AWS SSO delegation model restricts user access based on three key conditions:

  • Permission sets
  • Accounts
  • Tags that use the condition key aws:ResourceTag, to ensure that the required tags are present on your permission sets as part of your delegation model

In the rest of this blog post, I show you how AWS SSO administrators can use these conditions to implement the use cases highlighted here to build a delegation model.

See Delegating permission set administration and Actions, resources, and condition keys for AWS SSO for more information.

Important: The use cases that follow are examples that can be adopted by your organization. The permission sets in these use cases show only what is needed to delegate the components discussed. You need to add additional policies to give users and groups access to AWS SSO.

Identify your permission set and AWS SSO instance IDs

You can use either the AWS Command Line Interface (AWS CLI) v2 or the AWS Management Console to identify your permission set and AWS SSO instance IDs.

Use the AWS CLI

To use the AWS CLI to identify the Amazon resource names (ARNs) of the AWS SSO instance and permission set, make sure you have AWS CLI v2 installed.

To list the AWS SSO instance ID ARN

Run the following command:

aws sso-admin list-instances

To list the permission set ARN

Run the following command:

aws sso-admin list-permission-sets --instance-arn <instance arn from above>

Use the console

You can also use the console to identify your permission sets and AWS SSO instance IDs.

To list the AWS SSO Instance ID ARN

  1. Navigate to AWS SSO in your Region. Choose Dashboard, and then choose Choose your identity source.
  2. Copy the AWS SSO ARN ID.
Figure 2: AWS SSO ID ARN

To list the permission set ARN

  1. Navigate to the AWS SSO Service in your Region. Choose AWS Accounts and then Permission Sets.
  2. Select the permission set you want to use.
  3. Copy the ARN of the permission set.
Figure 3: Permission set ARN
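
If you prefer to look these values up with a script rather than the CLI or console, the following minimal Boto3 sketch lists the AWS SSO instance ARN and the permission sets that it contains.

    import boto3

    sso_admin = boto3.client("sso-admin")

    # There is one AWS SSO instance per organization.
    instance_arn = sso_admin.list_instances()["Instances"][0]["InstanceArn"]
    print("Instance ARN:", instance_arn)

    # Print each permission set ARN together with its name.
    for ps_arn in sso_admin.list_permission_sets(InstanceArn=instance_arn)["PermissionSets"]:
        name = sso_admin.describe_permission_set(
            InstanceArn=instance_arn, PermissionSetArn=ps_arn
        )["PermissionSet"]["Name"]
        print(name, ps_arn)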

Use case 1: Accounts-based delegation model

In this use case, you create a single policy to allow administrators to assign any permission set to a specific set of accounts.

First, you need to create a custom permission set to use with the following example policy.

The example policy is as follows.

            "Sid": "DelegatedAdminsAccounts",
            "Effect": "Allow",
            "Action": [
                "sso:ProvisionPermissionSet",
                "sso:CreateAccountAssignment",
                "sso:DeleteInlinePolicyFromPermissionSet",
                "sso:UpdateInstanceAccessControlAttributeConfiguration",
                "sso:PutInlinePolicyToPermissionSet",
                "sso:DeleteAccountAssignment",
                "sso:DetachManagedPolicyFromPermissionSet",
                "sso:DeletePermissionSet",
                "sso:AttachManagedPolicyToPermissionSet",
                "sso:CreatePermissionSet",
                "sso:UpdatePermissionSet",
                "sso:CreateInstanceAccessControlAttributeConfiguration",
                "sso:DeleteInstanceAccessControlAttributeConfiguration"
            ],
            "Resource": [
                "arn:aws:sso:::account/112233445566",
                "arn:aws:sso:::account/223344556677",
                "arn:aws:sso:::account/334455667788"
            ]
        }

This policy specifies that delegated admins are allowed to provision any permission set to the three accounts listed in the policy.

Note: To apply this permission set to your environment, replace the account numbers following Resource with your account numbers.
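
To put the example policy to work, you attach it as an inline policy to a custom permission set that you assign to your delegated administrators. The following minimal sketch shows one way to do that with Boto3; the instance ARN, permission set name, and local policy file are assumptions.

    import boto3

    sso_admin = boto3.client("sso-admin")

    # Placeholder instance ARN; use the value you identified earlier.
    instance_arn = "arn:aws:sso:::instance/ssoins-1111111111"

    # A full policy document (Version and Statement) containing the
    # DelegatedAdminsAccounts statement shown above, saved to a local file.
    with open("delegated-admins-accounts.json") as f:
        delegation_policy = f.read()

    # Create the custom permission set for the delegated administrators.
    response = sso_admin.create_permission_set(
        InstanceArn=instance_arn,
        Name="DelegatedAdminsAccounts",
        Description="Delegated administration for a fixed set of accounts",
    )
    permission_set_arn = response["PermissionSet"]["PermissionSetArn"]

    # Attach the delegation policy as an inline policy.
    sso_admin.put_inline_policy_to_permission_set(
        InstanceArn=instance_arn,
        PermissionSetArn=permission_set_arn,
        InlinePolicy=delegation_policy,
    )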

Use case 2: Permission-based delegation model

In this use case, you create a single policy to allow administrators to assign a specific permission set to any account. The policy is as follows.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegatedPermissionsAdmin",
            "Effect": "Allow",
            "Action": [
                "sso:ProvisionPermissionSet",
                "sso:CreateAccountAssignment",
                "sso:DeleteInlinePolicyFromPermissionSet",
                "sso:UpdateInstanceAccessControlAttributeConfiguration",
                "sso:PutInlinePolicyToPermissionSet",
                "sso:DeleteAccountAssignment",
                "sso:DetachManagedPolicyFromPermissionSet",
                "sso:DeletePermissionSet",
                "sso:AttachManagedPolicyToPermissionSet",
                "sso:CreatePermissionSet",
                "sso:UpdatePermissionSet",
                "sso:CreateInstanceAccessControlAttributeConfiguration",
                "sso:DeleteInstanceAccessControlAttributeConfiguration",
                "sso:ProvisionApplicationInstanceForAWSAccount"
            ],
            "Resource": [
                "arn:aws:sso:::instance/ssoins-1111111111",
                "arn:aws:sso:::account/*",
                "arn:aws:sso:::permissionSet/ssoins-1111111111/ps-112233abcdef123"
            ]
        }
    ]
}

This policy specifies that delegated admins are allowed to provision only the specific permission set listed in the policy to any account.

Note: To apply this policy to your environment, replace the instance ARN and permission set ARN under Resource with your own values. See Identify your permission set and AWS SSO instance IDs earlier in this post to find those values.

Use case 3: OU-based delegation model

In this use case, the Identity team wants to delegate the management of the Development permission sets (identified by the tag key Environment) to the Test OU (identified by the tag key OU). You use the Environment and OU tags on permission sets to restrict access to only the permission sets that contain both tags.

To build this permission set for delegation, you need to create a policy with two statements:

  • A statement that filters the permission sets based on both tags—Environment and OU.
  • A statement that limits the allowed accounts to those in the Test OU.

The policy is as follows.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegatedOUAdmin",
            "Effect": "Allow",
            "Action": [
                "sso:ProvisionPermissionSet",
                "sso:CreateAccountAssignment",
                "sso:DeleteInlinePolicyFromPermissionSet",
                "sso:UpdateInstanceAccessControlAttributeConfiguration",
                "sso:PutInlinePolicyToPermissionSet",
                "sso:DeleteAccountAssignment",
                "sso:DetachManagedPolicyFromPermissionSet",
                "sso:DeletePermissionSet",
                "sso:AttachManagedPolicyToPermissionSet",
                "sso:CreatePermissionSet",
                "sso:UpdatePermissionSet",
                "sso:CreateInstanceAccessControlAttributeConfiguration",
                "sso:DeleteInstanceAccessControlAttributeConfiguration",
                "sso:ProvisionApplicationInstanceForAWSAccount"
            ],
            "Resource": "arn:aws:sso:::permissionSet/*/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Environment": "Development",
                    "aws:ResourceTag/OU": "Test"
                }
            }
        },
        {
            "Sid": "Instance",
            "Effect": "Allow",
            "Action": [
                "sso:ProvisionPermissionSet",
                "sso:CreateAccountAssignment",
                "sso:DeleteInlinePolicyFromPermissionSet",
                "sso:UpdateInstanceAccessControlAttributeConfiguration",
                "sso:PutInlinePolicyToPermissionSet",
                "sso:DeleteAccountAssignment",
                "sso:DetachManagedPolicyFromPermissionSet",
                "sso:DeletePermissionSet",
                "sso:AttachManagedPolicyToPermissionSet",
                "sso:CreatePermissionSet",
                "sso:UpdatePermissionSet",
                "sso:CreateInstanceAccessControlAttributeConfiguration",
                "sso:DeleteInstanceAccessControlAttributeConfiguration",
                "sso:ProvisionApplicationInstanceForAWSAccount"
            ],
            "Resource": [
                "arn:aws:sso:::instance/ssoins-82593a6ed92c8920",
                "arn:aws:sso:::account/112233445566",
                "arn:aws:sso:::account/223344556677",
                "arn:aws:sso:::account/334455667788"
            ]
        }
    ]
}

In the delegated policy, the user or group is only allowed to provision permission sets that have the Environment tag set to Development and the OU tag set to Test, and only to the accounts in the Test OU.

Note: In the example above, arn:aws:sso:::instance/ssoins-82593a6ed92c8920 is the ARN of the AWS SSO instance. To get your AWS SSO instance ARN, refer to Identify your permission set and AWS SSO instance IDs.
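
If your permission sets don't have these tags yet, you can apply them from the AWS CLI before testing this model. The following command is an illustration; the instance and permission set ARNs are placeholders:

aws sso-admin tag-resource \
    --instance-arn arn:aws:sso:::instance/ssoins-82593a6ed92c8920 \
    --resource-arn arn:aws:sso:::permissionSet/ssoins-82593a6ed92c8920/ps-112233abcdef123 \
    --tags Key=Environment,Value=Development Key=OU,Value=Test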

Create a delegated admin profile in AWS SSO

Now that you know what’s required to delegate permissions, you can create a delegated profile and deploy that to your users and groups.

To create a delegated AWS SSO profile

  1. In the AWS SSO console, sign in to your management account and browse to the Region where AWS SSO is provisioned.
  2. Navigate to AWS Accounts and choose Permission sets, and then choose Create permission set.
     
    Figure 4: AWS SSO permission sets menu

  3. Choose Create a custom permission set.
     
    Figure 5: Create a new permission set

  4. Give your permission set a name based on your naming standards, and select a session duration that aligns with your organizational policies.
  5. For Relay state, enter the following URL:
    https://<region>.console.aws.amazon.com/singlesignon/home?region=<region>#/accounts/organization 
    

    where <region> is the AWS Region in which you deployed AWS SSO.

    The relay state will automatically redirect the user to the Accounts section in the AWS SSO console, for simplicity.
     

    Figure 6: Custom permission set

  6. Choose Create new permission set. Here is where you can decide the level of delegation required for your application or domain administrators.
     
    Figure 7: Assign users

    For the permission set's policy, see the example policies in the earlier sections of this post.

  7. If you’re using AWS SSO with AWS Directory Service for Microsoft Active Directory, you’ll need to provide access to AWS Directory Service in order for your administrator to assign permission sets to users and groups.

    To provide this access, navigate to the AWS Accounts screen in the AWS SSO console, and select your management account. Assign the required users or groups, and select the permission set that you created earlier. Then choose Finish.

  8. To test this delegation, sign in to AWS SSO. You’ll see the newly created permission set.
     
    Figure 8: AWS SSO sign-on page

  9. Next to developer-delegated-admin, choose Management console. This should automatically redirect you to AWS SSO in the AWS Accounts submenu.

If you try to provision access to accounts or permission sets that you are not explicitly allowed to manage according to the policies you specified earlier, you will receive the following error.
 

Figure 9: Error based on lack of permissions

Otherwise, the provisioning will be successful.

Summary

You’ve seen that by using conditions and tags on permission sets, application and account owners can use delegation models to manage the deployment of permission sets across the accounts they manage, providing them and their teams with secure access to AWS accounts and services.

Additionally, because AWS SSO supports attribute-based access control (ABAC), you can create a more dynamic delegation model based on attributes from your identity provider, to match the tags on the permission set.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Single Sign-On forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Louay Shaat

Louay is a Security Solutions Architect with AWS. He spends his days working with customers, from startups to the largest of enterprises, helping them build cool new capabilities and accelerating their cloud journey. He has a strong focus on security and automation to help customers improve their security, risk, and compliance in the cloud.

How ERGO implemented an event-driven security remediation architecture on AWS

Post Syndicated from Adam Sikora original https://aws.amazon.com/blogs/architecture/how-ergo-implemented-an-event-driven-security-remediation-architecture-on-aws/

ERGO is one of the major insurance groups in Germany and Europe. Within the ERGO Group, ERGO Technology & Services S.A. (ET&S), part of the ET&SM holding, has competencies in digital transformation and know-how in creating and implementing complex IT systems, with a focus on solution quality and a portfolio aligned with the entire value chain of the insurance market.

Business Challenge and Solution

ERGO has a multi-account AWS environment where each project team subscribes to a set of AWS accounts that conforms to workload requirements and security best practices. As ERGO began its cloud journey, the CIS AWS Foundations Benchmark was used as the key indicator for measuring compliance. The report showed significant room for security posture improvements. ERGO was looking for a solution that could enable the management of security events at scale. At the same time, they needed to centralize the event response and remediation in near-real time. The goal was to improve the CIS compliance metric and overall security posture.

Architecture

ERGO uses AWS Organizations to centrally govern the multi-account AWS environment. Integration of AWS Security Hub with AWS Organizations enables ERGO to designate ERGO’s Security Account as the Security Hub administrator/primary account. Other organization accounts are automatically registered as Security Hub member accounts to send events to the Security Account.

An important aspect of the workflow is to maintain segregation of duties and separation of environments. ERGO uses two separate AWS accounts to implement automatic finding remediation:

  • Security Account – this is the primary account with Security Hub where security alerts (findings) from all the AWS accounts of the project are gathered.
  • Service Account – this is the account that can take action on target project (member) AWS accounts. ERGO uses AWS Lambda functions to run remediation actions through AWS Identity and Access Management (IAM) permissions, VPC resources actions, and more.

Within the Security Account, AWS Security Hub serves as the event aggregation solution that gathers multi-account findings from AWS services such as Amazon GuardDuty. ERGO was able to centralize the security findings. But they still needed to develop a solution that routed the filtered, actionable events to the Service Account. The solution had to automate the response to these events based on ERGO’s security policy. ERGO built this solution with the help of Amazon CloudWatch, AWS Step Functions, and AWS Lambda.

ERGO used the integration of AWS Security Hub with Amazon CloudWatch to send all the security events to CloudWatch. The filtering logic of events was managed at two levels. At the first level, ERGO used CloudWatch Events rules that match event patterns to refine the types of events ERGO wanted to focus on.
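
As an illustration of this first filtering level, a rule that matches imported Security Hub findings for failed compliance checks uses an event pattern along these lines (a minimal sketch, not ERGO's actual rule):

{
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Compliance": {
                "Status": ["FAILED"]
            }
        }
    }
}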

The second level of filtering logic was more nuanced and related to the remediation action ERGO wanted to take on a detected event. ERGO chose AWS Step Functions to build a workflow that enabled them to further filter the events, in addition to matching them to the suitable remediation action.

Choosing AWS Step Functions enabled ERGO to orchestrate multiple steps. They could also respond to errors in the overall workflow. For example, one of the issues that ERGO encountered was the sporadic failure of the Archival Lambda function. This was due to Security Hub API rate throttling.

ERGO evaluated several workarounds to deal with this situation. They considered using the automatic retries capability of the AWS SDK to make the API call in the Archival function. However, the built-in mechanism was not sufficient in this case. Another option for dealing with the rate limit was to throttle the Archival Lambda functions by applying a low reserved concurrency. Another possibility was to batch the findings to be SUPPRESSED and process them one batch at a time. The benefit was making a single API call that covered several findings at once.

After much consideration, ERGO decided to use the “retry on error” mechanism of the Step Function to circumvent this problem. This allowed ERGO to manage the error handling directly in the workflow logic. It wasn’t necessary to change the remediation and archival logic of the Lambda functions. This was a huge advantage. Writing and maintaining error handling logic in each one of the Lambda functions would have been time-intensive and complicated.
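
A minimal Amazon States Language sketch of such a retry configuration on an archival task could look like the following; the state name, Region, account ID, and function name are illustrative, not ERGO's actual definitions:

"ArchiveFinding": {
    "Type": "Task",
    "Resource": "arn:aws:lambda:eu-central-1:111122223333:function:archival-function",
    "Retry": [
        {
            "ErrorEquals": ["Lambda.TooManyRequestsException", "TooManyRequestsException"],
            "IntervalSeconds": 5,
            "MaxAttempts": 6,
            "BackoffRate": 2
        }
    ],
    "End": true
}

With a retry block like this, throttled calls are retried with exponential backoff by the state machine itself, so the Lambda code needs no extra error-handling logic.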

Additionally, the remediation actions had to be configured and run from the Service Account. That means the Step Function in the Security Account had to trigger a cross-account resource. ERGO had to find a way to integrate the Remediation Lambda in the Service Account with the state machine of the Security Account. ERGO achieved this integration using a Proxy Lambda in the Security Account.

The Proxy Lambda resides in the Security Account and is initiated by the Step Function. It takes as its arguments the function name and function version, and uses them to start the Remediation function in the Service Account.

The Remediation functions in the Service Account have permission to take action on Project accounts. As the next step, the Remediation function is invoked on the impacted accounts. This is filtered by the Step Function, which passes the Account ID to Proxy Lambda, which in turn passes this argument to Remediation Lambda. The Remediation function runs the actions on the Project accounts and returns the output to the Proxy Lambda. This is then passed back to the Step Function.

The role that the Remediation Lambda assumes through the AssumeRole mechanism is an organization-level role. It is deployed in every account and has the permissions required to perform the remediation.
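
A trust policy for such an organization-level remediation role could look roughly like the following, allowing the Remediation Lambda's execution role in the Service Account to assume it; the account ID and role name are placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/remediation-lambda-execution-role"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}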

ERGO Architecture

Figure 1. Technical Solution implementation

  1. Security Hub service in ERGO Project accounts sends security findings to Administrative Account.
  2. Findings are aggregated and sent to CloudWatch Events for filtering.
  3. CloudWatch rules invoke Step Functions as the target. Step Functions process security events based on the event type and treatment required as per CIS Standards.
  4. For events that need to be suppressed without any dependency on the Project Accounts, the Step Function invokes a Lambda function to archive the findings.
  5. For events that require remediation actions on the Project accounts, the Step Function invokes a Proxy Lambda with the required parameters.
  6. Proxy Lambda in turn, invokes a cross-account Remediation function in Service Account. This has the permissions to run actions in Project accounts.
  7. Based on the event type, corresponding remediation action is run on the impacted Project Account.
  8. Remediation function passes the execution result back to Proxy Lambda to complete the Security event workflow.

Failed remediations are manually resolved in exceptional conditions.

Summary

By implementing this event-driven solution, ERGO was able to increase and maintain automated compliance with the CIS AWS Foundations Benchmark at about 95%. The remaining findings were evaluated on a case-by-case basis, per specific project requirements. This measurable improvement in ERGO's compliance posture was achieved with an end-to-end serverless workflow, which offloaded ongoing platform maintenance efforts from the ERGO cloud security team. Working closely with its AWS account and service teams, ERGO will continue to evaluate and make improvements to its architecture.

Analyze and understand IAM role usage with Amazon Detective

Post Syndicated from Sheldon Sides original https://aws.amazon.com/blogs/security/analyze-and-understand-iam-role-usage-with-amazon-detective/

In this blog post, we’ll demonstrate how you can use Amazon Detective’s new role session analysis feature to investigate security findings that are tied to the usage of an AWS Identity and Access Management (IAM) role. You’ll learn about how you can use this new role session analysis feature to determine which Amazon Web Services (AWS) resource assumed the role that triggered a finding, and to understand the context of the activities that the resource performed when the finding was triggered. As a result of this walkthrough, you’ll gain an understanding of how to quickly ascertain anomalous identity and access behaviors. While this demonstration utilizes an Amazon GuardDuty finding as a starting point, the techniques demonstrated within this post highlight how Detective can be utilized to investigate any access behaviors that are tied to using IAM roles.

IAM roles provide a valuable mechanism that you can use to delegate access to users and services for managing and accessing your AWS resources, but using IAM roles can make it more complex to determine who performed an action. AWS CloudTrail logs do track all usage tied to IAM roles, but attributing activity to a specific resource that assumed a role requires storage of CloudTrail logs and analysis of this log telemetry. Understanding role usage through log analysis gets even more complex if cross-account role assumptions are involved, since that requires you to collate and analyze logs from multiple accounts. In some cases, permissions may allow a resource to sequentially assume a series of different roles (role chaining), further complicating the attribution of activity to a specific resource.

With its built-in, multi-account log analysis, Detective's new role session analysis feature provides visibility into role usage, cross-account role assumptions, and any role chaining activities that may have been performed across the accounts. With this feature, you can quickly determine who or what assumed a role, regardless of whether this was a federated user, IAM user, or other resource. The feature shows you when roles were assumed and for how long, and helps you determine the activities that were performed during the assumption. Detective visualizes these results based upon its automatic analysis of CloudTrail logs and VPC flow log traffic that it continuously processes for enabled accounts, regardless of whether these log sources are enabled on each account.

To demonstrate this feature, we’ll investigate a “CloudTrail logging disabled” finding that is triggered by Amazon GuardDuty as a result of activity performed by a resource that has assumed an IAM role. Amazon GuardDuty is an AWS service that continuously monitors for malicious or unauthorized behavior to help protect your AWS resources, including your AWS accounts, access keys, and EC2 instances. GuardDuty identifies unusual or unauthorized activity, like crypto-currency mining, access to data stored in S3 from unusual locations, or infrastructure deployments in a region that has never been used.

Start the investigation in GuardDuty

GuardDuty issues a CloudTrailLoggingDisabled finding to alert you that CloudTrail logging has been disabled in one of your accounts. This is an important finding, because it could indicate that an attacker is attempting to hide their tracks. Since Detective receives a copy of CloudTrail traffic directly from the AWS infrastructure, Detective will continue to receive API calls that are made after CloudTrail logging is disabled.

In order to properly investigate this type of finding and determine if this is an issue that you need to be concerned about, you’ll need to answer a few specific questions:

  1. You’ll need to determine which user or resource disabled CloudTrail.
  2. You’ll need to see what other actions they performed after disabling logging.
  3. You’ll want to understand if their access pattern and behavior is consistent with their previous access patterns and behaviors.

Let’s take a look at a CloudTrailLoggingDisabled finding in GuardDuty as we start trying to answer these questions. When you access the GuardDuty console, a list of your recent findings is displayed. In Figure 1, a filter has been applied to display the CloudTrailLoggingDisabled finding.

Figure 1: A GuardDuty finding showing that CloudTrail was disabled

After you select the GuardDuty finding, you can see the finding details, including some of the user information related to the finding. Figure 2 shows the Resources affected section of the finding.

Figure 2: Viewing user data related to the GuardDuty finding

The Affected resources field indicates that the demo-trail-2 trail was where logging was disabled. You can also see that User type is set to AssumedRole and that User name contains the role AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5. This is the role that was assumed and then used to disable CloudTrail logging. This information can help you understand the resources this role delegates access to and the permissions it provides. You still need to identify who specifically assumed the role to disable CloudTrail logging and the activities they performed afterwards. You can use Amazon Detective to answer these questions.

Investigate the finding in Detective

In order to investigate this GuardDuty finding in Detective, you select the finding and then select Investigate in the Actions menu, as shown in Figure 3.

Figure 3: Choose ‘Investigate with Detective’ and select the GuardDuty finding ID on the pop-up to investigate the finding

View the finding profile page

Choosing the Investigate action for this CloudTrailLoggingDisabled finding in GuardDuty opens the finding’s profile page in the Detective console, as shown in Figure 4. Detective has the concept of a profile page, which displays summaries and analytics gleaned from CloudTrail management logs, VPC flow traffic, and GuardDuty findings for AWS resources, IP addresses, and user agents. Each profile page can display up to 12 months of information for the selected resource and is intended to help an investigator review and understand the behavior of a resource, or quickly triage and delve into potential issues. Detective doesn’t require a customer to enable CloudTrail or VPC Flow Logs in order to retrieve this data, and provides these 12 months of visibility regardless of the customer’s log retention or archiving policies.

Figure 4: Viewing a GuardDuty finding in Detective

Scope time

To help focus your investigation, Detective defaults the time range and thus the displayed information in a finding profile to cover the period of time from when the finding was created through when it was last updated. In the case of this finding, the scope time covers a 1-hour period of time. You can change the scope time by choosing the calendar icon at the top right of the page, if you want to examine additional information before or after the finding was created. The defaulted scope time is sufficient for this investigation, so we can leave it as-is.

Role session overview

Detective uses tabs to group information on profile pages, and for this finding it shows the role session overview tab by default. The role session represents the activities and behavior of the resource that assumed the role tied to our finding. In this case, the role was assumed by someone with the user name sara, as shown in the Assumed by field. (We’ll assume that the user’s first name is Sara.) By analyzing the role session information in the CloudTrail logs, Detective was able to immediately identify that sara was the user who disabled CloudTrail logging and caused the finding to be triggered. You now have an answer to the question of who did this action.

Before we move to answer our other questions about what Sara did after disabling logging and whether her behavior changed, let’s discuss role sessions in more detail. Every role session has a role session name, sara in this case, and a unique role session identifier. The role session identifier is the role ID of the role assumed and the role session name, concatenated together. Best practices dictate that for a specific role that’s assumed by a specific resource, the role session name represents the user name of the IAM or federated user, or includes other useful information about the resource that assumed the role (for more information, see the Naming of individual IAM role sessions blog post). In this case, because the best practices are being followed, Detective is able to track Sara’s activities and behavior each time she assumes the AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 role.
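
For reference, when a role is assumed directly with AWS STS, the caller supplies the session name, and that name becomes part of the assumed-role ARN recorded in CloudTrail. The following call is illustrative; the account ID in the role ARN is a placeholder:

aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 \
    --role-session-name sara

In Sara's case, AWS SSO makes this call on her behalf and sets the session name to her user name, which is what allows Detective to attribute the session to her.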

Detective tracks statistics such as when a role session was first observed (October in Sara’s case, for this role), as well as the actions performed and behavioral insights such as the geolocations where Sara initiated her role assumptions. Knowing that Sara has assumed this role before is useful, because you can now assess whether her usage of the role changed during the 1-hour window of the scope time that you’re looking at now, compared to all of her previous assumptions of this role.

Review changes in Sara’s access patterns and operations

Detective tracks changes in geographical access and operations on the New behavior tab. Let’s choose the New behavior tab for the role session to see this information, as displayed in Figure 5.

Figure 5: Viewing new role session behavior

During a security investigation, determining that access patterns have changed can be helpful in highlighting malicious activity. Since Detective tracks Sara’s assumptions of the AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 role, it can show the location where Sara assumed the role and whether the current assumption took place from the same location as her previous ones.

In Figure 5, you can see that Sara has a history of assuming the AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 role from Bellevue, WA and Ashburn, VA, since those geographies are shown in blue. If she had assumed this role from a new location, you would see the new location indicated on the map in orange. Since the API calls being made by this user are from a previously observed location, it’s very unlikely that the user’s credentials were compromised. Making this determination through a manual analysis of CloudTrail logs would have been much more time consuming.

Other information that you can gather from the New behavior role session tab includes newly observed API calls, API calls with increased volume, newly observed autonomous system organizations, and newly observed user agents. It’s useful to be able to validate that the operations Sara performed during the current scope time are relatively consistent with the operations she has performed in the past. This helps us be more certain that it was indeed Sara who was conducting this activity.

Investigate Sara’s API activity

Now that we’ve determined that Sara’s access pattern and activities are consistent with previous behavior, let’s use Detective to look further into Sara’s activity to determine if she accidentally disabled CloudTrail logging or if there was possible malicious intent behind her action.

To investigate the user’s actions

  1. On the finding profile page, in the dropdown list at the top of the screen, select Overview: Role Session to go back to the Overview tab for the role session.

    Figure 6: Navigating to the ‘Overview: Role Session’ page

  2. Once you’re on the Overview tab, navigate to the Overall API call volume panel.
    Figure 7: Navigating to the ‘Overall API call volume’ panel

    This panel displays a chart of the successful and failed API calls that Sara has made while she assumes the AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 role. The chart shows a black rectangle around activities that were performed during the CloudTrail findings scope time. It also displays historical activities and shows a baseline across the chart so that you can understand how actively she uses the permissions granted to her by assuming this role.

  3. Choose the display details for scope time button to retrieve the details of the API calls that were invoked by Sara during the scope time, so that you can determine her actions after she disabled CloudTrail logging.
    Figure 8: Displaying details based on scope time

    You will now see the Overall API call volume panel expand to show you all the IP addresses, API calls, and access keys used by Sara during the scope time window of this finding.

  4. Choose the API method tab to see a list of all the API calls that were made.
    Figure 9: Viewing the API methods called

    She invoked just two API calls during this scope time: the StopLogging and AssumeRole API calls. You were already aware that Sara disabled CloudTrail logging, but you weren’t aware that she assumed another role. When a user assumes a role while they have another one assumed, this is called role chaining. Although role chaining can be used because a user needs additional permissions, it can also be used to hide activities. Because we don’t know what other actions Sara performed after assuming this second role, let’s dig further. That may shed light on why she chose to disable CloudTrail logging.
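
As background on the mechanics, role chaining simply means calling AssumeRole with temporary credentials that were themselves obtained from a previous AssumeRole call. A rough CLI sketch, with placeholder ARNs and jq used only for brevity:

# First assumption: obtain temporary credentials for the initial role
CREDS=$(aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/FirstRole \
    --role-session-name sara \
    --query 'Credentials' --output json)

# Export the temporary credentials returned by the first assumption
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.SessionToken')

# Second assumption (role chaining); chained sessions are limited to a 1-hour duration
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/DemoRole1 \
    --role-session-name sara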

Examine chained role assumptions

To find out more about Sara’s use of role chaining, let’s look at the other role that she assumed during this role session.

To view the user’s other role

  1. Navigate back to the top of the finding profile page. In the Role session details panel, choose AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5.
    Figure 10: Locating the ‘Assumed role’ name

    Detective displays the AWS Role profile page for this role, and you can now see the activity that has occurred across all resources that have assumed this role. In order to highlight information that’s relevant to the time frame of your investigation, Detective maintains your scope time as you move from the CloudTrailLoggingDisabled finding profile page to this role profile page.

  2. The goal for coming to this page is to determine which other role Sara assumed after assuming the AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 role, so choose the Resource interaction tab. On this tab, you will see the following three panels: Resources that assumed this role, Assumed roles, and Sessions involved. In Figure 11, you can see the Resources that assumed this role panel, which lists all the AWS resources that have assumed this role, their type (EC2 instance, federated or IAM user, IAM role), their account, and when they assumed the role for the first and last time. Sara is on this list, but Detective does not show an AWS account next to her because federated users aren’t tied to a specific account. The account field is populated for other resource types that are displayed on this panel and can be useful to understand cross-account role assumptions.

    Figure 11: Viewing resources that have assumed a role

  3. On the same Resource Interaction tab, as you scroll down you will see the Assumed Roles panel, Figure 12, which helps you understand role chaining by listing the other roles that have been assumed by the AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 role. In this case, the role has assumed several other roles, including DemoRole1 during the same window of time when the CloudTrailLoggingDisabled finding occurred.

    Figure 12: Viewing the roles that have been assumed

  4. In Figure 13, you can see the Sessions involved panel, which shows the role sessions for all the resources that have assumed this role, and role sessions where this role has assumed other roles within the current scope time. You see two role sessions with the session name sara, one where Sara assumed the AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 role and another where AWSReservedSSO_AdministratorAccess_598c5f73f8b2b4e5 assumed DemoRole1.

    Figure 13: Viewing the role sessions this role was involved with

Now that you know that Sara also used the role DemoRole1 during her role session, let’s take a closer look at what actions she performed.

View API operations that were called within the chained role

In this step, we’ll view Sara’s activity within the DemoRole1 role, focusing on the API calls that were made.

To view the user’s activity in another role

  1. In the Sessions involved panel, in the Session name column, find the row where DemoRole1 is the Assumed Role value. Choose the session name in this row, sara, to go to the role session profile page.
  2. You will be most interested in the API methods that were called during this role session, and you can view those in the Overall API call volume panel. As shown in Figure 14, you can see that Sara has accessed DemoRole1 before, because there are calls graphed prior to the calls in our scope time.
  3. Choose the display details for scope time button on the Overall API call volume panel, and then choose the API method tab.

    Figure 14: Viewing the role session API method calls

In Figure 14, you can see that calls were made to the DescribeInstances and RunInstances API methods. So you now know that Sara determined the type of Amazon Elastic Compute Cloud (Amazon EC2) instances that were running in your account and then successfully created an EC2 instance by calling the RunInstances API method. You can also see that successful and failed calls were made to the AttachRolePolicy API method as a part of the session. This could possibly be an attempt to elevate permissions in the account and would justify further investigation into the user’s actions.

As an investigator, you’ve determined that Sara was the user who disabled CloudTrail logging and that her access pattern was consistent with her past accesses. You’ve also determined the other actions she performed after she disabled logging and assumed a second role, but you can continue to investigate further by answering additional questions, such as:

  • What did Sara do with DemoRole1 when she assumed this role in the past? Are her current activities consistent with those past activities?
  • What activities are being performed across this account? Are those consistent with Sara’s activities?

By using the Detective features demonstrated in this post, you will be able to answer questions like the ones listed above.

Summary

After you read this post, we hope you have a better understanding of the ways in which Amazon Detective collects, organizes, and presents log data to simplify your security investigations. All Detective service subscriptions include the new role session analysis capabilities. With these capabilities, you can quickly attribute activity performed under a role to a specific resource in your environment, understand cross-account role assumptions, determine role chaining behavior, and quickly see called APIs.

All customers receive a 30-day free trial when they enable Amazon Detective. See the AWS Regional Services page for all the Regions where Detective is available. To learn more, visit the Amazon Detective product page or see the additional resources at the end of this post to further expand your knowledge of Detective capabilities and features.

Additional resources

Amazon Detective features

Amazon Detective overview and demo

Amazon Detective FAQs

Amazon Detective Regions, endpoints, and quotas

Naming of individual IAM role sessions

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sheldon Sides

Sheldon is a Senior Solutions Architect, focused on helping customers implement native AWS security services. He enjoys using his experience as a consultant and running a cloud security startup to help customers build secure AWS Cloud solutions. His interests include working out, software development, and learning about the latest technologies.

Author

Gagan Prakash

Gagan is a Product Manager on the Amazon Detective team and is passionate about ensuring that Detective’s new features and capabilities address the needs of our customers. He leverages his past experiences in cybersecurity startups and software companies to help drive product improvements. His interests include spending time with family, reading, and learning more about the world at large.

Use tags to manage and secure access to additional types of IAM resources

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/use-tags-to-manage-and-secure-access-to-additional-types-of-iam-resources/

AWS Identity and Access Management (IAM) now enables Amazon Web Services (AWS) administrators to use tags to manage and secure access to more types of IAM resources, such as customer managed IAM policies, Security Assertion Markup Language (SAML) providers, and virtual multi-factor authentication (MFA) devices. A tag is an attribute that consists of a key and an optional value that you can attach to an AWS resource. With this launch, administrators can attach tags to additional IAM resources to identify resource owners and grant fine-grained access to these resources at scale using attribute-based access control. For example, a security administrator in an AWS organization can now attach tags to all customer managed policies and then create a single policy for local administrators within the member accounts, which grants them permissions to manage only those customer managed policies that have a matching tag.

In this post, I first discuss the additional IAM resources that now support tags. Then I walk you through two use cases that demonstrate how you can use tags to identify an IAM resource owner, and how you can further restrict access to AWS resources based on prefixes and tag values.

Which IAM resources now support tags?

In addition to IAM roles and IAM users that already support tags, you can now tag more types of IAM resources. The following table shows other IAM resources that now support tags. The table also highlights which of the IAM resources support tags on the IAM console level and at the API/CLI level.

IAM resource                   | Supports tagging in the IAM console | Supports tagging at the API and CLI level
Customer managed IAM policies  | Yes                                 | Yes
Instance profiles              | No                                  | Yes
OpenID Connect providers       | Yes                                 | Yes
SAML providers                 | Yes                                 | Yes
Server certificates            | No                                  | Yes
Virtual MFA devices            | No                                  | Yes
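
For the resource types that support tagging only at the API and CLI level, you can apply tags with the corresponding IAM tagging commands. The following calls are illustrative; the ARNs and resource names are placeholders:

# Tag a customer managed policy
aws iam tag-policy \
    --policy-arn arn:aws:iam::111122223333:policy/ExamplePolicy \
    --tags '[{"Key": "Owner", "Value": "JohnA"}]'

# Tag an instance profile (tagging this type isn't supported in the console)
aws iam tag-instance-profile \
    --instance-profile-name ExampleInstanceProfile \
    --tags '[{"Key": "Owner", "Value": "JohnA"}]'

# Tag a SAML provider
aws iam tag-saml-provider \
    --saml-provider-arn arn:aws:iam::111122223333:saml-provider/ExampleProvider \
    --tags '[{"Key": "Owner", "Value": "JohnA"}]'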

Fine-grained resource ownership and access using tags

In the next sections, I will walk through two examples of how to use tagging to classify your IAM resources and define least-privileged access for your developers. In the first example, I explain how to use tags to allow your developers to declare ownership of a customer managed policy they create. In the second example, I explain how to use tags to enforce least privilege by allowing developers to add only the IAM roles they create to the Amazon Elastic Compute Cloud (Amazon EC2) instance profiles they own.

Example 1: Use tags to identify the owner of a customer managed policy

As an AWS administrator, you can require your developers to always tag the customer managed policies they create. You can then use the tag to identify which of your developers owns the customer managed policies.

For example, as an AWS administrator, you can require that developers in your organization tag any customer managed policy they create. To achieve this, you can require the policy creator to enter their user name as the value for the tag key named Owner when they create the resource. By enforcing tagging on customer managed policies, administrators can easily identify the owner of these IAM policy types.

To enforce customer managed policy tagging, you first grant your developer the ability to create IAM customer managed policies, and include a conditional statement within the IAM policy that requires your developer to apply their AWS user name in the tag value field titled Owner when they create the policy.

Step 1: Create an IAM policy and attach it to your developer role

Following is a sample IAM policy (TagCustomerManagedPolicies.json) that you can assign to your developer. You can use this policy to follow along with this example in your own AWS account. For your own policies and commands, replace the instances of <AccountNumber> in this example with your own AWS account ID.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TagCustomerManagedPolicies",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicy",
                "iam:TagPolicy"
            ],
            "Resource": "arn:aws:iam::: <AccountNumber>:policy/Developer-*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/Owner": "${aws:username}"
                }
            }
        }
    ]
} 

This policy requires the developer to enter their AWS user name as the tag value to declare AWS resource ownership during customer managed policy creation. The TagCustomerManagedPolicies.json also requires the developer to name any customer managed policy they create with the Developer- prefix.

Create the TagCustomerManagedPolicies.json file, then create a managed policy using the following CLI command:

$aws iam create-policy --policy-name TagCustomerManagedPolicies --policy-document file://TagCustomerManagedPolicies.json

After you create the TagCustomerManagedPolicies policy, attach it to your developer with the following command. Assume your developer has an IAM user profile and that their AWS user name is JohnA.

$aws iam attach-user-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/TagCustomerManagedPolicies --user-name JohnA

Step 2: Ensure the developer uses appropriate tags when creating IAM policies

If your developer attempts to create a customer managed policy without applying their AWS user name as the value for the Owner tag and fails to name the customer managed policy with the required prefix Developer-, this IAM policy will not allow the developer to create this AWS resource. The error received by the developer is shown in the following example.

$ aws iam create-policy --policy-name TestPolicy --policy-document file://Developer-TestPolicy.json 

An error occurred (AccessDenied) when calling the CreatePolicy operation: User: arn:aws:iam::<AccountNumber>:user/JohnA is not authorized to perform: iam:CreatePolicy on resource: policy TestPolicy

However, if your developer applies their AWS user name as the value for the Owner tag and names the policy with the Developer- prefix, the IAM policy will enable your developer to successfully create the customer managed policy, as shown in the following example.

$aws iam create-policy --policy-name Developer-TestPolicy --policy-document file://Developer-TestPolicy.json --tags '[{"Key": "Owner", "Value": "JohnA"}]'

{
  "Policy": {
    "PolicyName": "Developer-Test_policy",
    "PolicyId": "<PolicyId>",
    "Arn": "arn:aws:iam::<AccountNumber>:policy/Developer-Test_policy",
    "Path": "/",
    "DefaultVersionId": "v1",
    "Tags": [
      {
        "Key": "Owner",
        "Value": "JohnA"
      }
    ],
    "AttachmentCount": 0,
    "PermissionsBoundaryUsageCount": 0,
    "IsAttachable": true,
    "CreateDate": "2020-07-27T21:18:10Z",
    "UpdateDate": "2020-07-27T21:18:10Z"
  }
}

Example 2: Use tags to control which IAM roles your developers attach to an instance profile

Amazon EC2 enables customers to run compute resources in the cloud. AWS developers use IAM instance profiles to associate IAM roles to EC2 instances hosting their applications. This instance profile is used to pass an IAM role to an EC2 instance to grant it privileges to invoke actions on behalf of an application hosted within it.

In this example, I show how you can use tags to control which IAM roles your developers can add to instance profiles. You can use this as a starting point for your own workloads, or follow along with this example as a learning exercise. For your own policies and commands, replace the instances of <AccountNumber> in this example with your own AWS account ID.

Let’s assume your developer is running an application on their EC2 instance that needs read and write permissions to objects within various developer owned Amazon Simple Storage Service (S3) buckets. To allow your application to perform these actions, you need to associate an IAM role with the required S3 permissions to an instance profile of your EC2 instance that is hosting your application.

To achieve this, you will do the following:

  1. Create a permissions boundary policy and require your developer to attach the permissions boundary policy to any IAM role they create. The permissions boundary policy defines the maximum permissions your developer can assign to any IAM role they create. For examples of how to use permissions boundary policies, see Add Tags to Manage Your AWS IAM Users and Roles.
  2. Grant your developer permissions to create and tag IAM roles and instance profiles. Your developer will use the instance profile to pass the IAM role to their EC2 instance hosting their application.
  3. Grant your developer permissions to create and apply IAM permissions to the IAM role they create.
  4. Grant your developer permissions to assign IAM roles to instance profiles of their EC2 instances based on the Owner tag they applied to the IAM role and instance profile they created.

Step 1: Create a permissions boundary policy

First, create the permissions boundary policy (S3ActionBoundary.json) that defines the maximum S3 permissions for the IAM role your developer creates. Following is an example of a permissions boundary policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3ActionBoundary",
            "Effect": "Allow",
            "Action": [
                "S3:CreateBucket",
                "S3:ListAllMyBuckets",
                "S3:GetBucketLocation",
                "S3:PutObject",
                "S3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::Developer-*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "us-east-1"
                }
            }
        }
    ]
}

When used as a permissions boundary, this policy enables your developers to grant permissions to some S3 actions, as long as two requirements are met. First, the S3 bucket name must begin with the Developer- prefix. Second, the Region used to make the request must be US East (N. Virginia).

Similar to the previous example, you can create the S3ActionBoundary.json, then create a managed IAM policy using the following CLI command:

$aws iam create-policy --policy-name S3ActionBoundary --policy-document file://S3ActionBoundary.json

Step 2: Grant your developer permissions to create and tag IAM roles and instance profiles

Next, create the IAM permission (DeveloperCreateActions.json) that allows your developer to create IAM roles and instance profiles. Any roles they create will not be allowed to exceed the permissions of the boundary policy we created in step 1, and any resources they create must be tagged according to the guideline we established earlier. Following is an example DeveloperCreateActions.json policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateRole",
            "Effect": "Allow",
            "Action": "iam:CreateRole",
            "Resource": "arn:aws:iam::<AccountNumber>:role/Developer-*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/Owner": "${aws:username}",
                    "iam:PermissionsBoundary": "arn:aws:iam::<AccountNumber>:policy/S3ActionBoundary"
                }
            }
        },
        {
            "Sid": "CreatePolicyandInstanceProfile",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:CreatePolicy"
            ],
            "Resource": [
                "arn:aws:iam::<AccountNumber>:instance-profile/Developer-*",
                "arn:aws:iam::<AccountNumber>:policy/Developer-*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/Owner": "${aws:username}"
                }
            }
        },
        {
            "Sid": "TagActionsAndAttachActions",
            "Effect": "Allow",
            "Action": [
                "iam:TagInstanceProfile",
                "iam:TagPolicy",
                "iam:AttachRolePolicy",
                "iam:TagRole"
            ],
            "Resource": [
                "arn:aws:iam::<AccountNumber>:instance-profile/Developer-*",
                "arn:aws:iam::<AccountNumber>:policy/Developer-*",
                "arn:aws:iam::<AccountNumber>:role/Developer-*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Owner": "${aws:username}"
                }
            }
        }
    ]
}

I will walk through each statement in the policy to explain its function.

The first statement, CreateRole, allows creating IAM roles. The Condition element requires your developer to apply their AWS user name as the Owner tag to any IAM role they create, and to attach the S3ActionBoundary as the permissions boundary policy for that role.

The next statement, CreatePolicyandInstanceProfile, allows creating IAM policies and instance profiles. The Resource element requires your developer to name any policy or instance profile they create with the Developer- prefix, and the Condition element requires them to apply the Owner tag with their AWS user name.

The last statement, TagActionsAndAttachActions, allows tagging managed policies, instance profiles, and roles with the Owner tag. It also allows attaching role policies, so developers can configure the permissions for the roles they create. The Resource and Condition elements of the statement require the developer to use the Developer- prefix and their AWS user name as the Owner tag, respectively.

Once you create the DeveloperCreateActions.json file locally, you can create it as an IAM policy and attach it to your developer using the following CLI commands:

$aws iam create-policy --policy-name DeveloperCreateActions --policy-document file://DeveloperCreateActions.json 

$aws iam attach-user-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/DeveloperCreateActions --user-name JohnA

With the preceding policy, your developer can now create an instance profile, an IAM role, and the permissions they will attach to the IAM role. For example, if your developer creates an instance profile and doesn’t apply their AWS user name as the Owner tag, the IAM policy will prevent the resource creation and return an error, as shown in the following example.

$aws iam create-instance-profile --instance-profile-name Developer-EC2-InstanceProfile

An error occurred (AccessDenied) when calling the CreateInstanceProfile operation: User: arn:aws:iam::<AccountNumber>:user/JohnA is not authorized to perform: iam:CreateInstanceProfile on resource: arn:aws:iam::<AccountNumber>:instance-profile/Developer-EC2-InstanceProfile

When your developer names the instance profile with the prefix Developer- and includes their AWS user name as value for the Owner tag in the create request, the IAM policy allows the create action to occur as shown in the following example.

$aws iam create-instance-profile --instance-profile-name Developer-EC2-InstanceProfile --tags '[{"Key": "Owner", "Value": "JohnA"}]'

{
    "InstanceProfile": {
        "Path": "/",
        "InstanceProfileName":"Developer-EC2-InstanceProfile",
        "InstanceProfileId":" AIPAR3HKUNWB24NBA3HRC",
        "Arn": "arn:aws:iam::<AccountNumber>:instance-profile/Developer-EC2-InstanceProfile",
        "CreateDate": "2020-07-30T21:24:30Z",
        "Roles": [],
        "Tags": [
            {
                "Key": "Owner",
                "Value": "JohnA"
            }
        ]

    }
}

Let’s assume your developer creates an IAM role called Developer-EC2. The Developer-EC2 role has your developer’s AWS user name (JohnA) as the Owner tag. The developer has the S3ActionBoundary.json policy as their permissions boundary policy and the Developer-ApplicationS3Access.json policy as the permissions policy that they will pass to their EC2 instance to allow it to call S3 on behalf of their application. This is shown in the following example.

<Details of the role trust policy – RoleTrustPolicy.json>
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
<Details of IAM role permissions – Developer-ApplicationS3Access.json>

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3Access",
            "Effect": "Allow",
            "Action": [
                "S3:GetBucketLocation",
                "S3:PutObject",
                "S3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::Developer-*"
        }
    ]
}

<Your developer creates IAM role with a permissions boundary policy and a role trust policy>

$aws iam create-role --role-name Developer-EC2 \
  --assume-role-policy-document file://RoleTrustPolicy.json \
  --permissions-boundary arn:aws:iam::<AccountNumber>:policy/S3ActionBoundary \
  --tags '[{"Key": "Owner", "Value": "JohnA"}]'


<Your developer creates IAM policy for the newly created IAM role>
$aws iam create-policy --policy-name Developer-ApplicationS3Access --policy-document file://Developer-ApplicationS3Access.json --tags '[{"Key": "Owner", "Value": "JohnA"}]'

<Your developer attaches newly created IAM policy to the newly created IAM role >
$aws iam attach-role-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/Developer-ApplicationS3Access --role-name Developer-EC2

Step 3: Grant your developer permissions to add IAM roles to instance profiles based on the Owner tag

By using the AddRoleAssociateInstanceProfile.json IAM policy provided below, you give your developers permission to add the IAM role they created to an instance profile they own and to pass that role to their EC2 instances. These requirements apply because the DeveloperCreateActions.json permissions, which you already assigned to your developer in an earlier step, allow your developer to administer only resources that are properly prefixed with Developer- and have their user name assigned to the resource tag. The following example shows details of the AddRoleAssociateInstanceProfile.json policy.

< AddRoleAssociateInstanceProfile.json>
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddRoleToInstanceProfile",
            "Effect": "Allow",
            "Action": [
                "iam:AddRoleToInstanceProfile",
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::<AccountNumber>:instance-profile/Developer-*",
                "arn:aws:iam::<AccountNumber>:role/Developer-*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Owner": "${aws:username}"
                }
            }
        },
        {
            "Sid": "AssociateInstanceProfile",
            "Effect": "Allow",
            "Action": "ec2:AssociateIamInstanceProfile",
            "Resource": "arn:aws:ec2:us-east-1:<AccountNumber>:instance/Developer-*"
        }
    ]
}

Once you create the AddRoleAssociateInstanceProfile.json file locally, you can create it as an IAM policy and attach it to your developer’s IAM user by using the following CLI commands:

$aws iam create-policy --policy-name AddRoleAssociateInstanceProfile --policy-document file://AddRoleAssociateInstanceProfile.json

$aws iam attach-user-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/AddRoleAssociateInstanceProfile --user-name Developer

If your developer’s AWS user name is the value of the Owner tag on both the Developer-EC2-InstanceProfile instance profile and the Developer-EC2 IAM role, then AWS allows your developer to add the Developer-EC2 role to the Developer-EC2-InstanceProfile instance profile. However, if your developer attempts to add the Developer-EC2 role to an instance profile they don’t own, AWS won’t allow the action, as shown in the following example.

aws iam add-role-to-instance-profile --instance-profile-name EC2-access-Profile --role-name Developer-EC2

An error occurred (AccessDenied) when calling the AddRoleToInstanceProfile operation: User: arn:aws:iam::<AccountNumber>:user/Developer is not authorized to perform: iam:AddRoleToInstanceProfile on resource: instance profile EC2-access-profile

When your developer adds the IAM role to the instance profile they own, the IAM policy allows the action, as shown in the following example.

aws iam add-role-to-instance-profile --instance-profile-name Developer-EC2-InstanceProfile --role-name Developer-EC2

You can verify this by checking which instance profiles contain the Developer-EC2 role, as follows.

$aws iam list-instance-profiles-for-role --role-name Developer-EC2


<Result>
{
    "InstanceProfiles": [
        {
            "InstanceProfileId": "AIDGPMS9RO4H3FEXAMPLE",
            "Roles": [
                {
                    "AssumeRolePolicyDocument": "<URL-encoded-JSON>",
                    "RoleId": "AIDACKCEVSQ6C2EXAMPLE",
                    "CreateDate": "2020-06-07T20:42:15Z",
                    "RoleName": "Developer-EC2",
                    "Path": "/",
                    "Arn": "arn:aws:iam::<AccountNumber>:role/Developer-EC2"
                }
            ],
            "CreateDate": "2020-06-07T21:05:24Z",
            "InstanceProfileName": "Developer-EC2-InstanceProfile",
            "Path": "/",
            "Arn": "arn:aws:iam::<AccountNumber>:instance-profile/Developer-EC2-InstanceProfile"
        }
    ]
}

Step 4: Your developer associates the instance profile with their EC2 instance

Your developer can then associate the instance profile (Developer-EC2-InstanceProfile) to their EC2 instance running their application, by using the following command.

aws ec2 associate-iam-instance-profile --instance-id i-1234567890EXAMPLE --iam-instance-profile Name="Developer-EC2-InstanceProfile"

{
    "IamInstanceProfileAssociation": {
        "InstanceId": "i-1234567890EXAMPLE",
        "State": "associating",
        "AssociationId": "iip-assoc-0dbd8529a48294120",
        "IamInstanceProfile": {
            "Id": "AIDGPMS9RO4H3FEXAMPLE",
            "Arn": "arn:aws:iam::<AccountNumber>:instance-profile/Developer-EC2-InstanceProfile"
        }
    }
}
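To double-check that the association took effect (for example, while the State shown above is still associating), your developer could describe the association for the instance. This is a sketch that reuses the placeholder instance ID from the command above.

aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-1234567890EXAMPLE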

Summary

You can use tags to manage and secure access to IAM resources such as IAM roles, IAM users, SAML providers, server certificates, and virtual MFAs. In this post, I highlighted two examples of how AWS administrators can use tags to grant access at scale to IAM resources such as customer managed policies and instance profiles. For more information about the IAM resources that support tagging, see the AWS Identity and Access Management (IAM) User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.

 

Contributor

Special thanks to Derrick Oigiagbe who made significant contributions to this post.

Use new account assignment APIs for AWS SSO to automate multi-account access

Post Syndicated from Akhil Aendapally original https://aws.amazon.com/blogs/security/use-new-account-assignment-apis-for-aws-sso-to-automate-multi-account-access/

In this blog post, we’ll show how you can programmatically assign and audit access to multiple AWS accounts for your AWS Single Sign-On (SSO) users and groups, using the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

With AWS SSO, you can centrally manage access and user permissions to all of your accounts in AWS Organizations. You can assign user permissions based on common job functions, customize them to meet your specific security requirements, and assign the permissions to users or groups in the specific accounts where they need access. You can create, read, update, and delete permission sets in one place to have consistent role policies across your entire organization. You can then provide access by assigning permission sets to multiple users and groups in multiple accounts all in a single operation.

AWS SSO recently added new account assignment APIs and AWS CloudFormation support to automate access assignment across AWS Organizations accounts. This release addressed feedback from our customers with multi-account environments who wanted to adopt AWS SSO, but faced challenges related to managing AWS account permissions. To automate the previously manual process and save your administration time, you can now use the new AWS SSO account assignment APIs, or AWS CloudFormation templates, to programmatically manage AWS account permission sets in multi-account environments.

With AWS SSO account assignment APIs, you can now build your automation that will assign access for your users and groups to AWS accounts. You can also gain insights into who has access to which permission sets in which accounts across your entire AWS Organizations structure. With the account assignment APIs, your automation system can programmatically retrieve permission sets for audit and governance purposes, as shown in Figure 1.

Figure 1: Automating multi-account access with the AWS SSO API and AWS CloudFormation

Overview

In this walkthrough, we’ll illustrate how to create permission sets, assign permission sets to users and groups in AWS SSO, and grant access for users and groups to multiple AWS accounts by using the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

To grant user permissions to AWS resources with AWS SSO, you use permission sets. A permission set is a collection of AWS Identity and Access Management (IAM) policies. Permission sets can contain up to 10 AWS managed policies and a single custom policy stored in AWS SSO.

A policy is an object that defines a user’s permissions. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. AWS evaluates these policies when an IAM principal (a user or role) makes a request.

When you provision a permission set in the AWS account, AWS SSO creates a corresponding IAM role on that account, with a trust policy that allows users to assume the role through AWS SSO. With AWS SSO, you can assign more than one permission set to a user in the specific AWS account. Users who have multiple permission sets must choose one when they sign in through the user portal or the AWS CLI. Users will see these as IAM roles.

To learn more about IAM policies, see Policies and permissions in IAM. To learn more about permission sets, see Permission Sets.

Assume you have a company, Example.com, which has three AWS accounts: an organization management account (ExampleOrgMaster), a development account (ExampleOrgDev), and a test account (ExampleOrgTest). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has the IT security lead, Frank Infosec, who needs PowerUserAccess to the test account (ExampleOrgTest) and SecurityAudit access to the development account (ExampleOrgDev). Alice Developer, the developer, needs full access to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) through the development account (ExampleOrgDev). We’ll show you how to assign and audit the access for Alice and Frank centrally with AWS SSO, using the AWS CLI.

The flow includes the following steps:

  1. Create three permission sets:
    • PowerUserAccess, with the PowerUserAccess policy attached.
    • AuditAccess, with the SecurityAudit policy attached.
    • EC2-S3-FullAccess, with the AmazonEC2FullAccess and AmazonS3FullAccess policies attached.
  2. Assign permission sets to the AWS account and AWS SSO users:
    • Assign the PowerUserAccess and AuditAccess permission sets to Frank Infosec, to provide the required access to the ExampleOrgDev and ExampleOrgTest accounts.
    • Assign the EC2-S3-FullAccess permission set to Alice Developer, to provide the required permissions to the ExampleOrgDev account.
  3. Retrieve the assigned permissions by using Account Entitlement APIs for audit and governance purposes.

    Note: AWS SSO permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. In this blog we attach AWS managed policies to the permission sets for simplicity. To help secure your AWS resources, follow the standard security advice of granting least privilege: use a custom policy in the permission set so that it grants only the access that is required.

Figure 2: AWS Organizations accounts access for Alice and Frank

To help simplify administration of access permissions, we recommend that you assign access directly to groups rather than to individual users. With groups, you can grant or deny permissions to groups of users, rather than having to apply those permissions to each individual. For simplicity, in this blog you’ll assign permissions directly to the users.

Prerequisites

Before you start this walkthrough, make sure that you have enabled AWS SSO for your organization in AWS Organizations, added the AWS accounts to which you want to grant access, created the AWS SSO users and groups that you plan to assign, and signed in with credentials that have the required permissions to administer AWS SSO (for example, the management account).

Use the AWS SSO API from the AWS CLI

In order to call the AWS SSO account assignment API by using the AWS CLI, you need to install and configure AWS CLI v2. For more information about AWS CLI installation and configuration, see Installing the AWS CLI and Configuring the AWS CLI.

Step 1: Create permission sets

In this step, you learn how to create the EC2-S3-FullAccess, AuditAccess, and PowerUserAccess permission sets in AWS SSO from the AWS CLI.

Before you create the permission sets, run the following command to get the Amazon Resource Name (ARN) of the AWS SSO instance and the Identity Store ID, which you will need later in the process when you create and assign permission sets to AWS accounts and users or groups.

aws sso-admin list-instances

Figure 3 shows the results of running the command.

Figure 3: AWS SSO list instances
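If you’re working from a terminal rather than the console screenshot, the response has the following general shape; the values shown here are illustrative placeholders, not real identifiers.

{
    "Instances": [
        {
            "InstanceArn": "arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx",
            "IdentityStoreId": "d-xxxxxxxxxx"
        }
    ]
}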

Next, create the permission set for the security team (Frank) and dev team (Alice), as follows.

Permission set for Alice Developer (EC2-S3-FullAccess)

Run the following command to create the EC2-S3-FullAccess permission set for Alice, as shown in Figure 4.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'EC2-S3-FullAccess' --description 'EC2 and S3 access for developers'
Figure 4: Creating the permission set EC2-S3-FullAccess

Permission set for Frank Infosec (AuditAccess)

Run the following command to create the AuditAccess permission set for Frank, as shown in Figure 5.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'AuditAccess' --description 'Audit Access for security team on ExampleOrgDev account'
Figure 5: Creating the permission set AuditAccess

Permission set for Frank Infosec (PowerUserAccess)

Run the following command to create the PowerUserAccess permission set for Frank, as shown in Figure 6.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'PowerUserAccess' --description 'Power User Access for security team on ExampleOrgDev account'
Figure 6: Creating the permission set PowerUserAccess

Copy the permission set ARN from these responses, which you will need when you attach the managed policies.
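Rather than copying the ARN by hand, you could capture it at creation time with a --query expression. The following sketch repeats the AuditAccess example and assumes that INSTANCE_ARN holds the instance ARN you retrieved earlier (the value shown is a placeholder).

INSTANCE_ARN='arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx'
AUDIT_PS_ARN=$(aws sso-admin create-permission-set --instance-arn "$INSTANCE_ARN" \
  --name 'AuditAccess' --description 'Audit Access for security team on ExampleOrgDev account' \
  --query 'PermissionSet.PermissionSetArn' --output text)
echo "$AUDIT_PS_ARN"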

Step 2: Assign policies to permission sets

In this step, you learn how to assign managed policies to the permission sets that you created in step 1.

Attach policies to the EC2-S3-FullAccess permission set

Run the following command to attach the AmazonEC2FullAccess AWS managed policy to the EC2-S3-FullAccess permission set, as shown in Figure 7.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
Figure 7: Attaching the AWS managed policy AmazonEC2FullAccess to the EC2-S3-FullAccess permission set

Run the following command to attach the AmazonS3FullAccess AWS managed policy to the EC2-S3-FullAccess permission set, as shown in Figure 8.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
Figure 8: Attaching the AWS managed policy AmazonS3FullAccess to the EC2-S3-FullAccess permission set

Attach a policy to the AuditAccess permission set

Run the following command to attach the SecurityAudit managed policy to the AuditAccess permission set that you created earlier, as shown in Figure 9.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/SecurityAudit'
Figure 9: Attaching the AWS managed policy SecurityAudit to the AuditAccess permission set

Attach a policy to the PowerUserAccess permission set

The following command is similar to the previous command; it attaches the PowerUserAccess managed policy to the PowerUserAccess permission set, as shown in Figure 10.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/PowerUserAccess'
Figure 10: Attaching AWS managed policy PowerUserAccess to the PowerUserAccess permission set

In the next step, you assign users (Frank Infosec and Alice Developer) to their respective permission sets and assign permission sets to accounts.

Step 3: Assign permission sets to users and groups and grant access to AWS accounts

In this step, you assign the AWS SSO permission sets you created to users and groups and AWS accounts, to grant the required access for these users and groups on respective AWS accounts.

To assign access to an AWS account for a user or group, using a permission set you already created, you need the following:

  • The principal ID (the ID for the user or group)
  • The AWS account ID to which you need to assign this permission set

To obtain a user’s or group’s principal ID (UserID or GroupID), you need to use the AWS SSO Identity Store API. The AWS SSO Identity Store service enables you to retrieve all of your identities (users and groups) from AWS SSO. See AWS SSO Identity Store API for more details.

Use the first two commands shown here to get the principal IDs for the two users, Alice (Alice’s user name is alice@example.com) and Frank (Frank’s user name is frank@example.com).

Alice’s user ID

Run the following command to get Alice’s user ID, as shown in Figure 11.

aws identitystore list-users --identity-store-id '<Identity Store ID>' --filter AttributePath='UserName',AttributeValue='alice@example.com'
Figure 11: Retrieving Alice’s user ID

Frank’s user ID

Run the following command to get Frank’s user ID, as shown in Figure 12.

aws identitystore list-users --identity-store-id '<Identity Store ID>' --filter AttributePath='UserName',AttributeValue='frank@example.com'
Figure 12: Retrieving Frank’s user ID

Note: To get the principal ID for a group, use the following command.

aws identitystore list-groups --identity-store-id '<Identity Store ID>' --filter AttributePath='DisplayName',AttributeValue='<Group Name>'
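If you’re scripting these lookups, you can capture the ID directly instead of reading it from the output. The following sketch assumes the standard list-users response shape and mirrors the filter syntax used above, with Alice’s user name as an example value.

ALICE_ID=$(aws identitystore list-users --identity-store-id '<Identity Store ID>' \
  --filter AttributePath='UserName',AttributeValue='alice@example.com' \
  --query 'Users[0].UserId' --output text)
echo "$ALICE_ID"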

Assign the EC2-S3-FullAccess permission set to Alice in the ExampleOrgDev account

Run the following command to assign Alice access to the ExampleOrgDev account using the EC2-S3-FullAccess permission set. This will give Alice full access to Amazon EC2 and S3 services in the ExampleOrgDev account.

Note: When you call the CreateAccountAssignment API, AWS SSO automatically provisions the specified permission set on the account in the form of an IAM policy attached to the IAM role that AWS SSO creates. This role is immutable: it’s fully managed by AWS SSO and can’t be deleted or changed by the user, even if the user has full administrative rights on the account. If the permission set is subsequently updated, the corresponding IAM policies attached to the roles in your accounts won’t be updated automatically. In this case, you will need to call ProvisionPermissionSet to propagate these updates.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 13: Assigning the EC2-S3-FullAccess permission set to Alice on the ExampleOrgDev account

Assign the AuditAccess permission set to Frank Infosec in the ExampleOrgDev account

Run the following command to assign Frank access to the ExampleOrgDev account by using the AuditAccess permission set.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 14: Assigning the AuditAccess permission set to Frank on the ExampleOrgDev account

Assign the PowerUserAccess permission set to Frank Infosec in the ExampleOrgTest account

Run the following command to assign Frank access to the ExampleOrgTest account using the PowerUserAccess permission set.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 15: Assigning the PowerUserAccess permission set to Frank on the ExampleOrgTest account
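As the earlier note explains, changes you make to a permission set after assignment aren’t pushed to the account automatically. A re-provisioning call from the CLI would look roughly like the following, using the same placeholder convention as the other commands in this post.

aws sso-admin provision-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT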

To view the permission sets provisioned on the AWS account, run the following command, as shown in Figure 16.

aws sso-admin list-permission-sets-provisioned-to-account --instance-arn '<Instance ARN>' --account-id '<AWS Account ID>'
Figure 16: View the permission sets (AuditAccess and EC2-S3-FullAccess) assigned to the ExampleOrgDev account

To review the created resources in the AWS Management Console, navigate to the AWS SSO console. In the list of permission sets on the AWS accounts tab, choose the EC2-S3-FullAccess permission set. Under AWS managed policies, the policies attached to the permission set are listed, as shown in Figure 17.

Figure 17: Review the permission set in the AWS SSO console

To see the AWS accounts where the EC2-S3-FullAccess permission set is currently provisioned, navigate to the AWS accounts tab, as shown in Figure 18.

Figure 18: Review permission set account assignment in the AWS SSO console

Step 4: Audit access

In this step, you learn how to audit access assigned to your users and group by using the AWS SSO account assignment API. In this example, you’ll start from a permission set, review the permissions (AWS-managed policies or a custom policy) attached to the permission set, get the users and groups associated with the permission set, and see which AWS accounts the permission set is provisioned to.

List the IAM managed policies for the permission set

Run the following command to list the IAM managed policies that are attached to a specified permission set, as shown in Figure 19.

aws sso-admin list-managed-policies-in-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>'
Figure 19: View the managed policies attached to the permission set

List the assignee of the AWS account with the permission set

Run the following command to list the assignee (the user or group with the respective principal ID) of the specified AWS account with the specified permission set, as shown in Figure 20.

aws sso-admin list-account-assignments --instance-arn '<Instance ARN>' --account-id '<Account ID>' --permission-set-arn '<Permission Set ARN>'
Figure 20: View the permission set and the user or group attached to the AWS account

List the accounts to which the permission set is provisioned

Run the following command to list the accounts that are associated with a specific permission set, as shown in Figure 21.

aws sso-admin list-accounts-for-provisioned-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>'
Figure 21: View AWS accounts to which the permission set is provisioned
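To pull the audit steps together, a small shell loop can walk every permission set in the instance and print the accounts each one is provisioned to. This is a sketch that assumes default CLI output settings; the instance ARN is a placeholder.

INSTANCE_ARN='arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx'
for PS_ARN in $(aws sso-admin list-permission-sets --instance-arn "$INSTANCE_ARN" --query 'PermissionSets[]' --output text); do
  echo "Permission set: $PS_ARN"
  aws sso-admin list-accounts-for-provisioned-permission-set --instance-arn "$INSTANCE_ARN" \
    --permission-set-arn "$PS_ARN" --query 'AccountIds[]' --output text
done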

In this section of the post, we’ve illustrated how to create a permission set, assign a managed policy to the permission set, and grant access for AWS SSO users or groups to AWS accounts by using this permission set. In the next section, we’ll show you how to do the same using AWS CloudFormation.

Use the AWS SSO API through AWS CloudFormation

In this section, you learn how to use CloudFormation templates to automate the creation of permission sets, attach managed policies, and use permission sets to assign access for a particular user or group to AWS accounts.

Sign in to your AWS Management Console and create a CloudFormation stack by using the following CloudFormation template. For more information on how to create a CloudFormation stack, see Creating a stack on the AWS CloudFormation console.

//start of Template//
{
    "AWSTemplateFormatVersion": "2010-09-09",
  
    "Description": "AWS CloudFormation template to automate multi-account access with AWS Single Sign-On (Entitlement APIs): Create permission sets, assign access for AWS SSO users and groups to AWS accounts using permission sets. Before you use this template, we assume you have enabled AWS SSO for your AWS Organization, added the AWS accounts to which you want to grant AWS SSO access to your organization, signed in to the AWS Management Console with your AWS Organizations management account credentials, and have the required permissions to use the AWS SSO console.",
  
    "Parameters": {
      "InstanceARN" : {
        "Type" : "String",
        "AllowedPattern": "arn:aws:sso:::instance/(sso)?ins-[a-zA-Z0-9-.]{16}",
        "Description" : "Enter AWS SSO InstanceARN. Ex: arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx",
        "ConstraintDescription": "must be the name of an existing AWS SSO InstanceARN associated with the management account."
      },
      "ExampleOrgDevAccountId" : {
        "Type" : "String",
        "AllowedPattern": "\\d{12}",
        "Description" : "Enter 12-digit Developer AWS Account ID. Ex: 123456789012"
        },
      "ExampleOrgTestAccountId" : {
        "Type" : "String",
        "AllowedPattern": "\\d{12}",
        "Description" : "Enter 12-digit AWS Account ID. Ex: 123456789012"
        },
      "AliceDeveloperUserId" : {
        "Type" : "String",
        "AllowedPattern": "^([0-9a-f]{10}-|)[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$",
        "Description" : "Enter Developer UserId. Ex: 926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
        },
        "FrankInfosecUserId" : {
            "Type" : "String",
            "AllowedPattern": "^([0-9a-f]{10}-|)[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$",
            "Description" : "Enter Test UserId. Ex: 926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
            }
    },
    "Resources": {
        "EC2S3Access": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "EC2 and S3 access for developers",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : ["arn:aws:iam::aws:policy/amazonec2fullaccess","arn:aws:iam::aws:policy/amazons3fullaccess"],
                "Name" : "EC2-S3-FullAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "EC2S3Access"
                 } ]
              }
        },  
        "SecurityAuditAccess": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "Audit Access for Infosec team",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : [ "arn:aws:iam::aws:policy/SecurityAudit" ],
                "Name" : "AuditAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "SecurityAuditAccess"
                 } ]
              }
        },    
        "PowerUserAccess": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "Power User Access for Infosec team",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : [ "arn:aws:iam::aws:policy/PowerUserAccess"],
                "Name" : "PowerUserAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "PowerUserAccess"
                 } ]
              }      
        },
        "EC2S3userAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "EC2S3Access",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "AliceDeveloperUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgDevAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          },
          "SecurityAudituserAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "SecurityAuditAccess",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "FrankInfosecUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgDevAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          },
          "PowerUserAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "PowerUserAccess",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "FrankInfosecUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgTestAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          }
    }
}
//End of Template//

When you create the stack, provide the following information for setting the example permission sets for Frank Infosec and Alice Developer, as shown in Figure 22:

  • The Alice Developer and Frank Infosec user IDs
  • The ExampleOrgDev and ExampleOrgTest account IDs
  • The AWS SSO instance ARN

Then launch the CloudFormation stack.
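If you’d rather launch the stack from the CLI than the console, a call along the following lines should work; the stack name, file name, and parameter values are all examples that you would replace with your own.

aws cloudformation create-stack --stack-name sso-account-assignments \
  --template-body file://sso-assignments.json \
  --parameters ParameterKey=InstanceARN,ParameterValue='arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx' \
               ParameterKey=ExampleOrgDevAccountId,ParameterValue=111122223333 \
               ParameterKey=ExampleOrgTestAccountId,ParameterValue=444455556666 \
               ParameterKey=AliceDeveloperUserId,ParameterValue='<Alice user ID>' \
               ParameterKey=FrankInfosecUserId,ParameterValue='<Frank user ID>'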

Figure 22: User inputs to launch the CloudFormation template

AWS CloudFormation creates the resources that are shown in Figure 23.

Figure 23: Resources created from the CloudFormation stack

Cleanup

To delete the resources you created by using the AWS CLI, use these commands.

Run the following command to delete the account assignment.

aws sso-admin delete-account-assignment --instance-arn '<Instance ARN>' --target-id '<AWS Account ID>' --target-type 'AWS_ACCOUNT' --permission-set-arn '<PermissionSet ARN>' --principal-type '<USER/GROUP>' --principal-id '<user/group ID>'

After the account assignment is deleted, run the following command to delete the permission set.

aws sso-admin delete-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<PermissionSet ARN>'

To delete the resource that you created by using the CloudFormation template, go to the AWS CloudFormation console. Select the appropriate stack you created, and then choose delete. Deleting the CloudFormation stack cleans up the resources that were created.

Summary

In this blog post, we showed how to use the AWS SSO account assignment API to automate the deployment of permission sets, how to add managed policies to permission sets, and how to assign access for AWS users and groups to AWS accounts by using specified permission sets.

To learn more about the AWS SSO APIs available for you, see the AWS Single Sign-On API Reference Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Akhil Aendapally

Akhil is a Solutions Architect at AWS focused on helping customers with their AWS adoption. He holds a master’s degree in Network and Computer Security. Akhil has 8+ years of experience working with different cloud platforms, infrastructure automation, and security.

Author

Yuri Duchovny

Yuri is a New York-based Solutions Architect specializing in cloud security, identity, and compliance. He supports cloud transformations at large enterprises, helping them make optimal technology and organizational decisions. Prior to his AWS role, Yuri’s areas of focus included application and networking security, DoS, and fraud protection. Outside of work, he enjoys skiing, sailing, and traveling the world.

Author

Ballu Singh

Ballu is a principal solutions architect at AWS. He lives in the San Francisco Bay area and helps customers architect and optimize applications on AWS. In his spare time, he enjoys reading and spending time with his family.

Author

Nir Ozeri

Nir is a Solutions Architect Manager with Amazon Web Services, based out of New York City. Nir specializes in application modernization, application delivery, and mobile architecture.

Techniques for writing least privilege IAM policies

Post Syndicated from Ben Potter original https://aws.amazon.com/blogs/security/techniques-for-writing-least-privilege-iam-policies/

In this post, I’m going to share two techniques I’ve used to write least privilege AWS Identity and Access Management (IAM) policies. If you’re not familiar with IAM policy structure, I highly recommend you read understanding how IAM works and policies and permissions.

Least privilege is a principle of granting only the permissions required to complete a task. Least privilege is also one of many Amazon Web Services (AWS) Well-Architected best practices that can help you build securely in the cloud. For example, if you have an Amazon Elastic Compute Cloud (Amazon EC2) instance that needs to access an Amazon Simple Storage Service (Amazon S3) bucket to get configuration data, you should only allow read access to the specific S3 bucket that contains the relevant data.

There are a number of ways to grant access to different types of resources, as some resources support both resource-based policies and IAM policies. This blog post will focus on demonstrating how you can use IAM policies to grant restrictive permissions to IAM principals to meet least privilege standards.

In AWS, an IAM principal can be a user, role, or group. These identities start with no permissions and you add permissions using a policy. In AWS, there are different types of policies that are used for different reasons. In this blog, I only give examples for identity-based policies that attach to IAM principals to grant permissions to an identity. You can create and attach multiple identity-based policies to your IAM principals, and you can reuse them across your AWS accounts. There are two types of managed policies. Customer managed policies are created and managed by you, the customer. AWS managed policies are provided as examples, cannot be modified, but can be copied, enhanced, and saved as Customer managed policies. The main elements of a policy statement are:

  • Effect: Specifies whether the statement will Allow or Deny an action.
  • Action: Describes the specific action or actions that will be allowed or denied, based on the Effect entered. API actions are unique to each service. For example, s3:ListAllMyBuckets is an Amazon S3 API action that enables an IAM principal to list all of the S3 buckets in the same account.
  • NotAction: Can be used as an alternative to Action. This element matches every API action except those listed, so an Allow statement that uses NotAction allows an IAM principal to invoke all actions except the ones specified in the list.
  • Resource: Specifies the resources—for example, an S3 bucket or objects—that the policy applies to in Amazon Resource Name (ARN) format.
  • NotResource: Can be used instead of the Resource element to explicitly match every AWS resource except those specified.
  • Condition: Allows you to build expressions to match the condition keys and values in the policy against keys and values in the request context sent by the IAM principal. Condition keys can be service-specific or global. A global condition key can be used with any service. For example, a key of aws:CurrentTime can be used to allow access based on date and time.
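To make the Condition element concrete, here’s a minimal sketch that uses the aws:CurrentTime global condition key mentioned in the last bullet to allow an action only before a cutoff date. The bucket name, policy name, file name, and date are illustrative, not part of the walkthrough that follows.

cat > time-bound-s3-read.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetObjectBeforeCutoff",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::doc-example-bucket/*",
      "Condition": {
        "DateLessThan": { "aws:CurrentTime": "2021-12-31T23:59:59Z" }
      }
    }
  ]
}
EOF

aws iam create-policy --policy-name TimeBoundS3Read --policy-document file://time-bound-s3-read.json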

Starting with the visual editor

The visual editor is my default starting place for building policies as I like the wizard and seeing all available services, actions, and conditions without looking at the documentation. If there is a complex policy with many services, I often look at the AWS managed policies as a starting place for the actions that are required, then use the visual editor to fine tune and check the resources and conditions.

The policy I’m going to walk you through creating is to grant an AWS Lambda function permission to get specific objects from Amazon S3, and put items in a specific table in Amazon DynamoDB. You can access the visual editor when you choose Create policy under policies in the IAM console, or add policies when viewing a role, group, or user as shown in Figure 1. If you’re not familiar with creating policies, you can follow the full instructions in the IAM documentation.

Figure 1: Use the visual editor to create a policy

Begin by choosing the first service—S3—to grant access to as shown in Figure 2. You can only choose one service at a time, so you’ll need to add DynamoDB after.

Figure 2: Select S3 service

Now you will see a list of access levels with the option to manually add actions. Expand the read access level to show all read actions that are supported by the Amazon S3 service. You can now see all read access level actions. For getting an object, check the box for GetObject. Selecting the ? next to an action expands information including a description, supported resource types, and supported condition keys as shown in Figure 3.

Figure 3: Expand Read in Access level, select GetObject, and select the ? next to GetObject

Expand Resources, and you will see that the visual editor has listed object, because that is the only resource type supported by the GetObject action, as shown in Figure 4.

Figure 4: Expand Resources

Select Add ARN, which opens a dialogue to help you specify the ARN for the objects. Enter a bucket name—such as doc-example-bucket—and then the object name. For the object name you can use a wildcard (*) as a suffix. For example, to allow objects beginning with alpha you would enter alpha*. This is an important step. For this least privileged policy, you are restricting to a specific bucket, and an object prefix. You could even specify an individual object depending on your use case.

Figure 5: Enter bucket name and object name

If you have multiple ARNs (bucket and objects) to allow, you can repeat the step.

Figure 6: ARN added for S3 object

The final step is to expand the request conditions and choose Add condition. The Add request condition dialogue opens. Select the drop-down next to Condition key to list the global condition keys, followed by the service-level condition keys. You’ll see that there’s an s3:ExistingObjectTag condition that, as the name suggests, matches an existing object tag. You can use this condition key to allow the GetObject request only when the object tag meets your condition. That means you can tag your objects with a specific tag key and value pair, and your policy condition must match this key-value pair to allow the action to run. When you’re using condition keys with multiple keys or values, you can use condition operators and evaluation logic. As shown in Figure 7, tag-key is entered directly below the condition key; this is the key of the tag to match. For the Operator, select StringEquals to match the tag exactly. Checking If exists means the condition applies only when the key is present in the request context. The Value to enter is the actual tag value: tag-value, as shown in Figure 7.

Figure 7: Request condition added for the S3 object tag

That’s it for adding the S3 action, as shown in figure 8.

Figure 8: S3 GetObject action with resource and conditions configured

Now you need to add the DynamoDB permissions by selecting Add additional permissions. Select Choose a service and then select DynamoDB. For actions, expand the Write access level, then choose PutItem.

Figure 9: Choose write access level

Expand Resources and then select Add ARN. The dialogue that appears will help you build the ARN just like it did for the Amazon S3 service. Enter the Region, for example the ap-southeast-2 (Sydney) Region, the account ID, and the table name. Choosing Add will add the resource ARN to your policy.

Figure 10: Enter Region, account, and table name

Now it’s time to add conditions. Expand Request conditions and then choose Add condition.

There are many DynamoDB condition keys that you could use; however, you can choose dynamodb:LeadingKeys to represent the first key, or partition key, in a table. You can see from the documentation that the For all values in request qualifier is recommended. For the Operator you can use StringEquals because your string is going to match exactly; the Value can then use a prefix with a wildcard, such as alpha*, as shown in Figure 11.

Figure 11: Add request conditions

Choosing Add will take you back to the main visual editor where you can choose Review policy to continue. Enter a name and description for the policy, and then choose Create policy.

You can now attach it to a role to test.
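Attaching the finished policy from the CLI is a one-liner; the role name, policy name, and account ID below are placeholders for whatever you named them in your account.

aws iam attach-role-policy --role-name lambda-s3-dynamodb-role \
  --policy-arn arn:aws:iam::111122223333:policy/lambda-s3-dynamodb-least-privilege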

You can see in this example that a policy can use least privilege by using specific resources and conditions. Note that sometimes when you use the AWS Management Console, it requires additional permissions to provide information for the console experience.

Starting with AWS managed policies

AWS managed policies can be a good starting place to see the actions typically associated with a particular service or job function. For example, you can attach the AmazonS3ReadOnlyAccess policy to a role used by an Amazon EC2 instance that allows read-only access to all Amazon S3 buckets. It has an effect of Allow to allow access, and there are two actions that use wildcards (*) to allow all Get and List actions for S3—for example, s3:GetObject and s3:ListBucket. The resource is a wildcard to allow all S3 buckets the account has access to. A useful feature of this policy is that it only allows read and list access to S3, but not to any other services or types of actions.

Let’s make our own custom IAM policy to make it least privilege. Starting with the action element, you can use the reference for Amazon S3 to see all actions, a description of what each action does, the resource type for each action, and condition keys for each action. Now let’s imagine this policy is used by an Amazon EC2 instance to fetch an application configuration object from within an S3 bucket. Looking at the descriptions for actions starting with Get you can see that the only action that we really need is GetObject. You can then use the resource element to restrict an action to a set of objects prefixed with config within a specific bucket.

         "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3::: <doc-example-bucket>/<config*>"

Now that you’ve reduced the scope of what this policy can do for service actions and resources, you can add a condition element that uses attribute based access control (ABAC) to define conditions based on attributes—in this case, a resource tag. In this example, when you’re reading objects from a single bucket, you can set specific conditions to further reduce the scope of permissions given to an IAM principal. There’s an s3:ExistingObjectTag condition that you can use to allow the GetObject request only when the object tag meets your condition. That means you can tag your objects with a specific tag key and value pair, and your IAM policy condition must match this key-value pair to allow the API action to successfully run. When you’re using condition keys with multiple keys or values, you can use condition operators and evaluation logic. You can see that ForAnyValue tests at least one member of the set of request values, and at least one member of the set of condition key values. Alternatively, you can use global condition keys that apply to all services:

         "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::<doc-example-bucket>/<config*>",
         "Condition": {
                "ForAnyValue:StringEquals": {
                    "s3:ExistingObjectTag/<tag-key>": "<tag-value>"
            }

In the preceding policy example, the condition element only allows s3:GetObject permissions if the object is tagged with a key of tag-key and a value of tag-value. While you’re experimenting, you can identify errors in your custom policies by using the IAM policy simulator or by reviewing the error messages recorded in AWS CloudTrail logs.
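The policy simulator is also available from the CLI, which is handy for checking a policy file before you attach it. The file name, action, and object key below are examples only.

aws iam simulate-custom-policy --policy-input-list file://my-least-privilege-policy.json \
  --action-names s3:GetObject \
  --resource-arns arn:aws:s3:::doc-example-bucket/config-app.yaml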

Conclusion

In this post, I’ve shown two different techniques that you can use to create least privilege policies for IAM. You can adapt these methods to create AWS Single Sign-On permission sets and AWS Organizations service control policies (SCPs). Starting with an AWS managed policy is a useful strategy when one already exists for your use case; you can then reduce the scope of what it can do through more restrictive permissions. I tend to use the visual editor the most for editing policies because it saves looking up the resources and conditions for each action. I suggest that you start by reviewing the policies you’re already using. Begin with policies that grant excessive permissions, like the example Administrator policy, and tie them back to the use case of the users or things that need the access. Use last accessed information, follow IAM best practices, and review the AWS Well-Architected best practices and the AWS Well-Architected Tool.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Potter

Ben is the global security leader for the AWS Well-Architected Framework and is responsible for sharing best practices in security with customers and partners. Ben is also an ambassador for the No More Ransom initiative helping fight cyber crime with Europol, McAfee, and law enforcement across the globe. You can learn more about him in this interview.

Automatically update security groups for Amazon CloudFront IP ranges using AWS Lambda

Post Syndicated from Yeshwanth Kottu original https://aws.amazon.com/blogs/security/automatically-update-security-groups-for-amazon-cloudfront-ip-ranges-using-aws-lambda/

Amazon CloudFront is a content delivery network that can help you increase the performance of your web applications and significantly lower the latency of delivering content to your customers. For CloudFront to access an origin (the source of the content behind CloudFront), the origin has to be publicly available and reachable. Anyone with the origin domain name or IP address could request content directly and bypass CloudFront. In this blog post, I describe an automated solution that uses security groups to permit only CloudFront to access the origin.

Amazon Simple Storage Service (Amazon S3) origins provide a feature called Origin Access Identity, which blocks public access to selected buckets, making them accessible only through CloudFront. When you use CloudFront to secure your web applications, it’s important to ensure that only CloudFront can access your origin (such as Amazon Elastic Compute Cloud (Amazon EC2) or Application Load Balancer (ALB)) and that any direct access to the origin is restricted. This blog post shows you how to create an AWS Lambda function to automatically update Amazon Virtual Private Cloud (Amazon VPC) security groups with CloudFront service IP ranges to permit only CloudFront to access the origin.

AWS publishes the IP ranges in JSON format for CloudFront and other AWS services. If your origin is an Elastic Load Balancer or an Amazon EC2 instance, you can use VPC security groups to allow only CloudFront IP ranges to access your applications. The IP ranges in the list are separated by service and Region, and you must specify only the IP ranges that correspond to CloudFront.
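For a quick look at what the Lambda function will be working with, you can pull the published ranges and filter them to CloudFront yourself; this sketch assumes you have curl and jq installed locally.

curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.service=="CLOUDFRONT") | .ip_prefix'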

The IP ranges that AWS publishes change frequently, and without an automated solution you would need to retrieve this document often to learn the current IP ranges for CloudFront. Frequent polling is inefficient because there is no notice of when the IP ranges change, and if your security groups aren’t updated promptly, your clients might see 504 errors when they access CloudFront. Additionally, there are numerous IP ranges for each service, so making the change manually isn’t an efficient way to keep them current. This means you would need infrastructure to support the task; however, you would then end up with another host to manage, complete with the typical patching, deployment, and monitoring. As you can see, a small task could quickly become more complicated than the problem you intended to solve.

An Amazon Simple Notification Service (Amazon SNS) message is sent to a topic whenever the AWS IP ranges change, which enables you to build an event-driven, serverless solution that updates the IP ranges for your security groups as needed, by using a Lambda function that is triggered in response to the SNS notification.

Here are the steps we are going to take to implement the solution:

  1. Create your resources
    1. Create an IAM policy and execution role for the Lambda function
    2. Create your Lambda function
  2. Test your Lambda function
  3. Configure your Lambda function’s trigger

Create your resources

The first thing you need to do is create a Lambda function execution role and policy. A Lambda function uses its execution role to access or create AWS resources. This Lambda function is triggered by an SNS notification whenever there’s a change in the IP ranges document. Based on the number of IP ranges present for CloudFront and the number of ports (for example, 80 and 443) that you want to allow on the origin, the Lambda function creates the required security groups. These security groups allow only traffic from CloudFront to reach your ELB load balancers or EC2 instances.

Create an IAM policy and execution role for the Lambda function

When you create a Lambda function, it’s important to understand and properly define the security context for the Lambda function. Using AWS Identity and Access Management (IAM), you can create the Lambda execution role that determines the AWS service calls that the function is authorized to complete. (Learn more about the Lambda permissions model.)

To create the IAM policy for your role

  1. Log in to the IAM console with the user account that you will use to manage the Lambda function. This account must have administrator permissions.
  2. In the navigation pane, choose Policies.
  3. In the content pane, choose Create policy.
  4. Choose the JSON tab and copy the text from the following JSON policy document. Paste this text into the JSON text box.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "CloudWatchPermissions",
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource": "arn:aws:logs:*:*:*"
        },
        {
          "Sid": "EC2Permissions",
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeSecurityGroups",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress",
            "ec2:CreateSecurityGroup",
            "ec2:DescribeVpcs",
            "ec2:CreateTags",
            "ec2:ModifyNetworkInterfaceAttribute",
            "ec2:DescribeNetworkInterfaces"
          ],
          "Resource": "*"
        }
      ]
    }
    

  5. When you’re finished, choose Review policy.
  6. On the Review page, enter a name for the policy (for example, LambdaExecRolePolicy-UpdateSecurityGroupsForCloudFront). Review the policy Summary to see the permissions granted by your policy, and then choose Create policy to save your work.

To understand what this policy allows, let’s look closely at both statements in the policy. The first statement allows the Lambda function to create and write to CloudWatch Logs, which is vital for debugging and monitoring our function. The second statement allows the function to get information about existing security groups, get existing VPC information, create security groups, and authorize and revoke ingress permissions. It’s an important best practice that your IAM policies be as granular as possible, to support the principle of least privilege.

Now that you’ve created your policy, you can create the Lambda execution role that will use the policy.

To create the Lambda execution role

  1. In the navigation pane of the IAM console, choose Roles, and then choose Create role.
  2. For Select type of trusted entity, choose AWS service.
  3. Choose the service that you want to allow to assume this role. In this case, choose Lambda.
  4. Choose Next: Permissions.
  5. Search for the policy name that you created earlier and select the check box next to the policy.
  6. Choose Next: Tags.
  7. (Optional) Add metadata to the role by attaching tags as key-value pairs. For more information about using tags in IAM, see Tagging IAM Users and Roles.
  8. Choose Next: Review.
  9. For Role name (e.g. LambdaExecRole-UpdateSecurityGroupsForCloudFront), enter a name for your role.
  10. (Optional) For Role description, enter a description for the new role.
  11. Review the role, and then choose Create role.
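If you prefer to create the execution role from the AWS CLI instead of the console, the steps above correspond roughly to the following sketch. The role and policy names match the examples used in this post, the trust policy is the standard one for Lambda, and the account number is a placeholder.

cat > lambda-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name LambdaExecRole-UpdateSecurityGroupsForCloudFront \
  --assume-role-policy-document file://lambda-trust-policy.json

aws iam attach-role-policy --role-name LambdaExecRole-UpdateSecurityGroupsForCloudFront \
  --policy-arn arn:aws:iam::<AccountNumber>:policy/LambdaExecRolePolicy-UpdateSecurityGroupsForCloudFront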

Create your Lambda function

Now, create your Lambda function and configure the role that you created earlier as the execution role for this function.

To create the Lambda function

  1. Go to the Lambda console in the N. Virginia (us-east-1) Region and choose Create function. On the next page, choose Author from scratch. (I’ll be providing the code for your Lambda function, but for other functions, the Use a blueprint option can be a great way to get started.)
  2. Give your Lambda function a name (for example, UpdateSecurityGroupsForCloudFront) and description, and select Python 3.8 from the Runtime menu.
  3. Choose or create an execution role: Select the execution role you created earlier by selecting the option Use an Existing Role.
  4. After confirming that your settings are correct, choose Create function.
  5. Paste the Lambda function code from here.
  6. Select Save.

Additionally, in the Basic Settings of the Lambda function, increase the timeout to 10 seconds.

To set the timeout value in the Lambda console

  1. In the Lambda console, choose the function you just created.
  2. Under Basic settings, choose Edit.
  3. For Timeout, select 10s.
  4. Choose Save.

By default, the Lambda function has these settings:

  • The Lambda function is configured to create security groups in the default VPC.
  • CloudFront IP ranges are updated as inbound rules on port 80.
  • The created security groups are tagged with the name prefix AUTOUPDATE.
  • Debug logging is turned off.
  • The service for which IP ranges are extracted is set to CloudFront.
  • The SDK client in the Lambda function is set to us-east-1 (N. Virginia).

If you want to customize these settings, set the following environment variables for the Lambda function. For more details, see Using AWS Lambda environment variables.

  • To create security groups in a specific VPC
    Key: VPC_ID
    Value: vpc-id
  • To create security group rules for a different port or multiple ports (the solution in this example supports a total of two ports: one can be used for HTTP and another for HTTPS)
    Key: PORTS
    Value: portnumber or portnumber,portnumber
  • To customize the prefix name tag of your security groups
    Key: PREFIX_NAME
    Value: custom-name
  • To enable debug logging to CloudWatch
    Key: DEBUG
    Value: true
  • To extract IP ranges for a service other than CloudFront
    Key: SERVICE
    Value: servicename
  • To configure the Region for the SDK client used in the Lambda function (if the CloudFront origin is in a Region other than N. Virginia, the security groups must be created in that Region)
    Key: REGION
    Value: regionname

To set environment variables in the Lambda console

  1. In the Lambda console, choose the function you created.
  2. Under Environment variables, choose Edit.
  3. Choose Add environment variable.
  4. Enter a key and value.
  5. Choose Save.
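You can also apply these settings programmatically instead of through the console. The following is a minimal boto3 (Python) sketch that sets the timeout and a few environment variables in one call; the function name and values are examples from this post, and you only need to include the variables you want to override.

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Example values only; include just the settings you want to override.
lambda_client.update_function_configuration(
    FunctionName="UpdateSecurityGroupsForCloudFront",
    Timeout=10,  # seconds
    Environment={
        "Variables": {
            "PORTS": "80,443",
            "PREFIX_NAME": "AUTOUPDATE",
            "DEBUG": "false",
            "REGION": "us-east-1",
        }
    },
)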

Test your Lambda function

Now that you’ve created your function, it’s time to test it and initialize your security group.

To create your test event for the Lambda function

  1. In the Lambda console, on the Functions page, choose your function. In the drop-down menu next to Actions, choose Configure test events.
  2. Enter an Event Name (for example, TriggerSNS).
  3. Paste the following as your sample event, which represents an SNS notification, and then select Create.
    {
        "Records": [
            {
                "EventVersion": "1.0",
                "EventSubscriptionArn": "arn:aws:sns:EXAMPLE",
                "EventSource": "aws:sns",
                "Sns": {
                    "SignatureVersion": "1",
                    "Timestamp": "1970-01-01T00:00:00.000Z",
                    "Signature": "EXAMPLE",
                    "SigningCertUrl": "EXAMPLE",
                    "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
                    "Message": "{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", \"synctoken\": \"0123456789\", \"md5\": \"7fd59f5c7f5cf643036cbd4443ad3e4b\", \"url\": \"https://ip-ranges.amazonaws.com/ip-ranges.json\"}",
                    "Type": "Notification",
                    "UnsubscribeUrl": "EXAMPLE",
                    "TopicArn": "arn:aws:sns:EXAMPLE",
                    "Subject": "TestInvoke"
                }
      		}
        ]
    }
    

  4. After you’ve added the test event, select Save and then select Test. Your Lambda function is then invoked, and you should see log output at the bottom of the console in the Execution result section, similar to the following.
    Updating from https://ip-ranges.amazonaws.com/ip-ranges.json
    MD5 Mismatch: got 2e967e943cf98ae998efeec05d4f351c expected 7fd59f5c7f5cf643036cbd4443ad3e4b: Exception
    Traceback (most recent call last):
      File "/var/task/lambda_function.py", line 29, in lambda_handler
        ip_ranges = json.loads(get_ip_groups_json(message['url'], message['md5']))
      File "/var/task/lambda_function.py", line 50, in get_ip_groups_json
        raise Exception('MD5 Missmatch: got ' + hash + ' expected ' + expected_hash)
    Exception: MD5 Mismatch: got 2e967e943cf98ae998efeec05d4f351c expected 7fd59f5c7f5cf643036cbd4443ad3e4b
    

  5. Edit the sample event again, and this time change the md5 value in the sample event to the first MD5 hash provided in the log output. In this example, you would update the md5 value in the sample event with the hash value seen in the error, 2e967e943cf98ae998efeec05d4f351c. The Lambda code executes successfully only when the hash of the IP ranges document and the hash received from the event trigger match, so after you modify the hash value from the error message, the test event matches the hash of the IP ranges document. (A simplified sketch of this verification follows these steps.)
  6. Select Save and test. This invokes your Lambda function.
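The following is a simplified Python sketch of the verification described in step 5: the function downloads the IP ranges document and compares its MD5 hash with the hash carried in the SNS message. This is an illustration of the idea, not the exact published function code.

import hashlib
import json
from urllib.request import urlopen

def get_ip_groups_json(url, expected_hash):
    # Download ip-ranges.json and compare its MD5 hash with the value from the SNS message.
    body = urlopen(url).read()
    actual_hash = hashlib.md5(body).hexdigest()
    if actual_hash != expected_hash:
        raise Exception("MD5 Mismatch: got " + actual_hash + " expected " + expected_hash)
    return body.decode("utf-8")

def lambda_handler(event, context):
    # The SNS message body carries the document URL and its expected MD5 hash.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    ip_ranges = json.loads(get_ip_groups_json(message["url"], message["md5"]))
    # ...continue by extracting the CloudFront ranges and updating the security groups...
    return ip_ranges["syncToken"]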

After the function is invoked a second time with the updated MD5 hash, the Lambda function should execute without any errors. You should be able to see the new security groups created and the CloudFront IP ranges added to their rules in the EC2 console, as shown in Figure 1.
 

Figure 1: EC2 console showing the security groups created

Figure 1: EC2 console showing the security groups created

In its initial successful run, the function created the total number of security groups required to hold all of the CloudFront IP ranges for the ports specified. The function sizes this based on the maximum number of rules that can be added to an individual security group. You can identify the new security groups in the EC2 console by the name AUTOUPDATE_random if you used the default configuration, or by a custom name if you provided a PREFIX_NAME.

You can now attach these security groups to your Elastic Load Balancer or EC2 instances. If your log output is different from what is described here, the output should help you identify the issue.

Configure your Lambda function’s trigger

After you’ve validated that your function is executing properly, it’s time to connect it to the SNS topic for IP changes. To do this, use the AWS Command Line Interface (CLI). Enter the following command, making sure to replace Lambda ARN with the Amazon Resource Name (ARN) of your Lambda function. You can find this ARN at the top right when viewing the configuration of your Lambda function.

aws sns subscribe --topic-arn "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged" --region us-east-1 --protocol lambda --notification-endpoint "Lambda ARN"

You should receive the ARN of your Lambda function’s SNS subscription.

Now add a permission that allows the Lambda function to be invoked by the SNS topic. The following command also adds the Lambda trigger.

aws lambda add-permission --function-name "Lambda ARN" --statement-id lambda-sns-trigger --region us-east-1 --action lambda:InvokeFunction --principal sns.amazonaws.com --source-arn "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"

When AWS changes any of the IP ranges in the document, an SNS notification is sent and your Lambda function is triggered. This Lambda function verifies the modified ranges in the document and efficiently updates the IP ranges on the existing security groups. Additionally, the function dynamically scales and creates additional security groups if the number of IP ranges for CloudFront is increased in the future. Any newly created security groups are automatically attached to the network interface where the previous security groups are attached, in order to avoid service interruption.
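To illustrate the update step, here is a simplified boto3 sketch of reconciling a single security group's ingress rules with a set of CloudFront CIDR ranges. The published function does more than this (it also creates and tags groups, splits ranges across groups to stay within rule limits, and handles both configured ports), so treat this as an outline of the approach rather than the actual code.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def sync_ingress(group_id, desired_ranges, port=80):
    # Read the group's current IPv4 ingress rules for the given port.
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]
    current = {
        ip_range["CidrIp"]
        for permission in group["IpPermissions"]
        if permission.get("FromPort") == port
        for ip_range in permission["IpRanges"]
    }
    # Authorize ranges that are missing and revoke ranges that are no longer published.
    to_add = set(desired_ranges) - current
    to_remove = current - set(desired_ranges)
    if to_add:
        ec2.authorize_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
                            "IpRanges": [{"CidrIp": cidr} for cidr in to_add]}],
        )
    if to_remove:
        ec2.revoke_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
                            "IpRanges": [{"CidrIp": cidr} for cidr in to_remove]}],
        )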

Summary

By following this blog post, you created a Lambda function that creates security groups and updates their rules dynamically whenever AWS publishes new service IP ranges. This solution has several advantages:

  • The solution isn’t designed as a periodic poll, so it only runs when it needs to.
  • It’s automatic, so you don’t need to update security groups manually, which lowers operational cost.
  • It’s simple, because you have no extra infrastructure to maintain as the solution is completely serverless.
  • It’s cost effective: because the Lambda function runs only when triggered by the AmazonIpSpaceChanged SNS topic, and only runs for a few seconds, this solution costs only pennies to operate.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon CloudFront forum. If you have any other use cases for using Lambda functions to dynamically update security groups, or even other networking configurations such as VPC route tables or ACLs, we’d love to hear about them!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Yeshwanth Kottu

Yeshwanth is a Systems Development Engineer at AWS in Cupertino, CA. With a focus on CloudFront and Lambda@Edge, he enjoys helping customers tackle challenges through cloud-scale architectures. Yeshwanth has an MS in Computer Systems Networking and Telecommunications from Northeastern University. Outside of work, he enjoys travelling, visiting national parks, and playing cricket.

New! Streamline existing IAM Access Analyzer findings using archive rules

Post Syndicated from Andrea Nedic original https://aws.amazon.com/blogs/security/new-streamline-existing-iam-access-analyzer-findings-using-archive-rules/

AWS Identity and Access Management (IAM) Access Analyzer generates comprehensive findings to help you identify resources that grant public and cross-account access. Now, you can also apply archive rules to existing findings, so you can better manage findings and focus on the findings that need your attention most.

You can think of archive rules as similar to email rules. You define email rules to automatically organize emails. With IAM Access Analyzer, you can define archive rules to automatically mark findings as intended access. Now, those rules can apply to existing as well as new IAM Access Analyzer findings. This helps you focus on findings for potential unintended access to your resources. You can then easily track and resolve these findings by reducing access, helping you to work towards least privilege.

In this post, first I give a brief overview of IAM Access Analyzer. Then I show you an example of how to create an archive rule to automatically archive findings for intended access. Finally, I show you how to update an archive rule to mark existing active findings as intended.

IAM Access Analyzer overview

IAM Access Analyzer helps you determine which resources can be accessed publicly or from other accounts or organizations. IAM Access Analyzer determines this by mathematically analyzing access control policies attached to resources. This form of analysis—called automated reasoning—applies logic and mathematical inference to determine all possible access paths allowed by a resource policy. This is how IAM Access Analyzer uses provable security to deliver comprehensive findings for potential unintended bucket access. You can enable IAM Access Analyzer in the IAM console by creating an analyzer for an account or an organization. Once you’ve created your analyzer, you can review findings for resources that can be accessed publicly or from other AWS accounts or organizations.
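If you prefer to enable IAM Access Analyzer programmatically rather than in the console, a minimal boto3 sketch might look like the following; the analyzer name is an example.

import boto3

access_analyzer = boto3.client("accessanalyzer")

# Create an account-level analyzer; use type="ORGANIZATION" for an organization analyzer.
response = access_analyzer.create_analyzer(
    analyzerName="example-account-analyzer",
    type="ACCOUNT",
)
print(response["arn"])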

Create an archive rule to automatically archive findings for intended access

When you review findings and discover common patterns for intended access, you can create archive rules to automatically archive those findings. This helps you focus on findings for unintended access to your resources, just like email rules help streamline your inbox.

To create an archive rule

In the IAM console, choose Archive rules under Access Analyzer. Then, choose Create archive rule to display the Create archive rule page shown in Figure 1. There, you find the option to name the rule or use the name generated by default. In the Rule section, you define criteria to match properties of findings you want to archive. Just like email rules, you can add multiple criteria to the archive rule. You can define each criterion by selecting a finding property, an operator, and a value. To help ensure a rule doesn’t archive findings for public access, the criterion Public access is false is suggested by default.
 

Figure 1: IAM Access Analyzer create archive rule page where you add criteria to create a new archive rule

Figure 1: IAM Access Analyzer create archive rule page where you add criteria to create a new archive rule

For example, I have a security audit role external to my account that I expect to have access to resources in my account. To mark that access as intended, I create a rule to archive all findings for Amazon S3 buckets in my account that can be accessed by the security audit role outside of the account. To do this, I include two criteria: Resource type matches S3 bucket, and the AWS Account value matches the security audit role ARN. Once I add these criteria, the Results section displays the list of existing active findings the archive rule matches, as shown in Figure 2.
 

Figure 2: A rule to archive all findings for S3 buckets in an account that can be accessed by the audit role outside of the account, with matching findings displayed

Figure 2: A rule to archive all findings for S3 buckets in an account that can be accessed by the audit role outside of the account, with matching findings displayed

When you’re done adding criteria for your archive rule, select Create and archive active findings to archive new and existing findings based on the rule criteria. Alternatively, you can choose Create rule to create the rule for new findings only. In the preceding example, I chose Create and archive active findings to archive all findings—existing and new—that match the criteria.

Update an archive rule to mark existing findings as intended

You can also update an archive rule to archive existing findings retroactively and streamline your findings. To edit an archive rule, choose Archive rules under Access Analyzer, then select an existing rule and choose Edit. In the Edit archive rule page, update the archive rule criteria and review the list of existing active findings the archive rule applies to. When you save the archive rule, you can apply it retroactively to existing findings by choosing Save and archive active findings as shown in Figure 3. Otherwise, you can choose Save rule to update the rule and apply it to new findings only.

Note: You can also use the new IAM Access Analyzer API operation ApplyArchiveRule to retroactively apply an archive rule to existing findings that meet the archive rule criteria.
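If you want to script this instead of using the console, the same operations are available through the IAM Access Analyzer API. The following boto3 sketch creates an archive rule like the one described above and then applies it retroactively to existing findings. The analyzer name, analyzer ARN, and audit role ARN are placeholders, and the filter keys shown (resourceType and principal.AWS) are my reading of the console criteria, so check them against the Access Analyzer filter key documentation before relying on them.

import boto3

access_analyzer = boto3.client("accessanalyzer")

ANALYZER_NAME = "example-account-analyzer"
ANALYZER_ARN = "arn:aws:access-analyzer:us-east-1:111122223333:analyzer/example-account-analyzer"
AUDIT_ROLE_ARN = "arn:aws:iam::444455556666:role/SecurityAuditRole"

# Archive findings for S3 buckets that are accessible to the external audit role.
access_analyzer.create_archive_rule(
    analyzerName=ANALYZER_NAME,
    ruleName="ArchiveAuditRoleS3Findings",
    filter={
        "resourceType": {"eq": ["AWS::S3::Bucket"]},
        "principal.AWS": {"eq": [AUDIT_ROLE_ARN]},
    },
)

# Retroactively apply the rule to existing active findings that match its criteria.
access_analyzer.apply_archive_rule(
    analyzerArn=ANALYZER_ARN,
    ruleName="ArchiveAuditRoleS3Findings",
)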

 

Figure 3: IAM Access Analyzer edit archive rule page where you can apply the rule retroactively to existing findings by choosing Save and archive active findings

Figure 3: IAM Access Analyzer edit archive rule page where you can apply the rule retroactively to existing findings by choosing Save and archive active findings

Get started

To turn on IAM Access Analyzer at no additional cost, open the IAM console. IAM Access Analyzer is available at no additional cost in the IAM console and through APIs in all commercial AWS Regions, AWS China Regions, and AWS GovCloud (US). To learn more about IAM Access Analyzer and which resources it supports, visit the feature page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Andrea Nedic

Andrea is a Sr. Tech Product Manager for AWS Identity and Access Management. She enjoys hearing from customers about how they build on AWS. Outside of work, Andrea likes to ski, dance, and be outdoors. She holds a PhD from Princeton University.

New IAMCTL tool compares multiple IAM roles and policies

Post Syndicated from Sudhir Reddy Maddulapally original https://aws.amazon.com/blogs/security/new-iamctl-tool-compares-multiple-iam-roles-and-policies/

If you have multiple Amazon Web Services (AWS) accounts, and you have AWS Identity and Access Management (IAM) roles among those accounts that are supposed to be similar, those roles can deviate over time from your intended baseline because of manual, out-of-band changes; this deviation is called drift. As part of regular compliance checks, you should confirm that these roles have no deviations. In this post, we present a tool called IAMCTL that you can use to extract the IAM roles and policies from two accounts, compare them, and report the differences and statistics. We will explain how to use the tool and describe the key concepts.

Prerequisites

Before you install IAMCTL and start using it, here are a few prerequisites that need to be in place on the computer where you will run it:

To follow along in your environment, clone the files from the GitHub repository, and run the steps in order. You won’t incur any charges to run this tool.

Install IAMCTL

This section describes how to install and run the IAMCTL tool.

To install and run IAMCTL

  1. At the command line, enter the following command:
    pip3 install git+ssh://git@github.com/aws-samples/[email protected]
    

    You will see output similar to the following.

    Figure 1: IAMCTL tool installation output

    Figure 1: IAMCTL tool installation output

  2. To confirm that your installation was successful, enter the following command.
    iamctl -h
    

    You will see results similar to those in figure 2.

    Figure 2: IAMCTL help message

    Figure 2: IAMCTL help message

Now that you’ve successfully installed the IAMCTL tool, the next section will show you how to use the IAMCTL commands.

Example use scenario

Here is an example of how IAMCTL can be used to find differences in IAM roles between two AWS accounts.

A system administrator for a product team is trying to accelerate a product launch in the middle of testing cycles. Developers have found that the same version of their application behaves differently in the development environment as compared to the QA environment, and they suspect this behavior is due to differences in IAM roles and policies.

The application called “app1” primarily reads from an Amazon Simple Storage Service (Amazon S3) bucket, and runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance. In the development (DEV) account, the application uses an IAM role called “app1_dev” to access the S3 bucket “app1-dev”. In the QA account, the application uses an IAM role called “app1_qa” to access the S3 bucket “app1-qa”. This is depicted in figure 3.

Figure 3: Showing the “app1” application in the development and QA accounts

Figure 3: Showing the “app1” application in the development and QA accounts

Setting up the scenario

To simulate this setup for the purpose of this walkthrough, you don’t have to create the EC2 instance or the S3 bucket, but just focus on the IAM role, inline policy, and trust policy.

As noted in the prerequisites, you will switch between the two AWS accounts by using the AWS CLI named profiles “dev-profile” and “qa-profile”, which are configured to point to the DEV and QA accounts respectively.

Start by using this command:

mkdir -p iamctl_test iamctl_test/dev iamctl_test/qa

The command creates a directory structure that looks like this:
iamctl_test
|-- qa
|-- dev

Now, switch to the dev folder to run all the following example commands against the DEV account, by using this command:

cd iamctl_test/dev

To create the required policies, first create a file named “app1_s3_access_policy.json” and add the following policy to it. You will use this file’s content as your role’s inline policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
         "arn:aws:s3:::app1-dev/shared/*"
            ]
        }
    ]
}

Second, create a file called “app1_trust_policy.json” and add the following policy to it. You will use this file’s content as your role’s trust policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now use the two files to create an IAM role with the name “app1_dev” in the account by using the following commands, run in the order listed here:

#create role with trust policy

aws --profile dev-profile iam create-role --role-name app1_dev --assume-role-policy-document file://app1_trust_policy.json

#put inline policy to the role created above
 
aws --profile dev-profile iam put-role-policy --role-name app1_dev --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

In the QA account, the IAM role is named “app1_qa” and the S3 bucket is named “app1-qa”.

Repeat the steps from the prior example against the QA account by changing dev to qa where indicated in the following code samples. Change the directory to qa by using this command:

cd ../qa

To create the required policies, first create a file called “app1_s3_access_policy.json” and add the following policy to it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
         "arn:aws:s3:::app1-qa/shared/*"
            ]
        }
    ]
}

Next, create a file, called “app1_trust_policy.json” and add the following policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now, use the two files created so far to create an IAM role with the name “app1_qa” in your QA account by using the following commands, run in the order listed here:

#create role with trust policy
aws --profile qa-profile iam create-role --role-name app1_qa --assume-role-policy-document file://app1_trust_policy.json

#put inline policy to the role created above
aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

So far, you have two accounts with an IAM role created in each of them for your application. In terms of permissions, there are no differences other than the name of the S3 bucket resource the permission is granted against.

You can expect IAMCTL to generate a minimal set of differences between the DEV and QA accounts, assuming all other IAM roles and policies are the same. To be sure about the current state of both accounts, you can run an IAMCTL process called baselining.

Through the process of baselining, you will generate an equivalency dictionary that represents the known string patterns to substitute, which reduces the noise in the generated deviations list. You will then introduce a change into one of the IAM roles in your QA account, followed by a final IAMCTL diff to show the deviations.

Baselining

Baselining is the process of bringing two accounts to an “equivalence” state for IAM roles and policies by establishing a baseline, which future diff operations can leverage. The process is as simple as:

  1. Run the iamctl diff command.
  2. Capture all string substitutions into an equivalence dictionary to remove or reduce noise.
  3. Save the generated detailed files as a snapshot.

Now you can go through these steps for your baseline.

Go ahead and run the IAMCTL commands against these two accounts, starting by initializing the tool with the following commands.

#change directory from qa to iamctl_test
cd ..

#run iamctl init
iamctl init

The results of running the init command are shown in figure 4.

Figure 4: Output of the iamctl init command

Figure 4: Output of the iamctl init command

If you look at the iamctl_test directory now, shown in figure 5, you can see that the init command created two files in the iamctl_test directory.

Figure 5: The directory structure after running the init command

Figure 5: The directory structure after running the init command

These two files are as follows:

  1. iam.json: A reference file that lists all AWS services and actions, among other things. IAMCTL uses this to map the resource listed in an IAM policy to its corresponding AWS resource, based on an Amazon Resource Name (ARN) regular expression.
  2. equivalency_list.json: The default sample dictionary that IAMCTL uses to suppress false alarms when it compares two accounts. This is where the known string patterns that need to be substituted are added.

Note: A best practice is to make the directory where you store the equivalency dictionary and from which you run IAMCTL to be a Git repository. Doing this will let you capture any additions or modifications for the equivalency dictionary by using Git commits. This will not only give you an audit trail of your historical baselines but also gives context to any additions or modifications to the equivalency dictionary. However, doing this is not necessary for the regular functioning of IAMCTL.

Next, run the iamctl diff command:

#run iamctl diff
iamctl diff dev-profile dev qa-profile qa
Figure 6: Result of diff command

Figure 6: Result of diff command

Figure 6 shows the results of running the diff command. You can see that IAMCTL considers the app1_qa and app1_dev roles as unique to the DEV and QA accounts, respectively. This is because IAMCTL uses role names to decide whether to compare the role or tag the role as unique.

You will add the strings “dev” and “qa” to the equivalency dictionary to instruct IAMCTL to substitute occurrences of these two strings with “accountname”, by writing the following JSON to the equivalency_list.json file. This also replaces some defaults already present in the file.

echo '{"accountname":["dev","qa"]}' > equivalency_list.json

Figure 7 shows the equivalency dictionary before you take these actions, and figure 8 shows the dictionary after these actions.

Figure 7: Equivalency dictionary before

Figure 7: Equivalency dictionary before

Figure 8: Equivalency dictionary after

Figure 8: Equivalency dictionary after

There’s another thing to notice here. In this example, one common role was flagged as having a difference. To know which role this is and what the difference is, go to the detail reports folder listed at the bottom of the summary report. The directory structure of this folder is shown in figure 9.

Notice that the reports are created under your home directory with a folder structure that mimics the time stamp down to the second. IAMCTL does this to maintain uniqueness for each run.

tree /Users/<username>/aws-idt/output/2020/08/24/08/38/49/
Figure 9: Files written to the output reports directory

Figure 9: Files written to the output reports directory

You can see there is a file called common_roles_in_dev_with_differences.csv, and it lists a role called “AwsSecurity***Audit”.

You can see there is another file called dev_to_qa_common_role_difference_items.csv, and it lists the granular IAM items from the DEV account that belong to the “AwsSecurity***Audit” role as compared to QA, but which have differences. You can see that all entries in the file have the DEV account number in the resource ARN, whereas in the qa_to_dev_common_role_difference_items.csv file, all entries have the QA account number for the same role “AwsSecurity***Audit”.

Add both of the account numbers to the equivalency dictionary to substitute them with a placeholder number, because you don’t want this role to get flagged as having differences.

echo '{"accountname":["dev","qa"],"000000000000":["123456789012","987654321098"]}' > equivalency_list.json

Now, re-run the diff command.

#run iamctl diff
iamctl diff dev-profile dev qa-profile qa

As you can see in figure 10, you get back the result of the diff command that shows that the DEV account doesn’t have any differences in IAM roles as compared to the QA account.

Figure 10: Output showing no differences after completion of baselining

Figure 10: Output showing no differences after completion of baselining

This concludes the baselining for your DEV and QA accounts. Now you will introduce a change.

Introducing drift

Drift occurs when there is a difference between actual and expected values in the definition or configuration of a resource. There are several reasons why drift occurs, but for this scenario you will mimic an intentional change made in response to a time-sensitive operational event, and use it to introduce drift into what you have built so far.

To simulate this change, add “s3:PutObject” to the qa app1_s3_access_policy.json file as shown in the following example.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:PutObject"
            ],
            "Resource": [
         "arn:aws:s3:::app1-qa/shared/*"
            ]
        }
    ]
}

Put this new inline policy on your existing role “app1_qa” by using this command:

aws --profile qa-profile iam put-role-policy --role-name app1_qa --policy-name s3_inline_policy --policy-document file://app1_s3_access_policy.json

The following table represents the new drift in the accounts.

Action          Account-DEV (role name: app1_dev)    Account-QA (role name: app1_qa)
s3:Get*         Yes                                   Yes
s3:List*        Yes                                   Yes
s3:PutObject    No                                    Yes

Next, run the iamctl diff command to see how it picks up the drift from your previously baselined accounts.

#change directory from qa to iamctl_test
cd ..
iamctl diff dev-profile dev qa-profile qa
Figure 11: Output showing the one deviation that was introduced

Figure 11: Output showing the one deviation that was introduced

You can see that IAMCTL now shows that the QA account has one difference as compared to DEV, which is what we expect based on the deviation you’ve introduced.

Open up the file qa_to_dev_common_role_difference_items.csv to look at the one difference. Again, adjust the following path example with the output from the iamctl diff command at the bottom of the summary report in Figure 11.

cat /Users/<username>/aws-idt/output/2020/09/18/07/38/15/qa_to_dev_common_role_difference_items.csv

As shown in figure 12, you can see that the file lists the specific S3 action “PutObject” with the role name and other relevant details.

Figure 12: Content of file qa_to_dev_common_role_difference_items.csv showing the one deviation that was introduced

Figure 12: Content of file qa_to_dev_common_role_difference_items.csv showing the one deviation that was introduced

You can use this information to remediate the deviation by performing corrective actions in either your DEV account or QA account. You can confirm the effectiveness of the corrective action by re-baselining to make sure that zero deviations appear.

Conclusion

In this post, you learned how to use the IAMCTL tool to compare IAM roles between two accounts and arrive at a granular list of meaningful differences that can be used for compliance audits or further remediation actions. If you’ve created your IAM roles by using an AWS CloudFormation stack, you can turn on drift detection to capture drift caused by changes made to those IAM resources outside of AWS CloudFormation. For more information about drift detection, see Detecting unmanaged configuration changes to stacks and resources. Lastly, see the GitHub repository where the tool is maintained, with documentation describing each of the subcommand concepts. We welcome any pull requests for issues and enhancements.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sudhir Reddy Maddulapally

Sudhir is a Senior Partner Solution Architect who builds SaaS solutions for partners by day and is a tech tinkerer by night. He enjoys trips to state and national parks, and Yosemite is his favorite thus far!

Author

Soumya Vanga

Soumya is a Cloud Application Architect with AWS Professional Services in New York, NY, helping customers design solutions and workloads and adopt cloud-native services.

Enhance programmatic access for IAM users using a YubiKey for multi-factor authentication

Post Syndicated from Edouard Kachelmann original https://aws.amazon.com/blogs/security/enhance-programmatic-access-for-iam-users-using-yubikey-for-multi-factor-authentication/

Organizations are increasingly providing access to corporate resources from employee laptops and are required to apply the correct permissions to these computing devices to make sure that secrets and sensitive data are adequately protected. The combination of Amazon Web Services (AWS) long-term credentials and a YubiKey security token for multi-factor authentication (MFA) is an option for providing secure programmatic access to AWS for organizations that aren’t yet ready or able to use identity federation. For example, a user should be able to list AWS Identity and Access Management (IAM) roles with their default programmatic access, but would be required to provide MFA to assume an IAM role.

In this blog post, we show you how to use a YubiKey token for MFA with the AWS Command Line Interface (AWS CLI) to create temporary credentials with the permissions that developers need to perform tasks. The user will configure the long-term credentials and then temporarily assume a role with broader permissions by using MFA when needed. MFA adds extra security, because it requires users to provide second-factor authentication from an AWS-supported MFA mechanism in addition to static security credentials such as their user name and password.

The goal for any organization is to move to the recommended best practice for individual programmatic access: temporary security credentials that aren’t stored with the user, but are generated dynamically and provided to the user when requested, such as through identity federation. If your organization uses AWS Single Sign-On (AWS SSO) along with an identity provider (IdP) such as Okta, Azure Active Directory (AD), or AWS Managed Microsoft AD, you can use the instructions from this earlier blog post to leverage the AWS CLI v2 native integration with AWS SSO and take advantage of the multi-factor authentication support of your IdP.

Overview

This post describes the configuration of IAM users and roles and initialization of the YubiKey token as an MFA device by an administrator, and then how developers can use the YubiKey device to retrieve temporary credentials and assume a role with elevated permissions within the AWS CLI.

The overall process flow looks like this:

  1. Create an IAM user with programmatic access, MFA, and a policy that allows you to assume a more privileged IAM role. The user will retrieve a Time-based One-time Password (TOTP) token code by using a YubiKey as MFA.
  2. Assume the more privileged role, which is restricted by an MFA conditional, by using the TOTP token code.

Figure 1 shows the steps of the process.

Figure 1: A visual overview of the steps to assume roles with elevated permissions by using a YubiKey for MFA

Figure 1: A visual overview of the steps to assume roles with elevated permissions by using a YubiKey for MFA

Prerequisites

To get started you need:

  • An AWS account.
  • A YubiKey (available on Amazon.com). YubiKey 4 and 5 series are compatible, because they support the required OATH application.

    Note: The Yubico Security Keys (the blue tokens) aren’t supported, because they lack the OATH application. If you already have a corporate YubiKey device, this capability might have been disabled.

  • To complete the process for:

Notes:

  • AWS CLI v2 doesn’t yet support Universal 2nd factor (U2F) MFA. As a workaround, we use a YubiKey as a virtual device MFA.
  • OATH (Initiative for Open Authentication) is an organization that specifies two open authentication standards: TOTP and HMAC-based One-time Password (HOTP). For this solution, we use the TOTP standard.

Getting started

Initializing YubiKey for MFA

The following steps show you, as cloud administrator, how to initialize the YubiKey as a virtual MFA device and configure an IAM user that can assume a role with elevated permissions, on the condition that the user is using an MFA device. In this example, your developers will assume a role with permissions to access Amazon Elastic Compute Cloud (Amazon EC2).

To configure the IAM user and initialize the YubiKey device as MFA

  1. Create a role with elevated permissions that your developers can assume.
    1. Sign in to the AWS IAM console, and in the right-hand pane, choose Roles. Then choose Create role.

      Figure 2: Create a role in the IAM console

      Figure 2: Create a role in the IAM console

    2. For the type of trusted entity, choose Another AWS account. Enter your account ID, which you can find by using these methods, described in the IAM User Guide. Choose Next:Permissions.

      Figure 3: Select the type of trusted entity and provide the account ID

    3. Search for the AmazonEC2FullAccess policy, and select the check box next to it. Choose Next:Tags, and add relevant tags if needed. Choose Next:Review.
    4. Name the role developer-ec2-mfa, and then choose Create role.
    5. Go back to the role you just created. Change the maximum session duration value to limit how long the developer’s session can be valid after assuming the role. For this example, we set the duration to 1 hour (3,600 seconds) by using a custom value. Limit this duration to abide by your organization’s recommended authentication time.
    6. Take note of the Amazon Resource Name (ARN) for the new role as shown on the summary page.

      Figure 4: Summary page of the new role

      Figure 4: Summary page of the new role

  2. Create a new IAM policy that provides a limited scope of actions for users when they use their long-term credentials.
    1. Navigate to the AWS IAM console, and in the navigation pane, choose Policies. Choose Create policy.

      Figure 5: Create a policy in the IAM console

      Figure 5: Create a policy in the IAM console

    2. Because we've already written the policy in JSON, you don't need to use the Visual Editor, so you can choose the JSON tab and paste the content of the following JSON policy document (remember to replace the placeholder for the role ARN). Following the least privilege approach, add only the Amazon Resource Names (ARNs) of the role or roles with required elevated permissions that the developer will be able to assume. In this case, use the developer-ec2-mfa ARN for the role that you created previously.
      {
         "Version": "2012-10-17",
         "Statement": [
            {
               "Sid": "",
               "Effect": "Allow",
               "Action": "sts:AssumeRole",
               "Resource": <Elevated Role ARN(s)>,
               "Condition": {
                  "Bool": {
                     "aws:multifactorAuthPresent": true
                  }
               }
            }
         ]
      }
      

      Note: The condition “aws:MultiFactorAuthPresent”: “true” requires that the user who assumes the role has been authenticated with an AWS MFA device.

    3. Choose Review policy.
    4. Name the policy yubi-policy-mfa-level-one. Choose Create policy.
  3. Create a new IAM group that lets you specify permissions for multiple users and makes it easier to manage the permissions for those users.
    1. Navigate to the IAM console, and in the navigation pane, choose Groups. Choose Create New Group.

      Figure 6: Create a group in the IAM console

      Figure 6: Create a group in the IAM console

    2. Enter developers-mfa as the group name. Choose Next Step.
    3. On the Attach Policy screen, in the filter box, search for the policy yubi-policy-mfa-level-one that you created in the previous step. Make sure you select the check box next to the policy, and then choose Next Step.

      Figure 7: Attach the policy to the IAM group

      Figure 7: Attach the policy to the IAM group

    4. Review the group information, and then choose Create Group.
  4. Create a user in IAM for the developer using the AWS CLI.
    1. Navigate to the IAM console and in the navigation pane, choose Users. Choose Add user.
    2. On the Add user screen, enter the name for your user. In this example, our developer is named JohnDoe. For Access type, select the check box next to Programmatic access. Choose Next: Permissions.

      Figure 8: Create an IAM user with programmatic access

      Figure 8: Create an IAM user with programmatic access

    3. For permissions, select Add user to group, and select the developers-mfa group. Choose Next: Tags.
    4. Add the relevant tags if needed, and then choose Next: Review.
    5. Review the user configuration, and then choose Create user.
    6. Make sure you save the access key ID and secret access key to share with your user. Choose Close.
  5. Assign an MFA device to the user.
    1. Go back to the Users section of the IAM console. Choose the IAM user that you created previously, and go to the Security credentials tab. For Assigned MFA device, choose Manage.

      Figure 9: Assign MFA device to the IAM user

      Figure 9: Assign MFA device to the IAM user

    2. Select Virtual MFA device, because the AWS CLI doesn’t yet support U2F MFA. Choose Continue.

      Figure 10: Select the Virtual MFA device type

      Figure 10: Select the Virtual MFA device type

    3. Instead of using the QR code, choose Show secret key.

      Note: The secret key is a randomly generated string shared between IAM and the physical YubiKey. It is used to generate a one-time password using a hash function with the current timestamp. (A minimal sketch of this TOTP computation appears after this procedure.)

       

      Figure 11: Retrieve the secret key on the virtual MFA device configuration page

      Figure 11: Retrieve the secret key on the virtual MFA device configuration page

    4. Copy the secret key to use in the next step as the MFA_SECRET to configure the MFA device.
  6. To obtain the TOTP token codes from the YubiKey to synchronize the key with the IAM user, do the following.
    1. Insert the YubiKey token in your USB port, and verify that the OATH application is enabled for your YubiKey by running the following command and looking for Enabled USB interfaces: OTP+FIDO+CCID in the output.
      $ ykman info
      
      Device type: YubiKey 5 NFC
      Serial number: 123456789
      Firmware version: 5.2.4
      Form factor: Keychain (USB-A)
      Enabled USB interfaces: OTP+FIDO+CCID
      NFC interface is enabled.
      
      Applications USB NFC
      OTP Enabled Enabled
      FIDO U2F Enabled Enabled
      OpenPGP Enabled Enabled
      PIV Enabled Enabled
      OATH Enabled Enabled
      FIDO2 Enabled Enabled
      

    2. For each MFA device, you need to generate a unique identifier that will be used during the process. We recommend that you create this identifier based on the ARN of the IAM user, by using the following template.
      arn:aws:iam::<ACCOUNT_ID>:mfa/<IAM_USERNAME>
      

    3. Add a new credential to your YubiKey based on the MFA device ARN. Use the MFA_SECRET that you copied in the previous step (step 5).
      ykman oath add -t arn:aws:iam::<ACCOUNT_ID>:mfa/<IAM_USERNAME> <MFA_SECRET>
      

    4. Obtain two TOTP token codes by using the following command (remember to replace the placeholder for the <MFA device ARN>). Wait up to 30 seconds for the device to generate the second token code (you will be prompted to touch the token).
      ykman oath code <MFA device ARN>
      

    5. After obtaining each of the TOTP token codes, go back to the IAM console where you were setting up the virtual MFA device, and enter the code in the MFA code box. After entering the two MFA codes, choose Assign MFA.

      Figure 12: Enter the two consecutive YubiKey codes in the virtual MFA device configuration page

      Figure 12: Enter the two consecutive YubiKey codes in the virtual MFA device configuration page

  7. You can then provide the following information to your developer:
    1. The YubiKey device along with the generated MFA device ARN
    2. The ARNs for the roles that will be assumed
    3. The long-term AWS credentials
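For reference, the following is a minimal Python sketch of the TOTP computation mentioned in the note in step 5 (RFC 6238, with the RFC 4226 dynamic truncation). It is shown only to illustrate how the secret key and the current timestamp produce a code; in this solution the YubiKey performs this computation on the device itself.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, digits=6, period=30):
    # Decode the base32 secret shared between IAM and the token.
    padded = secret_base32.upper() + "=" * (-len(secret_base32) % 8)
    key = base64.b32decode(padded)
    # The moving factor is the number of 30-second periods since the Unix epoch.
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation selects 31 bits and reduces them to the requested number of digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)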

Assuming a role with the YubiKey as MFA

The following steps show how you, as a developer, can retrieve temporary credentials using the YubiKey device as MFA, and assume a role with wider permissions. You can do this after the YubiKey device, one or more role ARNs, and long-term credentials have been shared with you by the cloud administrator.

To assume a role with broader permissions by using YubiKey

  1. As part of the prerequisites, you should have the AWS CLI v2 already installed. Now configure the default profile with the long-term credentials provided by your cloud administrator, by using the following command.
    $ aws configure
    
    AWS Access Key ID [None]: <Enter your AWS access key>
    AWS Secret Access Key [None]: <Enter your AWS secret access key>
    Default region name [None]: <Enter your AWS default region>
    Default output format [None]: <Enter your output default format>
    

  2. Obtain a TOTP code from YubiKey (you will be prompted to touch the token). Submit your request immediately after generating the codes. If you generate the codes and then wait too long to submit the request, the code won’t be valid anymore.
    ykman oath code arn:aws:iam::<ACCOUNT_ID>:mfa/<IAM_USERNAME>
    

  3. Using the MFA token code you obtained by using the YubiKey, assume the relevant role that will provide access to larger permissions. In our example, the ARN is for the role developer-ec2-mfa that was provided by the IAM administrator. Enter a role session name that will uniquely identify a session when the same role is assumed by different principals.
    aws sts assume-role --role-arn <Role ARN> --role-session-name <Role Name> --serial-number <MFA device ARN> --token-code <token code> --duration-seconds 3600 
    

    Note: The user should only have access to sts:AssumeRole for a specific set of roles. Here we chose the session duration of one hour. You can edit the session duration so the developer can authenticate for the duration of a workday (the default value is 1 hour and can be up to 12 hours). Limit this duration to abide by your organization’s recommended authentication time.

    You should see the following output.

    {
       "AssumedRoleUser": {
          "AssumedRoleId": "ABCD123ABCDEFGHIJKLMN:<role-session-name>",
          "Arn": "arn:aws:sts::<ACCOUNT_ID>:assumed-role/developer-ec2-mfa/<role-session-name>"
       },
       "Credentials": {
          "SecretAccessKey": <aws_secret_access_key>,
          "SessionToken": <aws_session_token>,
          "Expiration": "2020-07-13T19:24:20Z",
          "AccessKeyId": <aws_access_key_id>
       }
    }
    

  4. Configure a new AWS CLI profile named johndoe-developer-role as shown following. Copy the access key and secret key that were returned as temporary credentials by the assume-role command, and then set the additional parameter aws_session_token, which was returned along with the temporary credentials.
    aws configure --profile johndoe-developer-role
    aws configure set aws_session_token <Session Token> --profile johndoe-developer-role
    

  5. Attempt to make a call to relevant services that are allowed by the newly assumed role. Here’s an example using the Amazon EC2 API to describe the EC2 instances.
    aws --profile johndoe-developer-role ec2 describe-instances
    

The developer now has access to the larger permissions set through the assumed role for the next hour.
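If you would rather script this flow than run the commands by hand, the following boto3 sketch shows the same sequence: read a TOTP code from the YubiKey, call sts:AssumeRole with the MFA device serial number, and use the temporary credentials. The ARNs and session name are placeholders, and the parsing of the ykman output assumes the "name code" format shown earlier in this post.

import subprocess
import boto3

# Placeholders; use the MFA device ARN and role ARN shared by your administrator.
MFA_DEVICE_ARN = "arn:aws:iam::111122223333:mfa/JohnDoe"
ROLE_ARN = "arn:aws:iam::111122223333:role/developer-ec2-mfa"

# Read the current TOTP code from the YubiKey (you will be prompted to touch the token).
output = subprocess.check_output(["ykman", "oath", "code", MFA_DEVICE_ARN], text=True)
token_code = output.strip().split()[-1]

# Exchange the long-term credentials plus the MFA code for temporary credentials.
credentials = boto3.client("sts").assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="johndoe-developer-session",
    SerialNumber=MFA_DEVICE_ARN,
    TokenCode=token_code,
    DurationSeconds=3600,
)["Credentials"]

# Use the temporary credentials for calls allowed by the assumed role.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
print(ec2.describe_instances()["Reservations"])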

Summary

In this post, we introduced the capability to further secure long-term AWS credentials with a YubiKey for MFA, for organizations that are still using long-term credentials. These credentials are stored in the ~/.aws/credentials file. If an unauthorized user were able to retrieve these long-term credentials, they wouldn’t be able to use them to assume a role with broader permissions, because doing so requires the physical MFA token. The steps in this blog post can be converted to a script that your developers can use repeatedly to simplify the process.

In general, we recommend that all customers move away from using IAM users and static credentials and instead use IAM roles and temporary credentials wherever possible. An easy way to get started down that road is by using AWS SSO for identity federation.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Edouard Kachelmann

Edouard is an Enterprise Solutions Architect at Amazon Web Services. He is a passionate technology enthusiast who enjoys working with customers and helping them build innovative solutions. Prior to his work at AWS, Edouard worked for the French National Cybersecurity Agency, sharing its security expertise and assisting government departments and operators of vital importance. In his free time, Edouard likes to explore new places to eat, try new French recipes, and play with his kid.

Author

Anthony Pasquariello

Anthony is an Enterprise Solutions Architect based in New York City. He provides technical consultation to customers during their cloud journey, especially around security best practices. He has an MS and BS in electrical & computer engineering from Boston University. In his free time, he enjoys ramen, writing nonfiction, and philosophy.

How to use trust policies with IAM roles

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/

AWS Identity and Access Management (IAM) roles are a significant component in the way customers operate in Amazon Web Services (AWS). In this post, I’ll dive into the details of how cloud security architects and account administrators can protect IAM roles from misuse by using trust policies. By the end of this post, you’ll know how to build trust policies for IAM roles that work at scale, providing guardrails to control access to resources in your organization.

In general, there are four different scenarios where you might use IAM roles in AWS:

  • One AWS service accesses another AWS service – When an AWS service needs access to other AWS services or functions, you can create a role that will grant that access.
  • One AWS account accesses another AWS account – This use case is commonly referred to as a cross-account role pattern. This allows human or machine IAM principals from other AWS accounts to assume this role and act on resources in this account.
  • A third-party web identity needs access – This use case allows users with identities in third-party systems like Google and Facebook, or Amazon Cognito, to use a role to access resources in the account.
  • Authentication using SAML2.0 federation – This is commonly used by enterprises with Active Directory that want to connect using an IAM role so that their users can use single sign-on workflows to access AWS accounts.

In all cases, the makeup of an IAM role is the same as that of an IAM user and is only differentiated by the following qualities:

  • An IAM role does not have long term credentials associated with it; rather, a principal (an IAM user, machine, or other authenticated identity) assumes the IAM role and inherits the permissions assigned to that role.
  • The tokens issued when a principal assumes an IAM role are temporary. Their expiration reduces the risks associated with credentials leaking and being reused.
  • An IAM role has a trust policy that defines which conditions must be met to allow other principals to assume it. This trust policy reduces the risks associated with privilege escalation.

Recommendation: You should make extensive use of temporary IAM roles rather than permanent credentials such as IAM users. For more information, review this page: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html

While the list of users having access to your AWS accounts can change over time, the roles used to manage your AWS account probably won’t. The use of IAM roles essentially decouples your enterprise identity system (SAML 2.0) from your permission system (AWS IAM policies), simplifying management of each.

Managing access to IAM roles

Let’s dive into how you can create relationships between your enterprise identity system and your permissions system by looking at the policy types you can apply to an IAM role.

An IAM role has three places where it uses policies:

  • Permission policies (inline and attached) – These policies define the permissions that a principal assuming the role is able (or restricted) to perform, and on which resources.
  • Permissions boundary – A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based permission policies and its permissions boundaries.
  • Trust relationship – This policy defines which principals can assume the role, and under which conditions. This is sometimes referred to as a resource-based policy for the IAM role. We’ll refer to this policy simply as the ‘trust policy’.

A role can be assumed by a human user or a machine principal, such as an Amazon Elastic Compute Cloud (Amazon EC2) instance or an AWS Lambda function. Over the rest of this post, you’ll see how you’re able to reduce the conditions for principals to use roles by configuring their trust policies.

An example of a simple trust policy

A common use case is when you need to provide security audit access to your account, allowing a third party to review the configuration of that account. After attaching the relevant permission policies to an IAM role, you need to add a cross-account trust policy to allow the third-party auditor to make the sts:AssumeRole API call to elevate their access in the audited account. The following trust policy shows an example policy created through the AWS Management Console:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

As you can see, it has the same structure as other IAM policies with Effect, Action, and Condition components. It also has the Principal parameter, but no Resource attribute. This is because the resource, in the context of the trust policy, is the IAM role itself. For the same reason, the Action parameter will only ever be set to one of the following values: sts:AssumeRole, sts:AssumeRoleWithSAML, or sts:AssumeRoleWithWebIdentity.

Note: The suffix root in the policy’s Principal attribute equates to “authenticated and authorized principals in the account,” not the special and all-powerful root user principal that is created when an AWS account is created.

Using the Principal attribute to reduce scope

In a trust policy, the Principal attribute indicates which other principals can assume the IAM role. In the example above, 111122223333 represents the AWS account number for the auditor’s AWS account. In effect, this allows any principal in the 111122223333 AWS account with sts:AssumeRole permissions to assume this role.

To restrict access to a specific IAM user account, you can define the trust policy like the following example, which would allow only the IAM user LiJuan in the 111122223333 account to assume this role. LiJuan would also need to have sts:AssumeRole permissions attached to their IAM user for this to work:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/LiJuan"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

The principals set in the Principal attribute can be any principal defined by the IAM documentation, and can refer to an AWS or a federated principal. You cannot use a wildcard (“*” or “?”) within a Principal for a trust policy, other than one special condition, which I’ll come back to in a moment: You must define precisely which principal you are referring to because there is a translation that occurs when you submit your trust policy that ties it to each principal’s hidden principal ID, and it can’t do that if there are wildcards in the principal.

The only scenario where you can use a wildcard in the Principal parameter is where the parameter value is only the “*” wildcard. Use of the global wildcard “*” for the Principal isn’t recommended unless you have clearly defined Conditional attributes in the policy statement to restrict use of the IAM role, since doing so without Conditional attributes permits assumption of the role by any principal in any AWS account, regardless of who that is.

Using identity federation on AWS

Federated users from SAML 2.0 compliant enterprise identity services are given permissions to access AWS accounts through the use of IAM roles. While the user-to-role configuration of this connection is established within the SAML 2.0 identity provider, you should also put controls in the trust policy in IAM to reduce any abuse.

Because the Principal attribute contains configuration information about the SAML mapping, in the case of Active Directory, you need to use the Condition attribute in the trust policy to restrict use of the role from the AWS account management perspective. This can be done by restricting the SourceIp address, as demonstrated later, or by using one or more of the SAML-specific Condition keys available. My recommendation is to reduce the set of principals that can use the role to be as specific as is practical. This is best achieved by adding qualifiers into the Condition attribute of your trust policy.

There’s a very good guide on creating roles for SAML 2.0 federation that contains a basic example trust policy you can use.

Using the Condition attribute in a trust policy to reduce scope

The Condition statement in your trust policy sets additional requirements for the Principal trying to assume the role. If you don’t set a Condition attribute, the IAM engine will rely solely on the Principal attribute of this policy to authorize role assumption. Given that it isn’t possible to use wildcards within the Principal attribute, the Condition attribute is a really flexible way to reduce the set of users that are able to assume the role without necessarily specifying the principals.

Limiting role use based on an identifier

Occasionally, teams managing multiple roles can become confused as to which role achieves what and can inadvertently assume the wrong role. This is referred to as the Confused Deputy problem. This next section shows you a way to quickly reduce this risk.

The following trust policy requires that principals from the 111122223333 AWS account have provided a special phrase when making their request to assume the role. Adding this condition reduces the risk that someone from the 111122223333 account will assume this role by mistake. This phrase is configured by specifying an ExternalID conditional context key.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "ExampleSpecialPhrase"
        }
      }
    }
  ]
}

In the example trust policy above, the value ExampleSpecialPhrase isn’t a secret or a password. Adding the ExternalId condition prevents this role from being assumed through the console, because the only way to supply the ExternalId argument in the role assumption API call is to use the AWS Command Line Interface (AWS CLI) or a programming interface. Having this condition doesn’t prevent a user who knows about this relationship and the ExternalId from assuming what might be a privileged set of permissions, but it does help manage risks like the Confused Deputy problem. I see customers using an ExternalId that matches the name of the AWS account, which works to ensure that an operator is working on the account they believe they’re working on.
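
To illustrate, here is a minimal sketch of how the cross-account principal would supply that value from the AWS CLI; the role ARN (owned by the audited account) and the session name are hypothetical.


# Without a matching --external-id value, this call is denied by the
# trust policy above.
aws sts assume-role \
  --role-arn arn:aws:iam::444455556666:role/CrossAccountAuditor \
  --role-session-name audit-session \
  --external-id ExampleSpecialPhrase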

Limiting role use based on multi-factor authentication

By using the Condition attribute, you can also require that the principal assuming this role has passed a multi-factor authentication (MFA) check before they’re permitted to use this role. This again limits the risk associated with mistaken use of the role and adds some assurances about the principal’s identity.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "BoolIfExists": { 
          "aws:MultiFactorAuthPresent" : "true" 
		}
      }
    }
  ]
}

In the example trust policy above, I also introduced the MultiFactorAuthPresent conditional context key. Per the AWS global condition context keys documentation, the MultiFactorAuthPresent conditional context key does not apply to sts:AssumeRole requests in the following contexts:

  • When using access keys in the CLI or with the API
  • When using temporary credentials without MFA
  • When a user signs in to the AWS Console
  • When services (like AWS CloudFormation or Amazon Athena) reuse session credentials to call other APIs
  • When authentication has taken place via federation

In the example above, the use of the BoolIfExists qualifier to the MultiFactorAuthPresent conditional context key evaluates the condition as true if:

  • The principal type can have an MFA attached, and does.
    or
  • The principal type cannot have an MFA attached.

This is a subtle difference but makes the use of this conditional key in trust policies much more flexible across all principal types.
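
As a rough sketch, an IAM user with an MFA device could satisfy this condition from the AWS CLI by presenting their MFA device and a current code when assuming the role; the role and MFA device ARNs below are hypothetical.


# Passing the MFA device ARN and a current six-digit code causes
# aws:MultiFactorAuthPresent to be true in the resulting request context.
aws sts assume-role \
  --role-arn arn:aws:iam::444455556666:role/SensitiveOpsRole \
  --role-session-name mfa-session \
  --serial-number arn:aws:iam::111122223333:mfa/LiJuan \
  --token-code 123456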

Limiting role use based on time

During activities like security audits, it’s quite common for the activity to be time-bound and temporary. There’s a risk that the IAM role could be assumed even after the audit activity concludes, which might be undesirable. You can manage this risk by adding a time condition to the Condition attribute of the trust policy. This means that rather than being concerned with disabling the IAM role created immediately following the activity, customers can build the date restriction into the trust policy. You can do this by using policy attribute statements, like so:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-01T12:00:00Z"
        },
        "DateLessThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}

Limiting role use based on IP addresses or CIDR ranges

If the auditor for a security audit is using a known fixed IP address, you can build that information into the trust policy, further reducing the opportunity for the role to be assumed by unauthorized actors calling the AssumeRole API from outside that IP address or CIDR range:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}

Limiting role use based on tags

IAM tagging capabilities can also help you build flexible and adaptive trust policies that create an attribute-based access control (ABAC) model for IAM management. You can build trust policies that permit only principals that have already been tagged with a specific key and value to assume a specific role. The following example requires that IAM principals in the AWS account 111122223333 be tagged with department = OperationsTeam in order to assume the IAM role.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/department": "OperationsTeam"
        }
      }
    }
  ]
}

If you want to create this effect, I highly recommend the PrincipalTag pattern above. However, you must also be cautious about which principals are then given the iam:TagUser, iam:TagRole, iam:UntagUser, and iam:UntagRole permissions. Consider using the aws:PrincipalTag condition within a permissions boundary policy to restrict their ability to retag their own IAM principal or another IAM role they can assume.
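
For reference, here is a brief sketch of how an administrator (one of the few principals trusted with those tagging permissions) might set that tag on a user and on a role; the principal names are hypothetical.


# Tag an IAM user so they satisfy the aws:PrincipalTag/department
# condition in the trust policy above.
aws iam tag-user \
  --user-name LiJuan \
  --tags Key=department,Value=OperationsTeam

# The same tag can be applied to a role that will assume this one.
aws iam tag-role \
  --role-name OpsAutomationRole \
  --tags Key=department,Value=OperationsTeam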

Limiting or extending access to a role based on AWS Organizations

Since its announcement in 2016, AWS Organizations has been adopted by almost every enterprise customer I work with. This AWS service allows customers to create an organizational structure for their accounts, establishing hard boundaries that help manage blast-radius risks, among other advantages. You can use the aws:PrincipalOrgID condition key to limit assumption of an organization-wide core IAM role.

Caution: As you’ll see in the example below, you need to set the Principal attribute to “*” to do this, which would, without the conditional restriction, allow all role assumption requests to be accepted for this role, irrespective of the source of that assumption request. For that reason, be especially careful about the use of this pattern.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-abcd12efg1"
        }
      }
    }
  ]
}

It isn’t practical to write out all the AWS account identifiers into a trust policy, and because of the way policies like this are evaluated, you can’t include wildcard characters for the account number in the principal’s account number field. The use of the PrincipalOrgID global condition context key provides us with a neat and dynamic mechanism to create a short policy statement.
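
If you don’t have your organization ID at hand, you can retrieve it with the AWS CLI, as in this brief sketch:


# Returns the o-xxxxxxxxxx identifier to use in aws:PrincipalOrgID.
aws organizations describe-organization \
  --query 'Organization.Id' \
  --output text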

Role chaining

There are instances where a third party might themselves be using IAM roles, or where an AWS service resource that has already assumed a role needs to assume another role (perhaps in another account), and customers might need to allow only specific IAM roles in that remote account to assume the IAM role you create in your account. You can use role chaining to build permitted role escalation routes using role assumption from within the same account or AWS organization, or from third-party AWS accounts.

Consider the following trust policy example where I use a combination of the Principal attribute to scope down to an AWS account, and the aws:UserId global conditional context key to scope down to a specific role using its RoleId. To capture the RoleId for the role you want to be able to assume, you can run the following command using the AWS CLI:


# aws iam get-role --role-name CrossAccountAuditor
{
    "Role": {
        "Path": "/",
        "RoleName": "CrossAccountAuditor",
        "RoleId": "ARO1234567123456D",
        "Arn": "arn:aws:iam:: 111122223333:role/CrossAccountAuditor",
        "CreateDate": "2017-08-31T14:24:20+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::111122223333:root"
                    },
                    "Action": "sts:AssumeRole",
                    "Condition": {
                        "StringEquals": {
                            "sts:ExternalId": "ExampleSpecialPhrase"
                        }
                    }
                }
            ]
        }
    }
}

Here is the example trust policy that limits role assumption to only the CrossAccountAuditor role from AWS account 111122223333.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:userId": "ARO1234567123456D:*"
        }
      }
    }
  ]
}

If you’re using an IAM user and have assumed the CrossAccountAuditor IAM role, the policy above will work through the AWS CLI with a call to aws sts assume-role and through the console.
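
To make the chaining concrete, here is a minimal AWS CLI sketch. The second role ARN and the session names are hypothetical, and the first hop assumes the ExternalId trust policy shown earlier in this post.


# First hop: assume CrossAccountAuditor and capture its temporary
# credentials from the response.
read AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/CrossAccountAuditor \
  --role-session-name first-hop \
  --external-id ExampleSpecialPhrase \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Second hop: this request now carries an aws:userId that begins with
# the CrossAccountAuditor RoleId, so the trust policy above allows it.
aws sts assume-role \
  --role-arn arn:aws:iam::444455556666:role/AuditTargetRole \
  --role-session-name second-hop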

This type of trust policy also works for services like Amazon EC2, allowing those instances using their assigned instance profile role to assume a role in another account to perform actions. We’ll touch on this use case later in the post.

Putting it all together

AWS customers can use combinations of all the above Principal and Condition attributes to hone the trust they’re extending out to any third party, or even within their own organization. They might create an accumulated trust policy for an IAM role which achieves the following effect:

Allows only a user named PauloSantos, in AWS account number 111122223333, to assume the role if they have also authenticated with an MFA, are logging in from an IP address in the 203.0.113.0/24 CIDR range, and the date is between noon of September 1, 2020, and noon of September 7, 2020.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:user/PauloSantos"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "true"
        },
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        },
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-01T12:00:00Z"
        },
        "DateLessThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}

I’ve seen customers use this to create IAM users who have no permissions attached besides sts:AssumeRole. Trust relationships are then configured between the IAM users and the IAM roles, creating ultimate flexibility in defining who has access to what roles without needing to update the IAM user identity pool at all.
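
A sketch of what such a minimal identity policy might look like, with hypothetical role ARNs:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAssumeSpecificRolesOnly",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::444455556666:role/SecurityAuditAccess",
        "arn:aws:iam::444455556666:role/ReadOnlyReporting"
      ]
    }
  ]
}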

A word on Effect: Deny and NotPrincipal in IAM role trust policies

I have seen some customers make use of an “Effect”: “Deny” clause in their trust policies. This pattern can help manage a wildcard statement in another “Effect”: “Allow” clause of the same trust policy. However, this isn’t the best approach for most scenarios. You will typically be able to define each principal in your policy as being allowed access. An example of where this might not be true is where you have a clause that uses the global wildcard “*” as a principal, in which case it will be necessary to add Deny statements to further filter the access.

Putting a wildcard into the Principal attribute of an Allow policy statement, particularly in relation to trust policies, can be dangerous if you haven’t done a robust job of managing the Condition attribute in the same statement. Be as specific as possible in your Allow statement, and use Principal attributes first, rather than relying on Deny statements to manage potential security gaps created by your use of wildcards.

The following trust policy allows all IAM principals within the o-abcd12efg1 organization to assume the IAM role, but only before noon (UTC) on September 7, 2020:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-abcd12efg1"
        }
      }
    },
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "DateGreaterThan": {
          "aws:CurrentTime": "2020-09-07T12:00:00Z"
        }
      }
    }
  ]
}

The use of NotPrincipal in trust policies

You can also use the NotPrincipal element in your trust policies. Again, this is rarely the best choice, because it can introduce unnecessary complexity and confusion into your policies. Instead, you can avoid that problem by using fairly simple and prescriptive Principal statements.

Statements with NotPrincipal are often paired with a Deny effect as well, which can create quite baffling policy logic that, if misunderstood, could open unintended opportunities for misuse or abuse.

Here’s an example where you might think to use Deny and NotPrincipal in a trust policy—but notice this has the same effect as adding arn:aws:iam::111122223333:role/CoreAccess in a single Allow statement. In general, Deny with NotPrincipal statements in trust policies create unnecessary complexity, and should be avoided.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::111122223333:role/CoreAccess"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Remember, your Principal attribute should be very specific, to reduce the set of those able to assume the role, and an IAM role trust policy won’t permit access if a corresponding Allow statement isn’t explicitly present in the trust policy. It’s better to rely on the default deny policy evaluation logic where you’re able, rather than introducing unnecessary complexity into your policy logic.

Creating trust policies for AWS services that assume roles

There are two types of contexts where AWS services need access to IAM roles to function:

  1. Resources managed by an AWS service (like Amazon EC2 or Lambda, for example) need access to an IAM role to execute functions on other AWS resources, and need permissions to do so.
  2. An AWS service that abstracts its functionality from other AWS services, like Amazon Elastic Container Service (Amazon ECS) or Amazon Lex, needs access to execute functions on AWS resources. These are called service-linked roles and are a special case that’s out of the scope of this post.

In both contexts, you have the service itself as an actor. The service is assuming your IAM role so it can provide your credentials to your Lambda function (the first context) or use those credentials to do things (the second context). In the same way that IAM roles are used by human operators to provide an escalation mechanism for users operating with specific functions in the examples above, so, too, do AWS resources, such as Lambda functions, Amazon EC2 instances, and even AWS CloudFormation, require the same mechanism. You can find more information about how to create IAM Roles for AWS Services here.

An IAM role for a human operator and an IAM role for an AWS service are exactly the same; the only difference is the principal defined in the trust policy. The policy’s Principal will define the AWS service that is permitted to assume the role for its function.

Here’s an example trust policy for a role designed for an Amazon EC2 instance to assume. You can see that the principal provided is the ec2.amazonaws.com service:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Every configuration of an AWS resource should be passed a specific role unique to its function. So, if you have two Amazon EC2 launch configurations, you should design two separate IAM roles, even if the permissions they require are currently the same. This allows each configuration to grow or shrink the permissions it requires over time, without needing to reattach IAM roles to configurations, which might create a privilege escalation risk. Instead, you update the permissions attached to each IAM role independently, knowing that it will only be used by that one service resource. This helps reduce the potential impact of risks. Automating your management of roles will help here, too.

Several customers have asked if it’s possible to design a trust policy for an IAM role such that it can only be passed to a specific Amazon EC2 instance. This isn’t directly possible. You cannot place the Amazon Resource Name (ARN) for an EC2 instance into the Principal of a trust policy, nor can you use tag-based condition statements in the trust policy to limit the ability for the role to be used by a specific resource.

The only option is to manage access to the iam:PassRole action within the permission policy for those IAM principals you expect to be attaching IAM roles to AWS resources. This special Action is evaluated when a principal tries to attach another IAM role to an AWS service or AWS resource.

You should restrict access to the iam:PassRole action with permission policies and permission boundaries. This limits who can attach roles to instance profiles for Amazon EC2, rather than trying to achieve the same outcome through the trust policy on the role assumed by the EC2 instance. This approach makes it much easier to manage at scale, for both the principals attaching roles to EC2 instances and the instances themselves.

For example, you could attach the following permission policy to a principal to limit it to passing only those roles whose names begin with EC2-Webserver-:


{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:GetRole",
            "iam:PassRole"
        ],
        "Resource": "arn:aws:iam::111122223333:role/EC2-Webserver-*"
    }]
}
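
With that policy in place, a principal could still launch instances that use a conforming role. As a hedged sketch, assuming an instance profile named EC2-Webserver-App already exists and contains a role of the same name (the AMI and subnet IDs are placeholders):


# Launch an instance with an instance profile whose role name matches
# the EC2-Webserver-* prefix allowed by the iam:PassRole policy above.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --subnet-id subnet-0abc1234def567890 \
  --iam-instance-profile Name=EC2-Webserver-App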

Conclusion

You now have all the tools you need to build robust and effective trust policies that work at scale, providing guardrails for your users and those who might want to access resources in your account from outside your organization.

Policy logic isn’t always simple, and I encourage you to use sandbox accounts to try out your ideas. In general, simplicity should win over cleverness. IAM policies and statements that are frugal in their use of policy language can be difficult for other IAM administrators to read, interpret, and update in the future. Keeping your trust policies simple helps to build IAM relationships everyone understands and can manage and use effectively.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Securing resource tags used for authorization using a service control policy in AWS Organizations

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/securing-resource-tags-used-for-authorization-using-service-control-policy-in-aws-organizations/

In this post, I explain how you can use attribute-based access controls (ABAC) in Amazon Web Services (AWS) to help provision simple, maintainable access controls to different projects, teams, and workloads as your organization grows. ABAC gives you access to granular permissions and employee-attribute based authorization. By using ABAC, you need fewer AWS Identity and Access Management (IAM) roles and have less administrative overhead. The attributes used by ABAC in AWS are called tags, which are metadata associated with the principals (users or roles) and resources within your AWS account. Securing tags designated for authorization is important because they’re used to grant access to your resources. This post describes an approach to secure these tags across all of your AWS accounts through the use of AWS Organizations service control policies (SCPs) so that only authorized users can modify a resource’s tags. Organizations helps you centrally govern your accounts as you grow and scale your workloads on AWS; SCPs are a feature of Organizations that restricts what services and actions are allowed in your accounts. Together they provide flexible account management and security features that help you centrally secure your business.

Let’s say you want to give users a standard way to access their accounts and resources in AWS. For authentication, you want to continue using your existing identity provider (IdP). For authorization, you’ll need a method that can be scaled to meet the needs of your organization. To accomplish this, you can use ABAC in AWS, which requires fewer roles, helps you control permissions for many resources at once, and allows for simple access rules to different projects and teams. For example, all developers in your organization can have a project name assigned to their identities as a tag. The tag will be used as the basis for access to AWS resources via ABAC. Because access is granted based on these tags, they need to be secured from unauthorized changes.

Securing authorization tags

Using ABAC as your access control strategy for AWS depends on securing the tags associated with identities in your AWS organization’s accounts as well as with the resources those identities need to access. When users sign in, the authentication process looks something like this:
 

Figure 1: Using tags for secure access to resources

Figure 1: Using tags for secure access to resources

  1. Their identities are passed into AWS through federation.
  2. Federation is implemented through SAML assertions.
  3. Their identities each assume an AWS IAM role (via AWS Security Token Service).
  4. The project attribute is passed to AWS as a session tag named access-project (Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS has full details).
  5. The session tag acts as a principal tag, and if its value matches the access-project authorization tag of the resource, the user is given access to that resource.

To ensure that the role’s session tags persist even after assuming subsequent roles, you also need to set the TransitiveTagKeys SAML attribute.
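
Although SAML federation sets these attributes in the assertion itself, the underlying mechanism is easy to see with a plain AssumeRole call. The following is a rough sketch, assuming the target role’s trust policy also allows the sts:TagSession action; the role ARN and the tag value alpha are hypothetical.


# Pass access-project as a session tag and mark it transitive so it
# persists through any subsequent role assumptions in the chain.
aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/developer-abac-role \
  --role-session-name dev-session \
  --tags Key=access-project,Value=alpha \
  --transitive-tag-keys access-project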

Now that identity tags are persisted, you need to ensure that the resource tags used for authorization are secured. For the sake of brevity, let’s call these authorization tags. Let’s assume that you intend to use ABAC for all the accounts in your AWS organization and to control access through an SCP. The SCP will ensure that resource authorization tags are initially set to an authorized value, and that the tags are secured against modification once set.

Authorization tag access control requirements

Before you implement a policy to secure your resource’s authorization tag from unintended modification, you need to define the requirements that the policy will enforce:

  • After the authorization tag is set on a resource:
    • It cannot be modified except by an IAM administrator.
    • Only a principal whose authorization tag key and value pair exactly match the key and value pair of the resource can modify the non-authorization tags for that resource.
  • If no authorization tag has been set on a resource, and if the authorization tag is present on the principal:
    • The resource’s authorization tag key and value pair can only be set if exactly matching the key and value tag of the principal.
    • All other non-authorization tags can be modified.
  • If any authorization tags are missing from a principal, the principal shouldn’t be allowed to perform any tag operations.

Review an SCP for securing resource tags

Let’s take a look at an SCP that fulfills the access control requirements listed above. SCPs are similar to IAM permission policies, however, an SCP never grants permissions. Instead, SCPs are IAM policies that specify the maximum permissions for an organization, organizational unit (OU), or account. Therefore, the recommended solution is to also assign identity policies that allow modification of tags to all roles that need the ability to create and delete tags. You can then assign your SCP to an OU that includes all of the accounts that need resource authorization tags secured. In the example that follows, the AWS Services That Work with IAM documentation page would be used to identify the first set of resources with ABAC support that your organization’s developers will use. These resources include Amazon Elastic Compute Cloud (Amazon EC2), IAM users and roles, Amazon Relational Database Service (Amazon RDS), and Amazon Elastic File System (Amazon EFS). The SCP itself consists of three statements, which I review in the following section.
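
Once you’ve assembled the SCP (the full policy appears later in this post), attaching it to an OU can be scripted. The following is a minimal sketch; the policy name, file name, policy ID, and OU ID are hypothetical placeholders.


# Create the SCP from a local JSON file, then attach it to the OU that
# contains the accounts using ABAC.
aws organizations create-policy \
  --name SecureAuthorizationTags \
  --description "Protects access-project tags used for ABAC" \
  --type SERVICE_CONTROL_POLICY \
  --content file://secure-authz-tags-scp.json

aws organizations attach-policy \
  --policy-id p-examplepolicyid \
  --target-id ou-exampleouid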

Deny modification and deletion of tags if a resource’s authorization tags don’t match the principal’s

This first policy statement is below. It addresses what needs to be enforced after the authorization tag is set on a resource. It denies modification of any tag—including the resource’s authorization tags—if the resource’s authorization tag doesn’t match the principal’s tag. Note that the resource’s authorization tag must exist. Let’s review the components of the statement.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags"
            ],
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
                    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
                },
                "Null": {
                    "ec2:ResourceTag/access-project": false
                }
            }
        },
  • Line 6:
    
    "Effect": "Deny",
    

    First, specify the Effect to be Deny to deny the following actions under certain conditions.

  • Lines 7-10:
    
    "Action": [
    	"ec2:CreateTags",
    	"ec2:DeleteTags"
    ],
    

    Specify the policy actions to be denied, which are ec2:CreateTags and ec2:DeleteTags. When combined with line 6, this denies modification of tags. The ec2:CreateTags action is included as it also grants the permission to overwrite tags, and we want to prevent that.

  • Lines 14-22:
    
    "Condition": {
    	"StringNotEquals": {
    		"ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
    		"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
    	},
    	"Null": {
    		"ec2:ResourceTag/access-project": false
    	}
    }
    

    Specify the conditions for which this statement—to deny policy actions—is in effect.

    The first condition operator is StringNotEquals. According to policy evaluation logic, if multiple condition keys are attached to an operator, each needs to evaluate as true in order for StringNotEquals to also evaluate as true.

  • Line 16:
    
    "ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
    

    Evaluate if the access-project EC2 resource tag’s key and value pairs don’t match the principal’s.

  • Line 17:
    
    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
    

    Evaluate if the principal is not the iam-admin role for the account. aws:PrincipalArn is a global condition key for comparing the Amazon Resource Name (ARN) of the principal that made the request with the ARN that you specify in the policy.

  • Line 19:
    
    "Null": {
    

    The second condition operator is a Null condition. It evaluates if the attached condition key’s tag exists in the request context. If the key’s right side value is set to false, the expression will return true if the tag exists and has a value. The following table illustrates the evaluation logic:

    Condition key and right-hand side value | Description | Evaluation if tag value exists | Evaluation if tag value doesn’t exist
    ec2:ResourceTag/access-project: true    | If my access-project resource tag is null (set to true)      | FALSE | TRUE
    ec2:ResourceTag/access-project: false   | If my access-project resource tag is not null (set to false) | TRUE  | FALSE
  • Line 20:
    
    "ec2:ResourceTag/access-project": false
    

    Deny only when the access-project tag is present on the resource. This is needed because tagging actions shouldn’t be denied for new resources that might not yet have the access-project authorization tag set, which is covered next in the DenyModifyResAuthzTagIfPrinTagNotMatched statement.

Note that all elements of this condition are evaluated with a logical AND. For a deny to occur, both the StringNotEquals and Null operators must evaluate as true.

Deny modification and deletion of tags if requesting that a resource’s authorization tag be set to any value other than the principal’s

This second statement addresses what to do if there’s no authorization tag, or if the tag is on the principal but not on the resource. It denies modification of any tag, including the resource’s authorization tags, if the resource’s access-project tag is one of the requested tags to be modified and its value doesn’t match the principal’s access-project tag. The statement is needed when the resource might not have an authorization tag yet, and access shouldn’t be granted without a tag. Fortunately, access can be based on what the principal is requesting the resource authorization tag to be set to, which must be equal to the principal’s tag value.


{   
    "Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "ForAnyValue:StringEquals": {
            "aws:TagKeys": [
                "access-project"
            ]   
        }   
    }       
},
  • Line 36:
    
    "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
    

    Check to see if the desired access-project tag value for your resource, aws:RequestTag/access-project, isn’t equal to the principal’s access-project tag value. aws:RequestTag is a global condition key that can be used for tag actions to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy.

  • Line 39-43:
    
    "ForAnyValue:StringEquals": {
         "aws:TagKeys": [
             "access-project"
         ]               
    }          
    

    This ensures that a deny can only occur if the access-project tag is among the tags in the request context, which would be the case if the resource’s authorization tag was one of the requested tags to be modified.

Deny modification and deletion of tags if the principal’s access-project tag does not exist

The third statement addresses the requirement that if any authorization tags are missing from a principal, the principal shouldn’t be allowed to perform any tag operations. This policy statement will deny modification of any tag if the principal’s access-project tag doesn’t exist. The reason for this is that if authorization is intended to be based on the principal’s authorization tag, it must be present for any tag modification operation to be allowed.


{       
    "Sid": "DenyModifyTagsIfPrinTagNotExists",
    "Effect": "Deny", 
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],      
    "Resource": [
        "*"     
    ],      
    "Condition": {
        "StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },      
        "Null": {
            "aws:PrincipalTag/access-project": true
        }       
    }       
}

You’ve now created a basic SCP that protects against unintended modification of Amazon EC2 resource tags used for authorization. The remaining sections add IAM, Amazon RDS, and Amazon EFS resources to this SCP. You’ll also need to create identity policies for your project’s roles that enable users to tag resources, and you’ll want to test your EC2 instances to confirm that tags cannot be modified except by authorized principals, as in the following sketch.
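
One quick test is to attempt a tag change from a session whose access-project principal tag doesn’t match the resource. This is a hedged sketch; the instance ID and tag values are hypothetical.


# From a role session tagged access-project=alpha, try to modify a
# resource that is tagged access-project=beta. With the SCP attached,
# this call should fail with an explicit deny (AccessDenied).
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=access-project,Value=alpha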

Adding IAM users and roles to the SCP

Now that you’re confident you’ve done what you can to secure your Amazon EC2 tags, you can do the same for your IAM resources, specifically users and roles. This is important because user and role resources can also be principals, and can have authorization based on their tags. For example, not all roles will be assumed by employees and receive session tags. Applications can also assume a role and thus have authorization based on a non-session tag. To account for that possibility, you need an IAM counterpart to the prior EC2 statement, denying modification and deletion of tags if a resource’s authorization tag doesn’t match the principal’s. Because IAM uses its own iam:ResourceTag condition key rather than ec2:ResourceTag, this is a separate statement covering the IAM tagging actions:


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedIam",
    "Effect": "Deny",
    "Action": [
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "iam:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "iam:ResourceTag/access-project": false
        }
    }
}

You can use the following statements to add the IAM tagging actions to the previous statements that (1) deny modification and deletion of tags if the request sets a resource’s authorization tag to a value other than the principal’s, and (2) deny modification and deletion of tags if the principal’s project tag doesn’t exist.


{
    "Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "ForAnyValue:StringEquals": {
            "aws:TagKeys": [
                "access-project"
            ]
        }
    }
},
{
    "Sid": "DenyModifyTagsIfPrinTagNotExists",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "aws:PrincipalTag/access-project": true
        }
    }
} 

Adding resources that support the global aws:ResourceTag to the SCP

Now we will add Amazon RDS and Amazon EFS resources to complete the SCP. RDS and EFS differ with regard to tagging because they support the global aws:ResourceTag condition key instead of a service-specific key such as ec2:ResourceTag. Instead of creating multiple statements similar to the code above to deny modification and deletion of tags if a resource’s authorization tags don’t match the principal’s, you can create a single reusable policy statement and continue adding actions to it, as shown in the following code.


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "elasticfilesystem:TagResource",
        "elasticfilesystem:UntagResource",
        "rds:AddTagsToResource",
        "rds:RemoveTagsFromResource"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "aws:ResourceTag/access-project": false
        }
    }
}, 

Review the full SCP to secure authorization tags

Let’s review the final SCP. It includes all the services you’ve targeted, as shown in the code sample below. This policy is 2,859 characters long and uses only non-Unicode (single-byte) characters, so it consumes 2,859 bytes of the current 5,120-byte quota for an SCP. This means that roughly 4 to 18 more services can be added to this policy, depending on whether each service uses its own service-specific resource tag condition key or the global aws:ResourceTag key. Keep these numbers and limits in mind when adding additional resources, and remember that a maximum of five SCPs can be attached to an OU. It is unwise to consume the entirety of your available SCP policy space, because you’ll want room to make later changes and to grow with your future business requirements.

This SCP doesn’t enforce tag-on-create, the practice of requiring that tags be applied at the time a resource is created, as shown in Example service control policies. Enforcing tag-on-create is necessary if you need resources to be accessible only from appropriately tagged identities from the moment the resources are created. Enforcement can be done through an SCP or by creating identity policies for the roles in each account that require it.
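
If you do want tag-on-create enforcement for EC2 instances, a statement along these lines could be added. This is a sketch, not part of the SCP above, and it covers only the instance resource created by ec2:RunInstances; the Sid is hypothetical.


{
    "Sid": "DenyRunInstancesWithoutAuthzTag",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
        "Null": {
            "aws:RequestTag/access-project": "true"
        }
    }
}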

If necessary, you can use another employee attribute in addition to access-project, but that will use more of the available quota of bytes. Adding another attribute can potentially double the bytes used in a policy, especially for services that require use of a service-specific resource tag condition key. Using Amazon EC2 as an example, you would need to create separate statements just for the added employee attribute, duplicating the policy statements in the first three code samples.

Here’s the final SCP in its entirety:


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2",
			"Effect": "Deny",
			"Action": [
				"ec2:CreateTags",
				"ec2:DeleteTags"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"ec2:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedIam",
			"Effect": "Deny",
			"Action": [
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"iam:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"iam:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatched",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"aws:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"ec2:CreateTags",
				"ec2:DeleteTags",
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"ForAnyValue:StringEquals": {
					"aws:TagKeys": [
						"access-project"
					]
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfPrinTagNotExists",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"ec2:CreateTags",
				"ec2:DeleteTags",
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"aws:PrincipalTag/access-project": true
				}
			}
		}
	]
}

Next Steps

Now that you have an SCP that protects your resources’ authorization tags, you’ll also want to make sure that identities cannot be assigned unauthorized tags when assuming roles, as shown in Using SAML Session Tags for ABAC. Additionally, you’ll want detective and responsive controls to verify that identities can access only their own resources, and to take action if unintended access occurs.

Summary

You’ve created a multi-account organizational guardrail to protect the tags used for your ABAC implementation. Through the use of employee attributes, transitive session tags, and a service control policy you’ve created a preventive control that will deny anyone except an IAM administrator from modifying tags used for authorization once set on a resource.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael Chan

Michael is a Developer Advocate for AWS Identity. Prior to this, he was a Professional Services Consultant who assisted customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Reduce Cost and Increase Security with Amazon VPC Endpoints

Post Syndicated from Nigel Harris original https://aws.amazon.com/blogs/architecture/reduce-cost-and-increase-security-with-amazon-vpc-endpoints/

Introduction

This blog explains the benefits of using Amazon VPC endpoints and highlights a self-paced workshop that will help you to learn more about them. Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

A VPC endpoint allows you to privately connect your VPC to supported AWS services without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Endpoints are virtual devices that are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

VPC endpoints enable you to reduce data transfer charges resulting from network communication between private VPC resources (such as Amazon Elastic Compute Cloud (Amazon EC2) instances) and AWS services (such as Amazon Quantum Ledger Database (Amazon QLDB)). Without VPC endpoints configured, communications that originate from within a VPC and are destined for public AWS services must egress AWS to the public Internet in order to access those services. This network path incurs outbound data transfer charges. Data transfer charges for traffic egressing from Amazon EC2 to the Internet vary based on volume. However, at the time of writing, after the first 1 GB per month ($0.00 per GB), transfers are charged at a rate of $0.09/GB (for the US East (N. Virginia) Region). With VPC endpoints configured, communication between your VPC and the associated AWS service does not leave the Amazon network. If your workload requires you to transfer significant volumes of data between your VPC and AWS, you can reduce costs by leveraging VPC endpoints.

There are two types of VPC endpoints: interface endpoints and gateway endpoints. Amazon Simple Storage Service (S3) and Amazon DynamoDB are accessed using gateway endpoints. You can configure resource policies on both the gateway endpoint and the AWS resource that the endpoint provides access to. A VPC endpoint policy is an AWS Identity and Access Management (AWS IAM) resource policy that you can attach to an endpoint. It is a separate policy for controlling access from the endpoint to the specified service. This enables granular access control and private network connectivity from within a VPC. For example, you could create a policy that restricts access to a specific DynamoDB table through a VPC endpoint.
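
As a hedged sketch of that last point (the table name, Region, and account ID are hypothetical), a gateway endpoint policy that limits traffic through the endpoint to a single DynamoDB table might look like the following:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyOrdersTableThroughEndpoint",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders"
    }
  ]
}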

Figure 1: Accessing S3 via a Gateway VPC Endpoint

Figure 1: Accessing S3 via a Gateway VPC Endpoint

Interface endpoints enable you to connect to services powered by AWS PrivateLink. This includes a large number of AWS services, services hosted by other AWS customers and partners in their own VPCs, and supported AWS Marketplace partner services. Like gateway endpoints, interface endpoints can be secured using resource policies on the endpoint itself and the resource that the endpoint provides access to. Interface endpoints enable the use of security groups to restrict access to the endpoint.
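
As a rough sketch of provisioning an interface endpoint from the AWS CLI (the VPC, subnet, and security group IDs are placeholders, and the service name assumes the QLDB Session endpoint in us-east-1):


# Create an interface endpoint and restrict access to it with a
# security group.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234567890def \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.qldb.session \
  --subnet-ids subnet-0aaa1111 subnet-0bbb2222 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled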

Figure 2: Accessing QLDB via an Interface VPC Endpoint

Figure 2: Accessing QLDB via an Interface VPC Endpoint

In larger multi-account AWS environments, network design can vary considerably. Consider an organization that has built a hub-and-spoke network with AWS Transit Gateway. VPCs have been provisioned into multiple AWS accounts, perhaps to facilitate network isolation or to enable delegated network administration. When deploying distributed architectures such as this, a popular approach is to build a "shared services" VPC, which provides access to services required by workloads in each of the VPCs. This might include directory services or VPC endpoints. Sharing resources from a central location instead of building them in each VPC may reduce administrative overhead and cost. This approach was outlined by my colleague Bhavin Desai in his blog post Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit Gateway.

Figure 3: Centralized VPC Endpoints (multiple VPCs)

Figure 3: Centralized VPC Endpoints (multiple VPCs)

Alternatively, an organization may have centralized its network and chosen to leverage VPC sharing to enable multiple AWS accounts to create application resources (such as Amazon EC2 instances, Amazon Relational Database Service (RDS) databases, and AWS Lambda functions) in a shared, centrally managed network. With either pattern, establishing a granular set of controls to limit access to resources can be critical to support organizational security and compliance objectives while maintaining operational efficiency.

Figure 4: Centralized VPC Endpoints (shared VPC)

Figure 4: Centralized VPC Endpoints (shared VPC)

Learn how with the VPC Endpoint Workshop

Understanding how to appropriately restrict access to endpoints and the services they provide connectivity to is an often-misunderstood topic. I recently authored a hands-on workshop to help customers learn how to provision appropriate levels of access. Continue to learn about Amazon VPC Endpoints by taking the VPC Endpoint Workshop and then improve the security posture of your cloud workloads by leveraging network controls and VPC endpoint policies to manage access to your AWS resources.

How to use resource-based policies in the AWS Secrets Manager console to securely access secrets across AWS accounts

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-use-resource-based-policies-aws-secrets-manager-console-to-securely-access-secrets-aws-accounts/

AWS Secrets Manager now enables you to create and manage your resource-based policies using the Secrets Manager console. With this launch, we are also improving your security posture by both identifying and preventing creation of resource policies that grant overly broad access to your secrets across your Amazon Web Services (AWS) accounts. To achieve this, we use the Zelkova engine to mathematically analyze access granted by your resource policy and alert you if such permissions are found. The analysis verifies access across all resource-policy statements, actions, and the set of condition keys used in your policies. To be considered non-public, the resource policy must grant access only to fixed values (values that don’t contain a wildcard) of one or more of the following condition keys: aws:SourceArn, aws:SourceVpc, aws:SourceVpce, aws:SourceAccount, or aws:SourceIp, and the Principal must not include a “*” entry.

If the policy grants Public or overly broad access to your secrets across AWS accounts, Secrets Manager will block you from applying the policy in the console and alert you with a dashboard message. This prevents your policy from accidentally granting broader access to your secrets, instead ensuring you are restricting it to the intended AWS accounts, AWS services, and AWS Identity and Access Management (IAM) entities. Access to AWS Secrets Manager requires AWS credentials. Those credentials must contain permission to access the AWS resources you want to access, such as your Secrets Manager secrets. In this blog post, we use Public or broad access to refer to values (or a combination of values) in the resource policy that result in a wide access across AWS accounts and principals.

With AWS Secrets Manager, you have the option to store, rotate, manage, and retrieve many types of secrets. These can be database usernames and passwords, API keys, string values, and binary data. AWS supports the ability to share these secrets cross-account by applying resource policies via the AWS Command Line Interface (AWS CLI) and now via the Secrets Manager console.

Why would you need to share a secret? There are many reasons. Perhaps you have database credentials managed in a central account that are needed by applications in your production account. Maybe you have the binary stored for an encryption key that other accounts will use to create AWS Key Management Service (AWS KMS) keys in their accounts. To achieve this goal while ensuring a secure transfer of information and least privilege permissions, you will need a resource-based policy on your secret, a resource-based policy on your AWS KMS Customer Managed Key (CMK) used for encrypting the secret, and a user-based policy on your IAM principal.

You can still create a policy using AWS CLI or AWS SDK permitting access to a broader scope of entities if your business needs dictate. If you do permit this type of broader access, AWS Secrets Manager will show a notification in your dashboard, as shown in Figure 2, below.

Figure 1. This shows the warning when you try to create a resource policy that grants broad access to your secrets via the AWS Secrets Manager console.

Figure 1. This shows the warning when you try to create a resource policy that grants broad access to your secrets via the AWS Secrets Manager console.

 

Figure 2. This alert pops up when you click on a secret that has a resource policy attached via the CLI that grants broad access to the secret.

Figure 2. This alert pops up when you click on a secret that has a resource policy attached via the CLI that grants broad access to the secret.

In the example below, you’ll see how to use the AWS Secrets Manager console to attach a resource-based policy and allow access to your secret from a secondary account. A secret in the CENTRAL_SECURITY_ACCOUNT will be configured so that an IAM role in the PRODUCTION_ACCOUNT can access it.

In this example:

  • SECURITY_SECRET = The secret created in the CENTRAL_SECURITY_ACCOUNT.
  • SECURITY_CMK = The AWS KMS CMK used to encrypt the SECURITY_SECRET.
  • PRODUCTION_ROLE = The AWS IAM role used to access the SECURITY_SECRET.
  • PRODUCTION_ACCOUNT = The AWS account that owns the AWS IAM role used for cross-account access.

Overview of solution

The architecture of the solution can be broken down into four steps, which are outlined in Figure 3. The four main steps are:

  1. Create the resource-based policy via the AWS Secrets Manager console on the SECURITY_SECRET in the CENTRAL_SECURITY_ACCOUNT.
  2. Update the SECURITY_CMK policy in the CENTRAL_SECURITY_ACCOUNT to allow the role from the PRODUCTION account access.
  3. Grant the AWS IAM role in the PRODUCTION_ACCOUNT permissions to access the secret.
  4. Test and verify access from the PRODUCTION_ACCOUNT.

 

Figure 3. A visual overview of the four steps to use the AWS Secrets Manager console to attach a resource-based policy, allow access to your secret from a secondary account, and test and verify the process.

Prerequisites

To use the example in this post, you need:

  • An AWS account.
  • An IAM Role with permissions to make modifications in both the CENTRAL_SECURITY_ACCOUNT and the PRODUCTION_ACCOUNT.
  • An IAM role in the PRODUCTION_ACCOUNT you wish to grant permissions to access the SECURITY_SECRET.

Deploying the solution

Step 1: Create a resource-based policy in your CENTRAL_SECURITY account on the SECURITY_SECRET secret

  1. Log in to the AWS Secrets Manager console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_SECRET.
  3. Choose Edit Permissions next to Resource Permissions (optional).

    Figure 4. The dashboard view where you edit permissions.

  4. This will bring you to the page to add the resource policy. It will give you a basic template as shown in Figure 5, below.

    Figure 5. The basic template to add the resource policy.

  5. Since the full policy is provided for you in this example, delete the template from the text box.
  6. Copy the policy below and paste it in the text box. Make sure to replace PRODUCTION with your production AWS account ID. You can also adjust the permissions you grant if needed. This policy allows a specific role in the PRODUCTION_ACCOUNT to retrieve the current version of your secret. In this example, the IAM role is called PRODUCTION_ROLE. Note that you do not need to replace AWSCURRENT with any other value.
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::PRODUCTION:role/PRODUCTION_ROLE"
                },
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "*",
                "Condition": {
                    "ForAnyValue:StringEquals": {
                        "secretsmanager:VersionStage": "AWSCURRENT"
                    }
                }
            }
        ]
    }
    

    As shown in Figure 6, below, you’ll see this in the resource policy text area (with your AWS account ID in place of PRODUCTION).

    Figure 6. The example resource policy shown in the console.

  7. Choose Save.

Step 2: Update the resource-based policy in your CENTRAL_SECURITY account on the SECURITY_CMK

Note: Secrets in AWS Secrets Manager are encrypted by default. However, it is important for you to provide authorization for the IAM principals that need to access your secrets. Complete authorization requires access to both the secret and the KMS CMK used to encrypt it, which helps prevent a policy mistake on the secret alone from accidentally exposing it. It is important to maintain both sets of authorization to provide appropriate access to secrets.

  1. Log in to the AWS KMS console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_CMK.
  3. Next to Key policy choose the Edit button.
  4. Paste the below code snippet into your key policy to allow the PRODUCTION_ACCOUNT access:
    
    {
        "Sid": "AllowUseOfTheKey",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::PRODUCTION:role/PRODUCTION_ROLE"
        },
        "Action": [
            "kms:Decrypt",
            "kms:DescribeKey"
        ],
        "Resource": "arn:aws:kms:us-east-1:CENTRAL_SECURITY:key/SECURITY_CMK"
    }
    

    You will need to replace PRODUCTION with your production account ID, PRODUCTION_ROLE with your production role name, CENTRAL_SECURITY with your security account ID, and SECURITY_CMK with the CMK key ID of your security CMK. If you forget to swap out the account IDs in the policy with your own, you’ll see an error message similar to the one shown in Figure 7, below. (If you prefer to script this step, see the AWS CLI sketch after this list.)

    Figure 7. Error message that appears if you don’t swap out your account number correctly.

  5. Choose Save changes.
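
If you prefer to script this step instead of using the console, a minimal sketch with the AWS CLI might look like the following; the file name is a placeholder, and the uploaded file must contain the complete key policy (including the statement above), not just the new statement:

    # Download the current key policy so you can add the new statement to it.
    aws kms get-key-policy \
        --key-id SECURITY_CMK \
        --policy-name default \
        --query Policy --output text > key-policy.json

    # After editing key-policy.json to include the statement shown above, upload the
    # complete policy (put-key-policy replaces the entire key policy document).
    aws kms put-key-policy \
        --key-id SECURITY_CMK \
        --policy-name default \
        --policy file://key-policy.json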

Step 3: Add permissions to the PRODUCTION_ROLE in the PRODUCTION account

  1. Log in to the AWS IAM console in the PRODUCTION_ACCOUNT.
  2. In the left navigation pane, choose Roles.
  3. Select PRODUCTION_ROLE.
  4. Under the Permissions tab, choose Add inline policy.
  5. Choose the JSON tab and paste the below policy:
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": " arn:aws:secretsmanager:us-east-1:CENTRAL_SECURITY:secret:SECURITY_SECRET"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "kms:Decrypt",
                    "kms:DescribeKey"
                ],
                "Resource": "arn:aws:kms:us-east-1:CENTRAL_SECURITY:key/SECURITY_CMK"
            }
        ]
    }
    

    You will need to replace CENTRAL_SECURITY with your security account ID, SECURITY_SECRET with the name of your secret, and SECURITY_CMK with the CMK key ID of your security CMK.

  6. Choose Review policy.
  7. Name the policy Central_Security_Account_Security-Secret-Access, and choose Create policy. (A CLI equivalent of these steps follows this list.)
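
If you manage the role’s permissions with the AWS CLI instead, a minimal sketch of the equivalent of steps 4 through 7 is the following; the file name is a placeholder and is assumed to contain the JSON policy shown in step 5:

    # Attach the inline policy shown above to the production role.
    aws iam put-role-policy \
        --role-name PRODUCTION_ROLE \
        --policy-name Central_Security_Account_Security-Secret-Access \
        --policy-document file://production-role-policy.json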

Step 4: Test access to the SECURITY_SECRET from the PRODUCTION account

Verification of access via AWS CLI

  1. From the AWS CLI, use the PRODUCTION_ROLE credentials to run the get-secret-value command. Because the secret lives in a different account, reference it by its full ARN. (A sketch of how to assume the role from the CLI follows this list.)
  2. The returned output should look like the following example.
    
    $ aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-east-1:CENTRAL_SECURITY:secret:SECURITY_SECRET --version-stage AWSCURRENT
    
    {
        "ARN": "arn:aws:secretsmanager:us-east-1:CENTRAL_SECURITY:secret:SECURITY_SECRET",
        "Name": "SECURITY_SECRET",
        "SecretString": "TheSecretString",
        "CreatedDate": 123456789,
        "VersionId": "64c4250d-0b81-42e0-9a0c-e189d3c9aea8",
        "VersionStages": [
            "AWSCURRENT"
        ]
    }
    
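
If you need to obtain the PRODUCTION_ROLE credentials first, one option (assuming a bash shell) is to assume the role with AWS STS and export the temporary credentials, as in this minimal sketch:

    # Assume PRODUCTION_ROLE and capture the temporary credentials (text output is tab-separated).
    read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(aws sts assume-role \
        --role-arn arn:aws:iam::PRODUCTION:role/PRODUCTION_ROLE \
        --role-session-name secret-access-test \
        --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
        --output text)"
    export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

    # Retrieve the secret; cross-account calls must reference the secret by its full ARN.
    aws secretsmanager get-secret-value \
        --secret-id arn:aws:secretsmanager:us-east-1:CENTRAL_SECURITY:secret:SECURITY_SECRET \
        --version-stage AWSCURRENT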

You can also verify the policy was attached from the CENTRAL_SECURITY_ACCOUNT by following the steps below.

Verification of policy via console

  1. Log into the AWS Secrets Manager console in the CENTRAL_SECURITY_ACCOUNT.
  2. Choose SECURITY_SECRET.
  3. Scroll down to where it shows Resource Permissions (optional), and you’ll see your resource policy stored in the console, as shown in Figure 8, below.

    Figure 8. What the example resource policy looks like in the console.

Conclusion

In this post, you saw how to add a resource-based policy on a secret in AWS Secrets Manager using the console, and how to update your AWS KMS CMK resource-based policy to enable access. The example showed how to set up cross-account access and allow a role from the PRODUCTION_ACCOUNT to use the secret in the CENTRAL_SECURITY_ACCOUNT. By using the AWS Secrets Manager console to set up the resource-based policy, you now have a straightforward, visual way to add and manage resource-based policies for your secrets, and you receive notifications if a policy is too broad.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Secrets Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security and Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

How to use G Suite as an external identity provider for AWS SSO

Post Syndicated from Yegor Tokmakov original https://aws.amazon.com/blogs/security/how-to-use-g-suite-as-external-identity-provider-aws-sso/

Do you want to control access to your Amazon Web Services (AWS) accounts with G Suite? In this post, we show you how to set up G Suite as an external identity provider in AWS Single Sign-On (SSO). We also show you how to configure permissions for your users, and how they can access different accounts.

Introduction

G Suite is used for common business functions like email, calendar, and document sharing. If your organization is using AWS and G Suite, you can use G Suite as an identity provider (IdP) for AWS. You can connect AWS SSO to G Suite, allowing your users to access AWS accounts with their G Suite credentials.

You can grant access by assigning G Suite users to accounts governed by AWS Organizations. The user’s effective permissions in an account are determined by permission sets defined in AWS SSO. They allow you to define and grant permissions based on the user’s job function (such as administrator, data scientist, or developer). These should follow the least privilege principle, granting only permissions that are necessary to perform the job. This way, you can centrally manage user accounts for your employees in the Google Admin console and have fine-grained control over the access permissions of individual users to AWS resources.

In this post, we walk you through the process of setting up G Suite as an external IdP in AWS SSO.

How it works

AWS SSO authenticates your G Suite users by using Security Assertion Markup Language (SAML) 2.0 authentication. SAML is an open standard for secure exchange of authentication and authorization data between IdPs and service providers without exposing users’ credentials. When you use AWS as a service provider and G Suite as an external IdP, the login process is as follows:

  1. A user with a G Suite account opens the link to the AWS SSO user portal of your AWS Organizations.
  2. If the user isn’t already authenticated, they will be redirected to the G Suite account login. The user will log in using their G Suite credentials.
  3. If the login is successful, a response is created and sent to AWS SSO. It contains three different types of SAML assertions: authentication, authorization, and user attributes.
  4. When AWS SSO receives the response, the user’s access to the AWS SSO user portal is determined. A successful login shows accessible AWS accounts.
  5. The user selects the account to access and is redirected to the AWS Management Console.

This authentication flow is shown in the following diagram.

Figure 1: AWS SSO authentication flow

The user journey starts at the AWS SSO user portal and ends with access to the AWS Management Console. Your users experience unified access to the AWS Cloud, and you don’t have to manage user accounts in AWS Identity and Access Management (IAM) or AWS Directory Service.

User permissions in an AWS account are controlled by permission sets and groups in AWS SSO. A permission set is a collection of administrator-defined policies that determine a user’s effective permissions in an account. Permission sets can contain AWS managed policies or custom policies that are stored in AWS SSO, and are ultimately created as IAM roles in a given AWS account. Your users assume these roles when they access a given AWS account and get their effective permissions. This gives you fine-grained control over access to your accounts, in keeping with the shared-responsibility model of the cloud.
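
The walkthrough in this post uses the console, but for reference, permission sets and account assignments can also be scripted. The following is a minimal sketch using the AWS CLI sso-admin commands available in newer CLI versions; the instance ARN, permission set ARN, account ID, and principal ID are placeholders:

    # Create a permission set and attach the PowerUserAccess managed policy to it.
    aws sso-admin create-permission-set \
        --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
        --name PowerUserAccess

    aws sso-admin attach-managed-policy-to-permission-set \
        --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
        --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
        --managed-policy-arn arn:aws:iam::aws:policy/PowerUserAccess

    # Assign an AWS SSO user to an account with that permission set.
    aws sso-admin create-account-assignment \
        --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
        --target-id 111122223333 --target-type AWS_ACCOUNT \
        --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
        --principal-type USER --principal-id EXAMPLE-USER-ID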

When you use G Suite to authenticate and manage your users, you have to create a user entity in AWS SSO. The user entity is not a user account, but a logical object. It maps a G Suite user to a user account in AWS SSO, using the G Suite primary email address as the username. The user entity in AWS SSO allows you to grant a G Suite user access to AWS accounts and define that user’s permissions in those accounts.

AWS SSO initial setup

The AWS SSO service has some prerequisites. You first need to set up AWS Organizations with All features enabled, and then sign in with the credentials of the AWS Organizations master account. You also need super administrator privileges in G Suite and access to the Google Admin console.

If you’re already using AWS SSO in your account, refer to Considerations for Changing Your Identity Source before making changes.

To set up an external identity provider in AWS SSO

  1. Open the service page in the AWS Management Console. Then choose Enable AWS SSO.

    Figure 2: AWS SSO service welcome page

  2. After AWS SSO is enabled, you can connect an identity source. On the overview page of the service, select Choose your identity source.

    Figure 3: Choose your identity source

  3. In the Settings, look for Identity source and choose Change.

    Figure 4: Settings

  4. By default, AWS SSO uses its own directory as the identity provider. To use G Suite as your identity provider, you have to switch to an external identity provider. Select External identity provider from the available identity sources.

    Figure 5: AWS SSO identity provider options

  5. Choosing the External identity provider option reveals additional information needed to configure it. Choose Show individual metadata values to show the information you need to configure a custom SAML application.

    Figure 6: AWS SSO SAML metadata

For the next steps, you need to switch to your Google Admin console and use the service provider metadata information to configure AWS SSO as a custom SAML application.

G Suite SAML application setup

Open your Google Admin console in a new browser tab, so that you can easily copy the metadata information from the previous step. Now use the information to configure a custom SAML application.

To configure a custom SAML application in G Suite

  1. Navigate to the SAML Applications section in the Admin console and choose Add a service/App to your domain.

    Figure 7: Add a service or app

  2. In the modal dialog that opens, choose SETUP MY OWN CUSTOM APP.

    Figure 8: Set up a custom app

  3. Go to Option 2 and choose Download to download the Google IdP metadata. It downloads an XML file named GoogleIDPMetadata-your_domain.xml, which you will use to configure G Suite as the IdP in AWS SSO. Choose Next.

    Figure 9: Download IdP metadata

  4. Configure the name and description of the application. Enter AWS SSO as the application name or use a name that clearly identifies this application for your users. Choose Next to continue.

    Figure 10: Name and describe your app

  5. Fill in the Service Provider Details using the metadata information from AWS SSO, then choose Next to create your custom application. The mapping for the metadata is:
    • Enter the AWS SSO Sign in URL as the Start URL
    • Enter the AWS SSO ACS URL as the ACS URL
    • Enter the AWS SSO issuer URL as the Entity ID

     

    Figure 11: Add service provider details

  6. Next is a confirmation screen with a reminder that you have some steps still to do. Choose OK to continue.

    Figure 12: Reminder of remaining steps

  7. The final steps enable the application for your users. Select the application from the list and choose EDIT SERVICE from the top corner.

    Figure 13: Edit service

  8. Change the service status to ON for everyone and choose SAVE. If you want to manage access for particular users you can do this via organizational units (for example, you can enable the AWS SSO application for your engineering department). This doesn’t give access to any resources inside of your AWS accounts. Permissions are granted in AWS SSO.

    Figure 14: Service on for everyone

You’re done configuring AWS SSO in G Suite. Return to the browser tab with the AWS SSO configuration.

AWS SSO configuration

After creating the G Suite application, you can finish SSO setup by uploading Google IdP metadata in the AWS Management Console.

To add identity provider metadata in AWS SSO

  1. When you configured the custom application in G Suite, you downloaded the GoogleIDPMetadata-your_domain.xml file. Choose Browse… on the configuration page and select this file from your download folder. Finish this step by choosing Next: Review.

    Figure 15: Upload IdP SAML metadata file

  2. Type CONFIRM at the bottom of the list of changes and choose Change identity source to complete the setup.

    Figure 16: Confirm changes

  3. Next is a message that your change to the configuration is complete. At this point, you can choose Return to settings and proceed to user provisioning.

    Figure 17: Settings complete

Manage Users and Permissions

AWS SSO supports automatic user provisioning via the System for Cross-domain Identity Management (SCIM). However, this is not yet supported for G Suite custom SAML applications, so you have to add each user to AWS SSO manually. The username in AWS SSO must be the primary email address of that user, and it must follow the pattern username@gsuite_domain.com.

To add a user to AWS SSO

  1. Select Users from the sidebar of the AWS SSO overview and then choose Add user.

    Figure 18: Add user

  2. Enter the user details and use your user’s primary email address as the username. Choose Next: Groups to add the user to a group.

    Figure 19: Add user

  3. We aren’t going to create user groups in this walkthrough. Skip the Add user to groups step by choosing Add user. You will reach the user list page displaying your newly created user and status enabled.

    Figure 20: User list

  4. The next step is to assign the user to a particular AWS account in your AWS Organization. This allows the user to access the assigned account. Select the account you want to assign your user to and choose Assign users.

    Figure 21: Assign users

  5. Select the user you just added, then choose Next: Permission sets to continue configuring the effective permissions of the user in the assigned account.

    Figure 22: Select user

  6. Since you didn’t configure a permission set before, you need to configure one now. Choose Create new permission set.

    Figure 23: Create new permission set

  7. AWS SSO has managed permission sets that are similar to the AWS managed policies you already know. Make sure that Use an existing job function policy is selected, then select PowerUserAccess from the list of existing job function policies and choose Create.

    Figure 24: Create permission set

  8. You can now select the created permission set from the list of available sets for the user. Select the PowerUserAccess permission set and choose Finish to assign the user to the account.

    Figure 25: Select permission set

  9. You see a message that the assignment has been successful.

    Figure 26: Assignment complete

Access an AWS Account with G Suite

You can find your user portal URL in the AWS SSO settings, as shown in the following screenshot. Unauthenticated users who use the link will be redirected to the Google account login page and use their G Suite credentials to log in.

Figure 27: The user portal URL

After authenticating, users are redirected to the user portal. They can select from the list of assigned accounts, as shown in the following example, and access the AWS Management Console of these accounts.

Figure 28: Select an assigned account

You’ve successfully set up G Suite as an external identity provider for AWS SSO. Your users can access your AWS accounts using the credentials they already use.

Another way your users can use AWS SSO is by selecting it from their Google Apps to be redirected to the user portal, as shown in the following screenshot. That is one of the quickest ways for users to access accounts.

Figure 29: Apps in the user portal

Using AWS CLI with SSO

You can use the AWS Command Line Interface (CLI) to access AWS resources. AWS CLI version 2 supports access via AWS SSO. You can automatically or manually configure a profile for the CLI to access resources in your AWS accounts. To authenticate your user, it opens the user portal in your default browser. If you aren’t authenticated, you’re redirected to the G Suite login page. After a successful login, you can select the AWS account you want to access from the terminal.

To upgrade to AWS CLI version 2, follow the instructions in the AWS CLI user guide.
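
A minimal sketch of that setup, assuming AWS CLI version 2 and a profile name of your choosing:

    # One-time setup: create a named profile backed by AWS SSO (interactive prompts ask for
    # your user portal URL, Region, account, and permission set).
    aws configure sso --profile my-sso-profile

    # Start (or refresh) a session; this opens the user portal, and the G Suite login if
    # you aren't already authenticated, in your default browser.
    aws sso login --profile my-sso-profile

    # Use the profile like any other CLI profile.
    aws sts get-caller-identity --profile my-sso-profile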

Conclusion

You’ve set up G Suite as an external IdP for AWS SSO, granted access to an AWS account for a G Suite user, and enforced fine-grained permission controls for this user. This enables your business to have easy access to the AWS Cloud.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Single Sign-on forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Yegor Tokmakov

Yegor is a solutions architect at AWS, working with startups. Previously, he was Chief Technology Officer at a healthcare startup in Berlin and was responsible for architecture and operations, as well as product development and growth of the tech team. Yegor is passionate about novel AI applications and data analytics. In his free time, he enjoys traveling, surfing, and cycling.

Sebastian Doell

Sebastian is a solutions architect at AWS. He helps startups to execute on their ideas at speed and at scale. Sebastian maintains a number of open source projects and is an advocate of Dart and Flutter.

Tighten S3 permissions for your IAM users and roles using access history of S3 actions

Post Syndicated from Mathangi Ramesh original https://aws.amazon.com/blogs/security/tighten-s3-permissions-iam-users-and-roles-using-access-history-s3-actions/

Customers tell us that when their teams and projects are just getting started, administrators may grant broad access to inspire innovation and agility. Over time administrators need to restrict access to only the permissions required and achieve least privilege. Some customers have told us they need information to help them determine the permissions an application really needs, and which permissions they can remove without impacting applications. To help with this, AWS Identity and Access Management (IAM) reports the last time users and roles used each service, so you can know whether you can restrict access. This helps you to refine permissions to specific services, but we learned that customers also need to set more granular permissions to meet their security requirements.

We are happy to announce that we now include action-level last accessed information for Amazon Simple Storage Service (Amazon S3). This means you can tighten permissions to only the specific S3 actions that your application requires. The action-level last accessed information is available for S3 management actions. As you try it out, let us know how you’re using action-level information and what additional information would be valuable as we consider supporting more services.

The following is an example snapshot of S3 action last accessed information.
 

Figure 1: S3 action last accessed information snapshot

You can use the new action last accessed information for Amazon S3 in conjunction with other features that help you to analyze access and tighten S3 permissions. AWS IAM Access Analyzer generates findings when your resource policies allow access to your resources from outside your account or organization. Specifically for Amazon S3, when an S3 bucket policy changes, Access Analyzer alerts you if the bucket is accessible by users from outside the account, which helps you to protect your data from unintended access. You can use action last accessed information for your user or role, in combination with Access Analyzer findings, to improve the security posture of your S3 permissions. You can review the action last accessed information in the IAM console, or programmatically using the AWS Command Line Interface (AWS CLI) or a programmatic client.

Example use case for reviewing action last accessed details

Now I’ll walk you through an example to demonstrate how you identify unused S3 actions and reduce permissions for your IAM principals. In this example a system administrator, Martha Rivera, is responsible for managing access for her IAM principals. She periodically reviews permissions to ensure that teams follow security best practices. Specifically, she ensures that the team has only the minimum S3 permissions required to work on their application and achieve their use cases. To do this, Martha reviews the last accessed timestamp for each supported S3 action that the roles in her account have access to. Martha then uses this information to identify the S3 actions that are not used, and she restricts access to those actions by updating the policies.

To view action last accessed information in the AWS Management Console

  1. Open the IAM Console.
  2. In the navigation pane, select Roles, then choose the role that you want to analyze (for example, PaymentAppTestRole).
  3. Select the Access Advisor tab. This tab displays all the AWS services to which the role has permissions, as shown in Figure 2.
     
    Figure 2: List of AWS services to which the role has permissions

  4. On the Access Advisor tab, select Amazon S3 to view all the supported actions to which the role has permissions, when each action was last used by the role, and the AWS Region in which it was used, as shown in Figure 3.
     
    Figure 3: List of S3 actions with access data

In this example, Martha notices that PaymentAppTestRole has read and write S3 permissions. From the information in Figure 3, she sees that the role is using read actions for GetBucketLogging, GetBucketPolicy, and GetBucketTagging. She also sees that the role hasn’t used write permissions for CreateAccessPoint, CreateBucket, PutBucketPolicy, and others in the last 30 days. Based on this information, Martha updates the policies to remove write permissions. To learn more about updating permissions, see Modifying a Role in the AWS IAM User Guide.
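
For illustration only, the tightened inline policy might look something like the following sketch; the role name, policy name, and bucket ARN are hypothetical, and the list of retained actions depends on what the access data shows for your role:

    # Replace the role's previous, broader inline policy with one that keeps only the used actions.
    aws iam put-role-policy \
        --role-name PaymentAppTestRole \
        --policy-name PaymentAppS3Access \
        --policy-document '{
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketLogging",
                    "s3:GetBucketPolicy",
                    "s3:GetBucketTagging"
                ],
                "Resource": "arn:aws:s3:::EXAMPLE-BUCKET"
            }]
        }'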

At launch, you can review 50 days of access data; that is, any use of S3 actions in the preceding 50 days will show up as a last accessed timestamp. As this tracking period continues to increase, you can start making permissions decisions that apply to use cases with longer period requirements (for example, when 60 or 90 days of data is available).

Martha sees that the GetAccessPoint action shows Not accessed in the tracking period, which means that the action was not used since IAM started tracking access for the service, action, and AWS Region. Based on this information, Martha confidently removes this permission to further reduce permissions for the role.

Additionally, Martha notices that an action she expected does not show up in the list in Figure 3. This can happen for one of two reasons: either PaymentAppTestRole does not have permissions to the action, or IAM doesn’t yet track access for the action. In such a situation, do not update permissions for those actions based on action last accessed information. To learn more, see Refining Permissions Using Last Accessed Data in the AWS IAM User Guide.

To view action last accessed information programmatically

The action last accessed data is available through updates to the following existing APIs. These APIs now generate action last accessed details, in addition to service last accessed details:

  • generate-service-last-accessed-details: Call this API first to start a job that generates the service and action last accessed data for a user or role. It returns a JobID that you then pass to get-service-last-accessed-details. (See the CLI sketch after this list.)
  • get-service-last-accessed-details: Call this API to check whether the job has completed and to retrieve the service and action last accessed data for the user or role, based on the JobID you pass in. This API is paginated at the service level.

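A minimal sketch of this two-call flow with the AWS CLI; the role ARN is a placeholder:

    # Start a job that generates action-level last accessed data for the role.
    JOB_ID=$(aws iam generate-service-last-accessed-details \
        --arn arn:aws:iam::111122223333:role/PaymentAppTestRole \
        --granularity ACTION_LEVEL \
        --query JobId --output text)

    # Retrieve the results; repeat this call until JobStatus is COMPLETED.
    aws iam get-service-last-accessed-details --job-id "$JOB_ID"
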
To learn more, see GenerateServiceLastAccessedDetails in the AWS IAM User Guide.

Conclusion

By using action last accessed information for S3, you can review access for supported S3 actions, remove unused actions, and restrict access to S3 to achieve least privilege. To learn more about how to use action last accessed information, see Refining Permissions Using Last Accessed Data in the AWS IAM User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Mathangi Ramesh

Mathangi is the product manager for AWS Identity and Access Management. She enjoys talking to customers and working with data to solve problems. Outside of work, Mathangi is a fitness enthusiast and a Bharatanatyam dancer. She holds an MBA degree from Carnegie Mellon University.